String Calculator kata with AutoFixture exercise 4
This is the fourth post in a series of posts about the String Calculator kata done with AutoFixture.
This screencast implements the requirement of the kata's exercise 4.
If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.
String Calculator kata with AutoFixture exercise 3
This is the third post in a series of posts about the String Calculator kata done with AutoFixture.
This screencast implements the requirement of the kata's exercise 3.
If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.
String Calculator kata with AutoFixture exercise 2
This is the second post in a series of posts about the String Calculator kata done with AutoFixture.
This screencast implements the requirement of the kata's exercise 2.
If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.
String Calculator kata with AutoFixture exercise 1
This is the first post in a series of posts about the String Calculator kata done with AutoFixture.
This screencast sets up the Visual Studio projects and completes the first exercise of the kata.
If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.
String Calculator kata with AutoFixture
This post introduces the String Calculator kata done with AutoFixture.
A couple of weeks ago, at the Warm Crocodile conference in Copenhagen, Roy Osherove and I talked about AutoFixture and his String Calculator kata, and I decided to do the kata with AutoFixture 3 and make a series of screencasts out of it.
This series makes no particular attempt at explaining what AutoFixture is, so you might want to first acquaint yourself with some basics, such as the theory of Anonymous Variables, Derived Values, Equivalence Classes, and Constrained Non-Determinism. It might also be a good idea to understand how AutoFixture integrates with xUnit.net.
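To give a feel for what that integration looks like, here's a minimal sketch (not taken from the screencasts) of an AutoFixture-driven xUnit.net test. The StringCalculator class is a hypothetical stand-in for the kata's system under test, using the kata's usual Add(string) signature:

using Ploeh.AutoFixture.Xunit;
using Xunit;
using Xunit.Extensions;

public class StringCalculatorTests
{
    // The [AutoData] attribute asks AutoFixture to supply all
    // arguments, so the test needs no explicit setup of the sut.
    [Theory, AutoData]
    public void AddEmptyStringReturnsZero(StringCalculator sut)
    {
        var actual = sut.Add("");
        Assert.Equal(0, actual);
    }
}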
The following screencasts are available:
As a general note, I didn't focus much on refactoring in this exercise, as I didn't feel the complexity of the solution required it.
If you like these screencasts, you may also like my Pluralsight course Outside-In Test-Driven Development.
Beware of Productivity Tools
This article discusses developer productivity tools.
Once in a while I get into a heated discussion about the merits and demerits of ReSharper. As these discussions usually happen on Twitter, the 140-character limit isn't conducive to a nuanced debate. A nuanced debate is what I want to start here - not a rant.
This is not going to be an attack on ReSharper. In fact, I don't have a stronger opinion on ReSharper than on any other 'productivity tool', but I more often get dragged into discussions about ReSharper than, say, JustCode or CodeRush. I guess it's because more people feel passionate about ReSharper.
In fact, I'm going to expand this discussion to a wider range of 'productivity tools', such as (but not limited to)
- ReSharper
- JustCode
- CodeRush
- Productivity Power Tools
- Visual Studio
Why are we even having this discussion? #
The only 'productivity tool' I currently use is Visual Studio 2012, and even that makes me uneasy. That's just my personal preference, you might say, and there's a partial truth in that. However, I'm not writing this to defend myself. Rather, I'm writing because I think you need to be aware of the issues presented here. It might make you a better developer if I can get you to actively and consciously consider a choice you may have taken for granted.
How do I even get dragged into these Twitter flame fests? Why do people even care whether or not I use a particular 'productivity tool'? First of all, I can't claim innocence of occasionally trolling - I just get a kick out of yanking that particular chain. There's a reason for that, and it's not just to be mischievous. I want you to reflect on your choice of tools. Don't just drink the Kool-Aid.
Still, there's a deeper, more rational reason why some people care what I do: I do give a lot of presentations about code, and during those presentations I write a lot of code. Whenever I give a talk where I code live, I always rehearse a lot and use specialized code snippets in order not to bore the audience with trivial coding. Here's an example where, during the talk, someone tweeted to complain that I didn't use ReSharper. However, the purpose of giving a talk about code isn't to produce the code in the fastest possible time. The purpose is to teach code. If I code too slowly, the audience may fall asleep, but if I go too fast, no one is going to learn anything. I'm just not convinced that in this particular case, the use of a 'productivity tool' is inherently better.
Can you even live without this or that productivity tool? #
The most common reaction I get whenever people hear that I don't use their favorite 'productivity tool' is one of disbelief.
- omg why? How can do without #resharper?
- You're seriously missing out. Visual Studio is completely useless without ReSharper, imho.
- but how will you work then? Bare bone vs is soo poor compared to just about any ide
- Without #resharper my productivity drops by 50%, I'm amazed that you can manage without it
- I don't understand why you avoid ReSharper.
- hmm.. R# is good for navigating around. Shortcuts to follow application flow, up and down. How do you do that without r#?
What's my beef with productivity tools? It's much deeper than a dislike for any particular tool. Charles Petzold already described his concern about Visual Studio in 2005 in a great talk titled Does Visual Studio Rot the Mind?. It's a long read, but definitely worth your while. You should go read it now.
In case you didn't want to take the time to read that article (but then: you're already reading this lengthy article), here's the gist of it: Via IntelliSense, code generation, Wizards and drag and drop, Visual Studio assists us, but it also pushes us towards writing (or not writing) code in a particular way. It railroads us.
Does it make us more productive? I don't even know how to measure developer productivity, so I can't answer that. Do we learn while coding like that? Not much, I'd say.
While Visual Studio is, in many ways, an impressive and extremely useful piece of software, it also concerns me that I'm so dependent on it. To learn new techniques, I try to follow what's going on outside the .NET/Microsoft ecosystem. Clojure looks like a very interesting language. Erlang seems to solve some hard problems in an easy way. Storm seems to be way ahead of anything Microsoft can currently offer. Ruby developers have claimed high productivity for years. Git is a better source control system than anything Microsoft offers.
However, I feel constrained by my reliance on Visual Studio. I want to learn and use those other technologies as well as .NET, so I'm certainly not looking for tools that will further strengthen my bond with Visual Studio. Using plain vanilla Visual Studio is the least I can do to broaden my horizons.
Productivity boosts #
A common argument for a 'productivity tool' is that it makes you more productive. "Without #resharper my productivity drops by 50%, I'm amazed that you can manage without it". That's an interesting statement. How do you even measure productivity?
For the sake of argument, let's for a moment pretend that programmer productivity is measured by lines of code written. There's this myth going around that a professional programmer only writes 10 lines of code per day. This is probably not true, but even so, how many lines of code do you produce on average per day? 100? 200? Are you seriously going to claim that your productivity bottleneck is determined by how fast you can type? Really? Then learn to type faster.
Consider that most code is read a lot more than it's written. Code should be optimized for reading, not writing. Thus, productivity, if it can be measured at all, should be measured by how quickly programmers can read and understand a piece of code - not by how fast it can be written.
Furthermore, if you believe that Pair Programming is one good and productive way to produce software, you must also realize that at every given moment, at least one person isn't typing at all. As Martin Fowler puts it: "that [Pair-Programming halves the productivity of developers] would be true if the hardest part of programming was typing". In my experience, this is not the case. Thus, I'm not convinced that 'productivity tools' make anyone more productive.
If you've ever looked beyond the Microsoft echo chamber in the last decade, you will have heard a particular group of developers boast unmatched productivity. Those would be Ruby on Rails developers. Lately, it seems to me that many alpha geeks gravitate towards JavaScript (and particularly Node.js). And what about Python or Clojure? In all cases it seems that the reason cutting-edge programmers are leaving .NET in favor of other languages and platforms is better productivity. What do these languages have in common? Well, the preferred development environment certainly isn't Visual Studio. These programmers 'get by' with Vim, Emacs, Sublime Text, and many other editors. Apparently, it's possible to be 'crazy productive' without Visual Studio and a 'productivity tool'.
Railroading #
As Charles Petzold points out in his excellent article, Visual Studio enforces a certain bottom-up style of programming that isn't particularly aligned with business needs. Visual Studio (with or without 'productivity tools') makes it hard (but not impossible) to do Outside-In development.
My feeling is that whenever a tool helps us in a certain way, it closes a lot of other doors on us. We may not even be aware of what we aren't being shown, but if we can shake off the helping hand, we may also be able to see other options.
I don't mind being helped by a tool once in a while, but at other times, I'd rather make an informed decision by myself. At least I think it's important to realize that being helped means that decisions are being made for me. It's not a win-win situation. I may be able to finish a task quickly, but I lose the opportunity to learn. Not only that, but the more I rely on a tool for assistance, the more dependent I become on it. There's a word for that. It's called Vendor lock-in.
Final thoughts #
All of this is highly subjective and personal. My personal style is to be very deliberate and patient. I go slowly in order to move fast.
In order to demonstrate just how slowly I go, I recorded half an hour of a TDD session. There's nothing special about this TDD session. I didn't pick it to impress anyone. I didn't pick the 'best' from a pool of a dozen candidates. I just recorded how I work and uploaded it. I dare you to watch it all the way through. It will be boring. You will see a lot of thinking time and long periods of inactivity. This is actually a typical depiction of how I work. Yet, somehow, I still manage to produce software of such quality that people keep coming back to pay me to do more.
If you watch just five minutes of that video, it should be clear to you that a 'productivity tool' wouldn't be of any help to me. It might not slow me down once I'd learned to use it, but it wouldn't make me more 'productive' either, so why should I bother with it?
This is just my opinion of 'productivity tools.' It's not a one-size-fits-all judgment. If you feel that you benefit from using your favorite 'productivity tool' I'm not going to tell you to change your ways. Likewise, don't judge me because I don't use a particular tool. Some programmers that I really respect use ReSharper. I respect them not because of that, but despite it.
Comments
It helped me to see stupid mistakes, like possible Null references etc.
As I get more experienced I rely less on ReSharper and mainly use it out of convenience, and I think I could do without it.
I don't use ReSharper so much for productivity enhancement but rather to ease the cognitive load when navigating a big codebase. The fuzzy-find ability to go to a type, symbol or file really makes it a lot easier on the brain, because you don't have to remember exactly where to find the particular method that you only remember part of the name of.
The same functionality can of course be had in Vim and Emacs, but in a much more light-weight approach, with small modules doing one thing great. I don't know what it is with Windows and the obsession with big monolithic applications (and tools), compared to Unix, which is much more about small, well-integrated but pluggable modules.
You mention it is hard to measure programmer productivity, yet you argue a lot from a very narrow view of programmer productivity, namely lines of code written.
Productivity boosts
"Consider that most code is read a lot more than it's written. Code should be optimized for reading, not writing. Thus, productivity, if it can be measured at all, should be measured by how quickly programmers can read and understand a piece of code - not by how fast it can be written."
Productivity tools not only focus on writing lines of code, but also aid exactly in faster navigation of the code, and optimized representations. This is exactly what one of the tweeters mentioned: "hmm.. R# is good for navigating around. Shortcuts to follow application flow, up and down. How do you do that without r#?" These features can also be found in Vim/Emacs. I haven't used these text editors myself, but if you read a bit about them, they are simplistic, yet very powerful.
Productivity Power Tools offers vertical tabs, allowing to have more tabs open at once, and even color-coding them. ReSharper allows more fine-grained color coding, e.g. extension methods can be given a different color allowing to more easily identify them. All of these enhancements have nothing to do with writing code.
Railroading
I think it's important to differentiate between different types of railroading here. I am not a big fan of "automated processes" either, e.g. code snippets or wizards. They indeed tend to close doors for you, preventing you from making more informed decisions. In fact I argued before code snippets actually promote bad design: http://whathecode.wordpress.com/2010/11/03/why-code-snippets-promote-bad-design/
But what about the hint system of ReSharper? It makes you aware of potential problems in your code, or alternate approaches. That's what makes it even a good learning tool as Martin commented before, or can just remind you of things you didn't immediately think about (e.g. access to modified closures).
To summarize, I think in order to have a more honest discussion about productivity tools, you should at least also look into where they can help you, instead of generalizing you don't like them because they have a few particular features you dislike.
P.s.: I tried using some markup HTML, but none seems to work. :-(
All of these things can be tedious to do 'manually', and can be significantly less tedious with a productivity tool like resharper. As long as what *you* want the code to look like matches what *resharper* wants the code to look like, that is.
When I worked through the book Seven Languages in Seven Weeks, the lack of IntelliSense was painful. I typically had a reference web page open all the time, and had to keep looking away from the code to look up a function. With experience, the time for this is reduced, but it's not eliminated. I remember the same sort of back-and-forth from my xBase days.
I've seen you code on Pluralsight (an experience I'd recommend to others). You use the Visual Studio refactoring tools for outside-in development, writing a new method name and having the IDE create the stub. You also use NuGet extensively; I'd call that a major productivity tool.
Martin Fowler wrote Refactoring thirteen years ago. I got the impression that he hoped much of it would be automated, and this has come to pass; computers are good at repetitive tasks. My IDE can Extract Method more quickly, accurately, and safely than I can.
ReSharper has more and better refactorings than Visual Studio, in my opinion.
Navigating complex code bases, including legacy code bases, is an area in which ReSharper shines. Anything that can help me understand the code is a boon.
I agree that developers have to learn how to do these things by hand. After that, I don't feel that endlessly repeating the same routine tasks is a good use of my or my employer's time.
The code generation of R# isn't the same thing as the code generation of, let's say, a WYSIWYG tool like Dreamweaver. It's not that R# will tell me "I need a class here" or "this should be a field that is filled in by a ctor parameter." Instead, if _I_ decide I need a new class, or _I_ decide I want to pull up some public members of a class into an interface, I can do those tasks with a simple keystroke. Sometimes the code generation "hints" get a bit annoying - for example, although I am an avid LINQ user and functional programming fan, there are many times where the readability would suffer if I let R# turn a foreach loop into a single two-thousand-character expression. But that choice is still up to me - and the times where R# does suggest code that ends up being cleaner than what I had written, I take advantage of that.
So no, R# doesn't hide your code from you, it just removes some of the brain-to-code barrier that is present in all languages. Saying R# is hindering my understanding of code is like saying that you should always use C++ over C# because C# makes you more productive - or it's like suggesting that a digital painter who uses Photoshop and a drawing tablet doesn't still need to understand lighting and materials in the same way that a traditional painter would.
R# isn't going to make you a better coder, but it will make you faster and make the experience of writing and navigating code much more pleasant.
I agree with you that embracing Visual Studio and R# stimulates a sort of addiction that prevents developers from really deepening their knowledge of non-Microsoft technology (in the sense of development, because .NET/C# have long been standardized and can also run on *nix; but this is off-topic).
The point is that tools (or productivity tools in this case) are neutral. They can be used to enforce theoretically correct techniques or to spread the code with "code smells".
So if a developer lacks a solid (in all senses, acronym included) basis, he will produce a bad design with any tool he uses.
Regards, Giacomo
Another example is that in almost every other language your project corresponds to a folder, or some friendly configuration file (package.json or whatever), but in .NET you have project files, like csproj, which are not human readable/editable, so you are forced to use the IDE. Visual Studio is a gigantic IDE where you can do things like run integration tests, manage a database, and of course write some code.
People often give me uphill for that; I get where you are coming from.
ReSharper and most other refactoring tools slow my IDE down to the point where it's actually catching up with my typing (except CodeRush, which isn't *too* bad). That's a problem. When I hit a key I had better see it on the monitor, immediately. It has an effect on everything from the 3,000 LOC projects I work on to the 50,000 LOC projects (to a much larger degree, obviously). Ever wondered why Visual Studio only comes with basic refactorings? Likely because the folks on the Windows team use Visual Studio, and anything more elaborate would bog them down too much.
I work in a VM so just throwing more hardware at the problem won't fix it, and I'll take the productivity boost of VMs over a refactoring tool any day.
I guess ReSharper is why a lot of people are complaining about how slow VS2012 is, because for most people when they talk about Visual Studio they actually mean Visual Studio + ReSharper. It's probably why I don't have the same problem as them and personally think it's the fastest Visual Studio released; the ReSharper plugin must just be a little new/unoptimised at the moment.
And things like JustCode are a waste of time.
But intellisense is the most wonderful thing ever!
The day this was added, my programming skills increased by a huge step. I could explore and discover all the libraries. A hobby of mine is now to create an empty WinForms app, add a button, a timer, and just start coding... "Dim foo as System.... Hmm what am I gonna do today...? Let's explore this library.."
Just to say, intellisense made me a better and MUCH faster coder. It's perfect for people like me who will never ever open a paper book again, and would rather learn by doing.
> How do you even measure productivity?
There are some standards for measuring our productivity:
http://it-cisq.org/omg-adopts-automated-function-point-specification/
And some tools are able to automatically compute function point based on this standard.
On my side, I'm much more efficient using ReSharper, even to read code.
Some features, like 'Goto Implementation', really save time, which can then be used ... to implement something useful :).
Gilles
Last year, I was hired at a private software development company, which used R#, and since then I have been loving it. But I don't let it be what defines my productivity. I learned a lot from the code snippets and suggestions it makes, and nowadays I hardly receive a suggestion from it; I have learned most of its tricks to improve the readability of the code. To this day, I use it mostly to fill in code that we had designed as code snippets from our standards, but we can work without it just fine.
Another productivity tool that I use alongside R# is Power Tools, mostly for its search and localization of variables in the code. Selecting a variable and pressing ctrl+shift+up/down to navigate through the code is pretty awesome!
As far as learning about the language I'm using, I have to disagree with you there. I often use IntelliSense to find available methods to call on a given object, and the documentation on those methods gets displayed, so it's convenient for me if I'm working with a framework or portion of .NET I'm unfamiliar with. I also found ReSharper's ability to refactor loops into LINQ code to be very educational when I was starting out with LINQ. It would condense loops to LINQ when I couldn't see how to express it and would sometimes highlight methods that I did not realize existed, helping me understand what LINQ could do and how to think about using it.
In the end the basic truth is that your mileage will vary. What you get out of a tool will depend on how you approach its use. Productivity tools can help you explore existing code and how it works, and they can help you learn how to use unfamiliar frameworks, but you have to be thinking about what the tool is doing.
IMO, the most important "ility" is READABILITY. These productivity tools write the most unreadable code I have ever seen. Your code should be written in a way that a Jr. Developer who knows a different language can read it (without comments) at 2am, half asleep.
All of that being said, now that I have it, there are some conveniences, mainly the renaming stuff (yes, I know VS has that, but I'd really like a dialog that allowed the rename to happen instead of trying to find the fiddly little drop-down; I'm not a fan of ReSharper's inline renamer, largely because it doesn't play well with ViEmu, but I digress), "find all usages" (which is a little more honed than global find, but I still use the latter quite a bit) and "Refactor method" which, I have to admit, /when it gets it right/, can do the job faster than I can. The only problem with that is that I've found it to only be about 80% correct, producing really wonky refactors at other times -- but I can quickly undo and hand-craft as required. A colleague of mine tried to force me to rather re-write parts of code to "help" the refactor to take place -- I'm sure I don't have to point out how little sense that makes in the face of just refactoring by hand.
And I do use the ReSharper test runner, because Ctrl-U,Ctrl-L is the quickest way I've found to re-run all tests before a checkin. Ctrl-U,Ctrl-R is also well handy for running the current test or fixture (scope dependent), and Ctrl-U,Ctrl-D is a quick way to step into a test with a debugger. I also prefer the ReSharper test runner UI over the NUnit one -- but that's a minor thing, really.
At the end of the day though, I can quite easily live without these things. I don't think any of these tools are *perfect*, but what is? The parts of ReSharper which work for me shave seconds off of my work day here and there - and that all adds up, so I appreciate them where they help. But I like to stick to the idea of a tool for a task: if it enhances my work without detracting from it, then it's useful. And if it doesn't work for you, don't use it. I could more easily part with ReSharper than ViEmu, to be totally honest.
I tried Resharper--for about one week. The myriad of options regarding what to flag as "wrong" coding style made my head spin. So I figured I'd use the defaults--BIG mistake. It seemed everything I was writing was wrong. I spent more time trying to "fix" (and research) my Resharper-flagged "mistakes" than actually writing real code. I happily uninstalled it.
It doesn't matter what you use; the problem comes when the editor/IDE does work you don't know about and don't understand well - then you are dependent, since it does something you couldn't do without it. I like Visual Studio very much, especially the debugger and IntelliSense, but I don't need it to produce quality code; it just helps me do things faster, or helps me learn new libraries without going through the reference.
Ruby and JavaScript are different, since they are dynamic languages, so the need for static typing and reference tracking isn't there as much. But those languages are, on the other hand, tougher to debug, and I haven't seen a debugger for those languages as good as the Visual Studio one for .NET.
Sebastian
I totally agree with the author. I've tried ReSharper and couple of other "productivity tools", and found that they just distract me.
A software engineer is paid for solving problems, not for writing code.
My motto is: if you can deliver, I don't care how you did it. If I want you to move my stuff somewhere in a specified amount of time, I don't care whether you use a truck, a Ferrari, or do the job on foot. It is really whatever suits any of us! You don't get better by doing all your refactoring by hand; rather, have a tool do that for you.
This argument reminds me of back when Linux had no decent (if any) user interface, but some people used it just to feel like "wizards" :) Writing code is not a static thing. Tools, languages, etc. evolve. We have to evolve as well. Some will say "I don't use LINQ. Looks cheesy and I am an old-schooler", or "I don't use .NET. I am a C++ guy who is used to having control of my resources". As we evolve, arguments evolve. Now even I am using .NET (yes, I was that guy who wanted to write compact C/C++ code with full control), and here we are talking about... productivity tools! :)
I do not use any of these tools (apart from VS, of course), not because I am a purist; I just haven't ever looked for them. Now that I've seen what R# can do, I am really tempted!
In fact, it does help you learn if you observe what changes it has made to your code. I believe that after a few observations you will start writing code in a better manner, so that you don't need such tools for the same reason anymore.
I would say such productivity tools help you write good code for your intentions, but they will not help you improve your intentions in most cases. For example, if you are writing a foreach loop to get some value, the tool will suggest converting it to LINQ, but it will not help you implement Parallel.ForEach, which might improve performance in some cases at the cost of some complexity. So such a tool helps you enhance the code for your intentions, but will not enhance your intentions for better code.
Outside-In Test-Driven Development Pluralsight course
In case you missed the Tweet, the Google+ share or the announcement, my first course for Pluralsight, called Outside-In Test-Driven Development, is now available.
Comments
It is a very good course although some of the latter modules were a bit too quick for me. It must be due to my lack of experience.
On another separate note, I know that you are going to be giving a course on Dependency Injection in London with Skills Matter. Will it be covering the material in Outside-In TDD? Would it be possible for you to provide more details than what's on the site?
TIA,
David
Partial Type Name Role Hint
This article describes how object roles can be indicated by parts of a type name.
In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is to let a part of a class' name convey that information. This is often very useful when using Convention over Configuration.
While a class can have an elaborate and concise name, a part of that name can communicate a particular role. If the name is complex enough, it can hint at multiple roles.
Example: Selecting a shipping Strategy #
As an example, consider the shipping Strategy selection problem from a previous post. Does attribute-based metadata like this really add any value?
[HandlesShippingMethod(ShippingMethod.Express)]
public class ExpressShippingCostCalculator : IBasketCalculator
Notice how the term Express appears twice on two lines of code. You could successfully argue that the DRY principle is being violated here. This becomes even more apparent when considering the detached metadata example. Here's a static way to populate the map (in F# just because I can):
open System.Collections.Generic

let map = Dictionary<ShippingMethod, IBasketCalculator>()
map.Add(ShippingMethod.Standard,
StandardShippingCostCalculator())
map.Add(ShippingMethod.Express,
ExpressShippingCostCalculator())
map.Add(ShippingMethod.PriceSaver,
PriceSaverShippingCostCalculator())
This code snippet uses some slightly unorthodox (but still valid) formatting to highlight the problem. Instead of a single statement per shipping method, you could just as well write an algorithm that populates this map based on the first part of each calculator's name. Follow that train of thought to its logical conclusion, and you don't even need the map:
public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;

    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }

    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let t = c.GetType()
                where t.Name.StartsWith(shippingMethod.ToString())
                select c).First();
    }
}
This implementation uses the start of the type name of each candidate as a Role Hint. The ExpressShippingCostCalculator already effectively indicates that it calculates shipping cost for the Express shipping method.
This is an example of Convention over Configuration. Follow a simple naming convention, and things just work. This isn't an irreversible decision. If, in the future, you discover that you need a more elaborate selection algorithm, you can always modify the ShippingCostCalculatorFactory class (or, if you wish to adhere to the Open/Closed Principle, add an alternative implementation of IShippingCostCalculatorFactory).
Example: ASP.NET MVC Controllers #
The default routing algorithm in ASP.NET MVC works this way. An incoming request to /basket/1234 is handled by a BasketController instance, /product/3457 by a ProductController instance, and so on.
The ASP.NET Web API works the same way, too.
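As a sketch of what that convention looks like in practice (the route template below is an assumption for illustration; it isn't the out-of-the-box default), a request to /basket/1234 could be dispatched like this:

using System.Web.Mvc;

// Assumes a route registered as
// routes.MapRoute("Default", "{controller}/{id}", new { action = "Index" });
// With that route in place, /basket/1234 is handled by this class
// solely because its name starts with 'Basket' and ends with
// 'Controller'.
public class BasketController : Controller
{
    public ActionResult Index(int id)
    {
        // Look up basket 1234 here; the body is hypothetical.
        return this.View();
    }
}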
Summary #
Using a part of a type name as a Role Hint is very common when using Convention over Configuration. Many developers react strongly against this approach because they feel that the loss of type safety and the use of Reflection makes this a bit 'too magical' for their tastes. However, even when using attributes, you can easily forget to add an attribute to a class, so in the end you must rely on testing to be sure that everything works. The type safety of attributes is often an illusion.
The great benefit of Convention over Configuration is that it significantly cuts down on the number of moving parts. It also 'forces' you (and your team) to write more consistent code, because the overall application is simply not going to work if you don't follow the conventions.
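As an illustration of relying on testing, here's a hypothetical convention test, reusing the simplified default-constructor calculators from the example above (real calculators would have dependencies). It fails at test time, rather than at run time, if some ShippingMethod has no matching calculator:

using System;
using Xunit;

public class ShippingConventionTests
{
    [Fact]
    public void EveryShippingMethodHasAMatchingCalculator()
    {
        var candidates = new IBasketCalculator[]
        {
            new StandardShippingCostCalculator(),
            new ExpressShippingCostCalculator(),
            new PriceSaverShippingCostCalculator()
        };
        var sut = new ShippingCostCalculatorFactory(candidates);

        foreach (ShippingMethod m in
            Enum.GetValues(typeof(ShippingMethod)))
        {
            // GetCalculator throws if no candidate's type name
            // starts with the shipping method's name.
            Assert.DoesNotThrow(() => sut.GetCalculator(m));
        }
    }
}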
Comments
I've used the MVP pattern in Unity3D game development, where I had e.g. MonsterView and MonsterPresenter autowired by assembly scanning. As a result, I have IPresenter as input into the View, and an IoC container that discovers and injects the correct Presenter implementation into the View. I also wrote an additional test which asserts that every view has a corresponding presenter, so that I would discover convention violations not at run time, but when the tests run. Just reduces feedback time a little bit.
This idea came after watching your "Conventions: Make your code consistent" presentation. Thanks.
Role Interface Role Hint
This article describes how object roles can be indicated by the use of Role Interfaces.
In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is by implementing one or more Role Interfaces. As the name implies, a Role Interface describes a role an object can play. Classes can implement more than one Role Interface.
Example: Selecting a shipping Strategy #
As an example, consider the shipping Strategy selection problem from the previous post. That example seemed to suffer from the Feature Envy smell because the attribute had to expose the handled shipping method as a property in order to enable the selection mechanism to pick the right Strategy.
Another alternative is to define a Role Interface for matching objects to shipping methods:
public interface IHandleShippingMethod
{
    bool CanHandle(ShippingMethod shippingMethod);
}
A shipping cost calculator can implement the IHandleShippingMethod interface to participate in the selection process:
public class ExpressShippingCostCalculator : IBasketCalculator, IHandleShippingMethod
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Perform some complicated price calculation based on the
         * basket argument here. */
        return 1337;
    }

    public bool CanHandle(ShippingMethod shippingMethod)
    {
        return shippingMethod == ShippingMethod.Express;
    }
}
An ExpressShippingCostCalculator object can play one of two roles: It can calculate basket prices and it can handle basket calculations related to shipping methods. It doesn't have to expose the shipping method it handles as a property, which enables some more sophisticated scenarios, like handling more than one shipping method, or handling a certain shipping method only if some other set of conditions is also met.
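As a hypothetical example of such a scenario (the class and its rules are made up for illustration), a calculator could handle two shipping methods, but only during a holiday season:

using System;

public class HolidayShippingCostCalculator :
    IBasketCalculator, IHandleShippingMethod
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Placeholder calculation. */
        return 42;
    }

    public bool CanHandle(ShippingMethod shippingMethod)
    {
        // Handles both Standard and Express, but only in December.
        return (shippingMethod == ShippingMethod.Standard
            || shippingMethod == ShippingMethod.Express)
            && DateTime.Now.Month == 12;
    }
}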
You can implement the selection algorithm like this:
public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;

    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }

    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let handles = c as IHandleShippingMethod
                where handles != null && handles.CanHandle(shippingMethod)
                select c).First();
    }
}
Notice that because the implementation of CanHandle can be more sophisticated and conditional on the context, more than one of the candidates may be able to handle a given shipping method. This means that the order of the candidates matters. Instead of selecting a Single item from the candidates, the implementation now selects the First. This provides a fall-through mechanism where a preferred, but specialized candidate is asked before less preferred, general-purpose candidates.
This particular definition of the IHandleShippingMethod interface suffers from the same tight coupling to the ShippingMethod enum as the previous example. One fix may be to define the shipping method as a string, but you could still successfully argue that even implementing an interface such as IHandleShippingMethod in a Domain Model object mixes architectural concerns. Detached metadata might still be a better option.
Summary #
As the name implies, a Role Interface can be used as a Role Hint. However, you must be wary of pulling in disconnected architectural concerns. Thus, while a class can implement several Role Interfaces, it should only implement interfaces defined in appropriate layers. (The word 'layer' here is used in a loose sense, but the same considerations apply for Ports and Adapters: don't mix Port types and Adapter types.)
Comments
Could you expand upon how you might implement the fall-through mechanism you mentioned in your IShippingCostCalculatorFactory implementation, where more than one candidate can handle the shippingMethod?
How would you sort your IEnumerable<IBasketCalculator> candidates in GetCalculator() so that the candidate returned by First() is the one specifically meant to handle the ShippingMethod when one exists, and the default implementation when a specific one doesn't exist?
I considered using FirstOrDefault(), then returning the default implementation if the result of the query was nothing, but my default IHandleShippingMethod implementation always returns True from CanHandle() - I don't know what other value it could return.
You could have super-specialized IBasketCalculator implementations that (e.g.) are only active at certain times of day, and you could have the ones I've shown here, and then you could have a fallback that implements IHandleShippingMethod.CanHandle by simply returning true no matter what the input is. If you put this fallback implementation last in the injected candidates, it's only going to be picked up by the First method if no other candidate (before it) returns true from CanHandle.
Thus, there's no reason to sort the candidates within GetCalculator - in fact, it would be an error to do so. As I wrote above, "the order of the candidates matters."
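A sketch of such a fallback (the class name is made up for illustration) could look like this; placed last among the injected candidates, it's only selected when no earlier candidate accepts the shipping method:

public class FallbackShippingCostCalculator :
    IBasketCalculator, IHandleShippingMethod
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* A general-purpose calculation would go here. */
        return 100;
    }

    public bool CanHandle(ShippingMethod shippingMethod)
    {
        // Accepts anything; relies on being ordered last.
        return true;
    }
}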
Metadata Role Hint
This article describes how object roles can be indicated by metadata.
In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is by leveraging metadata. In .NET that would often take the form of attributes, but you can also maintain the metadata in a separate data structure, such as a dictionary.
Metadata can provide useful Role Hints when there are many potential objects to choose from.
Example: Selecting a shipping Strategy #
Consider a web shop. When you take your shopping basket to checkout you are presented with a choice of shipping methods, e.g. Standard, Express, and Price Saver. Since this is a web application, the user's choice must be communicated to the application code as some sort of primitive type. In this example, assume an enum:
public enum ShippingMethod
{
    Standard = 0,
    Express,
    PriceSaver
}
Calculating the shipping cost for a basket may be a complex operation that involves the total weight and size of the basket, as well as the distance it has to travel. If the web shop has geographically distributed warehouses, it may be cheaper to ship from a warehouse closer to the customer. However, if the closest warehouse doesn't have all items in stock, there may be a better way to optimize profit. Again, that calculation likely depends on the shipping method chosen by the customer. Thus, a common solution is to select a shipping cost calculation Strategy based on the user's selection. A simplified example may look like this:
public class BasketCostCalculator
{
    private readonly IShippingCostCalculatorFactory shippingFactory;

    public BasketCostCalculator(
        IShippingCostCalculatorFactory shippingCostCalculatorFactory)
    {
        this.shippingFactory = shippingCostCalculatorFactory;
    }

    public int CalculatePrice(
        ShoppingBasket basket, ShippingMethod shippingMethod)
    {
        var shippingCalculator =
            this.shippingFactory.GetCalculator(shippingMethod);
        return shippingCalculator.CalculatePrice(basket) + basket.Total;
    }
}
A naïve attempt at an implementation of the IShippingCostCalculatorFactory may involve a switch statement:
public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
{
    switch (shippingMethod)
    {
        case ShippingMethod.Express:
            return new ExpressShippingCostCalculator();
        case ShippingMethod.PriceSaver:
            return new PriceSaverShippingCostCalculator();
        case ShippingMethod.Standard:
        default:
            return new StandardShippingCostCalculator();
    }
}
Now, before you pull Refactoring at me and tell me to replace the enum with a polymorphic type, I must remind you that at the boundaries, applications aren't object-oriented. At some place, close to the application boundary, the application must translate an incoming primitive to a polymorphic type. That's the responsibility of something like the ShippingCostCalculatorFactory.
There are several problems with the above implementation of IShippingCostCalculatorFactory. In the simplified example code, all three implementations of IBasketCalculator have default constructors, but that's not likely to be the case. Recall that calculating shipping cost involves complicated business rules. Not only are those classes likely to need all sorts of configuration data to determine price per weight range, etc. but they might even need to perform lookups against external systems - such as getting a quote from an external carrier. In other words, the ExpressShippingCostCalculator, PriceSaverShippingCostCalculator, and StandardShippingCostCalculator are unlikely to have default constructors.
There are various ways to implement such an Abstract Factory, but none of them may fit perfectly. Another option is to associate metadata with each implementation. Using attributes for such purpose is the classic .NET solution:
public class HandlesShippingMethodAttribute : Attribute
{
    private readonly ShippingMethod shippingMethod;

    public HandlesShippingMethodAttribute(ShippingMethod shippingMethod)
    {
        this.shippingMethod = shippingMethod;
    }

    public ShippingMethod ShippingMethod
    {
        get { return this.shippingMethod; }
    }
}
You can now adorn each IBasketCalculator implementation with this attribute:
[HandlesShippingMethod(ShippingMethod.Express)]
public class ExpressShippingCostCalculator : IBasketCalculator
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Perform some complicated price calculation based on the
         * basket argument here. */
        return 1337;
    }
}
Obviously, the other IBasketCalculator implementations get a similar attribute, only with a different ShippingMethod value. This effectively provides a hint about the role of each IBasketCalculator implementation. Not only is the ExpressShippingCostCalculator a basket calculator; it specifically handles the Express shipping method.
You can now implement IShippingCostCalculatorFactory using these Role Hints:
public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;

    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }

    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let handledMethod = c
                    .GetType()
                    .GetCustomAttributes<HandlesShippingMethodAttribute>()
                    .SingleOrDefault()
                where handledMethod != null
                    && handledMethod.ShippingMethod == shippingMethod
                select c).Single();
    }
}
This implementation is created with a sequence of IBasketCalculator candidates and then selects the matching candidate upon each GetCalculator method call. (Notice that I decided that candidates was a better Role Hint than e.g. calculators.) To find a match, the method looks through the candidates and examines each candidate's [HandlesShippingMethod] attribute.
You may think that this is horribly inefficient because it virtually guarantees that the majority of the injected candidates are never going to be used in this method call, but that's not a concern.
Example: detached metadata #
The use of attributes has been a long-standing design principle in .NET, but seems to me to offer a poor combination of tight coupling and Primitive Obsession. To be fair, it looks like even in the BCL, the more modern APIs are less based on attributes, and more based on other alternatives. In other posts in this series on Role Hints I'll describe other ways to match objects, but even when working with metadata there are other alternatives.
One alternative is to decouple the metadata from the type itself. There's no particular reason the metadata should be compiled into each type (and sometimes, if you don't own the type in question, you may not be able to do this). Instead, you can define the metadata in a simple map. Remember, 'metadata' simply means 'data about data'.
public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IDictionary<ShippingMethod, IBasketCalculator> map;

    public ShippingCostCalculatorFactory(
        IDictionary<ShippingMethod, IBasketCalculator> map)
    {
        this.map = map;
    }

    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return this.map[shippingMethod];
    }
}
That's a much simpler implementation than before, and only requires that you supply the appropriately populated dictionary in the application's Composition Root.
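A sketch of that wiring in the Composition Root might look like this, reusing the calculators from the earlier examples (whose default constructors are, as noted, a simplification):

using System.Collections.Generic;

var map = new Dictionary<ShippingMethod, IBasketCalculator>
{
    { ShippingMethod.Standard, new StandardShippingCostCalculator() },
    { ShippingMethod.Express, new ExpressShippingCostCalculator() },
    { ShippingMethod.PriceSaver, new PriceSaverShippingCostCalculator() }
};
var factory = new ShippingCostCalculatorFactory(map);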
Summary #
Metadata, such as attributes, can be used to indicate the role played by an object. This is particularly useful when you have to select among several candidates that all have the same type. However, while the Design Guidelines for Developing Class Libraries still seems to favor the use of attributes for metadata, this isn't the most modern approach. One of the problems is that it is likely to tightly couple the resulting code. In the above example, the ShippingMethod enum is a pure boundary concern. It may be defined in the UI layer (or module) of the application, while you might prefer the shipping cost calculators to be implemented in the Domain Model (since they contain complicated business logic). However, using the ShippingMethod enum in an attribute and placing that attribute on each implementation couples the Domain Model to the user interface.
One remedy for the problem of tight coupling is to express the selected shipping method as a more general type, such as a string or integer. Another is to use detached metadata, as in the second example.
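A sketch of the first remedy: the attribute could store the shipping method as a string, so that the Domain Model no longer references the UI layer's ShippingMethod enum (the selection code would then compare strings instead of enum values):

public class HandlesShippingMethodAttribute : Attribute
{
    private readonly string shippingMethod;

    public HandlesShippingMethodAttribute(string shippingMethod)
    {
        this.shippingMethod = shippingMethod;
    }

    public string ShippingMethod
    {
        get { return this.shippingMethod; }
    }
}

Adorning a calculator would then look like [HandlesShippingMethod("Express")], trading some compile-time safety for looser coupling.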
Comments
Bottom line: I've landed on just using the container directly inside of these factories. I am completely on board with the idea of a single call to Resolve() on the container, but this is one case where I have decided it's "easier" to consciously violate that principle. I feel like it's a bit more explicit--developers at least know to look in the registration code to figure out which implementation will be returned. Using a map just creates one more layer that essentially accomplishes the same thing.
A switch statement is even better and more explicit (as an aside, I don't know why the switch statement gets such a bad rap). But as you noted, a switch statement won't work if the calculator itself has dependencies that need resolution. In the past, I have actually injected the needed dependencies into the factory, and supplied them to the implementations via new().
Anyway, I'm enjoying this series and getting some nice insights from it.
So an attribute on the implementation class can serve two important purposes here -- first, it does indeed make the intended role of the class explicit, and second, it can give your IoC container registration hints, particularly helpful when using assembly scanning / convention-based registrations.
The detached metadata "injecting mapping dictionary from composition root" sample is great!
In my practice I did such a mapping-thing with inheritance, and the result was a lot of factories all over the code that know about the DI container (because of lazy initialization with many dependencies inside each factory).
With some modifications, like generic parameters and lazy initialization, injecting the dictionary through the constructor of one universal factory class really could be a great solution for most cases.
Comments
Also, I wonder: what if you have an error in the summing? The error will show something like: Expected 34212, was 19857. How do you know which numbers were generated by the int generator?
Unit tests that leverage AutoFixture tend to be terser and more declarative in nature than more traditional unit tests, which tend to look more imperative. Like every other new thing, it takes some time getting used to.
That said, there are things about the String Calculator kata that make it a less than ideal fit for AutoFixture. The problem with the kata is that it's essentially just a bunch of more or less arbitrary rules (throw on negative numbers, ignore numbers bigger than 1000, support custom delimiter strings, etc.). There's no real domain being modelled here.
If there had been a more proper domain, a refactoring phase would probably have prompted me to redesign the API and introduce a custom Delimiter Value Object that I could have requested in the test, instead of requesting a Generator. That would have made the test even terser, but also, IMO, more readable. However, I didn't want to do that in this screencast, as I was concerned it would be too much of a digression.
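As an illustration of the Generator-based style referred to above, here's a sketch assuming AutoFixture 3's Generator<T> and a hypothetical StringCalculator with the kata's Add(string) method. Because the test itself draws the numbers, it always knows which values went into the expected sum:

using System.Linq;
using Ploeh.AutoFixture;
using Ploeh.AutoFixture.Xunit;
using Xunit;
using Xunit.Extensions;

public class StringCalculatorTests
{
    [Theory, AutoData]
    public void AddReturnsSumOfNumbers(
        Generator<int> generator,
        StringCalculator sut)
    {
        // Draw the numbers in the test itself, so a failure message
        // can be correlated with the exact generated values.
        var numbers = generator.Take(3).ToArray();

        var actual = sut.Add(string.Join(",", numbers));

        Assert.Equal(numbers.Sum(), actual);
    }
}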