String Calculator kata with AutoFixture exercise 1

Wednesday, 06 February 2013 14:05:30 UTC

This is the first post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast sets up the Visual Studio projects and completes the first exercise of the kata.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


Comments

Gary McLean Hall #
Any chance you could dive a bit deeper into the AutoDataAttribute subclass that you create? In the examples here and on a screencast of yours that I watched, you stop at customizing it for AutoMoq. Great examples though, really quick way of getting up to speed with AutoFixture.
2013-02-15 21:37 UTC
Gary, does this help?
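For reference, such a subclass typically amounts to nothing more than customizing a Fixture with the AutoMoq glue library before passing it to the base constructor - a minimal sketch:

public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute()
        /* Add the AutoMoqCustomization so that requests for interfaces
         * and abstract types are automatically relayed to Moq. */
        : base(new Fixture().Customize(new AutoMoqCustomization()))
    {
    }
}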
2013-02-16 08:31 UTC

String Calculator kata with AutoFixture

Wednesday, 06 February 2013 14:04:43 UTC

This post introduces the String Calculator kata done with AutoFixture.

A couple of weeks ago, at the Warm Crocodile conference in Copenhagen, Roy Osherove and I talked about AutoFixture and his String Calculator kata, and I decided to do the kata with AutoFixture 3 and make a series of screencasts out of it.

This series makes no particular attempt at explaining what AutoFixture is, so you might want to first acquaint yourself with some basics, such as the theory of Anonymous Variables, Derived Values, Equivalence Classes, and Constrained Non-Determinism. It might also be a good idea to understand how AutoFixture integrates with xUnit.net.
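If that integration is new to you, here's a minimal sketch of what an AutoFixture-powered xUnit.net test looks like (the StringCalculator class and its Add method stand in for the kata's system under test):

[Theory, AutoData]
public void AddReturnsCorrectResult(StringCalculator sut, int expected)
{
    // The AutoData attribute creates both the sut and the anonymous
    // input value, so the test contains no explicit setup code.
    var actual = sut.Add(expected.ToString());
    Assert.Equal(expected, actual);
}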

The following screencasts are available:

  1. Exercise 1
  2. Exercise 2
  3. Exercise 3
  4. Exercise 4
  5. Exercise 5
  6. Exercise 6
  7. Exercise 7
  8. Exercise 8

As a general note, I didn't focus much on refactoring in these exercises, as I didn't feel the complexity of the solution required it.

If you like these screencasts, you may also like my Pluralsight course Outside-In Test-Driven Development.


Comments

Thank you for these screencasts and for taking the friction out of Unit Testing! I have one question: how can you run all tests so quickly? I also use TestDriven.Net and have a keyboard shortcut attached to Run Tests. However, this command is contextual and only runs the current method / methods of the current class. I'm not aware of a Run all tests command (or something similar). I see you running all tests in the project even when you're outside the test class.
2013-07-31 14:43 UTC
Hi Cristian, thank you for writing. The way I use TestDriven.Net is that I have a keyboard shortcut (Shift+Alt+q, simply because that combination was available, and adequately fits my hand) assigned to TestDriven.NET.RunAllTestsInSolution, which Jamie Cansdale courteously added upon my request; I don't exactly remember at which version that command was added, but I'm currently running TestDriven.Net 3.4 Professional build 2808.
2013-07-31 18:26 UTC
Just installed the latest TestDriven.Net and I also have the RunAllTestsInSolution option now. Thanks! This is very handy.
2013-07-31 19:54 UTC

Beware of Productivity Tools

Monday, 04 February 2013 09:49:12 UTC

This article discusses developer productivity tools.

Once in a while I get into a heated discussion about the merits and demerits of ReSharper. As these discussions usually happen on Twitter, the 140 character limit isn't conducive to a nuanced debate. That's the nuanced debate I want to start here - not a rant.

This is not going to be an attack on ReSharper. In fact, I don't have a stronger opinion about ReSharper than about any other 'productivity tool', but I get dragged into discussions about ReSharper more often than about, say, JustCode or CodeRush. I guess it's because more people feel passionate about ReSharper.

In fact, I'm going to expand this discussion to a wider range of 'productivity tools', such as (but not limited to) ReSharper, JustCode, CodeRush, and the Productivity Power Tools.

Why are we even having this discussion? #

The only 'productivity tool' I currently use is Visual Studio 2012, and even that makes me uneasy. That's just my personal preference, you might say, and there's a partial truth in that. However, I'm not writing this to defend myself. Rather, I'm writing because I think you need to be aware of the issues presented here. It might make you a better developer if I can get you to actively and consciously consider a choice you may have taken for granted.

How do I even get dragged into these Twitter flame fests? Why do people even care whether or not I use a particular 'productivity tool'? First of all, I can't claim myself innocent of occasionally trolling - I just get a kick out of yanking that particular chain. There's a reason for that, and it's not just to be mischievous. I want you to reflect on your choice of tools. Don't just drink the Kool-Aid.

Still, there's a deeper, more rational reason why some people care what I do: I do give a lot of presentations about code, and during those presentations I write a lot of code. Whenever I give a talk where I code live, I always rehearse a lot and use specialized code snippets in order not to bore the audience with trivial coding. Here's an example: during one talk, someone tweeted to complain that I didn't use ReSharper. However, the purpose of giving a talk about code isn't to produce the code in the fastest possible time. The purpose is to teach code. If I code too slowly, the audience may fall asleep, but if I go too fast, no one is going to learn anything. I'm just not convinced that in this particular case, the use of a 'productivity tool' is inherently better.

Can you even live without this or that productivity tool? #

The most common reaction I get whenever people hear that I don't use their favorite 'productivity tool' is one of disbelief.

What's my beef with productivity tools? It's much deeper than a dislike for any particular tool. Charles Petzold already described his concern about Visual Studio in 2005 in a great talk titled Does Visual Studio Rot the Mind?. It's a long read, but definitely worth your while. You should go read it now.

In case you didn't want to take the time to read that article (but then: you're already reading this lengthy article), here's the gist of it: Via IntelliSense, code generation, Wizards and drag and drop, Visual Studio assists us, but it also pushes us towards writing (or not writing) code in a particular way. It railroads us.

Does it make us more productive? I don't even know how to measure developer productivity, so I can't answer that. Do we learn while coding like that? Not much, I'd say.

While Visual Studio is, in many ways, an impressive and extremely useful piece of software, it also concerns me that I'm so dependent on it. To learn new techniques, I try to follow what's going on outside the .NET/Microsoft ecosystem. Clojure looks like a very interesting language. Erlang seems to solve some hard problems in an easy way. Storm seems to be way ahead of anything Microsoft can currently offer. Ruby developers have claimed high productivity for years. Git is a better source control system than anything Microsoft offers.

However, I feel constrained by my reliance on Visual Studio. I want to learn and use those other technologies as well as .NET, so I'm certainly not looking for tools that will further strengthen my bond with Visual Studio. Using plain vanilla Visual Studio is the least I can do to broaden my horizons.

Productivity boosts #

A common argument for a 'productivity tool' is that it makes you more productive. "Without #resharper my productivity drops by 50%, I'm amazed that you can manage without it". That's an interesting statement. How do you even measure productivity?

For the sake of argument, let's for a moment pretend that programmer productivity is measured by lines of code written. There's this myth going around that a professional programmer only writes 10 lines of code per day. This is probably not true, but even so, how many lines of code do you produce on average per day? 100? 200? Are you seriously going to claim that your productivity bottleneck is determined by how fast you can type? Really? Then learn to type faster.

Consider that most code is read a lot more than it's written. Code should be optimized for reading, not writing. Thus, productivity, if it can be measured at all, should be measured by how quickly programmers can read and understand a piece of code - not by how fast it can be written.

Furthermore, if you believe that Pair Programming is one good and productive way to produce software, you must also realize that at every given moment, at least one person isn't typing at all. As Martin Fowler puts it: "that [Pair-Programming halves the productivity of developers] would be true if the hardest part of programming was typing". In my experience, this is not the case. Thus, I'm not convinced that 'productivity tools' make anyone more productive.

If you've ever looked beyond the Microsoft echo chamber in the last decade, you will have heard a particular group of developers boast unmatched productivity. Those would be Ruby on Rails developers. Lately, it seems to me that many alpha geeks gravitate towards JavaScript (and particularly Node.js). And what about Python or Clojure? In all cases it seems that the reason why cutting edge programmers are leaving .NET, in favor of other languages and platforms, is because of better productivity. What do these languages have in common? Well, the preferred development environment certainly isn't Visual Studio. These programmers 'get by' with Vim, Emacs, Sublime Text, and many other editors. Apparently, it's possible to be 'crazy productive' without Visual Studio and a 'productivity tool'.

Railroading #

As Charles Petzold points out in his excellent article, Visual Studio enforces a certain bottom-up style of programming that isn't particularly aligned with business needs. Visual Studio (with or without 'productivity tools') makes it hard (but not impossible) to do Outside-In development.

My feeling is that whenever a tool helps us in a certain way, it closes a lot of other doors on us. We may not even be aware of what we aren't being shown, but if we can shake off the helping hand, we may also be able to see other options.

I don't mind being helped by a tool once in a while, but at other times, I'd rather make an informed decision by myself. At least I think it's important to realize that being helped means that decisions are being made for me. It's not a win-win situation. I may be able to finish a task quickly, but I lose the opportunity to learn. Not only that, but the more I rely on a tool for assistance, the more dependent I become on it. There's a word for that. It's called Vendor lock-in.

Final thoughts #

All of this is highly subjective and personal. My personal style is to be very deliberate and patient. I go slowly in order to move fast.

In order to demonstrate just how slowly I go, I recorded half an hour of a TDD session. There's nothing special about this TDD session. I didn't pick it to impress anyone. I didn't pick the 'best' from a pool of a dozen candidates. I just recorded how I work and uploaded it. I dare you to watch it all the way through. It will be boring. You will see a lot of thinking time and long periods of inactivity. This is actually a typical depiction of how I work. Yet, somehow, I still manage to produce software of such quality that people keep coming back to me to pay me to do more.

If you watch just five minutes of that video, it should be clear to you that a 'productivity tool' wouldn't be of any help to me. Once I'd learned to use it, it might not slow me down, but it wouldn't make me more 'productive' either, so why should I bother with it?

This is just my opinion of 'productivity tools.' It's not a one-size-fits-all judgment. If you feel that you benefit from using your favorite 'productivity tool', I'm not going to tell you to change your ways. Likewise, don't judge me because I don't use a particular tool. Some programmers that I really respect use ReSharper. I respect them not because of, but rather in spite of, that.


Comments

Martin #
In my limited experience, tools like ReSharper helped me very much when I was new to programming.
It helped me to see stupid mistakes, like possible null references etc.
As I get more experienced I rely less on ReSharper; I mainly use it out of convenience, and I think I could do without it.
2013-02-04 11:42 UTC
I like to say that Visual Studio would be an excellent IDE if it wasn't so bad to write code in: the editor really isn't nearly as good as Vim or Emacs, and there simply is way too much chrome with all the damn sidebars, bottom-bars, top-bars, popup-windows etc. That is why I use Vim, or Emacs with evil-mode, whenever I don't have to write .NET.

I don't use ReSharper so much for productivity enhancement but rather to ease the cognitive load when navigating a big codebase. The fuzzy-find ability to go to a type, symbol or file really makes it a lot easier on the brain, because you don't have to remember exactly where to find the particular method that you only remember part of the name of.

The same functionality can of course be had in Vim and Emacs, but in a much more light-weight approach, with small modules each doing one thing well. I don't know what it is with Windows and the obsession with big monolithic applications (and tools), compared to Unix, which is much more about small, well-integrated but pluggable modules.
2013-02-04 11:49 UTC
While I do agree with probably the main part of your concerns, namely "I want you to reflect on your choice of tools. Don't just drink the Cool Aid.", you argue against productivity tools by using a lot of generalizations.

You mention it is hard to measure programmer productivity, yet you argue a lot from a very narrow view of programmer productivity, namely lines of code written.

Productivity boosts

"Consider that most code is read a lot more than it's written. Code should be optimized for reading, not writing. Thus, productivity, if it can be measured at all, should be measured by how quickly programmers can read and understand a piece of code - not by how fast it can be written."

Productivity tools not only focus on writing lines of code, but also aid precisely in faster navigation of the code and optimized representations of it. This is exactly what one of the tweeters mentioned: "hmm.. R# is good for navigating around. Shortcuts to follow application flow, up and down. How do you do that without r#?" These features can also be found in Vim/Emacs. I haven't used these text editors myself, but if you read a bit about them, they are simplistic, yet very powerful.

Productivity Power Tools offers vertical tabs, allowing you to have more tabs open at once, and even to color-code them. ReSharper allows more fine-grained color coding; e.g. extension methods can be given a different color, making them easier to identify. All of these enhancements have nothing to do with writing code.

Railroading

I think it's important to differentiate between different types of railroading here. I am not a big fan of "automated processes" either, e.g. code snippets or wizards. They indeed tend to close doors for you, preventing you from making more informed decisions. In fact, I argued before that code snippets actually promote bad design: http://whathecode.wordpress.com/2010/11/03/why-code-snippets-promote-bad-design/

But what about the hint system of ReSharper? It makes you aware of potential problems in your code, or of alternate approaches. That's what makes it even a good learning tool, as Martin commented before, or it can just remind you of things you didn't immediately think about (e.g. access to modified closures).

To summarize, I think that in order to have a more honest discussion about productivity tools, you should at least also look into where they can help you, instead of generalizing that you don't like them because they have a few particular features you dislike.

P.s.: I tried using some markup HTML, but none seems to work. :-(
2013-02-04 15:33 UTC
Eric #
Just a quick comment about developer productivity here, as it relates to your article. I strongly suspect that the "without #resharper my productivity drops by 50% ..." comment has nothing to do with 'lines of code' or 'typing speed', as you imply. Rather, it's a comment on 'how fast can I get my code to look like I want it' (presumably to make the code more readable, understandable, and maintainable). This includes extracting methods, including new methods in interfaces, creating new classes and moving them to new files, referencing and 'using' external libraries, etc.

All of these things can be tedious to do 'manually', and can be significantly less tedious with a productivity tool like resharper. As long as what *you* want the code to look like matches what *resharper* wants the code to look like, that is.
2013-02-04 16:49 UTC
I have a great deal of respect for you; I think on this topic, while you make valid points, your conclusion doesn't follow.

When I worked through the book Seven Languages in Seven Weeks, the lack of IntelliSense was painful. I typically had a reference web page open all the time, and had to keep looking away from the code to look up a function. With experience, the time for this is reduced, but it's not eliminated. I remember the same sort of back-and-forth from my xBase days.

I've seen you code on Pluralsight (an experience I'd recommend to others). You use the Visual Studio refactoring tools for outside-in development, writing a new method name and having the IDE create the stub. You also use NuGet extensively; I'd call that a major productivity tool.

Martin Fowler wrote Refactoring thirteen years ago. I got the impression that he hoped much of it would be automated, and this has come to pass; computers are good at repetitive tasks. My IDE can Extract Method more quickly, accurately, and safely than I can.

ReSharper has more and better refactorings than Visual Studio, in my opinion.

Navigating complex code bases, including legacy code bases, is an area in which ReSharper shines. Anything that can help me understand the code is a boon.

I agree that developers have to learn how to do these things by hand. After that, I don't feel that endlessly repeating the same routine tasks is a good use of my or my employer's time.
2013-02-04 18:54 UTC
Nelson LaQuet #
You're comparing some sort of abstraction to R# - and I think that's your biggest mistake here. R# does not abstract or hide code at all - in fact, it's quite the opposite. My favorite features of R# are the keyboard shortcuts and code analysis tools. With R# in hand, I can navigate an entire class hierarchy, jump through methods, into (decompiled) methods and even *easily* navigate stack traces that I copied from an external source (such as ASP.NET's yellow screen of death). I can do this almost as quickly as I can think, which removes almost all of the friction of the user interface so that it's just me, and _my_ code.

The code generation of R# isn't the same thing as the code generation for, let's say, a WYSIWYG like Dreamweaver. It's not that R# will tell me "I need a class here" or "this should be a field that is filled in by a ctor parameter." Instead, if _I_ decide I need a new class, or _I_ decide I want to pull up some public members of a class into an interface, I can do those tasks with a simple keystroke. Sometimes the code generation "hints" get a bit annoying - for example, although I am an avid LINQ user and functional programming fan, there are many times where the readability would suffer if I let R# turn a foreach loop into a single two-thousand character expression. But that choice is still up to me - and the times where R# does suggest code that ends up being cleaner than what I had written, I take advantage of that.

So no, R# doesn't hide your code from you, it just removes some of the brain-to-code barrier that is present in all languages. Saying R# is hindering my understanding of code is like saying that you should always use C++ over C# because C# makes you more productive - or it's like suggesting that a digital painter who uses Photoshop and a drawing tablet doesn't still need to understand lighting and materials in the same way that a traditional painter would.

R# isn't going to make you a better coder, but it will make you faster and make the experience of writing and navigating code much more pleasant.
2013-02-04 23:33 UTC
Giacomo Stelluti Scala #
Hi Mark,
I agree with you that embracing Visual Studio and R# stimulates a sort of addiction that prevents developers from really deepening their knowledge of non-Microsoft technology (in the sense of development tooling, because .NET/C# have long been standardized and can also run on *nix; but this is off-topic).

The point is that tools (or productivity tools in this case) are neutral. They can be used to enforce theoretically correct techniques or to litter the code with "code smells".

So if a developer lacks a solid (in all senses, acronym included) foundation, he will produce a bad design with any tool he uses.

Regards, Giacomo
2013-02-05 09:00 UTC
Another point to be added here is that .NET is TIED to Visual Studio; there is no way you can write C# outside Visual Studio, and Microsoft seems to put a lot of effort into this; for instance, NuGet worked only inside Visual Studio at the beginning.

Another example is that in almost every other language your project matches a folder, or some friendly configuration file (package.json or whatever), but in .NET you have project files, like csproj, which are not human readable/editable, so you are forced to use this IDE. Visual Studio is a gigantic IDE where you can do things like integration tests, managing a database, and of course writing some code.
2013-02-05 10:57 UTC
Jonathan Dickinson #
Couldn't agree more. The only extensions that interact with code that I have installed are Productivity Power Tools (with only one or two of its more benign extensions enabled) and GhostDoc.

People often give me uphill for that; I get where you are coming from.

ReSharper and most other refactoring tools slow my IDE down to the point where it's actually catching up with my typing (except CodeRush; it isn't *too* bad). That's a problem. When I hit a key I had better see it on the monitor, immediately. It has an effect on everything from the 3,000 LOC projects I work on to the 50,000 LOC projects (on the latter to a much larger degree, obviously). Ever wondered why Visual Studio only comes with basic refactorings? Likely because the folks on the Windows team use Visual Studio, and anything more elaborate would bog them down too much.

I work in a VM so just throwing more hardware at the problem won't fix it, and I'll take the productivity boost of VMs over a refactoring tool any day.

I guess ReSharper is why a lot of people are complaining about how slow VS2012 is, because for most people, when they talk about Visual Studio they actually mean Visual Studio + ReSharper. It's probably why I don't have the same problem as them and personally think it's the fastest Visual Studio release yet; the ReSharper plugin must just be a little new/unoptimized at the moment.
2013-02-06 08:27 UTC
NNM #
I agree with most of that.
And things like JustCode are a waste of time.

But IntelliSense is the most wonderful thing ever!
The day this was added, my programming skills increased by a huge step. I could explore and discover all the libraries. A hobby of mine is now to create an empty WinForms app, add a button, a timer, and just start coding... "Dim foo as System.... Hmm, what am I gonna do today...? Let's explore this library.."
Just to say, IntelliSense made me a better and MUCH faster coder. It's perfect for people like me who will never ever open a paper book again, and would rather learn by doing.
2013-02-06 10:01 UTC
pelumi #
I use Visual Studio when I do .NET, and I doubt you can do anything meaningful in .NET without Visual Studio or some other .NET IDE; just think of it - NuGet, IntelliSense, the built-in debugger, the perfect handshake with SQL Server; the list is endless. But when I go outside the MS domain and play with NodeJS, Django etc., Sublime Text 2 gladdens my heart. I think we should apply the right tool to the right problem, and no real programmer should tie his apron to just one technology domain.
2013-02-06 11:44 UTC
Gilles #
Regarding:
> How do you even measure productivity?

There are some standards for measuring our productivity:
http://it-cisq.org/omg-adopts-automated-function-point-specification/

And some tools are able to automatically compute function points based on this standard.

On my side, I'm much more efficient using ReSharper, even to read code.
Some features, like 'Go to Implementation', really save time which can be used ... to implement something useful :).

Gilles

2013-02-06 11:48 UTC
JohnB #
Someone talked about their dependence upon IntelliSense; for me, this is the most aggravating part of VS. The number of times I've hovered over a variable in a C++ DLL module and nothing happens, or it takes seemingly forever for something to appear. I felt like chucking it out of the window.
2013-02-06 13:57 UTC
Winston #
I've never attempted to use a productivity tool that wasn't the opposite. I come from a pre-GUI, pre-Visual Studio, pre-Windows, pre-DOS world where programmers had to know where *all* of their files actually were, because we not only had to edit them, we had to back them up. I programmed for Windows 2.x and 3.x without the "convenience" of a symbolic debugger, because when compiled with the debug extensions, the code wouldn't fit in the base memory partition (remember those?). I already knew how to debug without such "convenience". More than once I found problems my colleagues were stumped on with old-fashioned debugging techniques - including bugs in their symbolic debugger.

For the past 10 years I've been more or less forced to use Visual Studio. At least in the C++ environment I know where it hides most of the definitions. My single biggest problem with it is accidentally messing up my screen layout and trying to figure out how it happened and how to get it back. IntelliSense works great except when it doesn't. When I can type faster than my near state-of-the-art computer can file what I just typed, someone is wasting an awful lot of cycles, doing what I can't fathom.

And no one has mentioned Microsoft's absolutely dreadful resource editor. Oh, the graphic part of it has improved - I used to type my dialog templates directly into the .rc file. More accurate and faster. Now I can edit the look, but its ability to botch resource IDs and duplicate them continues to this day, 24 years after I first cursed the Microsoft resource editor. I am to the point that any time I do much of anything with resources, I manually edit the resource ID file and check it for the duplicates that are sure to exist.

Before my current marriage to Visual Studio, I worked in pure C and Unix for 13 years, using a text editor I wrote myself. In spite of its severe limitations, and paying actual money (hundreds of cups of coffee I'll never see again) for Other People's Editors, I stuck with my own. Couldn't stand any of them. Yes, there's something unbeatable about working in your own code base. Come to think of it, the only productivity tools I ever had that improved MY productivity are the ones I wrote myself. The idea that a team of geeks thousands of miles away know how I program, or how I ought to program, is laughable.
2013-02-06 14:38 UTC
Winston #
BTW, what is Resharper? Never heard of it. Lucky me.
2013-02-06 14:39 UTC
Shadow #
Of course, let's go back to horses, because airplanes made us lazy.
2013-02-06 15:42 UTC
Carlos López #
When I started to work, I never knew about productivity tools. I was always dependent on the IntelliSense of the environment I was working in.

Last year I was hired at a private software development company which used R#, and since then I have been loving it. But I don't let it be what defines my productivity. I learned a lot from the code snippets and suggestions it makes, and nowadays I hardly receive a suggestion from it; I have learned most of its tricks to improve the readability of the code. These days I use it mostly to fill in code that we had designed as code snippets from our standards, but we can work without it just fine.

Another productivity tool that I use alongside R# is Power Tools, mostly for its search and localization of variables in the code. Selecting a variable and pressing Ctrl+Shift+Up/Down to navigate through the code is pretty awesome!
2013-02-06 16:12 UTC
David Priebe #
In general I do agree that productivity tools can be a bit overblown. I've used text editors and IDEs and Visual Studio etc. The biggest thing I get from ReSharper and other tools on a daily basis is the elimination of repetitive tasks. Creating a property is easy. Type prop-tab and the code structure appears. Then I just have to dive in and do any customization I need. Do I want to convert the property to one backed by a variable? That's just a few keystrokes away. Could I do it by typing in almost the same amount of time? Sure. I've never felt that I couldn't code without productivity tools, but they do make some tasks less tedious.

As far as learning about the language I'm using, I have to disagree with you there. I often use IntelliSense to find available methods to call on a given object, and the documentation on those methods gets displayed, so it's convenient for me if I'm working with a framework or portion of .NET I'm unfamiliar with. I also found ReSharper's ability to refactor loops into LINQ code to be very educational when I was starting out with LINQ. It would condense loops to LINQ when I couldn't see how to express it, and would sometimes highlight methods that I did not realize existed, helping me understand what LINQ could do and how to think about using it.

In the end the basic truth is that your mileage will vary. What you get out of a tool will depend on how you approach its use. Productivity tools can help explore existing code and how it works, and they can help you learn how to use unfamiliar frameworks, but you have to be thinking about what the tool is doing.
2013-02-06 16:56 UTC
Kyle #
My take on the subject is... if you can't code properly without them, then you shouldn't be using them. You should understand, and be able to write, what ReSharper is writing for you.

IMO, the most important "ility" is READABILITY. These productivity tools write the most unreadable code I have ever seen. Your code should be written in a way that a Jr. Developer who knows a different language can read it (without comments) at 2am, half asleep.
2013-02-06 17:23 UTC
Davyd McColl #
Having only worked the last 8 months (of over 12 years of dev) at a company where ReSharper is basically embedded in the culture, I can say this: I've seen far too many occasions of forced usage of the tool when a person could have just edited the text quicker. Personally, I use ViEmu too, because I'm very used to Vi-style editing and navigation. There's been many times when I could Vi myself to the place I want to be MUCH faster than a colleague with ReSharper.

All of that being said, now that I have it, there are some conveniences, mainly the renaming stuff (yes, I know VS has that, but I'd really like a dialog that allowed the rename to happen instead of trying to find the fiddly little drop-down; I'm not a fan of ReSharper's inline renamer, largely because it doesn't play well with ViEmu, but I digress), "find all usages" (which is a little more honed than global find, but I still use the latter quite a bit) and "Refactor method" which, I have to admit, /when it gets it right/, can do the job faster than I can. The only problem with that is that I've found it to only be about 80% correct, producing really wonky refactors at other times -- but I can quickly undo and hand-craft as required. A colleague of mine tried to force me to rather re-write parts of code to "help" the refactor to take place -- I'm sure I don't have to point out how little sense that makes in the face of just refactoring by hand.

And I do use the ReSharper test runner, because Ctrl-U,Ctrl-L is the quickest way I've found to re-run all tests before a checkin. Ctrl-U,Ctrl-R is also well handy for running the current test or fixture (scope dependent), and Ctrl-U,Ctrl-D is a quick way to step into a test with a debugger. I also prefer the ReSharper test runner UI over the NUnit one -- but that's a minor thing, really.

At the end of the day though, I can quite easily live without these things. I don't think any of these tools are *perfect*, but what is? The parts of ReSharper which work for me shave seconds off of my work day here and there -- and that all adds up, so I appreciate them where they help. But I like to stick to the idea of a tool for a task: if it enhances my work without detracting from it, then it's useful. And if it doesn't work for you, don't use it. I could more easily part with ReSharper than ViEmu, to be totally honest.
2013-02-06 18:46 UTC
Mike #
I've been programming since I got my first HP calculator back in 1980. Assembly code, compilers, Opus make scripts, line editors (even on a printer terminal!!). Back in the 90's, I used Multi-Edit for writing Clipper apps--what a fantastic macro language it had!! Now I'm coding mostly in C# with Visual Studio. I LOVE Intellisense--particularly for showing all the descriptions of the methods and parameters that I write with the /// "self"-documenter.

I tried Resharper--for about one week. The myriad of options regarding what to flag as "wrong" coding style made my head spin. So I figured I'd use the defaults--BIG mistake. It seemed everything I was writing was wrong. I spent more time trying to "fix" (and research) my Resharper-flagged "mistakes" than actually writing real code. I happily uninstalled it.
2013-02-06 18:50 UTC
I've been programming for more than 19 years now; in the beginning I used the MS-DOS text editor, then Notepad, Dreamweaver, etc...

It doesn't matter what you use; the problem comes when the editor/IDE does work you don't know about and don't understand well - then you are dependent, since it does something you couldn't do without it. I like Visual Studio very much, especially the debugger and IntelliSense, but I don't need it to produce quality code; it just helps me do things faster, or helps me learn new libraries without going through the reference.

Ruby and JavaScript are different, since they are dynamic languages, so the need for static typing and reference tracking isn't there as much. But those languages, on the other hand, are tougher to debug, and I haven't seen a debugger for those languages as good as the Visual Studio one for .NET.

Sebastian
2013-02-06 23:25 UTC
Vadim #
+1
I totally agree with the author. I've tried ReSharper and a couple of other "productivity tools", and found that they just distract me.
A software engineer is paid for solving problems, not for writing code.
2013-02-07 01:45 UTC
Gregory #
I recently came across your blog and this is the first time I comment. Really good job. Now about the post...

My motto is: if you can deliver, I don't care how you did it. If I want you to move my stuff somewhere in a specified amount of time, I don't care whether you use a truck, a Ferrari, or do the job on foot. It is really whatever suits each of us! You are not getting better by hand-refactoring all the time rather than having a tool do that for you.

This argument reminds me of back when Linux had no decent user interface (if any at all), but some people used it just to feel like "wizards" :) Writing code is not a static thing. Tools, languages etc. evolve. We have to evolve as well. Some will say "I don't use LINQ. Looks cheesy and I am an old-schooler", or "I don't use .NET. I am a C++ guy that is used to having control of my resources". As we evolve, arguments evolve. Now even I am using .NET (yes, I was that guy who wanted to write compact C/C++ code with full control), and here we are talking about... productivity tools! :)

I do not use any of these tools (apart from VS, of course), not because I am a purist; I just haven't ever looked into them. Now that I've seen what R# can do, I am really tempted!
2013-02-23 21:56 UTC
Harshdeep Mehta #
Sorry to comment on an old post. In part I do agree with you, but I don't agree with the point that productivity tools stop you from learning.
In fact, they do help you learn, if you observe what changes they make to your code. I believe that after a few observations you will start writing code in a better manner, so that you no longer need such tools for the same reason.
I would say such productivity tools help you write good code for your intentions, but they will not help you improve your intentions in most cases. For example, if you are writing a foreach loop to get some value, the tool will suggest converting it to LINQ, but it will not help you implement Parallel.ForEach, which might improve performance in some cases at the cost of added complexity. So such a tool helps you enhance the code for your intentions, but will not enhance your intentions for better code.
2017-07-07 08:35 UTC

Outside-In Test-Driven Development Pluralsight course

Wednesday, 16 January 2013 22:26:15 UTC

In case you missed the Tweet, the Google+ share or the announcement, my first course for Pluralsight, called Outside-In Test-Driven Development, is now available.


Comments

DavidS #
Hey Mark,

It is a very good course although some of the latter modules were a bit too quick for me. It must be due to my lack of experience.

On another separate note, I know that you are going to be giving a course on Dependency Injection in London with Skills Matter. Will it be covering the material in Outside-In TDD? Would it be possible for you to provide more details than what's on the site?

TIA,

David
2013-02-03 17:47 UTC
Currently I don't have more information about the Skills Matter course than what's on their web site, but as it's a course about Dependency Injection, it's not going to cover TDD in particular. That's a different subject.
2013-02-04 15:27 UTC

Partial Type Name Role Hint

Friday, 11 January 2013 11:07:55 UTC

This article describes how object roles can be indicated by parts of a type name.

In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is to let a part of a class' name convey that information. This is often very useful when using Convention over Configuration.

While a class can have an elaborate and precise name, a part of that name can communicate a particular role. If the name is complex enough, it can hint at multiple roles.

Example: Selecting a shipping Strategy #

As an example, consider the shipping Strategy selection problem from a previous post. Does attribute-based metadata like this really add any value?

[HandlesShippingMethod(ShippingMethod.Express)]
public class ExpressShippingCostCalculator : IBasketCalculator

Notice how the term Express appears twice on two lines of code. You could successfully argue that the DRY principle is being violated here. This becomes even more apparent when considering the detached metadata example. Here's a static way to populate the map (in F# just because I can):

let map = Dictionary<ShippingMethod, IBasketCalculator>()
map.Add(ShippingMethod.Standard,
                       StandardShippingCostCalculator())
map.Add(ShippingMethod.Express,
                       ExpressShippingCostCalculator())
map.Add(ShippingMethod.PriceSaver,
                       PriceSaverShippingCostCalculator())

This code snippet uses some slightly unorthodox (but still valid) formatting to highlight the problem. Instead of a single statement per shipping method, you could just as well write an algorithm that populates this map based on the first part of each calculator's name. Follow that train of thought to its logical conclusion, and you don't even need the map:

public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;
 
    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }
 
    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let t = c.GetType()
                where t.Name.StartsWith(shippingMethod.ToString())
                select c).First();
    }
}

This implementation uses the start of the type name of each candidate as a Role Hint. The ExpressShippingCostCalculator already effectively indicates that it calculates shipping cost for the Express shipping method.

This is an example of Convention over Configuration. Follow a simple naming convention, and things just work. This isn't an irreversible decision. If, in the future, you discover that you need a more elaborate selection algorithm, you can always modify the ShippingCostCalculatorFactory class (or, if you wish to adhere to the Open/Closed Principle, add an alternative implementation of IShippingCostCalculatorFactory).
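To compose the factory, the candidates could be discovered by scanning an assembly. Here's a sketch that assumes the simplified calculators with default constructors; in a real application, a DI Container would resolve their dependencies instead of Activator:

var candidates =
    from t in typeof(ExpressShippingCostCalculator).Assembly.GetTypes()
    // Pick every concrete class that implements IBasketCalculator.
    where typeof(IBasketCalculator).IsAssignableFrom(t) && !t.IsAbstract
    select (IBasketCalculator)Activator.CreateInstance(t);
var factory = new ShippingCostCalculatorFactory(candidates.ToList());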

Example: ASP.NET MVC Controllers #

The default routing algorithm in ASP.NET MVC works this way. An incoming request to /basket/1234 is handled by a BasketController instance, /product/3457 by a ProductController instance, and so on.

The ASP.NET Web API works the same way, too.
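As a sketch (the class and action names are illustrative, and the route configuration is assumed to map the trailing URL segment to the id parameter), nothing more than the type name connects the URL to the code:

public class BasketController : Controller
{
    // The "basket" URL segment selects this class by convention,
    // because the type name is that segment plus the Controller suffix.
    public ActionResult Index(int id)
    {
        /* Look up the requested basket and return a view here. */
        return this.View();
    }
}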

Summary #

Using a part of a type name as a Role Hint is very common when using Convention over Configuration. Many developers react strongly against this approach because they feel that the loss of type safety and the use of Reflection makes this a bit 'too magical' for their tastes. However, even when using attributes, you can easily forget to add an attribute to a class, so in the end you must rely on testing to be sure that everything works. The type safety of attributes is often an illusion.

The great benefit of Convention over Configuration is that it significantly cuts down on the number of moving parts. It also 'forces' you (and your team) to write more consistent code, because the overall application is simply not going to work if you don't follow the conventions.


Comments

It's interesting how conventions can pop up in places you wouldn't even expect them to appear.
I've used the MVP pattern in Unity3D game development, where I had e.g. MonsterView and MonsterPresenter autowired by assembly scanning. As a result I have IPresenter as input into the View, and an IoC container that discovers and injects the correct Presenter implementation into the View. I also wrote an additional test, where I assert that every view has a corresponding presenter, so that I would discover convention violations not at run-time, but when the tests run. It just reduces feedback time a little bit.
This idea came after watching your "Conventions: Make your code consistent" presentation. Thanks.
2013-01-23 15:42 UTC

Role Interface Role Hint

Thursday, 10 January 2013 10:37:35 UTC

This article describes how object roles can be indicated by the use of Role Interfaces.

In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is by implementing one or more Role Interfaces. As the name implies, a Role Interface describes a role an object can play. Classes can implement more than one Role Interface.

Example: Selecting a shipping Strategy #

As an example, consider the shipping Strategy selection problem from the previous post. That example seemed to suffer from the Feature Envy smell because the attribute had to expose the handled shipping method as a property in order to enable the selection mechanism to pick the right Strategy.

Another alternative is to define a Role Interface for matching objects to shipping methods:

public interface IHandleShippingMethod
{
    bool CanHandle(ShippingMethod shippingMethod);
}

A shipping cost calculator can implement the IHandleShippingMethod interface to participate in the selection process:

public class ExpressShippingCostCalculator : 
    IBasketCalculator, IHandleShippingMethod
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Perform some complicated price calculation based on the
         * basket argument here. */
        return 1337;
    }
 
    public bool CanHandle(ShippingMethod shippingMethod)
    {
        return shippingMethod == ShippingMethod.Express;
    }
}

An ExpressShippingCostCalculator object can play one of two roles: It can calculate basket prices and it can handle basket calculations related to shipping methods. It doesn't have to expose as a property the shipping method it handles, which enables some more sophisticated scenarios like handling more than one shipping method, or handling a certain shipping method only if some other set of conditions are also met.

You can implement the selection algorithm like this:

public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;
 
    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }
 
    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let handles = c as IHandleShippingMethod
                where handles != null
                && handles.CanHandle(shippingMethod)
                select c).First();
    }
}

Notice that because the implementation of CanHandle can be more sophisticated and conditional on the context, more than one of the candidates may be able to handle a given shipping method. This means that the order of the candidates matters. Instead of selecting a Single item from the candidates, the implementation now selects the First. This provides a fall-through mechanism where a preferred, but specialized candidate is asked before less preferred, general-purpose candidates.
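As a sketch of such a fall-through (the class name is made up for this example), a general-purpose fallback can simply claim every shipping method, as long as it's placed last among the injected candidates:

public class DefaultShippingCostCalculator :
    IBasketCalculator, IHandleShippingMethod
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Perform some general-purpose price calculation here. */
        return 42;
    }

    public bool CanHandle(ShippingMethod shippingMethod)
    {
        // Claims every shipping method; because this candidate is
        // last, First only reaches it when no specialized
        // calculator matched.
        return true;
    }
}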

This particular definition of the IHandleShippingMethod interface suffers from the same tight coupling to the ShippingMethod enum as the previous example. One fix may be to define the shipping method as a string, but you could still successfully argue that even implementing an interface such as IHandleShippingMethod in a Domain Model object mixes architectural concerns. Detached metadata might still be a better option.

Summary #

As the name implies, a Role Interface can be used as a Role Hint. However, you must be wary of pulling in disconnected architectural concerns. Thus, while a class can implement several Role Interfaces, it should only implement interfaces defined in appropriate layers. (The word 'layer' here is used in a loose sense, but the same considerations apply for Ports and Adapters: don't mix Port types and Adapter types.)


Comments

Could you expand upon how you might implement the fall-through mechanism you mentioned in your IShippingCostCalculatorFactory implementation, where more than one candidate can handle the shippingMethod?

How would you sort your IEnumerable<IBasketCalculator> candidates in GetCalculator() so that the candidate returned by First() is the one specifically meant to handle the ShippingMethod when one exists, and the default implementation when a specific one doesn't exist?

I considered using FirstOrDefault(), then returning the default implementation if the result of the query was nothing, but my default IHandleShippingMethod implementation always returns True from CanHandle() - I don't know what other value it could return.

2013-08-14 15:37 UTC

You could have super-specialized IBasketCalculator implementations that (e.g.) are only active at certain times of day, you could have the ones I've shown here, and then you could have a fallback that implements IHandleShippingMethod.CanHandle by simply returning true no matter what the input is. If you put this fallback implementation last in the injected candidates, it's only going to be picked up by the First method if no other candidate (before it) returns true from CanHandle.

Thus, there's no reason to sort the candidates within GetCalculator - in fact, it would be an error to do so. As I wrote above, "the order of the candidates matters."

2013-08-14 17:39 UTC

Metadata Role Hint

Wednesday, 09 January 2013 10:42:20 UTC

This article describes how object roles can be indicated by metadata.

In my overview article on Role Hints I described how making object roles explicit can help make code more object-oriented. One way code can convey information about the role played by an object is by leveraging metadata. In .NET that would often take the form of attributes, but you can also maintain the metadata in a separate data structure, such as a dictionary.

Metadata can provide useful Role Hints when there are many potential objects to choose from.

Example: Selecting a shipping Strategy #

Consider a web shop. When you take your shopping basket to checkout you are presented with a choice of shipping methods, e.g. Standard, Express, and Price Saver. Since this is a web application, the user's choice must be communicated to the application code as some sort of primitive type. In this example, assume an enum:

public enum ShippingMethod
{
    Standard = 0,
    Express,
    PriceSaver
}

Calculating the shipping cost for a basket may be a complex operation that involves the total weight and size of the basket, as well as the distance it has to travel. If the web shop has geographically distributed warehouses, it may be cheaper to ship from a warehouse closer to the customer. However, if the closest warehouse doesn't have all items in stock, there may be a better way to optimize profit. Again, that calculation likely depends on the shipping method chosen by the customer. Thus, a common solution is to select a shipping cost calculation Strategy based on the user's selection. A simplified example may look like this:

public class BasketCostCalculator
{
    private readonly IShippingCostCalculatorFactory shippingFactory;
 
    public BasketCostCalculator(
        IShippingCostCalculatorFactory shippingCostCalculatorFactory)
    {
        this.shippingFactory = shippingCostCalculatorFactory;
    }
 
    public int CalculatePrice(
        ShoppingBasket basket,
        ShippingMethod shippingMethod)
    {
        var shippingCalculator =
            this.shippingFactory.GetCalculator(shippingMethod);
 
        return shippingCalculator.CalculatePrice(basket) + basket.Total;
    }
}

A naïve attempt at an implementation of the IShippingCostCalculatorFactory may involve a switch statement:

public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
{
    switch (shippingMethod)
    {
        case ShippingMethod.Express:
            return new ExpressShippingCostCalculator();
        case ShippingMethod.PriceSaver:
            return new PriceSaverShippingCostCalculator();
        case ShippingMethod.Standard:
        default:
            return new StandardShippingCostCalculator();
    }
}

Now, before you pull Refactoring at me and tell me to replace the enum with a polymorphic type, I must remind you that at the boundaries, applications aren't object-oriented. At some place, close to the application boundary, the application must translate an incoming primitive to a polymorphic type. That's the responsibility of something like the ShippingCostCalculatorFactory.

There are several problems with the above implementation of IShippingCostCalculatorFactory. In the simplified example code, all three implementations of IBasketCalculator have default constructors, but that's not likely to be the case. Recall that calculating shipping cost involves complicated business rules. Not only are those classes likely to need all sorts of configuration data to determine price per weight range, etc., but they might even need to perform lookups against external systems - such as getting a quote from an external carrier. In other words, the ExpressShippingCostCalculator, PriceSaverShippingCostCalculator, and StandardShippingCostCalculator are unlikely to have default constructors.

There are various ways to implement such an Abstract Factory, but none of them may fit perfectly. Another option is to associate metadata with each implementation. Using attributes for such purpose is the classic .NET solution:

public class HandlesShippingMethodAttribute : Attribute
{
    private readonly ShippingMethod shippingMethod;
 
    public HandlesShippingMethodAttribute(ShippingMethod shippingMethod)
    {
        this.shippingMethod = shippingMethod;
    }
 
    public ShippingMethod ShippingMethod
    {
        get { return this.shippingMethod; }
    }
}

You can now adorn each IBasketCalculator implementation with this attribute:

[HandlesShippingMethod(ShippingMethod.Express)]
public class ExpressShippingCostCalculator : IBasketCalculator
{
    public int CalculatePrice(ShoppingBasket basket)
    {
        /* Perform some complicated price calculation based on the
         * basket argument here. */
        return 1337;
    }
}

Obviously, the other IBasketCalculator implementations get a similar attribute, only with a different ShippingMethod value. This effectively provides a hint about the role of each IBasketCalculator implementation. Not only is the ExpressShippingCostCalculator a basket calculator; it specifically handles the Express shipping method.

You can now implement IShippingCostCalculatorFactory using these Role Hints:

public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IEnumerable<IBasketCalculator> candidates;
 
    public ShippingCostCalculatorFactory(
        IEnumerable<IBasketCalculator> candidates)
    {
        this.candidates = candidates;
    }
 
    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return (from c in this.candidates
                let handledMethod = c
                    .GetType()
                    .GetCustomAttributes<HandlesShippingMethodAttribute>()
                    .SingleOrDefault()
                where handledMethod != null
                && handledMethod.ShippingMethod == shippingMethod
                select c).Single();
    }
}

This implementation is created with a sequence of IBasketCalculator candidates and then selects the matching candidate upon each GetCalculator method call. (Notice that I decided that candidates was a better Role Hint than e.g. calculators.) To find a match, the method looks through the candidates and examines each candidate's [HandlesShippingMethod] attribute.

You may think that this is horribly inefficient because it virtually guarantees that the majority of the injected candidates are never going to be used in this method call, but that's not a concern.

Example: detached metadata #

The use of attributes has been a long-standing design principle in .NET, but seems to me to offer a poor combination of tight coupling and Primitive Obsession. To be fair, it looks like even in the BCL, the more modern APIs are less based on attributes, and more based on other alternatives. In other posts in this series on Role Hints I'll describe other ways to match objects, but even when working with metadata there are other alternatives.

One alternative is to decouple the metadata from the type itself. There's no particular reason the metadata should be compiled into each type (and sometimes, if you don't own the type in question, you may not be able to do this). Instead, you can define the metadata in a simple map. Remember, 'metadata' simply means 'data about data'.

public class ShippingCostCalculatorFactory : IShippingCostCalculatorFactory
{
    private readonly IDictionary<ShippingMethod, IBasketCalculator> map;
 
    public ShippingCostCalculatorFactory(
        IDictionary<ShippingMethod, IBasketCalculator> map)
    {
        this.map = map;
    }
 
    public IBasketCalculator GetCalculator(ShippingMethod shippingMethod)
    {
        return this.map[shippingMethod];
    }
}

That's a much simpler implementation than before, and only requires that you supply the appropriately populated dictionary in the application's Composition Root.
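As a sketch, the composition might look like this (constructor arguments are omitted, since, as argued above, the real calculators are unlikely to have default constructors):

var map = new Dictionary<ShippingMethod, IBasketCalculator>
{
    { ShippingMethod.Standard, new StandardShippingCostCalculator() },
    { ShippingMethod.Express, new ExpressShippingCostCalculator() },
    { ShippingMethod.PriceSaver, new PriceSaverShippingCostCalculator() }
};
var factory = new ShippingCostCalculatorFactory(map);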

Summary #

Metadata, such as attributes, can be used to indicate the role played by an object. This is particularly useful when you have to select among several candidates that all have the same type. However, while the Design Guidelines for Developing Class Libraries still seems to favor the use of attributes for metadata, this isn't the most modern approach. One of the problems is that it is likely to tightly couple the resulting code. In the above example, the ShippingMethod enum is a pure boundary concern. It may be defined in the UI layer (or module) of the application, while you might prefer the shipping cost calculators to be implemented in the Domain Model (since they contain complicated business logic). However, using the ShippingMethod enum in an attribute and placing that attribute on each implementation couples the Domain Model to the user interface.

One remedy for the problem of tight coupling is to express the selected shipping method as a more general type, such as a string or integer. Another is to use detached metadata, as in the second example.


Comments

Phil Sandler #
I know this post is about roles, but I think your example illustrates a question that I ask (and get asked) constantly about implementing factories using DI. I have gone around and around this issue and tried various approaches, including the map solution you provided. The attribute idea is new to me, and I will give it some consideration.

Bottom line: I've landed on just using the container directly inside of these factories. I am completely on board with the idea of a single call to Resolve() on the container, but this is one case where I have decided it's "easier" to consciously violate that principle. I feel like it's a bit more explicit--developers at least know to look in the registration code to figure out which implementation will be returned. Using a map just creates one more layer that essentially accomplishes the same thing.

A switch statement is even better and more explicit (as an aside, I don't know why the switch statement gets such a bad rap). But as you noted, a switch statement won't work if the calculator itself has dependencies that need resolution. In the past, I have actually injected the needed dependencies into the factory, and supplied them to the implementations via new().

Anyway, I'm enjoying this series and getting some nice insights from it.
2013-01-09 15:33 UTC
James Nail #
On the topic of doing this via IoC, Autofac deals with metadata quite well. I've typically used things like named/keyed registrations for specific strategy implementations, but I can see where a developer dropped into the middle of the codebase could be confused amid the indirection.
So an attribute on the implementation class can serve two important purposes here -- first, it does indeed make the intended role of the class explicit, and second, it can give your IoC container registration hints, particularly helpful when using assembly scanning / convention-based registrations.
2013-01-10 16:25 UTC
Andrey K #
Mark, big thanks for your "january post"!

The detached metadata "injecting mapping dictionary from composition root" sample is great!

In my practice I did such a mapping thing with inheritance, and the result was a lot of factories all over the code that knew about the DI container (because of lazy initialization with many dependencies inside each factory).

With some modifications, like generic parameters and lazy initialization, injecting the dictionary through the constructor of such a universal factory class really could be a great solution in most cases.
2013-01-10 19:59 UTC

NSubstitute Auto-mocking with AutoFixture

Wednesday, 09 January 2013 07:18:24 UTC

This post announces the availability of the NSubstitute-based Auto-mocking extension for AutoFixture.

Almost two and a half years ago I added an Auto-mocking extension to AutoFixture, using Moq. Since then, a couple of other Auto-mocking extensions have been added, and this Sunday I accepted a pull request for Auto-mocking with NSubstitute, bringing the total number of Auto-mocking extensions for AutoFixture up to four:

  • AutoMoq
  • AutoRhinoMock
  • AutoFakeItEasy
  • AutoNSubstitute

(listed both in order of age and in number of NuGet downloads)

Kudos to Daniel Hilgarth for creating this extension, and for offering a pull request of high quality!


Comments

Fxfighter #
Yay, this is much appreciated!
2013-01-09 07:28 UTC

Argument Name Role Hint

Tuesday, 08 January 2013 10:42:06 UTC

This article describes how object roles can be indicated by argument or variable names.

In my overview article on Role Hints I described how making object roles explicit can help making code more object-oriented. One way code can convey information about the role played by an object is by proper naming of variables and method arguments. In many ways, this is the converse view of a Type Name Role Hint.

To reiterate, the Design Guidelines for Developing Class Libraries provides this rule:

Consider using names based on a parameter's meaning rather than names based on the parameter's type.

As described in the post about Type Name Role Hints, this rule makes sense when the argument type is too generic to provide enough information about the role played by an object.

Example: unit test variables #

Previously I've described how explicitly naming unit test variables after their roles clearly communicates to the Test Reader the purpose of each variable.

[Fact]
public void GetUserNameFromProperSimpleWebTokenReturnsCorrectResult()
{
    // Fixture setup
    var sut = new SimpleWebTokenUserNameProjection();
 
    var request = new HttpRequestMessage();
    request.Headers.Authorization = 
        new AuthenticationHeaderValue(
            "Bearer",
            new SimpleWebToken(new Claim("userName", "foo")).ToString());
    // Exercise system
    var actual = sut.GetUserName(request);
    // Verify outcome
    Assert.Equal("foo", actual);
    // Teardown
}

Currently I prefer these well-known variable names in unit tests:

  • sut
  • expected
  • actual

Further variables can be named on a case-by-case basis, like the request variable in the above example.
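
For instance, a test that compares a computed value to a known value might use the expected and actual names like this. This is a contrived sketch; the Calculator class and its Add method are hypothetical:

[Fact]
public void AddReturnsCorrectResult()
{
    // Fixture setup
    var sut = new Calculator(); // hypothetical class under test
    var expected = 5;
    // Exercise system
    var actual = sut.Add(2, 3);
    // Verify outcome
    Assert.Equal(expected, actual);
    // Teardown
}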

Example: Selecting next Wizard Page #

Consider a Wizard in a rich client, implemented using the MVVM pattern. A Wizard can be modeled as a 'Graph of Responsibility'. A simple example is a rather primitive Wizard where the start page asks you whether you want to proceed in a 'default' or 'custom' way.

If you select Default and press Next, the Wizard will immediately proceed to the Progress step. If you select Custom, the Wizard will first show you the Custom step, where you can tweak your experience. Subsequently, when you press Next, the Progress step is shown.

Imagine that each Wizard page must implement the IWizardPage interface:

public interface IWizardPage : INotifyPropertyChanged
{
    IWizardPage Next { get; }
 
    IWizardPage Previous { get; }
}

The Start page's View Model must wait for the user's selection and then serve the correct Next page. Using the DIP, the StartWizardPageViewModel doesn't need to know about the concrete 'custom' and 'progress' steps:

private readonly IWizardPage customPage;
private readonly IWizardPage progressPage;
private bool isCustomChecked;
 
public StartWizardPageViewModel(
    IWizardPage progressPage,
    IWizardPage customPage)
{
    this.progressPage = progressPage;
    this.customPage = customPage;
}
 
public IWizardPage Next
{
    get 
    {
        if (this.isCustomChecked)
            return this.customPage;
 
        return this.progressPage;
    }
}

Notice that the StartWizardPageViewModel depends on two different IWizardPage objects. In such a case, the interface name is insufficient to communicate the role of each dependency. Instead, the argument names progressPage and customPage are used to convey the role of each object. The role of the customPage is more specific than just being a Wizard page - it's the 'custom' page.
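
For completeness, the isCustomChecked field shown above would typically be backed by a bindable property; a sketch (not part of the original excerpt) might look like this:

public bool IsCustomChecked
{
    get { return this.isCustomChecked; }
    set
    {
        this.isCustomChecked = value;
        // Raise PropertyChanged here so the view stays in sync;
        // the INotifyPropertyChanged plumbing is omitted from
        // this sketch. The Next property reads this field.
    }
}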

Example: Message Router #

While you may not be building Wizard-based user interfaces with MVVM, I chose the previous example because the problem domain (that of modeling a Wizard UI) is something most of us can relate to. Another set of examples is much more general-purpose in nature, but may feel more abstract.

Due to the multicore problem, asynchronous messaging architectures are becoming increasingly common - just consider the growing popularity of CQRS. In a Pipes and Filters architecture, Message Routers are central. Many variations of Message Routers presented in Enterprise Integration Patterns provide examples in C# where the alternative outbound channels are identified with Role Hints such as outQueue1, outQueue2, etc. See e.g. pages 83, 233, 246, etc. Due to copyright reasons, I'm not going to repeat them here, but here's a generic Message Router that does much the same:

public class ConditionalRouter<T>
{
    private readonly IMessageSpecification<T> specification;
    private readonly IChannel<T> firstChannel;
    private readonly IChannel<T> secondChannel;
 
    public ConditionalRouter(
        IMessageSpecification<T> specification,
        IChannel<T> firstChannel,
        IChannel<T> secondChannel)
    {
        this.specification = specification;
        this.firstChannel = firstChannel;
        this.secondChannel = secondChannel;
    }
 
    public void Handle(T message)
    {
        if (this.specification.IsSatisfiedBy(message))
            this.firstChannel.Send(message);
        else
            this.secondChannel.Send(message);
    }
}

Once again, notice how the ConditionalRouter selects between the two roles of firstChannel and secondChannel based on the outcome of the Specification. The constructor argument names carry (slightly) more information about the role of each channel than the interface name.
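
The ConditionalRouter compiles against two small abstractions not shown in the text. Judging from the usage, they would look something like this sketch:

public interface IMessageSpecification<T>
{
    // True if the message matches the routing condition.
    bool IsSatisfiedBy(T message);
}

public interface IChannel<T>
{
    // Delivers the message to this outbound channel.
    void Send(T message);
}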

Summary #

Parameter or variable names can be used to convey information about the role played by an object. This is especially helpful when the type of the object is very general (such as string, DateTime, int, etc.), but can also be used to select among alternative objects of the same type even when the type is specific enough to adhere to the Single Responsibility and Interface Segregation principles.


Type Name Role Hints

Monday, 07 January 2013 11:28:15 UTC

This article describes how object roles can by indicated by type names.

In my overview article on Role Hints I described how making object roles explicit can help making code more object-oriented. When first hearing about the concept of object roles, a typical reaction is: How is that different from the class name? Doesn't the class name communicate the purpose of the class?

Sometimes it does, so this is a fair question. However, there are certainly other situations where this isn't the case at all.

Consider many primitive types: do the names int (or Int32), bool (or Boolean), Guid, DateTime, Version, string, etc. communicate anything about the roles played by instances?

In most cases, such type names provide poor hints about the roles played by the instances. Most developers already implicitly know this, and the Design Guidelines for Developing Class Libraries also provides this rule:

Consider using names based on a parameter's meaning rather than names based on the parameter's type.

Most of us can probably agree that code like this would be hard to use correctly:

public static bool TryCreate(Uri u1, Uri u2, out Uri u3)

Which value should you use for u1? Which for u2?

Fortunately, the actual signature follows the Design Guidelines:

public static bool TryCreate(Uri baseUri, Uri relativeUri, out Uri result)

This is much better because the argument names communicate the roles the various Uri parameters play relative to each other. With the object roles provided by descriptive parameter names, the method signature is often all the documentation required to understand the proper intended use of the method.
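
For example (the URL values are arbitrary):

Uri result;
if (Uri.TryCreate(
        new Uri("http://example.com/"),
        new Uri("api/products", UriKind.Relative),
        out result))
    Console.WriteLine(result);
// Prints: http://example.com/api/products

The parameter names make it evident which Uri goes where; with u1, u2, and u3 you'd have to guess.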

The Design Guidelines' rules sound almost universal. Are there cases when the name of a type is more informative than the argument or variable name?

Example: Uri.IsBaseOf #

To stay with the Uri class for a little while longer, consider the IsBaseOf method:

public bool IsBaseOf(Uri uri)

This method accepts any Uri instance as an input parameter. The uri argument doesn't play any other role than being an Uri instance, so there's no reason for an API designer to go out of his or her way to come up with some artificial 'role' name for the parameter. In this example, the name of the type is sufficient information about the role played by the instance - or you could say that in this context the class itself and the role it plays conflate into the same name.
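
As a quick illustration (the URL values are arbitrary):

var baseUri = new Uri("http://example.com/");
var uri = new Uri("http://example.com/api/products");
Console.WriteLine(baseUri.IsBaseOf(uri));
// Prints: True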

Example: MVC instances #

If you've ever worked with ASP.NET MVC or ASP.NET Web API you may have noticed that rarely do you refer to Model, View or Controller instances with variables. Often, you just return a new model directly from the relevant Action Method:

public ViewResult Get()
{
    var now = DateTime.Now;
    var currentMonth = new Month(now.Year, now.Month);
    return this.View(this.reader.Query(currentMonth));
}

In this example, notice how the model is implicitly created with a call to the reader's Query method. (However, you get a hint about the intermediary variables' roles from their names.) If we ever assign a model instance to a local variable before returning it with a call to the View method, we often tend to simply name that variable model.

Furthermore, in ASP.NET MVC, do you ever create instances of Controllers or Views (except in unit tests)? Instances of Controllers and Views are created by the framework. Basically, the role each Controller or View plays is embodied in its class name - HomeController, BookingController, BasketController, BookingViewModel, etc.

Example: Command Handler #

Consider a 'standard' CQRS implementation with a single Command Handler for each Command message type in the system. The Command Handler interface might be defined like this:

public interface ICommandHandler<T>
{
    void Execute(T command);
}

At this point, the type argument T could be literally any type, so the argument name command conveys the role of the object better than the type. However, once you look at a concrete implementation, the method signature materializes into something like this:

public void Execute(RequestReservationCommand command)

In the concrete case, the type name (RequestReservationCommand) carries more information about the role than the argument name (command).
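
A concrete handler matching that signature might look like the following sketch; the handler class name and the command's contents are assumptions made for the sake of the example:

public class RequestReservationCommandHandler :
    ICommandHandler<RequestReservationCommand>
{
    public void Execute(RequestReservationCommand command)
    {
        // Process the reservation request. The body is elided;
        // the point is that the parameter's type name now carries
        // the role information that the name 'command' doesn't.
    }
}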

Example: Dependency Injection #

With basic Dependency Injection, a common Role Hint is the type itself.

public BasketController(
    IBasketService basketService,
    CurrencyProvider currencyProvider)
{
    if (basketService == null)
        throw new ArgumentNullException("basketService");
    if (currencyProvider == null)
        throw new ArgumentNullException("currencyProvider");
 
    this.basketService = basketService;
    this.currencyProvider = currencyProvider;
}

From the point of view of the BasketController, the type names of the IBasketService interface and the CurrencyProvider abstract base class carry all required information about the role played by these dependencies. You can tell this because the argument names simply echo the type names. In the complete system, there could conceivably be more than one implementation of IBasketService, but in the context of the BasketController, some IBasketService instance is all that is required.

Summary #

The more generic a type is, the less information about its role the type name itself carries. Primitives and Value Objects such as integers, strings, Guids, Uris, etc. can be used in so many ways that you should strongly consider proper naming of arguments and variables to convey information about the role played by an object. This is what the Framework Design Guidelines suggest. However, as types become increasingly specific, their names carry more information about their roles. For very specific classes, the class name itself may carry all appropriate information about the object's intended role. As a rule of thumb, expect such classes to also adhere to the Single Responsibility Principle.

