Where to put unit tests
This article provides arguments for (and against) putting unit tests in libraries different from the production code.
One of my readers asks me "whether unit tests should be done in separate assemblies or if they should be done in [...] the production assemblies?" As he puts it, he "always took for granted that the test should go into separate assemblies," but is this really true, and if it is: what are the arguments?
Despite the fundamental nature of this question, I haven't seen much explicit treatment of it, and to answer it, I had to pause and think. The key to the answer, I think, is to first understand why you write automated tests at all. (This way of thinking about questions and answers is closely related to the (Lean) technique of 5 whys.)
As always, the short answer is that it depends, but that's not very helpful, so here are some common reasons.
Regression testing #
In my experience, most people focus on the quality control aspect of automated testing. If your only purpose of having automated tests is to have a good regression test suite, then it seems that it doesn't really matter where the tests go.
Still, even if that's your only reason for unit testing, I think putting the unit tests in a separate library is the best choice:
- It gives you a clearer separation of concerns. If you put unit tests in your production library, it will blur the distinction between production code and test code. How do you know which code to test? How do you know what not to test? There are different ways to address these concerns (namespaces, TDD), but I think the simplest way to deal with such issues is to put the tests in a separate library. (This argument reminds me a little of Occam's razor.)
- Putting unit tests in your production code will make your compiled libraries bigger. It will take (slightly) more time to load the libraries into memory, and they will take up more memory. This probably doesn't matter at all on a big server, but may be important if your production code runs on a (small) mobile device. Once again, it depends.
- The more code you put in your production software, the larger the potential attack surface becomes. Have you performed a security analysis of your unit tests? It's much easier to put the tests in a separate library that you never deploy to production, because it means that you don't have to waste valuable time and resources doing a security analysis of your test code.
In short, if you put unit tests in the same library as your production code, all those unit tests must adhere to the same standards as your production code. Obviously, if you have no standards for your production code, this doesn't apply, but then you probably have bigger problems.
API design feedback #
While I find regression testing useful in itself, I find Test-Driven Development (TDD) particularly valuable because it provides feedback about the API I'm building. In the spirit of GOOS, if the test is difficult to write, it's probably because the API is difficult to use. You'll only get this feedback if you test against the public API of your code; the unit test is the first client of your API.
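As an illustration, here's a minimal sketch of what that looks like in practice; the Discount class is a hypothetical production type, and the test (xUnit.net) only ever touches its public members:

using Xunit;

// Hypothetical production type; in a real code base it would live in the
// production library, not next to the test.
public class Discount
{
    public decimal Apply(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

public class DiscountTests
{
    [Fact]
    public void ApplyReducesPriceByPercentage()
    {
        // The test exercises only the public API, so if this is awkward
        // to write, that's feedback about the usability of the API itself.
        var sut = new Discount();

        var actual = sut.Apply(100m, 25m);

        Assert.Equal(75m, actual);
    }
}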
However, if your unit tests reside in the same library as your production code, you can easily invoke internal classes and members from your unit tests. You simply lose the opportunity to get valuable feedback about the usability of your code. (Note that this argument also applies to the use of the [InternalsVisibleTo] attribute, which means that you should never use it.)
For me, this reason alone is enough that I always prefer putting unit tests in separate libraries.
Shipping production code with tests #
Although I think that there are strong arguments against putting unit tests in production code, there may also be reasons in favor. As one answer on Stack Overflow describes, shipping your production code with a test suite can be valuable as a self-diagnostics tool. This might be particularly valuable if you deploy your production code to heterogeneous environments, and you're unable to predict and test the configuration of all environments before you ship your code.
Summary #
In the end, the answer depends on your context. If you understand why you (want to) have unit tests, you'll be able to answer the initial question: should unit tests go in a separate library, or can they go in the same library as the production code? In mainstream scenarios I will strongly recommend putting tests in a separate library, but you may have special circumstances where it makes sense to put the tests together with the production code. As long as you understand the pros and cons, you can make your own informed decision.
REST lesson learned: Consider a self link on all resources
Add a self link to RESTful resources.
This suggestion is part of my REST lessons learned series of blog posts. Contrary to previous posts, this advice doesn't originate from a lesson learned the hard way, but is more of a gentle suggestion.
One of the many advantages of a well-designed REST API is that it's 'web-scale'. The reason it's 'web-scale' is because it can leverage HTTP caching.
HTTP caching is based on URLs. If two URLs are different, they represent different resources, and can't be cached as one. However, conceivably, a client could arrive at the same resource via a number of different URLs:
- If a client follows redirects, it may not arrive at the URL it originally requested.
- If a client builds URLs from templates, it may order query string parameters in various ways.
Consider, as a courtesy, adding a self link to all resources. Adding a self link is as easy as this:
<product xmlns="http://fnaah.ploeh.dk/productcatalog/2013/05"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:link href="http://catalog.api.ploeh.dk/products/1337"
             rel="self" />
  <atom:link href="http://catalog.api.ploeh.dk"
             rel="http://catalog.api.ploeh.dk/docs/rels/home" />
  <atom:link href="http://catalog.api.ploeh.dk/categories/chocolate"
             rel="http://catalog.api.ploeh.dk/docs/rels/category" />
  <atom:link href="http://catalog.api.ploeh.dk/categories/gourmet"
             rel="http://catalog.api.ploeh.dk/docs/rels/category" />
  <name>Fine Criollo dark chocolate</name>
  <price>50 DKK</price>
  <weight>100 g</weight>
</product>
Notice the link with the rel value of "self"; from its href value you now know that the canonical URL of that product resource is http://catalog.api.ploeh.dk/products/1337.
Particularly when more than one URL will (technically) serve the same content, do inform the client about the canonical URL. There's no way a client can tell which URL variation is the canonical one. Only the API knows that.
Redirected clients #
A level 3 RESTful API can evolve over time. As I previously described, you may start with URLs like http://foo.ploeh.dk/orders/1234, only to realize that in a later version of the API, you'd rather want it to be http://foo.ploeh.dk/customers/1234/orders.
As the RESTful Web Services Cookbook explains, you should keep URLs cool. In practice, that means that if you change the URLs of your API, you should at least leave a 301 (Moved Permanently) at the old URL.
If you imagine a mature API, this may have happened more than once, which means that a client arriving at one of the original URLs may be redirected several times.
In your browser, you know that if redirects happen, the address bar will (normally) display the final URL at which you arrived. However, consider that REST clients are applications. It will be implementation-specific whether or not the client realizes that the value of the final address isn't the URL originally requested.
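For example, with .NET's HttpClient (which follows redirects by default), the final address is available on the response, but only if the client explicitly looks for it. A minimal sketch, using a hypothetical URL:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RedirectAwareClient
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync("http://foo.ploeh.dk/orders/1234");

            // After automatic redirects, RequestMessage.RequestUri holds the
            // address that was ultimately requested, which may differ from
            // the URL originally passed to GetAsync.
            Console.WriteLine(response.RequestMessage.RequestUri);
        }
    }
}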
As a courtesy to clients, do consider informing them of the final address at which they arrived.
URL variations #
While the following scenario isn't applicable for level 3 RESTful APIs, level 1 and 2 services require clients to construct URLs from templates. As an example, such services may define a product search resource as /products/search?q={query}&p={page}. To search for chocolate (and see the third (zero-indexed) page) you could construct the URL as /products/search?q=chocolate&p=2. However, most platforms (e.g. the ASP.NET Web API) would handle /products/search?p=2&q=chocolate in exactly the same way. The more query parameters you allow, the more permutations are possible.
This is a well-known problem in search (and computer science); it's called Canonicalization. Only the API knows the canonical value of a given URL: do consider informing the client about this value.
Summary #
Adding a self-link to each resource isn't much of a burden for the service, but could provide advantages for both the service and its clients. In most cases I'd expect the ROI to be high, simply because the investment is low.
Comments
The primary reason I didn't mention the Content-Location header here was that, frankly, I wasn't aware of it. Now that I've read the specification, I'll have to take your word for it :) You may be right, in which case there's no particular reason to prefer a self-link over a Content-Location header.
When it comes to following links, it's true that a protocol-near client would know the location after each redirect. The question is, if you're using an HTTP library, do you ever become aware of this?
Consider a client, using a hypothetical HTTP client library:
client.FollowRedirects = true;
var response = client.Get("/foo");
The response may actually be the result of various redirects, so that the final response is from "/bar", but unless the client is very careful, it may never notice this.
REST lesson learned: Consider a home link on all resources
Add home links on RESTful resources.
This suggestion is part of my REST lessons learned series of blog posts. Contrary to the previous posts, this advice doesn't originate from a lesson learned the hard way, but is more of a gentle suggestion.
When designing a level 3 RESTful API, I've found that it's often helpful to think about design issues in terms of: how would you design it if it was a web site? On web sites, we don't ask users to construct URLs and enter them into the browser's address bar. On web sites, users follow links (and fill out forms). Many well-designed web sites are actually HATEOAS services, so it makes sense to learn from them when designing RESTful APIs.
One almost universal principle for well-designed web sites is that they always have a 'home' link (usually in the top left corner). It makes sense to transfer this principle to RESTful APIs.
All the RESTful APIs that I've helped design so far have had a single 'home' resource, which acts as a starting point for all clients. This is the only published URL for the entire API. From that starting page, clients must follow (semantic) links in order to accomplish their goals.
As an example, here's a 'home' resource on a hypothetical product catalog API:
<home xmlns="http://fnaah.ploeh.dk/productcatalog/2013/05"
      xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:link href="http://catalog.api.ploeh.dk/products/1234"
             rel="http://catalog.api.ploeh.dk/docs/rels/products/discounted" />
  <atom:link href="http://catalog.api.ploeh.dk/products/5678"
             rel="http://catalog.api.ploeh.dk/docs/rels/products/discounted" />
  <atom:link href="http://catalog.api.ploeh.dk/products/1337"
             rel="http://catalog.api.ploeh.dk/docs/rels/products/discounted" />
  <atom:link href="http://catalog.api.ploeh.dk/categories"
             rel="http://catalog.api.ploeh.dk/docs/rels/product/categories" />
</home>
From this (rather sparse) 'home' resource, you can view each discounted product by following the discounted link, or you can browse the catalog by following the categories link.
If you follow one of the discounted links, you get a new resource:
<product xmlns="http://fnaah.ploeh.dk/productcatalog/2013/05"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:link href="http://catalog.api.ploeh.dk"
             rel="http://catalog.api.ploeh.dk/docs/rels/home" />
  <atom:link href="http://catalog.api.ploeh.dk/categories/chocolate"
             rel="http://catalog.api.ploeh.dk/docs/rels/category" />
  <atom:link href="http://catalog.api.ploeh.dk/categories/gourmet"
             rel="http://catalog.api.ploeh.dk/docs/rels/category" />
  <name>Fine Criollo dark chocolate</name>
  <price>50 DKK</price>
  <weight>100 g</weight>
</product>
Notice that in addition to the two category links, the resource representation also contains a home link, enabling the client to go back to the 'home' resource.
Having a home link on all resources is another courtesy to the client, just like avoiding 204 responses - in fact, if you can't think of anything else to return instead of 204 (No Content), at the very least, you could return a home link.
In the above examples, I didn't obfuscate the URLs, but I would do that for a real REST API. The reason I didn't obfuscate the URLs here is to make the example easier to understand.
REST lesson learned: Avoid hackable URLs
Avoid hackable URLs if you are building a REST API.
This is a lesson about REST API design that I learned while building non-trivial REST APIs. If you provide a full-on level 3 REST API, consider avoiding hackable URLs.
Hackable URLs #
A hackable URL is a URL where there's a clear pattern or template for constructing the URL. As an example, if I present to you the URL http://foo.ploeh.dk/products/1234, it's easy to guess that this is a resource representing a product with the SKU of 1234. If you know the SKU of another product, it's easy to 'hack' the URL to produce e.g. http://foo.ploeh.dk/products/5678.
That's a really nice feature of your API if you are building a level 1 or 2 API, but for a HATEOAS API, it defeats the purpose.
The great divide #
Please notice that the shift from level 2 to level 3 RESTful APIs marks a fundamental shift in the way you should approach URL design.
Hackable URLs are great for level 1 and 2 APIs because the way you (as a client) are told to construct URLs is by assembling them from templates. As an example, the Windows Azure REST APIs explicitly instruct you to construct the URL in a particular way: the URL to get BLOB container properties is https://myaccount.blob.core.windows.net/mycontainer?restype=container, where you should replace myaccount with your account name, and mycontainer with your container name. While code aesthetics are subjective, it's not even a particularly clean URL, but it's easy enough to produce. The URL template is part of the contract, which puts the Windows Azure API solidly at level 2 of the Richardson Maturity Model. If I were designing a level 1 or 2 API, I'd make sure to make URLs hackable, too.
However, if you're building a level 3 API, hypermedia is king. Clients are expected to follow links. The addresses of resources are not published as having a particular template; instead, clients must follow semantic links in order to arrive at the desired resource(s). When hypermedia is the engine of application state, it's no good if the client can short-circuit the application flow by 'hacking' URLs. It may leave the application in an inconsistent state if it tries to do that.
Hackable URLs are great for level 1 and 2 APIs, but counter-productive for level 3 APIs.
Evolving URLs #
One of the main attractions of building a level 3 RESTful API is that it's easier to evolve. Exactly because URL templates are not part of the contract, you can decide to change the URL structure when evolving your API.
Imagine that the first version of your API has an (internal) URL template like /orders/{customerId}, so that the example URL http://foo.ploeh.dk/orders/1234 is the address of the order history for customer 1234. However, in version 2 of your API, you realize that this way of thinking is still too RPC-like, and you'd prefer /customers/{customerId}/orders, e.g. http://foo.ploeh.dk/customers/1234/orders.
With a level 3 RESTful API, you can change your internal URL templates, and as long as you keep providing links, clients following links will not notice the difference. However, if clients are 'hacking' your URLs, their applications may stop working if you change URL templates.
Keep clients safe #
In the end, HATEOAS is about encapsulation: make it easy for the client to do the right thing, and make it hard for the client to do the wrong thing. Following links will make clients more robust, because they will be able to handle changes in the API. Making it easy for clients to follow links is one side of designing a good API, but the other side is important too: make it difficult for clients not to follow links; that is, make it difficult for clients to 'hack' URLs.
The services I've helped design so far are level 3 APIs, but they still used hackable URLs. One reason for that was that this is the default in the implementation platform we used (ASP.NET Web API); another reason was that I thought it would be easier for me and the rest of the development team if the URLs were human-readable. Today, I think this decision was a mistake.
What's the harm of supplying human-readable URLs for a level 3 RESTful API? After all, if a client only follows links, the values of the URLs shouldn't matter.
Indeed, if the client only follows links. However, clients are created by human developers, and humans often take the road of least resistance. While there are long-time benefits (robustness) from following links, it is more work in the short term. The API team and I repeatedly experienced that the developers consuming our APIs had 'hacked' our URLs; when we changed our URL templates, their clients broke and they complained. Even though we had tried to explicitly tell them that they must follow links, they didn't. While we never documented our URL templates, they were simply too easy to guess from pure extrapolation.
Opaque URLs #
In the future, I plan to make URLs opaque when building level 3 APIs. Instead of http://foo.ploeh.dk/customers/1234/orders, I'm going to make it http://foo.ploeh.dk/DC884298C70C41798ABE9052DC69CAEE, and instead of http://foo.ploeh.dk/products/2345, I'm going to make it http://foo.ploeh.dk/598CB0CAC30646E1BB768596BFE91F2C, and so on.
Obviously, that means that my API will have to maintain some sort of two-way lookup table that can map DC884298C70C41798ABE9052DC69CAEE to a request for customer 1234's orders, 598CB0CAC30646E1BB768596BFE91F2C to a request for product 2345, and so on, but that's trivial to implement.
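A minimal sketch of such a two-way lookup (all names are hypothetical, and a real implementation would need a store shared by all servers):

using System;
using System.Collections.Generic;

// Two-way map between opaque URL segments and internal resource keys.
public class OpaqueUrlMap
{
    private readonly object sync = new object();
    private readonly Dictionary<string, string> idToResource =
        new Dictionary<string, string>();
    private readonly Dictionary<string, string> resourceToId =
        new Dictionary<string, string>();

    // Returns the opaque ID for a resource key like "customers/1234/orders",
    // creating one the first time the key is seen.
    public string GetOrAddId(string resourceKey)
    {
        lock (this.sync)
        {
            string id;
            if (!this.resourceToId.TryGetValue(resourceKey, out id))
            {
                id = Guid.NewGuid().ToString("N").ToUpperInvariant();
                this.resourceToId[resourceKey] = id;
                this.idToResource[id] = resourceKey;
            }
            return id;
        }
    }

    // Resolves an opaque ID back to the resource key, if known.
    public bool TryResolve(string id, out string resourceKey)
    {
        lock (this.sync)
        {
            return this.idToResource.TryGetValue(id, out resourceKey);
        }
    }
}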
It puts a small burden on the server(s), but effectively stops client developers from shooting themselves in the foot.
Summary #
Hackable URLs are a good idea if you are building a web site, or a level 1 or 2 REST API, but unless you know that all client developers are enthusiastic RESTafarians, consider producing opaque URLs for level 3 REST APIs.
Comments
I understand the benefits of the idea that clients must follow links; I just can't see how it would work in reality. Say you are building a website to display products and you are consuming a REST API, how do you support deep linking to individual products? OK, you can either use the same URL as the underlying API or encode it in, but what are you expected to do if the API changes its links? Redirect the user to your home page? What if you want to display information from 2 or more resources on the same webpage?
First of all, keep in mind that while it can be helpful to think about REST design principles in terms of "how would I design this if it was a web site", a REST API is not a web site.
You can do deep linking in your web site, but why would you want to do 'deep linking' for an API? These are two different concerns.
It's very common to create a web site where each page calls many individual services. This can be done either from the web server (e.g. from ASP.NET or similar), or from the browser as AJAX calls. This is commonly known as mash-up architecture, because the GUI is really just a mash-up of service data. Amazon.com works that way. You can still deep link to a web page; it's the web page's responsibility to figure out which services to call with what parameters.
That said, as described in the RESTful Web Services Cookbook, you should serve cool URLs, so if you ever decide to change your internal URI template, you should leave a 301 (Moved Permanently) behind at the old URL. This would enable a client that once bookmarked a resource to follow the redirect to the new address.
When you say "it's the web page's responsibility to figure out which services to call", that's what I'm getting at, how would the client do that?
I'm guessing one way would be to cache links followed and update them on 301s or "re-follow" on 404s. So, say you wanted a web page displaying "product/24", you might:
- GET the REST endpoint
- GET product catalogue URL from response
- GET product URL from response
- Cache the product URL against the website product URL
- Subsequent requests hit the cached URL
Then if the product URL changes, if the response is 301, you simply update the cache. If the response is 404 then you'd redo the above steps.
I'm just thinking this through; is the above a "standard" approach for creating REST clients?
That sounds like one way of doing it. I don't think there's a 'standard' way for creating RESTful clients.
The sketch you paint sounds like a lot of work, and it seems that it would be easier if the client could simply assemble appropriate URLs from templates. That would indicate level 1 and 2 REST APIs, which might be perfectly fine if the only purpose of building the service is to support a GUI. However, what you get in exchange for the extra effort it takes to consume a level 3 API, is better decoupling. It's always going to be a trade-off.
It's definitely going to be more work to build and consume a truly HATEOAS-based API, so you should only do it if it's going to provide a good return on investment. When would that be? One general scenario I can think of is when you're building a service, which is going to be consumed by multiple (unknown) clients. If you control the service, but not the clients, I'd say a level 3 API is very beneficial, because it enables you to evolve the API independently of the clients. Conversely, if the only purpose of building a service is to support a single client, it's probably going to be overkill.
The way I see it is that being HATEOAS compliant does not impose non-hackable URLs, but that URLs - even if they might look hackable (/1, /2...etc) - are not guaranteed to work, are not part of the contract and thus should not be relied upon. In other words, a link is only guaranteed to work if you, the client, got it from a previous response, be it "hackable-looking" or not.
So I think opaque URLs are just a way to enforce that contract. But don't we always hear that "a good REST client should be well behaved"?
Reda, thank you for writing. You're right, and if we could rely on all clients to be well-behaved, there'd be no problems. However, in my experience, client developers often don't read the documentation particularly thoroughly. Instead, they look at the returned data and start inferring the URL scheme from examples. I already wrote about this in this post:
"The API team and I repeatedly experienced that the developers consuming our APIs had 'hacked' our URLs; when we changed our URL templates, their clients broke and they complained. Even though we had tried to explicitly tell them that they must follow links, they didn't."So, in this imperfect world, non-hackable URLs start to look attractive, because then the client developers have no choice but to follow the links.
I'd like to propose an alternate strategy to achieve the desired result while maintaining the ability to easily debug production issues at the client: only use the non-hackable urls for the "dev sandbox".
Another inducement to better client behavior might be to have the getting started documentation use existing libraries for the chosen hypermedia media type.
Finally, have the clients send a "developer [org] identifier". Find a way to probe clients for correctness. For instance, occasionally alter hrefs, but also support the "hackable" form.
With this data, be proactive about informing the user that "we're planning a release and your app will probably break because you're not following links as expected".
In the end, some people will still fail and be angry. With opaque urls, I would worry that users would have a hard time communicating production issues when looking at logs /fiddler / browser tools / etc.
Kijana, thank you for writing. You suggest various good ideas that I'll have to keep in mind. Not all of them are universally applicable, but then I also don't think that my suggestion should be applied indiscriminately.
Your first suggestion assumes that there is a dev sandbox; this may or may not be the case. The APIs with which I currently work have no dev sandboxes.
The idea about using existing client libraries for the chosen hypermedia type is only possible if such libraries exist. The APIs I currently design use vendor media types, so client libraries only exist if we develop them ourselves. However, one of the important goals of RESTful services is to ensure interoperability, so we can't assume that clients are going to run on .NET, or Java, or Ruby, or whatever. For vendor media types, I don't think this is a viable option.
Using a client identifier is an option, but in order to work well, there must be some way to correlate that identifier to a contact in the client's organisation. That's an option if you already have a mechanism in place where you only allow known clients to access your API. On the other hand, if you publish a truly scalable public API, you may not want to do that, as registration requirements tend to hurt adoption. The APIs I currently work with are a mix of both; some are publicly accessible, while others require a 'client key'.
These are all interesting ideas, but ultimately, I'm not sure I understand your concern. Let's first establish that 'users' are other programmers. Why would a URL like http://foo.ploeh.dk/DC884298C70C41798ABE9052DC69CAEE be harder to communicate than http://foo.ploeh.dk/customers/1234/orders? Isn't it copy and paste in both cases?
To be clear: I don't claim that obfuscated URLs don't make client developers' work more difficult; they do, compared to 'cheating' by hacking the URL schemes instead of following links. Even for well-behaved client developers, another level of abstraction always makes things harder. On the other hand, this should also make clients more robust, so as always, it's a trade-off.
“The API team and I repeatedly experienced that the developers consuming our APIs had 'hacked' our URLs; when we changed our URL templates, their clients broke and they complained. Even though we had tried to explicitly tell them that they must follow links, they didn't.”
I recently worked on an API for an iOS app and we ran into the same problem. It didn't matter how often I said "these URLs are probably going to change, don't hard-code them into the app" we still ran into issues. This was an internal team, I can only imagine how much more difficult it would be with an external team.
Instead of hashing the URLs, we changed the server-side URL generation to HMAC the URL and append the signature onto the query string. Requests without a valid signature would return a 403 Forbidden response. The root of the domain is the only URL that doesn't require a signature, and it returns a json response with signed URLs for each of the "base" resources.
We enabled this on a staging server and since all dev was happening on it, all the hard-coded URLs stopped functioning and were quickly removed from the application. It forced both us and the iOS developers to think more about navigation between resources, since it was impossible to get from one resource to another without valid URLs included in the json responses. I think it was probably also nicer for debugging than hashed URLs would be.
Dan, thank you for sharing that great idea! That looks like a much better solution than my original suggestion of obscuring the URL, because the URL is still human-readable, and thus probably still easier to troubleshoot for developers.
Your suggestion also doesn't require a lookup table. My suggested solution would require a lookup table in order to understand what each obscured URL actually means, and if you're running on multiple servers, that lookup table would have to be kept consistent across all servers, which can be hard to do in itself (you can use a database, but then you'd have a single point of failure). Your solution doesn't need any of that; it only requires that the HMAC key is the same on all servers.
The only disadvantage that I can think of is that you may need to keep that HMAC key secret, particularly in internal projects, in order to prevent clients from (literally) hacking the URLs.
Still, it sounds like your solution has more advantages, so I'm going to try that approach next time. Thank you for sharing!
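For concreteness, here's a minimal sketch of the kind of URL signing Dan describes, assuming a single HMAC-SHA256 key known only to the servers; all names are illustrative:

using System;
using System.Security.Cryptography;
using System.Text;

public static class UrlSigner
{
    // Appends an HMAC-SHA256 signature to a URL's query string.
    public static string Sign(string url, byte[] key)
    {
        var separator = url.Contains("?") ? "&" : "?";
        return url + separator + "sig=" + ComputeSignature(url, key);
    }

    // Verifies that a signed URL hasn't been constructed or tampered with
    // by the client; unsigned or altered URLs would yield 403 Forbidden.
    public static bool Verify(string signedUrl, byte[] key)
    {
        var index = signedUrl.LastIndexOf("sig=", StringComparison.Ordinal);
        if (index < 0)
            return false;

        var url = signedUrl.Substring(0, index - 1); // strip '?sig=' or '&sig='
        var signature = signedUrl.Substring(index + "sig=".Length);
        return signature == ComputeSignature(url, key);
    }

    private static string ComputeSignature(string url, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(url));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}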
“The only disadvantage that I can think of is that you may need to keep that HMAC key secret, particularly in internal projects, in order to prevent clients from (literally) hacking the URLs.”
That was one concern of ours too. One approach we considered was including the git commit SHA as part of the HMAC secret so that every commit would invalidate existing URLs. We decided against it because we were doing Continuous Delivery, so multiple times a day we were deploying to the server, and we didn't want the communication overhead of having to notify everyone on each commit. We wanted to be able to merge feature branches into master and just know the CI server was going to handle deployment, and that unless we got alerts from CI, Pingdom or New Relic, everything was working as expected. The last thing we wanted to do was babysit the build so we could tell everyone "ok, the new build has deployed to staging, URLs will break so restart any clients you are in the middle of testing". Also, part of the reason we used REST (and json-api) was to decouple front and backend development; it seemed counterproductive to introduce coupling back into the process.
An internal team could still re-implement this whole mechanism and sign their own URLs. We could either generate random secrets every day or have a simple tool that we can use to change the secret at-will (probably both).
Even though it doesn't completely stop URL hard coding, another idea we had was to include an expiration timestamp in the query string and then HMAC the URL with the timestamp included. We would use the responses' Cache-Control max-age to calculate the time, which would allow the client to use any URLs from cached responses. If the client didn't implement proper cache invalidation the URLs would inexplicably break, thus having a nice side-effect of forcing the client to handle Cache-Control max-age properly.
Dan, there are lots of interesting ideas there. What I assumed from your first comment is that you'd calculate the HMAC using asymmetric encryption, which would mean that unless clients have the server's encryption key, they wouldn't be able to recalculate the HMAC, and that would effectively prevent them from attempting to 'guess' URLs instead of following links.
If an external client has the server's encryption key, you have a different sort of problem.
However, for internal clients, developers may actually be able to find the key in your source control repository, build server, or wherever else you keep it (depending on the size of your organisation). You can solve this with security measures, like ACL-based security on the crypto key itself. It's not hard, but it's something you may explicitly have to do.
When it comes to tying the URL to cache invalidation, that makes me a bit uneasy. While RESTful clients should follow links, they are allowed to bookmark links. It's a fundamental tenet of proper web design (not only REST) that cool URIs don't change. A client should be allowed to keep a particular URL around forever, and a service following Postel's law should keep honouring requests for that URL. As the RESTful Web Services Cookbook explains, if you move the resource, you should at least return a 301 (Moved Permanently) "or, in rare cases," issue a 410 (Gone) response.
As Dan said, you can use an HMAC to protect certain parameters in a URL from tampering. I actually formalized this approach in a .NET library I call Clavis, and which I first discussed in detail here.
Clavis just provides a simple framework for declaring a resource's parameter interface. Given that interface, Clavis provides a way to generate URLs when given a valid set of parameters, and also provides a way to safely parse values given a constructed URL. Arbitrary types can be specified as resource parameters, and there's very little boilerplate to convert to/from strings.
Parameters are protected from tampering by default, but you can declare that some are unprotected, meaning the client can change them without triggering a server validation error. You want this in some cases, for instance to support GET forms, like a search.
Late to the party, but I wonder how you will create a hypermedia form that generates http://foo.ploeh.dk/598CB0CAC30646E1BB768596BFE91F2C rather than http://foo.ploeh.dk/products/2345.
In the case of hyperlinks, I fully agree: linking to a product should not require a specifically structured identifier. However, hypermedia is more than links alone: what if the client wants to navigate to a product with a given tracking number, or a list of people with a given first name?
“Hackable URIs” are not a necessity for REST, but they are a potential/likely consequence of hypermedia forms. Such forms can be described with less expressive power if only simple string replacements are required instead of more complex transformations.
Ruben, thank you for writing. This is a commonly occurring requirement, particularly when searching for particular resources, and I usually solve it with URI templates, as outlined in the RESTful Web Services Cookbook. Essentially, it involves providing URI templates like http://foo.ploeh.dk/0D89C19343DA4ED985A492DA1A8CDC53/{term}, and making sure that each template is served like a link, with a particular relationship type. For example:
<home xmlns="http://fnaah.ploeh.dk/productcatalog/2013/05">
  <link-template href="http://foo.ploeh.dk/0D89C19343DA4ED985A492DA1A8CDC53/{term}"
                 rel="http://catalog.api.ploeh.dk/docs/rels/product/search" />
</home>
It isn't perfect, but the best I've been able to come up with, and it works fairly well in practice.
Sure, URI templates are a means for creating hypermedia controls. But the point is that what they generate are hackable URIs.
You still have a (semi-)hackable URI with http://foo.ploeh.dk/0D89C19343DA4ED985A492DA1A8CDC53/{term}. Even though the /0D89C19343DA4ED985A492DA1A8CDC53/ part is non-manipulable, terms can still be changed at the end. This shows that hackable URIs cannot entirely be avoided if we want to keep hypermedia controls simple. What else are hackable URIs but underspecified, out-of-band hypermedia controls?
An in-band IRI template makes the message follow the REST style by putting the control inside of the message, but it results in a hackable IRI. So, apparently, an HTTP REST API without hackable URIs is hard when there's more than just links.
Ruben, it's not that you can't avoid hackable URLs. URI templates are an optimisation that you can opt to apply, but you don't have to use them. As with so many other architecture decisions, it's a trade-off.
When considering RESTful API design, it often helps to ask yourself: how would a web site running in a browser work? You don't have to copy web site mechanics one-to-one (for one, browsers utilise only GET and POST), but you can often get inspiration.
A feature like search is, in my experience, one of the clearest cases for URI templates, so let's consider other designs. How does search work on a web site? It works by asking the user to type the search term into a form, and then submitting this form to the server.
You can do the same with a RESTful design. When a client wishes to search, it'll have to perform an HTTP POST request with a well-formed request body containing the search term(s):
POST /108B618F06644379879B9F2FEA1CAE92 HTTP/1.1
Content-Type: application/json
Accept: application/json

{ "term": "chocolate" }
The response can either contain the search hits, or a link to a resource containing the hits.
Both options have potential drawbacks. If you return the search results in the response to the POST request, the client immediately receives the results, but you've lost the ability to cache them.
HTTP/1.1 200 OK
Content-Type: application/json

{
  "term": "chocolate",
  "results": [
    {
      "name": "Valrhona",
      "link": {
        "href": "http://example.com/B69AD9193F6E472AB75D8EBD97331E22",
        "rel": "product"
      }
    },
    {
      "name": "Friis-Holm",
      "link": {
        "href": "http://example.com/3C4BEC72BFB54E1B8FADB16CCB3E7A67",
        "rel": "product"
      }
    }
  ]
}
On the other hand, if the POST response only replies with a link to the search result, the search results themselves are cacheable, but now the client has to perform an extra request.
HTTP/1.1 303 See Other
Location: http://example.com/0D25692DE79B4AF9A73F128554A9119E
In both cases, though, you'll need to document the valid representations that a client can POST to the search resource, which means that you need to consider backwards compatibility if you want to change things, and so on. In other words, you can 'protect' the URL schema of the search resource, but not the payload schema.
If you're already exposing an API over HTTPS, caching is of marginal relevance anyway, so you may decide to use the first POST option outlined above. Sometimes, however, performance is more important than transport security. As an example, I once wrote (for a client) a publicly available music catalogue API. This API isn't protected, because the data isn't secret, and it has to be fast. Resources are cached for 6 hours, so if you're searching for something popular (say, Rihanna), odds are that you're going to get a response from an intermediary cache. URI templates work well in this scenario, but it's a deliberate performance optimisation.
One can always avoid hackable URLs with trickery, but would you say that leads to a cleaner solution? I agree that search is “one of the clearest cases for URI templates”, so why bother putting in an unneeded POST request just to avoid hackable URIs? I fail to see why this would be a better REST design, given that you lose the beneficial properties of the uniform interface constraints by overloading the POST method for what is essentially a read-only operation. The trade-off here is entirely non-technical: you trade the performance of the interface for the aesthetics of its URLs.
Here's how a website would do it: it would have an HTML form for a GET request, which would naturally lead to a hackable URL. Because that's how HTML forms work: like URI templates, they result in hackable URLs. It's unavoidable. Hackable URLs are a direct consequence of common hypermedia controls, which are a necessity to realize the hypermedia constraint. You can swim against the current and make more difficult hypermedia controls, but to whose benefit? It's much easier to just accept that hackability is a side-effect of HTML forms and URI templates.
Therefore, your thesis that one should avoid hackable URIs breaks down at this point. APIs with non-hackable URIs are inherently incompatible with URI templates—and why throw away this excellent means of affordance? After all, the most straightforward way to realize search functionality is with an in-band URI template. And URI templates lead to hackable URIs. Hence, hackable URIs occur in REST APIs. You can work around this, but it's quite artificial to do so. In essence, you're using the POST request to obfuscate the URI; having such a translation step for purely aesthetic motives complicates the API and sacrifices caching for no good reason. Hackable URIs are definitely the lesser evil of the two.
So, would you avoid hackable URIs at any cost? Or can you agree that they indeed have a place if we like to keep using the hypermedia controls that have been standardized?
Ruben, my intent with this post was to highlight a problem I've experienced when publishing REST APIs, and that I've attempted to explain in the section titled Keep clients safe. I wish that problem didn't exist, but in my experience, it does. It most likely depends on a lot of factors, so if you don't have that problem, you can disregard the entire article.
The issue isn't that I want to make URLs opaque. The real issue at stake is that I want clients to follow links, because it gives me options to evolve the API. Consider the example in this article. If a client wants to see a user's messages, it should follow links. If, however, a client developer writes the code in such a way that it'll explicitly look for a root > users > user details > user messages route, little is gained.
A client looking for user messages should be implemented with a prioritised rule set:
- Look for a user-messages link. If found, follow it. Done.
- Look for a user-details link. If found, follow it, and apply this rule list again from the top.
- Look for a user link. If found, follow it, and apply this rule list again from the top.
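In client code, such a rule set could be sketched like this; Resource, FindLink and Follow are hypothetical helpers representing the client's hypermedia plumbing:

// Finds the user's messages by following links, preferring the most
// direct link available. Resource, FindLink and Follow are hypothetical.
public Resource GetUserMessages(Resource current)
{
    while (true)
    {
        var messages = FindLink(current, "user-messages");
        if (messages != null)
            return Follow(messages);

        var details = FindLink(current, "user-details");
        if (details != null)
        {
            current = Follow(details);
            continue;
        }

        var user = FindLink(current, "user");
        if (user != null)
        {
            current = Follow(user);
            continue;
        }

        throw new System.InvalidOperationException(
            "No link towards the user's messages was found.");
    }
}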
If client developers can guess, and thereby 'hack', my service's URLs, I can't restructure my API. It's not really a question of aesthetics; it's not that I want to be able to rename URLs. The real issue is that sometimes, when developing software, you discover that you've modelled a concept the wrong way. Perhaps you've conflated two concepts that ought to be treated as separate resources. Perhaps you've created a false distinction, but now you realise that two concepts you thought were separate are really the same. Therefore, sometimes you want to split one type of resource into two, or more, or you want to combine two or more different types of resources into one.
If clients follow links, you have a chance to pull off such radical restructuring. If clients have hard-coded URLs, you're out of luck.
Thank you for an interesting post and discussion. I am no expert wrt REST but I have a comment (posed as a question) and a question.
If client developers abuse transparent URLs (that are provided in a level 3 api) isn't the fact that the client stops working when the URLs change their problem? And sure, they can guess other URLs, just like in the browser we can guess other pages. Doesn't mean the application stage should let them access those pages etc.
And now the question. Are there any libraries or tools that allow REST client applications to easily get a list of the links and "follow the links" (i.e. do an API call)? I can see a client having slots for possible / expected links, filling those that exist, and calling them. Is there any library to help this?
Ashley, thank you for writing. In the end, as you point out, it's the client's problem if they don't adhere to the contract. In some scenarios, that's the end of the story. In other scenarios (e.g. the ones I often find myself in), there's a major client that has a lot of clout, and if those developers find that you've moved their cheese, their default reaction is that it's your fault - not theirs. In such cases, it can be political to ensure that there's no doubt about the contract.
I'm not aware of any libraries apart from HttpClient. It doesn't do any of the things you ask about, because it's too low-level, but at least it enables you to build a sensible client API on top of it.
Herding Code podcast
I'm on Herding Code.
At the Danish Developer Conference 2013 I had the pleasure of meeting Jon Galloway, who interviewed me for a Herding Code podcast. In this interview, we talk about AutoFixture, testing (or not testing) trivial code, as well as lots of the unit testing topics I also cover in my Pluralsight course on Advanced Unit Testing.
REST lesson learned: Avoid 204 responses
Avoid 204 responses if you're building a HATEOAS application.
This is a lesson about REST API design that I learned while building non-trivial REST APIs. In order to be as supportive of the client as possible, a REST API should not return 204 (No Content) responses.
From the service's perspective, a 204 (No Content) response may be a perfectly valid response to a POST, PUT or DELETE request. Particularly, for a DELETE request it seems very appropriate, because what else can you say?
However, from the perspective of a proper HATEOAS-aware client, a 204 response is problematic because there are no links to follow. When hypermedia acts as the engine of application state, when there are no links, there's no state. In other words, a 204 response throws away all application state.
If a client encounters a 204 response, it can either give up, go to the entry point of the API, or go back to the previous resource it visited. None of these options is particularly good.
Giving up is not a good option if there's still work to do. Essentially, this is equivalent to a crashing client.
Going to the entry point of the API may allow the client to move on, doing what it was doing, but state may still be lost.
Going back (the equivalent of using your browser's back button) may be the best option, but has a couple of problems: First, if the client just did a DELETE, the previous resource in the history may now be gone (it was just deleted). The client would have to go back twice to arrive at a proper resource. Second, while your browser has built-in history, a programmatic HTTP client probably hasn't. You could add that feature to your client, but it would require more work. Once more, it would require the client to maintain state, which means that you'd be moving state from hypermedia to the client. It's just not a HATEOAS-compliant approach.
A good REST API should make it easy to be a client. While this is only a variation of Postel's law, I also like to think of this in terms of courtesy. The service has a lot of information available to it, so it might as well be courteous and help the client by sharing appropriate pieces of information.
Instead of a 204 (No Content) response, tell the client what it can do now.
Responding to POST requests #
An HTTP POST request often represents some sort of Command - that is: an intent to produce side effects. If the service handles the request synchronously, it should return the result of invoking the Command.
A common POST action is to create a new resource. At the very least, the REST API should return a 201 (Created) with a proper Location header. That's all described in detail in the RESTful Web Services Cookbook.
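In ASP.NET Web API, for example, a sketch of that could look like this (the order types and the repository are hypothetical, and the route name assumes the default routing configuration):

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class OrdersController : ApiController
{
    // POST /api/orders: creates a new order and points the client at it.
    // OrderRequest, Order and this.repository are hypothetical.
    public HttpResponseMessage Post(OrderRequest request)
    {
        Order order = this.repository.Create(request);

        var response = this.Request.CreateResponse(HttpStatusCode.Created, order);
        // Tell the client where the newly created resource lives.
        response.Headers.Location = new Uri(
            this.Url.Link("DefaultApi", new { controller = "orders", id = order.Id }));
        return response;
    }
}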
Even if the API decides to handle the request asynchronously, it can provide one or more links. Actually, the specification of 202 (Accepted) says that the "entity returned with this response SHOULD include an indication of the request's current status and [...] a pointer to a status monitor". That sounds like a link to me.
Responding to PUT requests #
An HTTP PUT request is often intended to update the state of a particular resource. Instead of returning 204 (No Content), the API should be courteous and return the new state of the resource. You may think that this is redundant because the client just transferred the state of the resource to the service, so why pay transmission costs to get the resource back from the service?
Even if a client PUTs the entire state of a resource to the API, the API may still have information about the resource not available to the client. The most universal example is probably the resource's ETag. If the client wishes to do further processing on the resource, it's likely going to need the ETag value sooner or later.
The representation of the resource may also include optional or denormalized data that the client may not have. If that data is optional, the client can make a PUT without it, but it might want that data to display to a user, so it would have to read the latest state of the resource regardless.
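Continuing the ASP.NET Web API sketch, a PUT action could return the updated representation together with server-generated data such as the ETag (the domain types are again hypothetical):

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ProductsController : ApiController
{
    // PUT /api/products/1337: updates the product and returns its new
    // state, including server-generated data such as the ETag.
    // ProductRequest, Product and this.repository are hypothetical.
    public HttpResponseMessage Put(int id, ProductRequest request)
    {
        Product product = this.repository.Update(id, request);

        var response = this.Request.CreateResponse(HttpStatusCode.OK, product);
        response.Headers.ETag = new EntityTagHeaderValue(
            "\"" + product.Version + "\"");
        return response;
    }
}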
Responding to DELETE requests #
A DELETE request represents the intent to delete a resource. Thus, if the service successfully handles a DELETE request, what else can it do but return a 204 (No Content)? After all, the resource has just been removed.
A resource is often a member of a collection, or otherwise 'owned' by a container. As an example, http://foo.ploeh.dk/api/tags/rock represents a "rock" tag, but another way of looking at it is that the /rock resource is contained within the tags container (which is itself a resource). This should be familiar to Atom Pub users.
Imagine that you want to delete the http://foo.ploeh.dk/api/tags/rock resource. In order to accomplish that goal, you issue a DELETE request against it. If all your client gets back is a 204 (No Content), it's just lost its context. Where does it go from there? Unless you keep state on the client, you don't know where you came from.
Instead of returning 204 (No Content), the API should be helpful and suggest places to go. In this example I think one obvious link to provide is to http://foo.ploeh.dk/api/tags - the container from which the client just deleted a resource. Perhaps the client wishes to delete more resources, so that would be a helpful link.
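A sketch of what that could look like in ASP.NET Web API; the repository is hypothetical, and the rel value is only illustrative:

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class TagsController : ApiController
{
    // DELETE /api/tags/rock: deletes the tag, but instead of replying
    // 204 (No Content), suggests where the client can go next.
    public HttpResponseMessage Delete(string id)
    {
        this.repository.Delete(id);

        var body = new
        {
            links = new[]
            {
                new
                {
                    href = this.Url.Link("DefaultApi", new { controller = "tags" }),
                    rel = "http://foo.ploeh.dk/docs/rels/tags"
                }
            }
        };
        return this.Request.CreateResponse(HttpStatusCode.OK, body);
    }
}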
Responding to GET requests #
Returning 204 (No Content) as a response to a GET request is a bit weird, but probably not completely unheard of. Such a resource would represent a flag or semaphore, where the existence of the resource (as opposed to a 404 (Not Found)) signifies something. However, consider being helpful to the client and at least provide a link it can use to move on.
Summary #
Technically, responding with 204 (No Content) is perfectly valid, but if a REST API does that, its clients lose their current context. A service should be helpful and inform clients where they can go from the current resource. This will make client development easier, and only puts a small burden on the service.
Comments
"The server has fulfilled the request but does not need to return an entity-body, and might want to return updated metainformation. The response MAY include new or updated metainformation in the form of entity-headers, which if present SHOULD be associated with the requested variant."

So you can still have a response without an entity body but containing meta information, maintaining HATEOAS.
Xander, thank you for writing. If you read this article carefully, you may notice that it's not an exegesis of the HTTP specification, but a guideline to writing pleasant and useful RESTful services.
The bottom line is that if you're a client developer trying to follow links, 204 responses are annoying, which isn't the same as to say that they're invalid (they're not).
Could the service put links in the headers? Perhaps it could, but then it'd be forcing the client developer to do more work (by looking after links in two places).
Forcing client developers to do more work scales poorly (in terms of effort, not performance). A service developer needs to do a particular piece of work once, after which it's available to all client developers (even if there are hundreds of clients).
If a service doesn't provide a consistent API, every client developer has to duplicate the work the service developer could have done once. Every time that happens, you risk introducing bugs (on the client side). The end result is a more brittle system.
In the end, this has nothing to do with HTTP, but with Postel's law.
Mark, I understand why you might say that about link headers, but not all media types are hypermedia (image/png, audio/mp4, video/ogg to name a few) whereas all HTTP header sections are. Given that, wouldn't it be wise for a client to always maintain awareness of protocol-level linking and, when a media type offers links, maintain awareness of that as well?
Since you mentioned client/server development, it's worth considering that once a given domain (be it 'people', 'autos', 'nutrition', etc.) has become cemented with standard media types and link relations, there would probably be a community effort to develop client and server libraries and applications for varying languages and platforms so that each development team doesn't have to write and test their own implementations from scratch (see: HTML browsers and frameworks.)
It makes sense that the 'World Wide Web' has become standard and RESTful; everyone is interested in the domain of general-purpose knowledge and entertainment. Unfortunately I think it will still be a while before we see many other, more specific domains become standardized to that same degree.
Derek, thank you for writing. It's a good point that some media types like images, audio, etc. don't have a natural place to put meta-data like links. Conceivably, that might even include vendor-specific media types where, for some reason or other, links don't fit.
In such cases, I think that links in headers sounds preferable to no links at all. In the presence of such media types, my argument against links in headers no longer holds.
Once more, we've learned that no advice is independent of context. In the context of a RESTful API where such media types may occur, clients would have to be aware that links can appear in headers as well as in the body of any document. In such APIs, 204 responses may be perfectly fine. Here, the extra work of looking two places (both in the body, and in headers) is warranted.
In contexts of APIs where such media types don't occur, clients have no particular reason to expect links in headers. In this case, I still maintain that 204 responses put an unwarranted burden on client developers.
So far, I've only been involved with APIs without such special media types, so my initial advice was obviously coloured by my experience. Thus, I appreciate the alternative perspective.
REST lesson learned: Avoid user-supplied data in URI segments
Be careful with user-supplied data in URI segments.
This is a lesson about design of REST APIs that I learned the hard way: if you build service URIs from dynamic data, be careful with the data you allow in URI segments.
Some HTTP frameworks (e.g. the ASP.NET Web API) let you define URI templates or routes that handle incoming requests - e.g.
routeTemplate: "api/{controller}/{id}"
which means that a request to http://foo.ploeh.dk/api/fnaah/sgryt would map the value "fnaah" to controller, and the value "sgryt" to id.
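In ASP.NET Web API, such a route is typically registered along these lines:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Requests to /api/{controller}/{id} are dispatched by convention;
        // e.g. /api/fnaah/sgryt maps "fnaah" to the controller and "sgryt" to id.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}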
If you are building a level 2 or less service, you may even publish your URI templates to consumers. If you are building a level 3 RESTful API, you may not be publishing your URI templates, but you can still use them internally.
Avoid user-supplied data in URI segments #
Be careful with the source of data you allow to populate URI segments of your URI template. At one point, I was involved with designing a REST API that (among other things) included a 'tag cloud' feature. If you wanted to see the contents of a specific tag, you could navigate to it.
Tags were all user-defined strings, and they had no internal ID, so our first attempt was to simply treat the value of the tag as the ID. That seemed reasonable, because we wanted the tag resource to list all the resources with that particular tag value. Thus, we modeled the URI template for tag resources like the above route template.
That worked well for a URL like http://foo.ploeh.dk/api/tags/rock because it would simply match the value "rock" to the id variable, and we could then list all resources tagged with "rock".

However, some user had defined a tag with the value of "sticky & sweet" (notice the ampersand character), which meant that when you wanted to see all resources with this tag, you would have to navigate to http://foo.ploeh.dk/api/tags/sticky & sweet. However, that sort of URL is considered dangerous, and IIS will (by default) refuse to handle it.
Can you get around this by URL encoding the value? No, it's part of the request path, not part of any query string, so that's not going to work. The issue isn't that the URL is invalid, but that the server considers it to be dangerous. Even if you URL encode it, the server will decode it before handling it, and that would leave you at square one.
One alternative is to change the URI template so that the URL instead becomes http://foo.ploeh.dk/api/tags?id=sticky%20%26%20sweet. This URL encodes the value in the query string (the part of the URL that comes after the ?), but gives you an ugly URL.
Another alternative is to be very strict about input validation, and only allow users to create values that are safe when used as URI segments. However, that's putting an unreasonable technical limitation on an application feature. If a user wants to tag a resource with "sticky & sweet", the service should allow it.
In the end, we used a third alternative: we assigned an internal ID to all tags and mapped back and forth so that the URL for the "sticky & sweet" tag became http://foo.ploeh.dk/api/tags/1234. Yes: that makes it impossible to guess the URL, but we were building a level 3 RESTful API, so clients are expected to follow links - not guess the URL.
REST lessons learned
This post provides an overview of some lessons I learned while building non-trivial REST APIs.
Last year I spent a good deal of the year designing and implementing a handful of non-trivial REST APIs for a customer of mine. During that process, I learned some small lessons about the design of RESTful systems that I haven't seen described elsewhere, and I want to share these lessons with you.
In order to learn the concepts and philosophy behind REST, I think that REST in Practice is a great resource (pun intended), but when it comes to practical guidance, I find the RESTful Web Services Cookbook invaluable. It's full of useful and concrete tips and tricks for building RESTful APIs, but I don't remember reading about the following lessons, which I had to learn the hard way.
There's so much hype and misrepresentation about REST that I have to point out that when I'm talking about REST, I mean full-on, level 3 REST, with resources, verbs, hypermedia controls and the works.
Each of these lessons deserves a small article of its own, but here's an overview:
- Avoid user-supplied data in URI segments
- Avoid 204 responses
- Avoid hackable URLs
- Consider a home link on all resources
- Consider a self link on all resources
- Consider including identity in URLs
I hope you find these tips useful.
How to change the Generate Property Stub refactoring code snippet in Visual Studio 2012
This post describes how to change the code template for the 'Generate Property Stub' refactoring in Visual Studio 2012.
This is mostly a quick note for my own benefit (since I just spent half an hour chasing this down), but I'm posting it here on the blog, in case someone else might find it useful. Here's how to change the code snippet for the "Generate Property Stub" smart tag in Visual Studio 2012:
The template for this code is defined in
C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC#\Snippets\1033\Refactoring\GenerateProperty - Auto Property.snippet
You can open this file and hand-edit it, but contrary to previous versions of Visual Studio, it seems that you have to restart Visual Studio before the change takes effect.
If you do this, be sure that you know what you're doing; you'd be mucking around with the internals of your Visual Studio installation.
Advanced Unit Testing Pluralsight course
Service announcement: my new Pluralsight course Advanced Unit Testing is now available. Read more in Pluralsight's announcement, or go straight to the course.
If you don't already have a Pluralsight account, you can get a free trial of up to 200 minutes.
Comments
Another big issue with having tests in your production code is that you will drag along dependencies on test libraries. You mention this in the second issue above, but I think it would be a good idea to state it more explicitly.
Could you share your opinion on bundling test projects with their target projects in source control repositories? I use Git, but I would imagine this is applicable in many source control implementations.
I currently keep my tests in their own projects, as you generally recommend here, and also track them in their own Git repository, separately from the tested project. However, I'm finding that when I create a new branch to add a new feature in the project, I create the same branch in the test project. This is critical especially in situations where the new behavior necessarily causes existing tests to fail. If I forget to create a new branch in the test project to match the new branch in the project under test, it can sometimes lead to annoying clean-up work that was preventable.
I am beginning to question why I'm not combining the test project and the tested project in the same Git repository, so that when I branch out on a new idea, I only need to do it once, and my tests are forever in sync with my tested code. I haven't come across a scenario where that would cause problems - can you think of any compelling reason not to keep the projects in the same source control repository?
Jeff, thank you for writing.
If I were to exaggerate a bit, I would claim that I've used Test-Driven Development (TDD) longer than I've used source control systems, so it has never occurred to me to keep tests in a separate repository. It sounds really... inconvenient, but I've never tried it, so I don't know if it's a good or bad idea. Yet, in all that time, I've never had any problems keeping tests and production code in the same version control repository.
With TDD, the tests are such an integral part of the code base that I wouldn't like to experiment with keeping them in a different repository. Additionally, when you use a test suite as a regression test suite, it's important that you version tests and production code together. This also means tagging them together.
If you take a look at the various code bases I maintain on GitHub, you will also see that all of them include unit tests as part of each repository.