When can we consider the code we write “good enough”? For some, a passing build is good enough. For others, perhaps it’s a passing acceptance suite. Maybe it’s the existence of unit tests that defines completion, with a specific code coverage percentage. These options are all quantifiable, which makes them simple targets to aspire to. We can go further than that and push for something less tangible but more worthwhile. All apprentices at 8th Light undertake a project called the HTTP Server. It’s no small undertaking, and consequently brings up numerous occasions to question whether the code you have written is “good enough”.

Aim for configuration

Development of the server is guided by pre-written FitNesse acceptance tests. These tests provide realistic user scenarios that try to persuade the apprentice’s server to comply with the HTTP/1.1 specification. My use of the word “persuade” is deliberate, since the tests can only check that the final result is correct, not how the result is produced. For example:

“When I navigate to /redirect, I should be redirected to the homepage”.

This will test that the appropriate HTTP status code is returned in the response, and that the response header includes a “Location” field. We have options here. We could hard-code a response inside an if statement:

if (request.getPath().equals("/redirect")) {
  return new Response(302);
}
This approach could push us towards a long if/else chain, which is a recipe for violating the SOLID principles. If we want to change this behaviour, we would need to reopen our server to modify it, violating the Open/Closed Principle. This also implies a violation of the Single Responsibility Principle, because our server class is now responsible for at least routing requests and constructing responses.

Not only do we violate some principles here, but we also tie our server to the requirements of our acceptance suite. This may be obvious to some readers, but our preference should be to produce a generic server “component” that is not tied to one specific use case.

It can be difficult to see where the divide between a generic component and the specific use case should lie. When we’re struggling to find that perfect split, one of the best sources of inspiration is - as with many things - similar projects. In this case, open source servers show us where a user is expected to start configuring. In producing my server, I looked at some familiar servers like Sinatra and Express. These are both well-established projects with great documentation that explains in detail how to configure the server to your requirements.

Sinatra, a Domain Specific Language for quickly creating web applications in Ruby, gives an example for getting started:

# myapp.rb
require 'sinatra'

get '/' do
  'Hello world!'
end
The documentation explains that “routes” are an HTTP method paired with a URL-matching pattern, each route being associated with a block of code. Routes are matched in the order they are defined, and the first match is invoked. The documentation goes on to explain how static files can be handled, how views can be rendered, and various other more complex topics.
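That first-match rule is worth pausing on, because it is a design decision we could borrow. A minimal sketch of the idea (the names `OrderedRoutes`, `get` and `dispatch` are hypothetical, not Sinatra’s internals):

```javascript
// Sinatra-style first-match routing: routes are tried in the order
// they were defined, and the first pattern that matches wins.
class OrderedRoutes {
  constructor() { this.routes = []; }
  get(pattern, handler) { this.routes.push({ pattern, handler }); }
  dispatch(path) {
    for (const { pattern, handler } of this.routes) {
      if (pattern.test(path)) return handler(path);
    }
    return '404 Not Found';
  }
}

const app = new OrderedRoutes();
app.get(/^\/hello\/world$/, () => 'specific route');
app.get(/^\/hello\/.*$/, () => 'catch-all route');

app.dispatch('/hello/world'); // → 'specific route' (defined first, so it wins)
app.dispatch('/hello/there'); // → 'catch-all route'
```

Definition order becomes part of the configuration: specific routes go before catch-alls, and the server itself needs no special-case logic.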

Express.js is uncannily similar:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});
We can ignore the rest of the documentation for now, as the code snippets we already have tell us a lot. They suggest that users will want to configure the responses to requests through methods/functions. This seems to be a common pattern for defining this sort of behaviour (it’s very similar in Phoenix, Play and Spark).

Defining routing behaviour through methods/functions helps us find a good point to create this split, but we are not in the clear yet. The server needs to do more than just routing. The acceptance tests list some of these additional behaviours. Again, we need to decide what should and shouldn’t be configurable.

We are in a fortunate position where all the requirements of our server are available upfront through the acceptance tests. Interpreting the implications of those requirements correctly can be hard, but it becomes easier as more features are delivered. We shouldn’t worry about making the perfect architectural decision up front; that is rarely possible. As with many things in programming, the best approach is to follow the Scout Rule:

“Always leave the campsite cleaner than you found it.”

This ranges from refactoring towards a better abstraction to giving something a better name.

Naming for the domain

Naming is widely regarded as one of the hardest things about programming. Even outside of programming it is hard, so much so that some people choose to outsource it.

It would be reasonable to think that naming in the domain of HTTP and web servers is easy. After all, people have been using web servers for over 25 years, so the names used should be pretty familiar. Consider, for example, the following question:

What word should we use to describe /users in the request line GET /users HTTP/1.1?

Here are five options that could be suitable:

  • Path
  • Route
  • Action
  • URI
  • Resource

They are all appropriate, some more so than others. “Action” has connotations with just about everything, and is only commonly used in HTML forms. “URI” might only be valid when it is specifically formatted.

As with structure, open source projects can offer insight into naming. Server creation has been solved many times, and consequently so has naming that is unambiguous and specific. But without leaning on the community, how can we improve our naming? What can we do to check that the names we assign are “good enough”?

Perhaps we can ask ourselves some questions about them:

  1. Is the name accurate? Does it represent the object, or describe the behaviour, well?
  2. If something changed in the function or object, would the name be forced to change?
  3. Is there a more domain-appropriate name for the same thing?
  4. Long names are harder to read; can we shorten the name?
  5. Does anything else have a similar name? If we refactor, will we need to change the name based on its new use?
  6. For injected objects, does the object name match its parameter name in the function? If not, what does this tell us about the name?

This list is not exhaustive, but forms a basis for continued improvement. The more questions we ask of our names, the more robust they become.


Let’s assume that ground zero for our code is a passing build, exhaustive unit test coverage and appropriate acceptance tests. What can we do to move from here to a position where we are comfortable calling it “good enough”?

Truth is, I don’t know the answer. Why not ask your code that question?

Not the answer you were looking for? Let’s ask ourselves how many times we have moved on from a story as soon as it’s delivered. Could and should we spend more time analysing that work? Committing to this post-analysis prepares us for the toughest questioners: new features. New features question the structure of our projects. They can result in anything from a small code addition to a bout of refactoring, or even a thorough redesign. Good software is flexible in the face of change. Perfectly flexible software should not be the aim, though, since flexibility comes packaged with complexity. The aim for “good enough” is to have answers to your own questions before the feature asks them.

In naming, the examiners are other developers and maintainers. They are the people for whom good names are written. As with structure, our names should answer our own questions first, before answering theirs. As craftsmen, we take pride in our work and our naming, but even if we don’t, we can all rest assured:

The person that ends up maintaining your code won’t be your violent psychopathic neighbour… will they?