I love the idea of “collective ownership” in a development project. I love the idea that in a development team, “everyone is an architect”. My problem is with the cut-and-dried “Agile” definition of these concepts.
- Block out the overall process, step by step, in comments.
- For any complex step (more than five or ten lines of code), replace the comment with a clearly named method or function call, and create a stub method/function.
- Replace comments with equivalent logging statements.
- Implement functionality.
- Give all functions, methods, classes, parameters, properties, and variables clear, concise names, so that the code ends up in some semblance of readable English.
- Use thorough sanity checking, by means of assertions or simple if blocks. When using if blocks, include logging for any failed checks, including what was expected and what was found. These should be warnings.
- Include logging in any error/exception handling code. Log these as errors if recoverable, or as fatal if not. This is all too often the only logging a developer includes! A rough sketch of the resulting style follows this list.
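As an illustration of where that process leaves you, here is a minimal sketch in Java. The task, the class, and every name in it are invented for the example, and the logging calls assume an SLF4J-style logger:

```java
import java.nio.file.Path;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical example: the overall process was blocked out in comments first,
// complex steps then became clearly named (initially stubbed) methods, and the
// remaining comments became logging statements.
public class UserImporter {
    private static final Logger log = LoggerFactory.getLogger(UserImporter.class);

    public void importUsers(Path csvFile) {
        log.info("Importing users from {}", csvFile);
        List<String> records = parseCsv(csvFile);           // was: "parse the CSV file"

        // Sanity check as a warning: an empty file is suspicious, not fatal.
        if (records.isEmpty()) {
            log.warn("Expected at least one record in {}, found none", csvFile);
        }

        log.info("Validating {} records", records.size());
        List<String> valid = validateRecords(records);       // was: "validate each record"

        try {
            saveRecords(valid);                               // was: "save to the database"
        } catch (RuntimeException e) {
            // Recoverable here, so log it as an error rather than letting it propagate.
            log.error("Failed to save {} records from {}", valid.size(), csvFile, e);
        }
    }

    // Stubs created while blocking out the process; implemented afterwards.
    private List<String> parseCsv(Path csvFile)           { return List.of(); }
    private List<String> validateRecords(List<String> r)  { return r; }
    private void saveRecords(List<String> r)               { }
}
```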
A huge problem I see with responsive/adaptive design today is that, all too often, it treats “small viewport” and “mobile” as being synonymous, when the two concepts are orthogonal. A mobile device can have a high-resolution display, just as a desktop user can have a small display, or just a small browser window.
Responsive designs need to target viewport size, and nothing more. It’s not mobile, it’s a small display. Repeat that to yourself about a thousand times.
Assertions and unit tests are all well and good, but they’re too narrow-minded in my eyes. Unit tests are great for, well, testing small units of code to ensure they meet the basic requirements of a software contract – maybe a couple of typical cases, a couple of edge cases, and then additional cases as bugs arise and new test cases are created for them. No matter how many cases you create, however, you’ll never have a test case for every possible scenario.
Assertions are excellent for testing in situ; you can ensure that unacceptable values aren’t given to or by a piece of code, even in production (though there is a performance penalty to enabling assertions in production, of course). I think assertions are excellent, but not specific enough: any assertion that fails is automatically a fatal error, which is great, unless it’s not really a fatal error.
That’s where the concepts of assumptions and expectations come in. What assertions and unit tests really do is test assumptions and expectations. A unit test asks “does this code behave correctly when given this data, all assumptions considered?” An assertion says “this code assumes this thing, and will not behave correctly if it gets another, so throw an error.”
When documenting an API, it’s important to document assumptions and expectations, so users of the API know how to work with your code. Before I go any further, let me define what I mean by these very similar terms: to me, code that assumes something operates as if its assumptions are correct, and will likely fail if its assumptions turn out to be incorrect. Code that expects something operates as if its expectations are met, but will likely still operate correctly even if they aren’t. It’s not guaranteed to work, or guaranteed to fail; it’s likely to work, but someone should probably know about it and look into it.
Therein lies the rub: these are basically two types of assertions, one fatal, one not. What we need is an assertion framework that allows for warning-level assertion failures. What’s more, we need an assertion framework that is performant enough to be regularly enabled in production.
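A minimal sketch of what the core of such a framework might look like, assuming an SLF4J-style logger; the Checks class and its method names are my own invention for illustration, not an existing library:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical two-level assertion helper: fatal assumptions, non-fatal expectations.
public final class Checks {
    private static final Logger log = LoggerFactory.getLogger(Checks.class);

    // Fatal: the code cannot behave correctly if this condition is false.
    public static void assume(boolean condition, String message) {
        if (!condition) {
            log.error("Failed assumption: {}", message);
            throw new AssertionError(message);   // fail fast, even in production
        }
    }

    // Non-fatal: the code should still work, but someone ought to look into it.
    public static void expect(boolean condition, String message) {
        if (!condition) {
            log.warn("Failed expectation: {}", message);
            // this is also the natural place to notify the development team
        }
    }

    private Checks() { }
}
```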
So, any code that’s happily humming along in production, that says:
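(the original snippet isn’t reproduced here; using the hypothetical Checks helper sketched above, it might look like this)

```java
Checks.assume(percentage >= 0 && percentage <= 100,
        "percentage must be between 0 and 100, was " + percentage);
```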
will fail immediately if percentage is outside those bounds. It’s assuming that percentage is between zero and one hundred, and if that assumption is wrong, it will likely fail. Since it’s always better to fail fast, any case where percentage is outside that range should trigger a fatal error – preferably even if it’s running in production.
On the other hand, code that says:
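(again using the hypothetical helper from above)

```java
Checks.expect(numRows < 1000,
        "expected fewer than 1000 rows, got " + numRows);
```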
will trigger a warning if numRows is over a thousand. It expects numRows to be under a thousand; if it isn’t, the code can still complete correctly, but it may take longer than normal or use more memory than normal, or receiving that many rows may simply indicate that something is amiss with the query that produced them, or with the dataset they originally came from. It’s not a critical failure, but it is cause for investigation.
Any assumption or expectation that fails should of course be automatically and immediately reported to the development team for investigation. Naturally a failed assumption, being fatal, should take priority over a failed expectation, which is recoverable.
This not only provides greater flexibility than a simple assertion framework; it also makes code more explicitly self-documenting.
Source code tends to follow the second law of thermodynamics, with some small differences. In software, as in thermodynamics, systems tend toward entropy: as you continue to develop an application, the source will increase in complexity. In software, as well as in thermodynamics, connected systems tend toward equilibrium: in development, this is known as the “broken windows” theory, and is generally considered to mean that bad code begets bad code. People often discount the fact that good code also begets good code, but this effect is often hidden by the fact that the overall system, as mentioned earlier, tends toward entropy. That means that the effect of broken windows is magnified, and the effect of good examples is diminished.
In thermodynamics, Maxwell’s demon is purely a thought experiment; it cannot actually be built. In software development, however, we’re in luck: any developer can play the demon, and should, at every available opportunity.
Maxwell’s demon stands between two connected systems, defeating the second law of thermodynamics by selectively allowing less-energetic particles through in only one direction, and more-energetic particles through in only the other, so that the two systems tend toward opposite ends of the temperature spectrum rather than naturally tending toward equilibrium.
By doing peer reviews, you’re doing exactly that; you’re reducing the natural entropy in the system and preventing it from reaching its natural equilibrium by only letting the good code through, and keeping the bad code out. Over time, rather than tending toward a system where all code is average, you tend toward a system where all code is at the lowest end of the entropic spectrum.
Refactoring serves a similar, but more active role; rather than simply “only letting the good code through”, you’re actively seeking out the worse code and bringing it to a level that makes it acceptable to the demon. In effect, you’re reducing the overall entropy of the system.
If you combine these two effects, you can achieve clean, efficient, effective source. If your review process only allows code through that is as good or better than the average, and your refactoring process is constantly improving the average, then your final code will, over time, tend toward excellence.
Without a demon, any project will be on a continuous slide toward greater and greater entropy. If you’re on a development project, and it doesn’t have a demon, it needs one. Why not you?
Having my concept complete (see Part 0) and my simple test case working (see Part 1), I was ready to start on my moderate-complexity test case. This would use more features than the simple test case, and more pages. I really didn’t want to have to build a complete site just for the proof of concept, so I decided to use an existing site, and I happened to have one handy: rogue-technologies.com.
The site is currently built in HTML5, using server-side includes for all of the content that remains the same between pages. It seemed like a pretty straightforward process to convert this to my template engine, so I got to work. I started with one page (the home page) and turned it into the master template. I took all of the include directives and replaced them with the content they were including. I replaced all of the variable references with model references, using injection or substitution. I added IDs to all the elements in the master template that would need to be replaced by child templates. I then made another copy of the homepage and set it up to derive from the master template.
I didn’t want to convert the site to use servlets, since it wasn’t really a dynamic site; I just wanted to be able to generate usable HTML from my template files. So I created a new class that would walk a directory, parse the templates, and write the output to files in an output directory. Initially, it set up the model data statically by hand at the start of execution.
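A rough sketch of what that generator class might have looked like; the directory names and the TemplateEngine.render() entry point are assumptions for illustration, not the prototype’s actual API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: walk a source tree, render each template against the model,
// and write the resulting HTML to an output directory.
public class SiteGenerator {

    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get("site-src");
        Path outputDir = Paths.get("site-out");
        Map<String, Object> model = Map.of("siteName", "rogue-technologies.com"); // initially hard-coded

        List<Path> templates;
        try (Stream<Path> walk = Files.walk(sourceDir)) {
            templates = walk.filter(p -> p.toString().endsWith(".html"))
                            .collect(Collectors.toList());
        }

        for (Path template : templates) {
            String rendered = TemplateEngine.render(template, model);   // assumed engine entry point
            Path out = outputDir.resolve(sourceDir.relativize(template));
            Files.createDirectories(out.getParent());
            Files.writeString(out, rendered);
        }
    }
}
```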
All was well, but I needed a way for the child template to add elements to the page, rather than just replacing elements from the parent template. I came up with the idea of appending elements using a data attribute, data-z-append="before:" or "after:", to cause an element to be appended to the page either before or after the element from the parent with the specified ID. This worked perfectly, allowing me to add the Google Webmaster Tools meta tag to the homepage.
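The prototype’s actual parsing code isn’t shown here, but the append logic might look roughly like this, assuming a jsoup-style DOM; the attribute name matches the one above, while everything else is illustrative:

```java
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Applies data-z-append="before:<id>" / "after:<id>" directives from a child template
// by inserting the marked element before or after the matching element in the parent.
static void applyAppends(Document parent, Document child) {
    for (Element el : child.select("[data-z-append]")) {
        String[] directive = el.attr("data-z-append").split(":", 2);  // e.g. "after:site-meta"
        Element anchor = parent.getElementById(directive[1]);
        if (anchor == null) {
            continue;  // no such ID in the parent; a warning would be logged here
        }
        if ("before".equals(directive[0])) {
            anchor.before(el.clone());
        } else {
            anchor.after(el.clone());
        }
    }
}
```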
With this done, I set to work converting the remaining pages. Most of the pages were pretty straightforward, being handled just like the homepage; I dumped the SSI directives, added some appropriate IDs and classes, and all was well. However, the software pages presented some challenges. For one thing, they used a different footer than the rest of the site. It was time to put nested derivation to the test.
I created a software page template, derived from the master template, that appended the additional footer content. I then had the software pages derive from this template instead of the master template, and – by some stroke of luck – it worked perfectly on the first try. I wasn’t out of the woods yet, though.
The software pages also used SSI directives to dynamically insert the file size for downloadable files next to the links to download them. I wasn’t going to reimplement this functionality; instead, I was prepared to replace these directives with file-size data stored in the model. But I wanted to keep the model data organized, so I needed to support nesting. The software pages also used include directives to include a Google+ widget on the pages; this couldn’t be added to the template, as it was embedded in the body content, so it seemed like a perfect case for snippets – which meant I needed to implement snippet support.
Snippet support was pretty easy – find the data attribute, look up the snippet file, parse it as an HTML fragment, and replace the placeholder element with the snippet. Easy to implement, worked pretty well.
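In code, that might amount to something like the following, again assuming a jsoup-style DOM; the data-z-snippet attribute name and the file layout are guesses for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Replaces each placeholder element with the parsed contents of its snippet file.
static void applySnippets(Document page, Path snippetDir) throws IOException {
    for (Element placeholder : page.select("[data-z-snippet]")) {
        String name = placeholder.attr("data-z-snippet");                  // e.g. "google-plus"
        String html = Files.readString(snippetDir.resolve(name + ".html"));
        Element snippet = Jsoup.parseBodyFragment(html).body().child(0);   // assume one root element
        placeholder.replaceWith(snippet);
    }
}
```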
I thought nested properties would be a breeze, as I had assumed they were natively supported by StrSubstitutor. Unfortunately they weren’t, so I had to write my own StrLookup. I decided that, since I was already doing some complex property lookups for injection, I’d build a unified model lookup class that could act as my StrLookup and could be used elsewhere. I also wanted nested scope support for my project list: each project had an entry in the model consisting of a name, latest version, and so on. I wanted the engine to iterate over this list and, for each entry, rather than replacing the entire content of the element with the text value of the model entry, find sub-elements and replace each with the appropriate property of the model entry. This meant I needed nested scoping support.
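A sketch of that unified lookup, resolving dot-separated keys against nested maps in the model; the key format and the class name are assumptions, but StrLookup and StrSubstitutor are the Commons Lang classes mentioned above:

```java
import java.util.Map;
import org.apache.commons.lang3.text.StrLookup;
import org.apache.commons.lang3.text.StrSubstitutor;

// Resolves keys like "project.version" by walking nested maps in the model.
public class ModelLookup extends StrLookup<Object> {
    private final Map<String, Object> model;

    public ModelLookup(Map<String, Object> model) {
        this.model = model;
    }

    @Override
    public String lookup(String key) {
        Object current = model;
        for (String part : key.split("\\.")) {
            if (!(current instanceof Map)) {
                return null;                 // unresolved; StrSubstitutor leaves the reference alone
            }
            current = ((Map<?, ?>) current).get(part);
        }
        return current == null ? null : current.toString();
    }
}

// Usage:
// StrSubstitutor sub = new StrSubstitutor(new ModelLookup(model));
// String output = sub.replace(templateText);
```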
I implemented this using a scope stack and a recursive lookup. Basically, every time a nested scope was entered (e.g., content injection using an object or map, or iteration over a list), I would push the current scope onto the stack. When the nested scope was exited (i.e., the end of the element that added the scope), I popped the scope off. When iterating a loop, at the start of the iteration, I’d push the current index, and at the end of the iteration, I’d pop it back off.
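Conceptually, the scope handling might be sketched like this; the class is invented for illustration, but the push/pop behavior follows the description above:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

// Hypothetical nested-scope stack: lookups consult the innermost scope first,
// falling back to outer scopes (and ultimately the root model).
public class ScopeStack {
    private final Deque<Object> scopes = new ArrayDeque<>();

    public void push(Object scope) { scopes.push(scope); }   // entering injection or an iteration step
    public void pop()              { scopes.pop(); }         // leaving the element that added the scope

    public Object resolve(String name) {
        for (Object scope : scopes) {                         // ArrayDeque iterates newest-first
            if (scope instanceof Map && ((Map<?, ?>) scope).containsKey(name)) {
                return ((Map<?, ?>) scope).get(name);
            }
        }
        return null;
    }
}
```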
This turned out to be very complex to implement, but after some trial and error, I got it working correctly. I then re-tested against my simple test case, having to fix a couple of minor defects introduced there with the new changes. But, at last, both my simple and moderate test cases were working.
I didn’t like the static creation of model data – not very flexible at all – so I decided to swap it out for JSON processing. This introduced a couple of minor bugs, but it wasn’t all that difficult to get everything working. The main downside was that it added several additional dependencies, and dependency management was getting more difficult. I wasn’t too concerned on that front, though, since I was already planning for the real product to use Maven for dependency tracking; I was just beginning to wish I had used Maven for the prototype as well. Oh well, a lesson for next time. For now, I was ready for my complex test case – I just had to decide what to use.
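The JSON library isn’t named above, but loading the model could be as simple as something like this, assuming a Jackson-style mapper:

```java
import java.io.File;
import java.io.IOException;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

// Loads the template model from a JSON file instead of building it in code;
// nested JSON objects and arrays become nested Maps and Lists.
public class ModelLoader {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> load(File modelFile) throws IOException {
        return new ObjectMapper().readValue(modelFile, Map.class);
    }
}
```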