My Present Setup

I thought I’d take a quick moment to lay out my current setup. It’s not perfect, it’s not top-of-the-line (nor was it when any of the parts were purchased), it’s not extravagant, but I find it extremely effective for the way I work.

The Machine (DIY Chronos Mark IV):

  • Intel Core i5-750 (LGA1156), overclocked from 2.66GHz to 3.2GHz
  • ASRock P55 Extreme
  • 8GB DDR3 from GSkill
  • ATi Radeon HD 5870
  • 256GB Crucial m4 SSD (SATA3) – OS, applications, caches & pagefile
  • 2 x 1TB Seagate HDD – one data drive, one backup drive
  • Plextor DVD-RW with LightScribe
I find this configuration plenty performant for most of my needs. The only thing that would prompt an upgrade at this point would be if I started needing to run multiple VMs simultaneously on a regular basis. The GPU is enough to play my games of choice (League of Legends, StarCraft 2, Total War) full-screen, high-quality, with no lag. The SSD keeps everything feeling snappy, and the data drive has plenty of space for projects, documents, and media. The second 1TB drive is set up in Windows Backup to take nightly backups of both the primary and data drives.
My interface to it:
  • Logitech G9x mouse (wired)
  • Microsoft Natural Ergonomic Keyboard 4000 (wired)
  • 2 x Dell U2412M 24″ IPS LCD @ 1920×1200
  • Behringer MS16 monitor speakers
If you couldn’t tell, I have a strong preference for wired peripherals. This is a desktop machine; it doesn’t go anywhere. Wireless keyboards I find particularly baffling for anything other than an HTPC setup: the keyboard doesn’t move, so why would I keep feeding it batteries for no benefit? The mouse is an excellent performer, and I love the switchable click/free scroll wheel (though I wish the toggle button weren’t on the bottom).
The displays are brilliant and beautiful, and they’re low-power. I definitely appreciate the extra few rows from 1920×1200 over standard 1080p, and having two of them suits my workflow extremely well: I tend to keep what I’m actively working on on one screen, while the other holds some combination of reference materials, research, communications (chat, etc.), and testing for whatever I’m working on. Particularly when working with web applications, it’s extremely helpful to have code on one screen and the browser on the other, so you can make a change and refresh the page to view it without having to swap around. These are mounted on an articulated dual-arm mount to keep them up high (I’m 6’6″, making ergonomics a significant challenge) and to free up a tremendous amount of desk space – more than you’d think until you do it.
The Behringers are absolutely fantastic speakers. I love them to death, and I think I need to replace them. I recently rearranged my desk, and since hooking everything back up, the speakers have had a constant drone whenever they’re turned on, even with the volume all the way down. I’ve swapped cables and fiddled with knobs, and I still haven’t found the cause.
The network:
  • ASUS RT-N66U “Dark Knight” router
  • Brother MFC-9320CW color laser printer/scanner/copier/fax (on LAN via Ethernet)
  • Seagate 2TB USB HDD (on LAN via USB)
The RT-N66U, or “Dark Knight” as it’s often called, is an absolutely fantastic router. It has excellent wireless signal and it’s extremely stable. It has two USB ports that can serve printer sharing, a 3G/4G dongle, or NAS duty using a flash drive or HDD (which can be shared via FTP, Samba, and ASUS’ aiDisk and aiCloud services). The firmware source is published regularly by ASUS, it’s Linux-based, and it includes a complete OpenVPN server. It also offers a separate guest wireless network with its own password, which can be throttled separately and restricted from the internal network. It has enough features to fill an entire post on its own.
Mobility:
  • Samsung Galaxy S4 (Verizon)
  • ASUS Transformer Prime (WiFi only)
The SGS4 is an excellent phone, with a few quirks due to Samsung’s modifications of the base Android OS. The display is outstanding, the camera is great, the phone is snappy and stable, and it has an SD card slot. That’s about all I could ask for. The tablet I bought because I thought it would make an excellent mobile client for my VPN+VNC setup; unfortunately, I’ve had some issues getting VNC to work, and now that I’m on a 3840×1200 resolution, VNC @ 1080p has become less practical. However, it still serves as a decent mobile workstation using Evernote, Dropbox, and DroidEdit.
All in all, this setup allows me to be very productive at home, while providing remote access to files and machines, and shared access to the printer and network drive for everyone in the house. The router’s NAS even supports streaming media to iTunes and Xbox, which is a plus; between that, Hulu, and Netflix, I haven’t watched cable TV in months.

Code Patterns as Microevolution

Code patterns abide by survival of the fittest within the gene pool of the code base. Patterns reproduce through repetition, sometimes with small mutations along the way. Patterns can even mate, after a fashion: two patterns can be combined, taking elements of each to form a new whole. This is the natural evolution of source code.

The first step to taming a code base is to realize the importance of assessing fitness and taking control over what patterns are permitted or encouraged to continue to reproduce. Code reviews are your opportunity to thin the herd, to cull the weak, and allow the strong to flourish.


Team meetings, internal discussions, training sessions, and learning investments are then your opportunity to improve both the quality of new patterns and mutations that emerge, as well as the group’s ability to effectively manage the evolution of your source, to correctly identify the weak and the strong, and to have a lasting impact on the overall quality of the product.

If you think about it, the “broken windows” problem could also be viewed as bad genes being allowed to perpetuate. As the bad patterns continue to reproduce, their number grows, and so does their impact on the overall gene pool of your code. Given the opportunity, you want to do everything you can to make sure that it’s the good code that’s continuing to live on, not the bad.

Consider a new developer joining your project. A new developer will look to existing code as an example to learn from, and as a template for their own work on the project, perpetuating the “genes” already established. That being the case, it seems imperative that you make sure those genes are good ones.

They will also bring their own ideas and perspectives to the process, establishing new patterns and mutating existing ones, bringing new blood into the gene pool. This sort of cross-breeding is tremendously helpful to the overall health of the “code population” – but only if the new blood is healthy, which is why strong hiring practices are so critical.

The New GMail for Android

The new GMail for Android UX sucks. I mean… it’s really awful.

They’ve replaced the checkboxes next to each message (useful) with sender images (gimmick) or, if there is no sender image (i.e., everything that’s not a G+ contact – so every newsletter, receipt, order confirmation, etc. you’ll ever get), a big colorful first initial (a completely useless waste of space). This image then acts as if it were the checkbox that used to be there (confusing) for selecting messages. You can turn off the images, but you don’t get the checkboxes back; you can only tap-hold to select multiple messages, and this isn’t mentioned anywhere – you just have to guess.

They’ve gotten rid of the delete button (why?), and moved it to the menu.

If you have no messages selected, pressing the device’s menu key gives you the menu. However, if you do have messages selected, the menu key does nothing, instead you must tap the menu button that appears at the top-right of the display. It’s not there if you don’t have messages selected.

Once you’re viewing a message, there are two menus: one when you tap the menu button, with 90% of the options in it, and another at the top-right that gives you just two options, forward and reply-all. This almost makes sense, except that it uses the same standard “here’s the menu” button that’s used on (some) other screens as the *only* available menu.

In the message view they’ve also gotten rid of the delete button (to match the annoyance of the message list, I suppose).

There is also a new “label settings” screen that’s fairly mysterious; I assume it applies to the current label, though this includes “Inbox”, which – while I understand it’s treated internally as a label – I think most users don’t think of as being a label in the typical sense.

Building a Foundation

It’s been said that pharmaceutical companies produce drugs for pennies per pill – except the first pill, which costs millions. Things aren’t so different in the land of software development: the first use of some new functionality might take hours, building the foundation and related pieces. But it can then be re-used a hundred times trivially, and usually expanded or modified with little effort as well (assuming it was well-written to start with).

This is precisely what you should be aiming for: take the time to build a foundation that will turn complex tasks into trivial ones as you progress. This is the main purpose behind design concepts like the single responsibility principle, the Hollywood principle, encapsulation, DRY, and so on.
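As a hypothetical illustration (the Retry name and everything in it are mine, not from any particular library), consider a tiny retry helper: it might take an hour to write and test once, but it turns every future “retry this flaky call” requirement into a one-liner.

    import java.util.concurrent.Callable;

    // Hypothetical foundation code: written once, reused everywhere a
    // transient failure needs retrying, instead of hand-rolling a loop.
    public final class Retry {
        public static <T> T withRetries(int attempts, Callable<T> task) throws Exception {
            if (attempts < 1) {
                throw new IllegalArgumentException("attempts must be at least 1");
            }
            Exception last = null;
            for (int i = 0; i < attempts; i++) {
                try {
                    return task.call();
                } catch (Exception e) {
                    last = e; // transient failure: try again
                }
            }
            throw last; // every attempt failed; surface the last error
        }
    }

Callers then just write something like Retry.withRetries(3, () -> fetchFromFlakyService()) – the complex task (retry policy, error bookkeeping) has become a trivial one.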

This isn’t to be confused with big upfront design; in fact, it’s especially important to keep these concepts in mind in an agile process, where you’re building the architecture as you go. It can be tempting to just hack together what you need at the moment. That’s exactly what you should be doing for a prototype, but not for real development. For lasting functionality, you should assemble a foundation that supports the functionality you’re adding now and similar functionality in the future.

It can be difficult to balance this against YAGNI – you don’t want to build what you don’t need, but you want to build what you do need in such a way that it will be reusable. You want to save yourself time in the future, without wasting time now.

To achieve a perfect balance would require an extraordinary fortune teller, of course. Experience will help you get better at determining what foundation will be helpful, though. The more experience you have and the more projects you work on, the better sense you’ll have of what can be done now to help out future you.

My Take on "Collective Ownership"/"Everyone is an Architect"

I love the idea of “collective ownership” in a development project. I love the idea that in a development team, “everyone is an architect”. My problem is with the cut-and-dried “Agile” definition of these concepts.

What I’ve been reading lately is a definition of “collective ownership” that revolves around distributing responsibility, primarily in order to shift the focus away from finger-pointing and blame. A defect isn’t “your fault”, it’s “our fault”, and “we need to fix it.” That’s all well and good, but distributing blame isn’t exactly distributing ownership, and ignoring the source of an issue is a blatant mistake.
The latter point first: identifying the source of an issue is important. I see no need for blame or calling people out, and certainly no point in trying to use defects as a hard metric in performance analysis. However, a development team isn’t a factory; it’s a group of individuals who are constantly continuing their education and honing their craft, and in that endeavor they need the help of their peers and managers to identify their weaknesses so they know what to focus on. Finding the source of an issue isn’t about placing blame or reprimanding someone; it’s about providing a learning opportunity so that a team member can improve, and the team as a whole can improve through the continuing education of each member.
In regard to distributing ownership, it’s all too rare to see it discussed in a positive way. I see plenty of people writing about eliminating blame, but very few speaking of a team wherein every member looks at the entire code base and says “I wrote that.” And why should they? They didn’t write it alone, so they can’t make that claim. For the product, they can say “I had a hand in that,” surely. But it’s unlikely they feel they had a hand in the development of every component.
That brings us around to the idea that “everyone is an architect.” In the Agile sense, this is generally taken to mean that every developer is given relatively free rein to architect the component they’re working on at any given moment, without bowing down to The Architect for their product. I like this idea, in a sense – I’m all for every developer doing their own prototyping, their own architecture, learning their own lessons, and writing their own code. Up to a point.
There is a level of architecture that it is necessary for the entire team to agree on. This is where many teams, even Agile teams, tend to fall back on The Architect to keep track of The Big Picture and ensure that All The Pieces Fit Together. This is clearly the opposite of “everyone is an architect”. So where’s the middle ground?
If a project requires some level of architecture that everyone has to agree on – language, platform, database, ORM, package structure, whatever applies to a given situation – then the only way to have everyone be an architect is design by committee. Panning design by committee has become a cliché at this point, but the approach has its uses, and I feel this is one of them.
In order to achieve collective ownership, you must have everyone be an architect. In order for everyone to be an architect, and feel like they gave their input into The Product as a whole – or at least had the opportunity to do so – you must make architectural decisions into group discussions. People won’t always agree, and that’s where the project manager comes in; as a not-an-architect, they should have no bias and no vested interest in what choices are made, only that some decision is made on each issue that requires consideration. Their only job in architectural discussions is to help the group reach a consensus or, barring that, a firm decision.
This is where things too often break down. A senior developer or two, or maybe a project manager with development experience, become de facto architects. They make the calls and pass down their decrees, and quickly everyone learns that if they have an architecture question, they shouldn’t try to make their own decision or pose it to the group in a meeting; they should ask The Guy, the architect pro tem. Stand-up meetings turn into doldrums of pointless status updates, and discussion of architecture is left out entirely.
Luckily, every team member can change this. Rather than asking The Guy when a key decision comes up, ask The Group. Even better, throw together a prototype, get some research together, and bring some options with pros and cons to the next stand-up meeting. Every developer can do their part to keep the team involved in architecture, and in ownership, and to slowly shift the culture from having The Architect to having Everyone Is An Architect.

The Importance of Logging

Add more logging. I’m serious.
Logging is what separates an impossible bug report from an easy one. Logging lets you replace comments with functionality. I’d even go so far as to say good logging separates good developers from great ones.
Try this: replace your inline comments with equivalent logging statements. Run your program and tail the log file. Suddenly, you don’t need a step-through debugger for the vast majority of situations, because you can see, in the log, exactly what the program is doing, what execution path it’s taking, where in the source each logging statement is coming from, and where execution stopped in the event of a crash.
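To make that concrete, here’s a small, hypothetical Java sketch (using SLF4J, though any logging framework with levels works the same way) where each logging statement stands where an inline comment would have been:

    import java.util.List;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderImporter {
        private static final Logger log = LoggerFactory.getLogger(OrderImporter.class);

        public void importOrders(List<String> lines) {
            log.info("Starting import of {} lines", lines.size()); // was: "import all orders"

            for (String line : lines) {
                log.debug("Parsing line: {}", line); // was: "parse the line"
                String[] fields = line.split(",");

                // Sanity check, logged as a warning with expected vs. found.
                if (fields.length < 3) {
                    log.warn("Skipping malformed line (expected at least 3 fields, found {}): {}",
                            fields.length, line);
                    continue;
                }

                log.debug("Saving order {}", fields[0]); // was: "save the order"
                // ... persistence would go here ...
            }

            log.info("Import complete");
        }
    }

Tail the log while this runs and you can watch the execution path go by, including exactly which lines were skipped and why.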
My general development process focuses on clean, readable, maintainable, refactorable, self-documenting code. The process is roughly like this:
  1. Block out the overall process, step by step, in comments.
  2. For any complex step (more than five or ten lines of code), replace the comment with a clearly-named method or function call, and create a stub method/function.
  3. Replace comments with equivalent logging statements.
  4. Implement functionality.
    • Give all functions, methods, classes, parameters, properties, and variables clear, concise names, so that the code ends up in some semblance of readable English.
    • Use thorough sanity checking, by means of assertions or simple if blocks. When using if blocks, include logging for any failed checks, including what was expected and what was found. These should be warnings.
    • Include logging in any error/exception handling code. These should be errors if recoverable, or fatal if not. This is all too often the only logging a developer includes!
  5. Replace inline comments with equivalent logging statements. These should be debug or info/trace level; major section starts should be higher level, while mid-process statements should be lower level.
  6. Add logging statements to the start of each method/function. These should also be debug or info/trace level. Use higher-level logging statements for higher-level procedures, and lower-level logging statements for more deeply-nested calls.
  7. For long-running or resource-intensive processes, particularly long loops, add logging statements at regular intervals to provide progress and resource utilization details.
Make good use of logging levels! Production systems should only output warnings and higher by default, but it should always be possible to enable deeper logging in order to troubleshoot any issues that arise. However, keep the defaults in mind, and ensure that any logging you have in place to catch defects will provide enough information in the production logs to at least begin an investigation.
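How you flip that switch depends on your framework. As one sketch, here’s how it might look with the JDK’s built-in java.util.logging, defaulting production to warnings and above while allowing deeper logging on demand (the app.troubleshooting property name is just an assumption for illustration):

    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LogConfig {
        // Production default is WARNING and above; pass
        // -Dapp.troubleshooting=true to enable deeper logging.
        public static void configure(boolean troubleshooting) {
            Level level = troubleshooting ? Level.FINE : Level.WARNING;
            Logger root = Logger.getLogger("");
            root.setLevel(level);
            for (Handler handler : root.getHandlers()) {
                handler.setLevel(level);
            }
        }

        public static void main(String[] args) {
            configure(Boolean.getBoolean("app.troubleshooting"));
            Logger log = Logger.getLogger(LogConfig.class.getName());
            log.fine("Deep diagnostic detail");   // suppressed by default
            log.warning("Visible in production"); // shown under the default config
        }
    }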
Your logging messages should be crafted with a dual purpose in mind: first, to provide useful, meaningful output to the log files during execution (obviously), but also to provide useful, meaningful information to a developer reading the source – i.e., to serve the same purpose as comments. After a short time with this method you’ll find it’s very easy to craft a message that serves both purposes well.
Good logging is especially useful in an agile environment employing fast iteration and/or continuous integration. It may not be obvious why at first, but all the advantages of good logging (self-documenting code, ease of maintenance, transparency in execution) do a lot to facilitate agile development by making code easier to work with and easier to troubleshoot.
But wait, there’s more! Good logging also makes it a lot easier for new developers to get up to speed on a project. Instead of slogging through code, developers can execute the program with full logging, and see exactly how it runs. They can then review the source code, using the logging statements as waypoints, to see exactly how the code relates to the execution.

If you need a tool for tailing log files, allow me a shameless plug: try out my free log monitor, Rogue Informant. It’s been in development for several years now, it’s stable, it’s cross-platform, and it’s completely free to use privately or commercially. It allows you to monitor multiple logs at once, filter and search logs, and float a log monitoring window on top of other applications, making it easier to watch the log while using the program and see exactly what’s going on behind the scenes. Give it a try, and if you find any issues or have feature suggestions, feel free to let me know!

The Problem with Responsive Design

A huge problem I see with responsive/adaptive design today is that, all too often, it treats “small viewport” and “mobile” as being synonymous, when the two concepts are orthogonal. A mobile device can have a high-resolution display, just as a desktop user can have a small display, or just a small browser window.

Responsive design needs to target viewport size, and nothing more. It’s not mobile, it’s a small display. Repeat that to yourself about a thousand times.

What’s holding back single-design philosophies isn’t display size, it’s user interface; for decades, web designers have counted on there being a mouse cursor to generate events – mouseovers, clicks, drags. That’s not how it works on touchscreen devices, and we need some facility – JavaScript checks, CSS media queries – to cater to touch-based devices as opposed to cursor-based devices.

Sanity Checks: Assumptions and Expectations

Assertions and unit tests are all well and good, but they’re too narrow-minded in my eyes. Unit tests are great for, well, testing small units of code to ensure they meet the basic requirements of a software contract – maybe a couple of typical cases, a couple of edge cases, and then additional cases as bugs arise and new test cases are created for them. No matter how many cases you create, however, you’ll never have a test case for every possible scenario.

Assertions are excellent for testing in-situ; you can ensure that unacceptable values aren’t given to or by a piece of code, even in production (though there is a performance penalty to enabling assertions in production, of course.) I think assertions are excellent, but not specific enough: any assertion that fails is automatically a fatal error, which is great, unless it’s not really a fatal error.

That’s where the concepts of assumptions and expectations come in. What assertions and unit tests really do is test assumptions and expectations. A unit test asks, “does this code behave correctly when given this data, all assumptions considered?” An assertion says, “this code assumes this thing, and will not behave correctly if it gets another, so throw an error.”

When documenting an API, it’s important to document assumptions and expectations, so users of the API know how to work with your code. Before I go any further, let me define what I mean by these very similar terms: to me, code that assumes something operates as if its assumptions are correct, and will likely fail if its assumptions turn out to be incorrect. Code that expects something operates as if its expectations are met, but will likely still operate correctly even if they aren’t. It’s not guaranteed to work, or guaranteed to fail; it’s likely to work, but someone should probably know about it and look into it.

Therein lies the rub: these are basically two types of assertions, one fatal, one not. What we need is an assertion framework that allows for warning-level assertion failures. What’s more, we need an assertion framework that is performant enough to be regularly enabled in production.

So, any code that’s happily humming along in production, that says:

Assume.that(percentage).isBetween(0,100);

will fail immediately if percentage is outside those bounds. It’s assuming that percentage is between zero and one hundred, and if it assumes wrong, it will likely fail. Since it’s always better to fail fast, any case where percentage is outside that range should trigger a fatal error – preferably even if it’s running in production.

On the other hand, code that says:

Expect.that(numRows).isLessThan(1000);

will trigger a warning if numRows is over a thousand. It expects numRows to be under a thousand; if it isn’t, the code can still complete correctly, but it may take longer or use more memory than normal, or the excess may signal that something is amiss with the query that fetched the rows or with the dataset they came from. It’s not a critical failure, but it’s cause for investigation.

Any assumption or expectation that fails should of course be automatically and immediately reported to the development team for investigation. Naturally a failed assumption, being fatal, should take priority over a failed expectation, which is recoverable.

This not only provides greater flexibility than a simple assertion framework, it also provides more explicit self-documenting code.
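To make the idea concrete, here’s a minimal sketch of what such a framework might look like in Java. The Assume/Expect names come from the snippets above; everything else (the Check class, SLF4J as the warning channel) is my own guess at a plausible shape:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Failed assumptions are fatal and fail fast; failed expectations are
    // recoverable warnings. A real implementation would also report failures
    // to the development team, not just write them to the log.
    final class Check {
        private static final Logger log = LoggerFactory.getLogger(Check.class);
        private final long actual;
        private final boolean fatal;

        Check(long actual, boolean fatal) {
            this.actual = actual;
            this.fatal = fatal;
        }

        public Check isBetween(long min, long max) {
            if (actual < min || actual > max) {
                fail("expected a value between " + min + " and " + max + ", found " + actual);
            }
            return this;
        }

        public Check isLessThan(long limit) {
            if (actual >= limit) {
                fail("expected a value less than " + limit + ", found " + actual);
            }
            return this;
        }

        private void fail(String message) {
            if (fatal) {
                throw new IllegalStateException("Failed assumption: " + message);
            }
            log.warn("Failed expectation: {}", message);
        }
    }

    final class Assume {
        static Check that(long actual) { return new Check(actual, true); }
    }

    final class Expect {
        static Check that(long actual) { return new Check(actual, false); }
    }

On the happy path each check costs a single branch, which is what makes this kind of framework cheap enough to leave enabled in production.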

Be Maxwell’s Demon

Source code tends to follow the second law of thermodynamics, with some small differences. In software, as in thermodynamics, systems tend toward entropy: as you continue to develop an application, the source will increase in complexity. In software, as well as in thermodynamics, connected systems tend toward equilibrium: in development, this is known as the “broken windows” theory, and is generally considered to mean that bad code begets bad code. People often discount the fact that good code also begets good code, but this effect is often hidden by the fact that the overall system, as mentioned earlier, tends toward entropy. That means that the effect of broken windows is magnified, and the effect of good examples is diminished.

In thermodynamics, Maxwell’s Demon thought experiment is, in reality, impossible – it is purely a thought experiment. However, in software development, we’re in luck: any developer can play the demon, and should, at every available opportunity.

Maxwell’s demon stands between two connected systems, defeating the second law of thermodynamics by selectively allowing less-energetic particles through only in one direction, and more-energetic particles through only in the other direction, causing the two systems to tend toward opposite ends of the spectrum, rather than naturally tending toward entropy.

By doing peer reviews, you’re doing exactly that; you’re reducing the natural entropy in the system and preventing it from reaching its natural equilibrium by only letting the good code through, and keeping the bad code out. Over time, rather than tending toward a system where all code is average, you tend toward a system where all code is at the lowest end of the entropic spectrum.

Refactoring serves a similar, but more active role; rather than simply “only letting the good code through”, you’re actively seeking out the worse code and bringing it to a level that makes it acceptable to the demon. In effect, you’re reducing the overall entropy of the system.

If you combine these two effects, you can achieve clean, efficient, effective source. If your review process only allows code through that is as good or better than the average, and your refactoring process is constantly improving the average, then your final code will, over time, tend toward excellence.

Without a demon, any project will be on a continuous slide toward greater and greater entropy. If you’re on a development project, and it doesn’t have a demon, it needs one. Why not you?

Tales of Accidental SEO

I have a post on this blog that’s a top-10 result for a pretty generic search term. Yes, the page is relevant, and uses the terms in question pretty frequently. But, honestly, I don’t make any effort at SEO on this blog: it’s a soapbox, not a marketing engine, and I don’t care to invest the time and energy necessary to get myself into the top ranks on the SERPs. But somehow I’ve done it accidentally, and I think I know how.

By linking my Blogger profile to my Google+ profile, my blog posts become “social media” content in some part of Google’s algorithm. Because “social media” and the “live web” are the hip things in search engineering these days, that gets me an arbitrarily huge boost in rank. It’s not based on profiling either: I can run the search anonymously and get the same results, and I can have friends that don’t use Google+ run the search and get the same results.

Why do I think it’s related to Google+ at all? My G+ profile picture is right next to the post (though, oddly, none of the photos from the actual post are in there), and it includes the byline “by Adrian Price – in 70 Google+ circles”. That’s not part of my blog post; it’s not even part of my Blogger profile, aside from the fact that the profile is linked to my Google+ account.

Social marketing in so many ways is your parents trying to talk to you in the same language you use with your friends. To poach a phrase, it’s so unhip it’s a wonder their bums don’t fall off. Honestly, almost every social marketing effort I’ve ever seen reeks of desperation, confusion, and so much effort trying to “seem engaged” that they would have saved time in the end actually being engaged.

So, any marketers out there desperately trying to squeeze every drop of ROI they can out of social media, consider this: it looks like, just maybe, you can get quite a lot out of it just by having it at all, even if you aren’t using it to push out news, or contests, or desperately promoting your latest video in an attempt to force it to “go viral”. Who knows how long the free lunch will last, but you might as well take advantage while you can.