A Wrinkle in Time

You’ve built a prototype, and everything is going great. All your dates and times look right, they load and store correctly, everything is spiffy. You have your buddy give it a whirl, and it works for them too. Then you have a friend in Curaçao test it, and they complain that all the times are wrong – time zones strike again!

But you’ve got this covered: you just add an offset to every stored date/time so you know the origin time zone, then you get the user’s time zone, and voilà! You can correct for time zones! Everything is going great; then summer turns to fall, the leaves change, the clocks change, and it all falls apart again. Now you’re storing dates in various time zones without DST information, adjusting them to the user’s time zone, trying to account for DST, and hunting for the spot here or there where you forgot to account for an offset…

Don’t fall into this trap. UTC is always the answer. It is effectively time-zone-less, as it has an offset of zero and does not observe daylight saving time. It’s reliable, it’s universal, it’s always there when you need it, and you can always convert it to any local time you need. Storing a date/time with time zone information is like telling someone your age by giving your birthday and today’s date – you’re dealing with additional data and additional processing with zero benefit.

When starting a project, you’re going to be better off storing all dates as UTC from the get-go; it’ll save you innumerable headaches later on. I think it is atrocious that .NET defaults to system-local time for dates; this is one of the few areas where I think Java has a clearly better design. .NET’s date handling in general is a mess, but simply defaulting to local time when you call DateTime.Now encourages bad practices – the exact opposite of the stated goal of the platform, which is to make sure that the easy thing and the correct thing are, in fact, the same thing.
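
Here’s a minimal sketch of that idea in TypeScript; the variable names are just for illustration:

```typescript
// Store the timestamp as UTC (ISO-8601 with a trailing "Z") the moment it's created.
const createdAt: string = new Date().toISOString(); // e.g. "2014-06-01T14:30:00.000Z"

// Convert to the user's local time zone only at display time.
const display: string = new Date(createdAt).toLocaleString(); // browser/OS supplies zone and format
console.log(display);
```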

On a vaguely related note, I’ve found a rather elegant (in my opinion) solution for providing localized date/time data on a website, and it’s all wrapped up in a tiny Gist for your use: https://gist.github.com/aprice/7846212

This simple jQuery script goes through elements with a data attribute providing a timestamp in UTC, and replaces their contents (which can be the formatted date in UTC, as a placeholder) with the date/time in the user’s local time zone and localized date/time format. You don’t have to ask the user for their time zone or date format.
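
The Gist itself is jQuery-based; here’s a rough sketch of the same technique in plain TypeScript, assuming a hypothetical data-utc attribute carrying an ISO-8601 UTC timestamp (which may not match the Gist’s actual markup):

```typescript
// Replace each element's UTC placeholder text with the same moment rendered
// in the viewer's own time zone and locale.
document.querySelectorAll<HTMLElement>("[data-utc]").forEach((el) => {
  const utc = el.dataset.utc;                       // e.g. "2014-06-01T14:30:00Z"
  if (!utc) return;
  el.textContent = new Date(utc).toLocaleString();  // user's zone and format, courtesy of the browser
});
```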

Unfortunately, it looks like most browsers don’t take customized date/time format settings into account; for example, on my computer I have the date format set to yyyy-mm-dd, but Chrome still renders the standard US format of mm/dd/yyyy. However, I think this is a relatively small downside, especially considering that getting around it requires letting users customize the date format, complete with a UI and storage mechanism for doing so.

On Code Comments

I’ve been seeing a lot of posts lately on code comments; it’s a debate that’s raged on for ages and will continue to do so, but for some reason it’s been popping up in my feeds more than usual the last few days. What I find odd is that the posts generally all take on the same basic format: “on the gradient from too many to too few comments, you should aim for this balance, in this way; don’t use this type of comment; make your code self-documenting.” The reasoning is fairly consistent as well: comments get stale, or don’t add value, or may lead developers astray if they don’t accurately represent the code.

And therein lies the rub: they shouldn’t be representing the code at all. Code – clean, self-documenting code – represents itself. It doesn’t need a plain-text representative to speak on its behalf unless it’s poorly written in the first place.

It may sound like I’m simply suggesting aiming for the “fewer comments” end of the spectrum, but I’m not; there’s still an entity that may occasionally need representation in plain text: the developer. Comments are an excellent way to describe intent, which just so happens to take a lot longer to go stale, and is often the missing piece of the puzzle when trying to grok some obscure or obtuse section of code. The code is the content; the comments are the author’s footnotes, the director’s commentary.

Well-written code doesn’t need comments to say what it’s doing – which is just as well since, as so many others have pointed out, those comments are highly likely to wind up out-of-sync with what the code is actually doing. However, sometimes – not always, maybe even not often, but sometimes – code needs comments to explain why it’s doing whatever it’s doing. Sure, you’re incrementing Frobulator.Foo, and everybody is familiar with the Frobulator and everybody knows why Foo is important and anyone looking at the code can plainly see you’re trying to increment it. But why are you incrementing it? Why are you incrementing it the way you’re doing it in this case? What is the intent, separate from its execution? That’s where comments can provide value.
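
To make that concrete, here’s a small sketch; the Frobulator is hypothetical, and the scenario in the second comment is invented purely for illustration:

```typescript
// Hypothetical Frobulator, standing in for the example above.
const frobulator = { foo: 0 };

// A "what" comment merely restates the code and will drift out of date:
// increment foo
frobulator.foo++;

// A "why" comment records intent the code can't express on its own:
// The upstream import double-counts the first record of each batch, so we
// compensate here; this can go away once the importer is fixed.
frobulator.foo++;
```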

As a side note (no pun intended), I hope we can all agree that doc comments are a separate beast entirely here. Doc comments provide metadata that can be used by source code analyzers, prediction/suggestion/auto-completion engines, API documentation generators, and the like; they provide value through some technical mechanism and are generally intended to be read somewhere else, not in the source code itself. Because of this I consider doc comments to be a completely separate entity that just happens to be encoded in comment syntax.
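
For example, a doc comment in TSDoc/JSDoc style (the function here is invented for illustration) exists mainly to feed tooltips, completion engines, and generated API docs:

```typescript
/**
 * Computes an order total including tax.
 *
 * @param subtotalCents - Pre-tax subtotal, in integer cents.
 * @param taxRate - Tax rate as a fraction, e.g. 0.07 for 7%.
 * @returns The total in integer cents, rounded to the nearest cent.
 */
export function totalWithTax(subtotalCents: number, taxRate: number): number {
  return Math.round(subtotalCents * (1 + taxRate));
}
```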

My feelings on doc comments are mixed; generally speaking, I think they’re an excellent tool and should be widely used to document any public API. However, there are few things in the world more frustrating than looking up the documentation for a method you don’t understand, only to find that the doc comments are there but blank (probably generated or templated), or are there but so out of date that they’re missing parameters or the types are wrong. This is the kind of thing that can have developers flipping desks at two in the morning when they’re trying to get something done.

You’re Being Held Hostage and You May Not Even Know It

To me, net neutrality isn’t about fair business practices between businesses. That’s certainly part of it, but it’s not the crux of the issue. To me, net neutrality is about consumer protection.

Your broadband provider would like to charge companies – particularly content companies – extra in order to bring you their content. Setting aside the utterly delirious reasoning behind this for the moment, let’s think about this from the consumer’s perspective. You’re paying your ISP to provide you access to the internet – the whole thing. When you sign up for service, you’re signing up for just that: access to the internet. Period. What your ISP fails to disclose, at least in any useful detail, is how they intend to shape that access.

For your $40, $50, $60 or more each month, you might get high-speed access to some things, and not to others. You don’t get to know which is which ahead of time, or even after you sign up – the last thing your ISP wants is for you to be well-informed about your purchase in this regard. They’ll do whatever they can to convince you that your service is plain, simple, high-speed access to the whole internet.

Then, in negotiations behind closed doors, they’re using you as a hostage to extort money from the businesses you’re a customer of. Take Netflix as an example: you pay your ISP for internet service. Netflix also has an ISP, or several, that they pay for internet service. Those ISPs have what are called “peering arrangements” that determine who, if anyone, pays, and how much, when traffic travels between their networks on behalf of their customers. This is part and parcel of what you and Netflix pay for your service. You pay Netflix a monthly fee to receive Netflix service, which you access using your ISP. Netflix uses some part of that monthly fee to pay for their own internet service.

Your ISP has gone to Netflix and said “hey, if you want to deliver high-definition video to your customers who are also my customers, you have to pay me extra, otherwise my customers who are also your customers will receive a sub-par experience, and they might cancel their Netflix account.” They’re using you as a bargaining chip without your knowledge or consent, in order to demand money they never earned to begin with; everyone involved is already paying their fair share for their connection to the global network, and for the interconnections between parts of that global network.

To me, when a company I do business with uses me, and degrades my experience of their product, without my knowledge or consent, that’s fraud from a consumer standpoint. Whatever Netflix might think about the deal, whether Netflix is right or wrong in the matter, doesn’t enter into it; I’m paying for broadband so that I can watch Netflix movies, I’m paying for Netflix so that I can watch movies over my broadband connection, and my ISP is going behind my back and threatening to make my experience worse if Netflix doesn’t do what they want. Nobody asked me how I feel about it.

Of course, they could give full disclosure to their customers (though they never would), and it wouldn’t matter a whole lot, because your options as a broadband consumer are extremely limited; in the majority of cases, the only viable solution is cable, and when there is competition, it comes from exactly one place: the phone company. The cable companies and phone companies are alike in their use of their customers as hostages in negotiations.

What about fiber broadband? It’s a red herring – it’s provided by the phone company anyway. Calling fiber competition is like saying Coke in cans competes with Coke in bottles – it’s all Coke, and whichever one you buy, your money goes into Coke’s pocket.

What about wireless? Wireless will never, ever be able to compete with wired service, due to simple physics. The bandwidth just isn’t there, the spectrum isn’t there, there’s noise to contend with, and usage caps make wireless broadband a non-starter for many cases, especially streaming HD video. Besides, the majority of truly high-speed wireless service is provided by the phone companies anyway; see the previous paragraph.

Why aren’t they regulated? The FCC is trying, in its own way, but there’s little traction; the cable and telephone companies have the government in their collective pockets with millions of dollars of lobbying money, and We The People haven’t convinced Our Government that we care enough for them to even consider turning down that money.

In the United States, we pay many, many times what people pay in much of the developed world, and we get many, many times less for what we spend. On top of that, our ISPs are using us as bargaining chips, threatening to make our already overpriced, underpowered service even worse if the companies we actually chose in a competitive market – unlike our ISPs – don’t pay up. This is absolutely preposterous, it’s bordering on consumer fraud, and you should be angry about it. You should be angry enough to write your congressman, your senator, the president, the FCC, and your ISP (not that the last will do you much good, but it can’t hurt.)

Engineers, Hours, Perks, and Pride

This started as a Google+ post about an article on getting top engineering talent and got way too long, so I’m posting here instead.

I wholeheartedly agree that 18-hour days are just not sustainable. It might work for a brand-new startup cranking out an initial release, understaffed and desperate to be first to market. But at that stage, you can expect a small team to have the kind of passion and dedication it takes to put in those hours and give up their lives to build something new.

Once you’ve built it, though, the hours become an issue, and the playpen becomes a nuisance. You can’t expect people to work 18-hour days forever, or even 12-hour days. People far smarter than I have posited that the most productive time an intellectual worker can put in on a regular basis is 4 to 6 hours per day; after that, productivity and effectiveness plummet, and it only gets worse the longer it goes on.

Foosball isn’t a magical sigil protecting engineers from burn-out. Paintball with your coworkers isn’t a substitute for drinks with your friends or a night in with your family. An in-house chef sounds great on paper, until you realize that the only reason they’d need to provide breakfast and dinner is if you’re expected to be there basically every waking moment of your day.

Burn-out isn’t the only concern, either. Engineering is both an art and a science, and like any art, it requires inspiration. Inspiration, in turn, requires experience. The same experience, day-in, day-out – interacting with the same people, in the same place, doing the same things – leaves one’s mind stale, devoid of inspiration. Developers get tunnel-vision, and stop bringing new ideas to the table, because they have no source for them. Thinking outside the box isn’t possible if you haven’t been outside the box in months.

Give your people free coffee. Give them lunch. Give them great benefits. Pay them well. Treat them with dignity and respect. Let them go home and have lives. Let them get the most out of their day, both at work and at home. You’ll keep people longer, and those people will be more productive while they’re there. And you’ll attract more mature engineers, who are more likely to stick around rather than hopping to the next hip startup as soon as the mood strikes them.

There’s a certain pride in being up until sunrise cranking out code. There’s a certain macho attitude, a one-upmanship, a competition to see who worked the longest and got the least sleep and still came back the next morning. I worked from 8am until 4am yesterday, and I’m here, see how tough I am? It’s the geek’s equivalent to fitness nuts talking about their morning 10-mile run. The human ego balloons when given the opportunity to endure self-inflicted tortures.

But I’m inclined to prefer an engineer who takes pride in the output, not the struggle to achieve it. I want someone who is stoked that they achieved so much progress, and still left the office at four in the afternoon. Are they slackers, compared to the guy who stayed another twelve hours, glued to his desk? Not if the output is there. It’s the product that matters, and if the product is good, and gets done in time, then I’d rather have the engineer that can get it done without killing themselves in the process.

“I did this really cool thing! I had to work late into the night, but caffeine is all I need to keep me going. I kept having to put in hacks and work-arounds, but the important thing is that it’s done and it works. I’m a coding GOD!” Your typical young, proud engineer. They’re proud of the battle, not the victory; they’re proud of how difficult it was.

“I did this really cool thing! Because I had set myself up well with past code, it was a breeze. I was amazed how little time it took. I’m a coding GOD!” That’s my kind of developer. That’s pride I can agree with. They’re proud because of how easy it was.

This might sound like an unfair comparison at first, but think about it. When you’re on a 20-hour coding bender, you aren’t writing your best code. You’re frantically trying to write working code, because you’re trying to get it done as fast as you can. Every cut corner, every hack, every workaround makes the next task take that much longer. Long hours breed technical debt, and technical debt slows development, and slow development demands longer hours. It’s a vicious cycle that can be extremely difficult to escape, especially once it’s been institutionalized and turned into a badge of honor.

Assumptions and Unit Tests

I’ve written previously on assumptions and how they affect software development. Taking this as a foundation, the value proposition of unit testing becomes much more apparent: it offers a subjective reassurance that certain assumptions are valid. By mechanically testing components for correctness, you’re allowing yourself the freedom to safely assume that code which passes its tests is highly unlikely to be the cause of an issue, so long as there is a test in place for the behavior you’re using.

This can be a double-edged sword: it’s important to remember that a passing test is not a guarantee. Tests are written by developers, and developers are fallible. Test cases may not exercise the behavior in precisely the same way as the code you’re troubleshooting. Test cases may even be missing for the particular scenario you’re operating under.

By offering a solid foundation of trustworthy assumptions, along with empirical proof as to their validity, you can eliminate many possible points of failure while troubleshooting, allowing you to focus on what remains. You must still take steps to verify that you do have test coverage for the situation you’re looking at, in order to have confidence in the test results. If you find missing coverage, you can add a test to fill the gap; it will either pass, eliminating another possible point of failure, or fail, indicating a probable source of the issue.

Just don’t take unit test results as gospel; tests must be maintained just like any other code, and just like any other code, they’re capable of containing errors and oversights. Trust the results, but not the tests, and learn the difference between the two: the results will reliably tell you whether the test you’ve written passed or failed. It is the test, however, that executes the code and judges pass or fail – its execution may not cover everything you need, and its judgement may be incorrect, or may not check every relevant factor of the result.
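
As a small illustration, here’s a minimal sketch using Node’s built-in test runner; the parseDuration helper is invented for the example:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical helper under test: converts strings like "5m" or "1h30m" to milliseconds.
function parseDuration(input: string): number {
  let ms = 0;
  for (const [, value, unit] of input.matchAll(/(\d+)([hms])/g)) {
    ms += Number(value) * { h: 3_600_000, m: 60_000, s: 1_000 }[unit as "h" | "m" | "s"];
  }
  return ms;
}

// A passing test lets you safely assume the covered behavior works...
test("parses minutes", () => {
  assert.equal(parseDuration("5m"), 5 * 60 * 1000);
});

// ...but only the covered behavior. When a bug points at this code, check that a
// case like the one you're chasing actually exists; if not, add it. It will either
// pass (eliminating a suspect) or fail (pointing at the source of the issue).
test("parses mixed units", () => {
  assert.equal(parseDuration("1h30m"), 90 * 60 * 1000);
});
```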

Feature Disparity Between Web, Mobile, and Desktop

I understand that mobile is new territory, and that web applications have certain restrictions on them (though less and less so with modern standards and modern browsers), but it seems very strange to me that there are still such glaring disparities between the web, mobile, and desktop versions of some products – even products designed with mobile in mind.

Take Evernote as an example. It’s been out for Android for ages, with regular new releases offering new features and functionality. Yet there are still basic features that are not available in the mobile client, including strike-through text, horizontal rules, alignment, and font face/size changes. If you have a note with these features, and you edit the note in the Android app, you get a friendly warning that the note contains unsupported features, and the editor forces you to edit paragraph-by-paragraph, like the old and irritating Google Docs app for Android. I find this more than a little bit ridiculous; why are you adding new, nice-to-have features when basic functionality is still unsupported?

Look at Google Keep for the opposite example. The mobile app allows reordering the items in a checklist with drag-and-drop. The web app doesn’t let you reorder items at all – the only way is cut and paste. This is something you can absolutely achieve in a web app, and they’ve done it before, but for some reason that one basic, important feature is just somehow missing.

The Mint mobile app allows changing budgets, but not changing whether or not a budget surplus/deficit should roll over month-to-month, which you can do in the web app. It’s most of the feature, missing just one little piece – and that causes frustration, because if most of the feature is there, you expect the whole feature to be there.

The GitHub web app doesn’t even include a git client – the closest you can get is downloading a repo, but you can’t actually check out and manage a working copy.

The Google Maps app for Android doesn’t let you edit your “My Maps”, or choose from (or create) alternate routes when getting directions. It also doesn’t include the web version’s traffic forecasting.

The Blogger app for Android is next to useless; editing a post created on the desktop gives you a WYSIWYG editor with the plain text littered with markup, and writing a post on mobile and then looking at it on the desktop shows that there are some serious inconsistencies in the handling of basic formatting elements like paragraphs.

Don’t even get me started on the useless bundle of bytes that is the Google Analytics Android app; it’s such a pathetic shadow of the web application that there’s no point in even having it installed.

These seem to me like cases of failure to eat your own dog food. If employees of these companies – especially developers or product managers – were using these applications on each supported platform, these issues would have been solved. They’re the sorts of things that look small and insignificant on a backlog until they affect you on a day-to-day basis; those little annoyances, repeated often enough, become sources of frustration.

Teaching a Developer to Fish

I write a lot about development philosophy here, and very little about technique. There are reasons for this, and I’d like to explain.

In my experience, often what separates an easy problem from an intractable one is method and mindset. How you approach a problem tends to be more important than the implementation you end up devising to solve it.

Let’s say you’re given the task of designing a recommendation engine – people like you were interested in X, Y, and Z. Clearly this is an algorithmic problem, and a relatively difficult one at that. How do you solve it? 

The algorithm itself isn’t significant; as a developer, the algorithm is your output. The process you use to achieve the desired output is what determines how successful you’ll be. I could talk about an algorithm I wrote, but that’s giving a man a fish. I’d much rather teach a man to fish.

So how do you fish, as it were, for the perfect algorithm? You follow solid practices, you iterate, and you measure. That means you start with a couple of prototypes, you measure the results, you whittle down the candidate solutions until you have a best candidate, and then you refine it until it’s as good as it can get. Then you deploy it to production, you continue to measure, and you continue to refine it. If you code well, you can A/B test multiple potential algorithms, in production, and compare the results.
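
Here’s a rough sketch of what that last step can look like; the recommenders, the assignment scheme, and the metrics hook are all made-up stand-ins, not a prescription:

```typescript
// A/B test two candidate recommenders in production: stable assignment,
// one code path per variant, and a measurement hook on every result.
type Recommender = (userId: string) => string[];

const recommendByHistory: Recommender = (userId) => [`history-pick-for-${userId}`];
const recommendByPopularity: Recommender = () => ["top-seller-1", "top-seller-2"];

const variants: Record<"A" | "B", Recommender> = {
  A: recommendByHistory,
  B: recommendByPopularity,
};

function variantFor(userId: string): "A" | "B" {
  // Stable hash so a given user always sees the same algorithm.
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  return Math.abs(hash) % 2 === 0 ? "A" : "B";
}

function recommend(userId: string): string[] {
  const variant = variantFor(userId);
  const results = variants[variant](userId);
  // Stand-in metrics hook: in practice you'd record impressions and conversions
  // per variant, and compare the measurements to pick a winner.
  console.log(`impression user=${userId} variant=${variant} count=${results.length}`);
  return results;
}
```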

How do you fish for a fix to a defect? You follow solid practices, you iterate, and you measure. You start with visual inspection, checking for code quality and doing light refactoring to simplify the code and eliminate points of failure, to narrow down the possibilities. Often this alone will bring the root cause of the defect to the surface quickly, or even solve it outright. If it doesn’t, you add logging, and you watch the results as you recreate the error, trying to recreate it in different ways, to assess the boundaries of the defect: if this is an edge case, what exactly defines the “edge” that’s affected? What happens during each step of execution when it occurs? Which code is executing and which code isn’t? What parameters are being passed around?

In my experience, logging tends to be a far more effective debugging tool than a step-wise debugger in most cases. With a strong logging framework, you can leave your logging statements in place in production with negligible performance impact (with debug logging disabled), and use fine-grained controls to turn up verbosity for the code you’re inspecting without turning all logging on and destroying the signal-to-noise ratio of your logging output.
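
A toy sketch of what those fine-grained controls amount to (real logging frameworks provide this out of the box; the namespaces and levels here are invented):

```typescript
// Per-namespace log levels: crank up verbosity only for the code under inspection.
type Level = "debug" | "info" | "warn" | "error";
const rank: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

// Quiet by default in production; debug enabled only for the suspect namespace.
const levels: Record<string, Level> = { "*": "warn", "billing.invoice": "debug" };

function log(namespace: string, level: Level, message: string): void {
  const threshold = levels[namespace] ?? levels["*"];
  if (rank[level] >= rank[threshold]) {
    console.log(`[${level}] ${namespace}: ${message}`);
  }
}

// Disabled debug lines cost only the comparison above, so they can stay in the code.
log("billing.invoice", "debug", "recomputing line items for edge-case order");
log("checkout.cart", "debug", "this one is filtered out by the '*' default");
```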

You follow solid practices, you iterate, and you measure. If you use the right process, with the right mindset, you’ll wind up at the right solution.

That’s why I tend to wax philosophical instead of writing about concrete solutions I’ve implemented. Chances are I wrote the solution to my problem, not your problem; and besides, I’d much rather teach a man to fish than give a man a fish.