Stupid Internet Tricks

A few weeks ago I picked up the delightful ASUS Transformer Infinity tablet. About a week ago, I finally replaced my aging, flaky Linksys WRT310N router with a fancy new ASUS RT-N66U router. First off, the difference was striking. They’re both Wireless-N routers from major manufacturers; however, while the WiFi on the Linksys was unreliable and suffered random device compatibility issues (I had particular trouble getting and staying connected from both Android and iOS mobile devices), the ASUS is rock-solid. The speed and range are also a tremendous improvement. What more could you want from a router?

Stupid internet tricks, that’s what. I spent a few minutes in the (notably snazzy) administration UI for the new router. I set up my (free) ASUS dynamic DNS host name and the (built-in) OpenVPN service, and I can now VPN into my home network securely from anywhere with internet access. A quick install of VNC onto my desktop machine, and I no longer have any use for LogMeIn; I have a free service that does the same thing, but entirely under my control.

The router has some other neat features, including a Guest Access option for the WiFi that allows internet access but blocks LAN access; optional complete separation of the 2.4GHz and 5GHz bands; dead-simple static IP assignment; and, well, more advantages over my old router than I can even count. Definitely worth what it cost, assuming it has the reliability and longevity I’ve come to expect from ASUS products.
Some additional features that influenced my decision to buy the RT-N66U, which I’ve yet to tinker with:

  • The router has 2 USB ports. My printer already has WiFi and Ethernet connectivity, so I don’t need them for printer sharing, but I fully intend to hook two USB HDDs up to the router for network file sharing (which, of course, will then be accessible over the VPN!).
  • The router has full IPv6 support. I tried to set this up at one point and it failed miserably; apparently the shipped firmware (3.0.0.4.122) has a fatal bug in the IPv6 implementation that causes the router to become completely unresponsive when IPv6 is enabled. I had to do a factory reset to get back into it. I’ve since updated the firmware, but I haven’t yet attempted setting up IPv6 again.

The Semantic Web: Practical Semantic Markup

There’s been a lot of talk, for many years, about the coming of “the semantic web” – all markup will include semantics that automated systems can read and understand for as-yet-undefined purposes, though prognosticators speculate on all manner of technical advances that could come from semantics. What about the here and now, though? Right now, today, semantic markup can help you. Semantic markup does one very useful thing: it makes building and styling web pages a heck of a lot easier, if done right.

So, how do you do it right, and reap those benefits? Begin with a blank slate and start filling in your content. Don’t even think about layout or styling – worry only about organizing the content in a clean and sensible way. Your headings should be in h* tags, with lower-level headings using lower-level heading tags. Your content should be in p tags, with no stray br tags. Enclose the main content in an article tag, enclose sidebars in aside tags, and so on. Enclose your header, navigation, and footer in the appropriate tags.
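A minimal skeleton following this approach might look like the following (all of the content is placeholder text):

```html
<!-- A bare semantic structure: headings in h* tags, content in p tags,
     and each region of the page in its appropriate sectioning element. -->
<body>
  <header>
    <h1>Site Title</h1>
    <nav>
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about">About</a></li>
      </ul>
    </nav>
  </header>
  <article>
    <h2>Article Heading</h2>
    <p>Main content goes here, in paragraphs.</p>
    <h3>Subheading</h3>
    <p>Lower-level content under a lower-level heading.</p>
  </article>
  <aside>
    <h2>Related</h2>
    <p>Sidebar content goes in an aside.</p>
  </aside>
  <footer>
    <p>Copyright notice.</p>
  </footer>
</body>
```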

Load the page. It’ll look like crap. All you’re looking for right now is a sensible document flow. The page should read cleanly from top to bottom with no styling. If not, reorganize your markup until it does.

Now that you have a well-organized, semantically-tagged document, start identifying the parts of the page that are specific to your document. Add IDs to unique elements on the page. Add classes to just about every container on the page to identify its purpose – even if it already has an ID (more on this later). Name your IDs and classes based on what they identify, not how they’re supposed to look. For example, don’t use “small” or “bold” as class names; if you want your copyright footer to be small, name it “copyright” and worry about the appearance later. If you want text to be bold, use the strong tag if it’s appropriate (e.g. a bold segment of body text), or use a class name that says what the bold thing is (e.g. class="announcement" or class="specialOffer").
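As a quick illustration of the difference (class names invented for the example):

```html
<!-- Presentational naming: describes how it looks, and breaks the
     moment the design changes. -->
<p class="small gray">&copy; 2012 Example Corp.</p>

<!-- Semantic naming: describes what it is; the stylesheet decides
     how it looks. -->
<p class="copyright">&copy; 2012 Example Corp.</p>
<p class="specialOffer">This week only: <strong>20% off</strong>!</p>
```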

Try to use a consistent naming scheme. I use CamelCase for all classes and IDs, with IDs starting with a capital letter and classes starting with a lowercase letter. This is just what makes sense to me personally; it doesn’t matter what your standard is, as long as you find it intuitive and you stick to it.

After all this, your page looks exactly like it did before. Excellent. Now that you’ve got semantic tags identified with semantic classes and IDs, you’re ready to start styling your document. It doesn’t really matter what you start with, but I tend to start with typographic styling. The reason behind this is that typographic styling will change the font metrics, and many parts of a responsive design will be relative to your font metrics, so starting with typography gives you a solid foundation on which to build your layout.

For typography, start at the bottom and work your way up: start by applying your default font style to body, and then use that as a base to style any other elements you need to style – headers, paragraphs, strong/emphasis, a, blockquote, and so on. Start with the most generic styles, where you can apply the style to the tag globally, with no class name or ID specified. Work your way in deeper, first with those cases where you can still identify with only tag names, but based on ancestry; for example, you may want list elements inside nav to look one way, list elements inside article to look another way, and list elements inside an aside to have a third, different styling. This is still global based on document structure, not based on classes or IDs.
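As a sketch, this bottom-up pass might start like the following (font choices and sizes are placeholders):

```css
/* Most generic: the base font on body, inherited everywhere. */
body {
  font-family: Georgia, serif;
  font-size: 16px;
  line-height: 1.5;
}

/* Global element styles, still with no classes or IDs. */
h1, h2, h3 { font-family: Helvetica, Arial, sans-serif; }
p { margin: 0 0 1em; }

/* Deeper, but still structural: the same element styled by ancestry. */
nav li     { font-size: 0.875em; text-transform: uppercase; }
article li { font-size: 1em; }
aside li   { font-size: 0.875em; font-style: italic; }
```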

View your document again; the layout still sucks, but the document should be readable, and your typography should be pretty close to what you want in the finished product. Identify the places where certain uses of an element – which should already be identified by semantic classes and IDs – should be styled a certain way, and start defining those styles in CSS. Avoid using IDs in your CSS; identifying elements by class rather than by ID lends more flexibility to your code.

Once you have your typography more or less like you want it (at least the font families and sizes), start thinking about layout. Your document is already well-organized, but the layout is very 1995. Now is the time to fix that. Presumably you already have a final design in mind, but if not, take the time to quickly sketch out a rough layout for the page, where you want everything to be, and how you want the document to flow in its final incarnation.

You should conveniently already have all of the blocks that you want to lay out in appropriate tags with appropriate classes, so it should be easy to identify them in CSS. If not, review your markup and clean it up. Again, start with the big chunks and work your way deeper from there. Adjust the layout of the main page elements: header, footer, body, columns/grid. View your page, and start tweaking the layout of the elements within those main containers; adjust the layout of inline elements like sidebars and images, adjust the layout of your navigation items, and so on.
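A rough sketch of that first layout pass, using the semantic containers from earlier (the widths are placeholders; a float-based layout is assumed here):

```css
/* Big chunks first: overall page width and the main containers. */
body { width: 960px; margin: 0 auto; }
header, footer { clear: both; }

/* A main column and a sidebar, floated side by side. */
article { float: left;  width: 620px; }
aside   { float: right; width: 300px; }

/* Then work inward: navigation items laid out horizontally. */
nav li { display: inline-block; margin-right: 1em; }
```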

Now that your typography is set, and your layout is looking good, you can start on the fancy stuff, like borders, backgrounds, rounded corners, drop shadows, spriting, and so on and so forth: all of the interface fluff that takes a site from usable to beautiful. We’re on the home stretch now!

If you’re building a modern website, you’re probably going to be implementing some fancy UI behaviors using something like jQuery. Depending on the complexity of what you want to achieve, this may be quick and easy, or it may be weeks’ worth of iteration. Regardless, you’ve already given yourself a significant advantage: all of that semantic markup, the careful selection and classing of elements, gives you a huge boost with a tool like jQuery, for a couple of reasons. First, it makes it easier to identify the elements you’re trying to control in your scripts. Second, it automatically makes your code more readable, because you can quickly tell from the way you’ve identified an element what you’re trying to do. “$('p.boldRed')” doesn’t tell you much, but “$('p.callToAction')” tells anyone reading the code that you’re manipulating the call-to-action paragraph on the page. They know what to look for in the HTML and what to look for when viewing the page in the browser; it’s all immediately clear from the identifier used.

This is the basic process for building a semantic web page. This doesn’t cover the finer points of responsive design, which is a whole can of worms that I look forward to opening in a future post.

A Programmer’s Comparison of Java and C#/.NET

I’ve been developing in Java for almost ten years, and in C# for only a few months, so this comparison may not be as thorough as it could be. If I’ve missed something please feel free to comment.

For one, Java is far more portable than C#. I’ve built applications on Windows and ported them to Linux and Mac with minimal effort. With Android running on a Java-based platform, porting to Android is far easier as well. There’s no fussing with Mono and the like, no making sure you avoid API functions that are only available in Microsoft’s .NET implementation.

Java also has a wide array of available IDEs with excellent features, and a huge, active community. There are libraries available for just about any technology you can think of, usually more than one, such that one of the options will likely fit your specific situation.

Java’s erasure-based generics seem to be easier to learn and work with than C#’s reified generics; however, reified generics likely perform better. Java’s virtual-by-default method scheme also makes development a lot easier in many cases. While I do understand the idea behind C#’s non-virtual-by-default scheme, I prefer a language that leaves those kinds of choices up to the developer rather than enforcing good development practices through language features.

The JVM is also expanding to support other languages, including Python (Jython), Ruby (JRuby), Scala, and even PHP, among others.

C# has some excellent language features that I would like to see in Java, however. Extension methods are extremely useful for adding functionality to classes without having to subclass, particularly useful in adding functionality to library classes. C#’s delegate features are really excellent, and beat any workaround Java has for its lack of closures for callbacks and event handlers. The upcoming Java closure functionality looks like it will still pale in comparison to C#’s delegates.

LINQ is something I would love to see come to Java; the ability to query collections like databases is extraordinarily useful and eliminates a lot of tedious code iterating over collections. I’ve yet to use it for querying against a database, but it seems more than adequate for that purpose and likely much friendlier than JDBC. And while porting is more complicated, Mono is a very strong platform, and there’s even a Mono module for hosting web applications through Apache.

Speaking of web applications, I have no experience so far with building web applications in C# .NET, but I have done some research. Based on that research, I have to say I significantly prefer JSP/JSTL/EL over ASP.NET. I prefer the syntax, the workflow, and JSP’s tag library system over ASP.NET, which reminds me a little too much of PHP or old-school JSP (pre-JSTL/EL) for my tastes.

All in all, I can’t say one is superior to the other; like any development decision, it comes down to which option is best suited to the situation and the developer. If you’ve had the opportunity to make that choice, please leave a note in the comments and let me know what you chose and why; I’d be happy to hear it!

The State of PC Upgrades

I’m not the first to point this out, but PCs have really reached the point of diminishing returns in many respects. While technological progress marches on, there’s not a tremendous subjective difference between this year’s hottest CPU and a mid-range part from two years ago. In particularly intensive applications, sure, you’ll notice it; but for the majority of users, there’s not much incentive to upgrade. For the rest, there are likely one or two parts whose upgrade will bring a real benefit, while the rest of the system can happily stay 2-3 years old – and those new parts will likely satisfy for at least 2-3 years more.

I used to operate on a two-year upgrade path: every other year I’d build a new machine, and in the years between, I’d make some individual upgrades (additional RAM, additional disks, faster GPU). Now I’m looking more at a 5-year path, with individual upgrades every year or two between. I really think that, at this point, anyone with a machine built in the last 3 years has little to benefit from a total overhaul. There are a few areas where everyone is likely to see real improvements:

  • Operating system: if you don’t have Windows 7, get it, along with any upgrades required to meet the minimum specs. I can’t recommend Windows 8 for any user for any purpose at the current time.
  • RAM: if you have a 32-bit system (unlikely if it’s less than 3 years old), you should have 4GB of RAM. If you have a 64-bit system, you should have 8GB; possibly 16GB for computer audio, video, or graphics professionals, or users running intensive virtual machines.
  • SSD: you should have an SSD. Honestly. If you don’t have one, get one. They’re getting cheaper by the day, and will give you a real, noticeable performance improvement across the board. Your SSD should host your OS and applications, at the least. Let Windows 7 handle optimizing system configuration for the SSD; just do a clean install onto the SSD and let it do the rest. Ignore all the “SSD tuning tips” that require changing OS settings or disabling services. 99% of them are wrong, and the other 1% are debatable.
  • Display: IPS displays are a world away from your typical bargain LCD, and they’re getting cheaper constantly. You can now get a name-brand, 24″ IPS display for under $300. If you’re a computer professional, you probably want at least two.
This is, of course, a generalization, and depending on how you use a PC, you will have different needs. Audio professionals will obviously see benefits from discrete audio hardware. Imaging and video professionals will want a fast CPU. 3D graphics professionals will want a fast CPU with as many cores as they can get, as well as a fast GPU. 3D gamers will want the fastest GPU they can get – possibly upgrading every 12 to 18 months.
My system is two years old now. So far I’ve upgraded from 4GB of RAM to 8GB, and I’ve added a 256GB SSD. I’m planning on replacing my single 23″ TN LCD with dual 24″ IPS LCDs soon. In another year or two I’ll probably upgrade the GPU, and in three years or so I might be in the market for a total replacement – or I might not. I wouldn’t be shocked to find that, three years from now, brand-new hardware doesn’t put enough distance between itself and what I’ve already got to make it worth the money. Only time will tell.

Video Game Business Models

I see an opportunity, particularly for indie game developers, in developing new business models for selling games. There are currently three predominant business models in the gaming industry:

  1. The major retail model: release a game for $60 in major retail outlets, with a huge marketing push, looking for a big launch-week payout. Steadily lower the retail price by $5 or $10 a couple of times a year as it ages, until it eventually ends up in the $10 bargain bin. In the meantime, release DLC or expansions to try to get more money out of existing players, which raises the total cost for late buyers who picked the game up for $20 at retail back up to, or above, the original $60 price tag.
  2. The subscription model: the game itself is cheap or free, but players must pay a monthly fee (usually around $15) to play the game. This is most common in the MMO genre, but can be seen elsewhere as well.
  3. The “freemium” model: the game itself is free, but players pay for in-game items, bonuses, avatars, skins, or other unlockable content, on a per-item basis. This is most commonly done with a points system, where players buy points with cash, and then spend the points on in-game items. This is particularly popular with mobile games, but is fairly widespread in general.
All three have found great success with the big game publishing houses, and the last one has found a good deal of success for indie game developers. But that last option doesn’t work with all game types, and has two possible outcomes: either all the purchasable content is purely aesthetic, and doesn’t seem worth paying for, or it offers real in-game advantages, and gives players the option to “pay to win”, leaving those who can’t or don’t pay feeling unfairly handicapped.
I think there’s another option waiting in the wings, however; I call it the value model, for lack of a better term, and it works something like this: release a game at a very low price point, and do the exact opposite of the major retail model. Players can purchase the game at any time and gain access to all content, past, present, and future. As content is added through updates and expansions, the price goes up accordingly with value. This has several effects on the sales dynamic:
  • For indie developers, releasing at an initial low price point can help to boost sales when a large marketing budget is unavailable, and help to fund further development. It’s also easier to sell a game at a lower price point before it gets popular, and easier to set a higher price point as popularity increases.
  • For players, it helps them avoid feeling like they’re being swindled or continuously squeezed for more money; they know up front what they’re paying, they know what they’re getting right away, and if that’s worth it, then whatever content comes later (free for them) is a welcome bonus.
  • From a marketing perspective, it creates the opportunity for a reverse discount: if you announce ahead of time that new content will be released (and therefore that the price will be going up), it can push people to make the purchase – locking in the lower price while guaranteeing the upcoming content – the same way a true discount would, without actually having to lower the price.
Does anyone know of any examples of such a model being used for games? I’ve seen it occasionally in game content (e.g. Unity assets and the like), but I don’t think I’ve seen it for a public game release. I’d be happy to hear thoughts on the subject in the comments!

Convenience Languages

I’ve come to see the uncertainty of untyped and interpreted languages as something of a curse. In a strongly typed, compiled language, you know ahead of time that the code is at least trying to do what you want it to; you know you didn’t typo any variable, function, method, or class names. You know you didn’t misuse or misunderstand a function, passing one type when another is required. Sanitizing inputs and returns is a matter of checking bounds, not types. Type-safe comparison is a non-issue.

After working with PHP and JavaScript extensively, as well as dabbling in Perl, Python, and Ruby, I miss the basic assurance you get from a language like C/C++, C#, or Java that if it compiles, nothing is completely wrong. Even in HTML, you can validate the markup. But in PHP or JavaScript, you probably don’t know about even a major, simple error until run-time testing (unit or functional).
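To illustrate the point (a contrived example of my own, not from any particular codebase), here’s the kind of error a compiler would flag immediately but a dynamic language happily runs:

```javascript
// A trivial summing function. Nothing declares that `prices` must
// contain numbers, so nothing enforces it.
function total(prices) {
  let sum = 0;
  for (const p of prices) {
    sum += p;
  }
  return sum;
}

// Correct usage: returns the number 6.
const good = total([1, 2, 3]);

// A silent bug: with strings, `+` becomes concatenation, so this
// "succeeds" at runtime and returns the string "0123" instead of 6.
const bad = total(["1", "2", "3"]);
```

No tool complains about the second call; only a runtime test that happens to exercise it would ever reveal the problem.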

To me, that’s a nightmare. I miss knowing. I miss that little bit of certainty and stability. With an untyped interpreted language, you may never be 100% certain that you’ve not made a silly but fatal mistake somewhere that your tests just didn’t happen to catch.

These are languages of convenience: easy to learn, quick to implement small tasks, ubiquitous. But they just aren’t professional-grade equipment.

Developing software is both an art and a science. I make an effort every day not to be just a coder, but a code poet. That’s hard to do on the unsure footing of a dynamic language. I won’t deny that these languages let you do some neat tricks, and I won’t even get into the performance issues; my concern here is purely quality.

Is it possible to write quality code in a dynamic language? Absolutely. Unfortunately, it’s harder, and far more rare – and not just because it’s challenging. It’s mainly temptation. Why would the language offer global variables if you weren’t supposed to use them? Why have dynamic typing at all if you aren’t going to have variables and function return values whose types vary depending on context? Even with the best intentions, you can commit these crimes against code accidentally, without even knowing it until you finally track down that pesky bug six months down the road.

Using (and abusing) these sorts of language features makes for messy, sloppy, confusing, unreadable code that can be an extraordinary challenge to debug. Add to that the fact that IDEs are severely handicapped with these languages, unable to offer much – if any – information on variables and functions, and unable to detect even the simplest of errors. That’s because variable types and their associated errors only exist at runtime; and while an IDE can rapidly attempt to compile a source file and use that to detect errors, it can’t possibly execute every possible code path to determine what type(s) a variable might contain or a function might return.

I know most of this has been said before, and every new language will inspire a new holy war. I’m writing this more because all of the above leads me to wonder about the growing popularity of dynamic languages like Python, Ruby and JavaScript, and the continued popularity of PHP. Anyone care to shed some light on the subject in the comments?

Simplicity, Flexibility, and Agility

Agile programming is supposed to be about flexibility in the face of changing requirements. It’s supposed to be about rapid development and iteration. But all too often it ends up as process-heavy as the classical methodologies it was meant to replace. Many agile methodologies drown developers in process, taking time away from development. Test-driven development is a brilliant concept, but it puts more time between planning and iteration, making it harder to deal with changing requirements, not easier, and increasing the burden of change and the cost of refactoring.


Every developer wants carefully, precisely defined requirements. Developers often try to handle changing requirements by developing for flexibility, but flexibility often comes at the cost of added complexity. Trying to write in endless flexibility to allow for changing requirements is very much akin to premature optimization. You end up doing a whole lot of work to make some code “better” – more flexible in this case, versus more performant in the case of optimization – when you don’t yet know which code really needs it and which doesn’t.

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.” –C. A. R. Hoare 

Often the best way to maintain flexibility is through simplicity. A program that meets its requirements with the simplest possible implementation is one that will be naturally flexible, maintainable, stable, and manageable. Of course, intelligent development plays a major role; making appropriate use of design patterns can do a lot to improve both flexibility and simplicity. Key design tenets like DRY, YAGNI, separation of concerns, and avoiding premature optimization (including overarchitecting for flexibility) help keep complexity down and productivity up.

What if requirements change? What if the simplest possible solution isn’t as extensible? Good news: having invested in the simplest possible solution, you’ve lost little in development. You haven’t built anything that wasn’t strictly necessary. The new solution will still aim for simplicity, and still reap the same rewards. By keeping things simple, you’ve kept down the cost of change.

Software development is a learning process by nature; you’re building something that’s never been done before, or at least building something in a way that’s never been done before. Innovation is at its heart, and learning is the personal experience of innovation. That being the case, every iteration has value in what’s learned, even if the code is later removed or replaced. The experience of writing it adds value to the team, and the team defines the value of the end product.


Many readers may think of targeting simplicity as a given, but it truly isn’t; while simplicity is often a goal, all too frequently it takes a back seat to other concerns, or is abandoned altogether because simplicity becomes far more difficult to achieve with many of the popular frameworks and libraries available today. Frameworks have to aim for maximum flexibility in order to be successful; in order to be general-purpose, they can’t make many assumptions, and they have to account for a whole host of different usage scenarios and edge cases. This increases the complexity of the framework, and accordingly, the complexity of any implementation using the framework. The fewer assumptions the framework can make, the more effort a developer has to put in just telling the framework what she’s trying to accomplish.


I can’t count how many implementations I’ve seen that are drastically more complex than necessary, simply because they have been forced to apply the conventions required by their chosen framework; and even following those conventions, they’re still left managing arcane configuration files, and tracking down bugs becomes an epic undertaking requiring delving deep into the inner workings of the framework and libraries. All too often, the quick-start “hello world” app is far more complex with a framework than without it, and adding functionality only makes the situation more bleak.


So, what does all this add up to? Here’s the bullet-point version:

  • If you’re aiming for agility – the ability to adapt quickly to changing requirements – don’t invest too much time on nailing down requirements. Get as much detail as you can, and start iterating. If you’re planning for requirements to change, go all the way – assume that your initial requirements are wrong, and think of each iteration as an opportunity to refine the requirements. All code is wrong until proven right by user acceptance.
  • Use interface mockups (for software with a UI) or API documentation (for libraries and services) as a tool to give stakeholders a chance to revise requirements while looking at a proposed solution and thinking about using it in real-world scenarios.
  • Don’t choose flexibility or modularity over simplicity. Choose a framework that won’t get in your way; if there isn’t one, then don’t use a framework at all. Don’t write what you don’t need. Don’t turn a piece of code into a general-purpose API just because you might need to use it again. If you need it in multiple places now, separate it out; otherwise, you can refactor it when it’s appropriate.
  • Think about separation of concerns early in the game, but don’t sacrifice simplicity for the sake of compartmentalization. Simple code is easier to refactor later if refactoring turns out to be necessary. Overarchitecting is the same sin as premature optimization, it’s just wearing a nicer suit.
  • The simplest solution isn’t always the easiest. The simplest solution often requires a lot of thought and little code. Don’t be a code mason, laying layer after layer of brick after brick; be a code poet, making every line count. If a change could be implemented by increasing the complexity of existing code, or by refactoring the existing code to maintain simplicity, always take the latter route; you’ll end up spending the same amount of time either way, but will reap far more benefits by maintaining simplicity as a priority.
  • Simplicity carries a great many implicit benefits. Simpler code is very often faster (and easier to optimize), more stable (and easier to debug), cleaner (and easier to refactor), and clearer to read and comprehend. This reduces development, operational, and support costs across the board.
  • Simplicity doesn’t just mean “less code”, it means better code. SLOC counts don’t correlate directly to complexity.
  • Don’t reinvent the wheel – unless you need to. All wheels aren’t created equal; there’s a reason cars don’t use the same wheels as bicycles.
What are your experiences? What ways have you found to keep hold of simplicity in the face of other pressures? Feedback is welcome in the comments!

Dynamic Playlists

There is a feature I miss from Audion, which they removed before they retired the app entirely, that I have yet to see recreated in any other music player. It was simple. It was brilliant. I want it back.
Basically, Audion used to allow you to group tracks in your playlists into folders, and – this is the important part – check or uncheck folders to include or exclude them from the playlist, temporarily.
Big deal, right? iTunes lets you check or uncheck songs, and you can multi-select and do a bunch at once. Except that it unchecks them everywhere, not just in that playlist, and you still have to do your multi-select by hand every time.
The beauty of this was that it gave playlists a sort of dynamic quality: I could have a playlist with folders for happy tracks, funny tracks, angry tracks, sad tracks, tracks with a good beat, tracks with a fast beat, and so on. Then, depending on my mood, I could check, say, happy songs and songs with a good beat when I’m in a good mood, or angry songs and songs with a fast beat when I’m looking to play some first-person shooters. And if my mood changes an hour later, I can uncheck some parts and check others, and it will just keep shuffling through whatever is active when it comes time to pick the next track. It was brilliant.
And I miss it, and I want it back. iTunes doesn’t do it, WinAmp doesn’t do it, Songbird doesn’t do it, VLC doesn’t do it… nobody does. And it doesn’t seem like it should be necessary to write an entire music player just to get this one feature; maybe one day I’ll get up the nerve to modify Songbird or VLC to do it. Or maybe, sometime between now and then, some kind-hearted developer will hear my pleas and implement it in their player. Who knows.
If anybody out there knows of a player that does have this functionality, please, let me know in the comments… I’d be forever grateful!

On New Tricks, Old Hacks, and Web Browsers

I must say, I’m a little curious why I haven’t seen mention of this before; a quick Google search didn’t turn anything up either. For the last, oh, ten years or so, web designers have been wrestling with all the different browsers, and different versions of each browser, to get their web pages to behave the same – or at the very least, behave relatively well – on all the browsers their users are likely to employ.

The whole time, the W3C has been releasing new standards and new versions of old standards to give web designers new tricks… and every time, the browsers all catch up to the new standards at different speeds, and implement different parts of the standards, or implement them slightly differently.
My question, then, is this: why is there no W3C specification for browser capability detection? Why can’t I use CSS selectors to target certain styles at certain browsers, without resorting to lousy hacks? Even CSS3’s new media queries allow me to check the screen size before applying styles, but not whether or not the browser supports, say, CSS3. Of course, it’ll take forever for designers to be able to count on all the browsers supporting a new feature like that, but I haven’t even seen a proposal.
Today, putting together a design involves pulling up your design in all the browsers, figuring out what works and what doesn’t, and then applying hacks specific to each browser. Life would be so much easier in the web design world if instead you could say something like, “if the browser doesn’t support CSS3 background properties, apply this style instead.”
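Purely as a sketch, such a rule might look something like this (invented syntax, illustrating the idea rather than any existing specification; the selector and image name are placeholders):

```css
/* Hypothetical feature query: apply a fallback only where CSS3
   gradient backgrounds are not supported. */
@supports not (background-image: linear-gradient(#fff, #000)) {
  .pageHeader {
    /* Fall back to a pre-rendered gradient image. */
    background-image: url("header-gradient.png");
  }
}
```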
Suddenly, I don’t need to use the hack that hides CSS from IE, and the other hack that hides CSS from everything but IE, and test it, and then find another set of hacks for the Android browser, and another for Firefox, and so on. I can apply styles logically by selecting for specific features, rather than selecting for specific browsers and then having to keep up with the features of each browser – because the features are all I really care about as a designer.
I would much rather “hack” for specific features than specific browsers because it’s more intuitive, and it’s less work to support multiple browsers and different versions of each browser. The browser makers know what features they support. If I can select for the features I want to use, I don’t have to worry about keeping up-to-date with what features are supported by what versions of what browsers.
I’m bringing it up on the W3C mailing list, but I thought I would bring it up here… I’d love to hear your thoughts in the comments!

Home-Made Guitar Work Mat

I made a guitar work mat for about $6 and ten minutes’ work. It works perfectly, so I thought I’d share it here.

(Photo: the finished work mat)

While I was at the grocery store one day, I saw a small selection of shelf/drawer liners. They had a rubber webbing liner and a self-adhesive cork liner. I bought one of each; I think the cork was $4 and the rubber was $2.
The rubber roll was 12″ by 5′, and the cork roll was 12″ by 4′. 12″ is too narrow, so I used two rows of each. I laid the rubber down in two 2′ rows, then adhered the cork to it in two perpendicular rows, so that each layer would hold the other together. So far it’s been fine, but if the layers start to separate I plan to seal around the edges with tape, probably electrical tape.

This picture shows how the seams on each layer run perpendicular. Half of the mat is rolled up so you can see the cork side against the rubber side.

Advantages:
  • Rubber bottom provides traction on all surfaces — I have a glass work table and it’s rock solid.
  • Cork top provides good traction for your guitar body, but will not mar or scuff the finish in any way.
  • Both the cork and the rubber provide some level of impact protection; a regular cloth doesn’t protect your guitar from the hard surface underneath, while the rubber and cork will.
  • Rolls up for easy storage.
  • Costs less than a store-bought guitar work mat, which will probably not have the nice non-skid surface. Cheap & easy to replace if it gets lost or damaged.
If you try this out yourself, please let me know in the comments how it worked out for you!