Zen Templates Development Journal, Part 2

Having my concept complete (see Part 0) and my simple test case working (see Part 1), I was ready to start on my moderate-complexity test case. This would use more features than the simple test case, and more pages. I really didn’t want to have to build a complete site just for the proof of concept, so I decided to use an existing site, and I happened to have one handy: rogue-technologies.com.

The site is currently built in HTML5, using server-side includes for all of the content that remains the same between pages. Converting it to my template engine seemed like a pretty straightforward process, so I got to work: I started with one page (the home page) and turned it into the master template. I took all of the include directives and replaced them with the content they were including. I replaced all of the variable references with model references using injection or substitution. I added IDs to all of the elements in the master template that would need to be replaced by child templates. I then made another copy of the homepage and set it up to derive from the master template.

I didn’t want to convert the site to use servlets, since it wasn’t really a dynamic site; I just wanted to be able to generate usable HTML from my template files. So I created a new class that would walk a directory, parse the templates, and write the output to files in an output directory. Initially, it set up the model data statically by hand at the start of execution.
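
Conceptually it's just a directory walk, something along these lines – the class name, the renderer API (constructor, putAll, renderToFile), and the file layout here are placeholders for illustration, not the prototype's actual code:

```java
import java.io.File;
import java.io.IOException;
import java.util.Map;

// Illustrative static generator: render every template under a directory tree
// to a matching file under the output directory. Names are stand-ins.
public class StaticSiteGenerator {
    public static void generate(File templateDir, File outputDir, Map<String, Object> model)
            throws IOException {
        File[] entries = templateDir.listFiles();
        if (entries == null) {
            return;
        }
        for (File entry : entries) {
            File target = new File(outputDir, entry.getName());
            if (entry.isDirectory()) {
                target.mkdirs();
                generate(entry, target, model); // recurse into subdirectories
            } else if (entry.getName().endsWith(".html")) {
                ZenTemplateRenderer renderer = new ZenTemplateRenderer(entry); // stand-in renderer API
                renderer.putAll(model);            // hand the static model data to the renderer
                renderer.renderToFile(target);     // write the rendered HTML to the output file
            }
        }
    }
}
```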

All was well, but I needed a way for the child template to add elements to the page, rather than just replacing elements from the parent template. I came up with the idea of appending elements using a data attribute, data-z-append="before:" or "after:" followed by an element ID, which causes the element to be inserted into the page either before or after the parent-template element with that ID. This worked perfectly, allowing me to add the Google Webmaster Tools meta tag to the homepage.
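
As a rough sketch of the idea (the class and method names, and the "siteHeader" example ID, are purely illustrative; only the data-z-append attribute itself is part of the actual design), an append pass with Jsoup might look something like this:

```java
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Illustrative append pass: apply data-z-append directives from the child
// template against the output document built from the parent template.
public class AppendPass {
    public static void apply(Document childDoc, Document outDoc) {
        for (Element el : childDoc.select("[data-z-append]")) {
            String[] directive = el.attr("data-z-append").split(":", 2); // e.g. "after:siteHeader"
            if (directive.length != 2) {
                continue; // malformed directive; ignore it
            }
            Element target = outDoc.getElementById(directive[1]);
            if (target == null) {
                continue; // no element with that ID in the parent output
            }
            if ("before".equals(directive[0])) {
                target.before(el.outerHtml());
            } else {
                target.after(el.outerHtml());
            }
        }
    }
}
```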

With this done, I set to work converting the remaining pages. Most of the pages were pretty straightforward, being handled just like the homepage; I dumped the SSI directives, added some appropriate IDs and classes, and all was well. However, the software pages presented some challenges. For one thing, they used a different footer than the rest of the site. It was time to put nested derivation to the test.

I created a software page template, which derived from the master template and appended the additional footer content. I then had the software pages derive from this template instead of deriving directly from the master template, and – by some stroke of luck – it worked perfectly on the first try. I wasn’t out of the woods yet, though.

The software pages also used SSI directives to dynamically insert the file size for each downloadable file next to its download link. I wasn’t going to reimplement this functionality; instead, I was prepared to replace these directives with file size data stored in the model. But I wanted to keep the model data organized, so I needed to support nesting. The software pages also used include directives to embed a Google+ widget in the pages; this couldn’t be added to the template, as it was embedded in the body content, so it seemed like a perfect case for snippets – which meant I needed to implement snippet support.

Snippet support was pretty easy – find the data attribute, look up the snippet file, parse it as an HTML fragment, and replace the placeholder element with the snippet. It was simple to implement and worked well.
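
In sketch form, that pass looks roughly like the following – note that the data-z-snippet attribute name, the snippet directory layout, and the .html extension are all assumptions for the example, not the engine's actual conventions:

```java
import java.io.File;
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Illustrative snippet pass: replace each placeholder element with the parsed
// contents of the snippet file it names.
public class SnippetPass {
    public static void apply(Document doc, File snippetDir) throws IOException {
        for (Element placeholder : doc.select("[data-z-snippet]")) {
            File snippetFile = new File(snippetDir, placeholder.attr("data-z-snippet") + ".html");
            // Parse the snippet file; Jsoup wraps the fragment in html/body for us
            Document snippet = Jsoup.parse(snippetFile, "UTF-8");
            placeholder.before(snippet.body().html()); // insert the fragment's markup
            placeholder.remove();                      // then drop the placeholder
        }
    }
}
```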

I thought nested properties would be a breeze, as I had assumed they were natively supported by StrSubstitutor. Unfortunately they weren’t, so I had to write my own StrLookup. I decided that, since I was already doing some complex property lookups for injection, I’d build a unified model lookup class that could act as my StrLookup and could be used elsewhere. I wanted nested scope support as well, for my project list: each project had an entry in the model consisting of a name, latest version, and so on. I wanted the engine to iterate over this list and, for each entry, rather than replacing the entire content of the element with the text value of the model entry, find sub-elements and replace each with the appropriate property of the model entry. This meant I needed nested scoping support.

I implemented this using a scope stack and a recursive lookup. Basically, every time a nested scope was entered (e.g., content injection using an object or map, or iteration over a list), I would push the current scope onto the stack. When the nested scope was exited (i.e., the end of the element that added the scope), I popped the scope off. When iterating a loop, at the start of the iteration, I’d push the current index, and at the end of the iteration, I’d pop it back off.
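
Here's a rough sketch of the sort of scope-aware lookup I mean – the class name, the dotted-path syntax, and the map-only handling are simplifications for illustration, not the actual implementation (commons-lang3 shown; the 2.x StrLookup is equivalent):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import org.apache.commons.lang3.text.StrLookup;

// Illustrative scope-aware model lookup. Handles only map nesting; bean-style
// getter reflection, which injection would also need, is omitted for brevity.
public class ScopedModelLookup extends StrLookup<Object> {
    private final Map<String, Object> model;                  // the root model
    private final Deque<Object> scopes = new ArrayDeque<>();  // innermost scope at the head

    public ScopedModelLookup(Map<String, Object> model) {
        this.model = model;
    }

    public void pushScope(Object scope) { scopes.push(scope); }
    public void popScope()              { scopes.pop(); }

    @Override
    public String lookup(String key) {
        Object value = resolve(key);
        return value == null ? null : value.toString();
    }

    // Resolve a dotted path like "project.latestVersion": check the innermost
    // scope first, then enclosing scopes, then fall back to the root model.
    public Object resolve(String path) {
        String[] parts = path.split("\\.");
        for (Object scope : scopes) {
            Object value = resolveFrom(scope, parts);
            if (value != null) {
                return value;
            }
        }
        return resolveFrom(model, parts);
    }

    private Object resolveFrom(Object root, String[] parts) {
        Object current = root;
        for (String part : parts) {
            if (!(current instanceof Map)) {
                return null; // only map nesting handled in this sketch
            }
            current = ((Map<?, ?>) current).get(part);
            if (current == null) {
                return null;
            }
        }
        return current;
    }
}
```

During list iteration, the engine would push each item's entry (e.g. a project's map of properties) before processing the element's children and pop it afterward.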

This turned out to be very complex to implement, but after some trial and error, I got it working correctly. I then re-tested against my simple test case, having to fix a couple of minor defects introduced there with the new changes. But, at last, both my simple and moderate test cases were working.

I didn’t like the static creation of model data – not very flexible at all – so I decided to swap it out for JSON processing. This introduced a couple of minor bugs, but it wasn’t all that difficult to get everything working. The main downside was that it added several additional dependencies, and dependency management was getting more difficult. I wasn’t too concerned on that front, though, since I was already planning for the real product to use Maven for dependency tracking; I was just beginning to wish I had used Maven for the prototype as well. Oh well, a lesson for next time. For now, I was ready for my complex test case – I just had to decide what to use.
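
The swap amounts to something like the sketch below – Jackson is shown purely as an illustration of the approach (the class name and usage here are made up for the example, not necessarily the library or code I actually used):

```java
import java.io.File;
import java.io.IOException;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

// Illustrative only: read a JSON file into the generic Map<String, Object>
// model that the renderer already consumes.
public class JsonModelLoader {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @SuppressWarnings("unchecked")
    public static Map<String, Object> load(File modelFile) throws IOException {
        return (Map<String, Object>) MAPPER.readValue(modelFile, Map.class);
    }
}
```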

Zen Templates Development Journal, Part 0

Zen Templates is based on an idea I’ve been tossing around for about six months. It started with a frustration that there was no way to validate a page written in PHP or JSP as valid HTML without executing it to get the output. It seemed like there had to be a way to accomplish that.

I started out looking into the attributes I knew were global: class and id. I did some research and found that the standard allows any character in a class or id; this includes parens and such, meaning a functional syntax could be used in these attributes, which a parser could then process to render the template.

This seemed practically ideal; I could inject content directly into the document, identifying the injection targets using these custom classes. I toyed with the idea of using this exclusively, but saw a couple of serious shortcomings. For one, sometimes you want to insert dynamic data into element attributes, and I didn’t see a good way to handle that without allowing a substitution syntax like that of JSP or ASP. I decided this would be a requirement to do any real work with it.

I also saw the problem of templates. Often each page in a dynamic site is called a template, but I’m referring to the global templates that all pages on a site share, so that there is only one place to update the global footer, for example. I had no good solution for this. I started thinking about the idea of each page being a template and sharing a global template – much akin to subclasses in object-oriented programming, where one template derives from another.

I started batting around different possibilities for deriving one template from another, and decided on having a function (in a class attribute) to identify the template being derived from, with hooks in the parent template to indicate which parts the “subtemplate” would be expected/allowed to “override”.

I let the idea percolate for a while – a few weeks – as other things in life kept me too busy to work on it. Eventually it occurred to me that all these special functions in class attributes were pretty messy, and a poor abstraction for designers. It could potentially interfere with styling. It would produce ugly code. And I was on a semantic markup kick, and it seemed like a perfect opportunity to do something useful with semantic markup.

So I started rebuilding the concept and the current Zen Templates was born (and the name changed from its original, Tabula Obscura.) As I committed to maximizing the use of semantic markup and keeping template files as valid, usable HTML, I reworked old ideas and everything started falling into place. I remembered that the new HTML5 data attributes are global attributes as well, and would give me an additional option for adding data to markup without interfering with classes or ruining the semantics of the document.

I ironed out all the details of how derivation should work. It made semantic sense that a page deriving from another page could identify that parent by class; and, taking a page from OOP’s book, it made sense that an element in the subpage with the same ID as an element in the parent page would override that element, making any element with an ID automatically a hook – much like a subclass overriding a method in the superclass by defining a method with the same signature.

I sorted out the details of content injection as well, thinking that, semantically, it made sense for an element of a certain class to accept data from the model entry whose identifier matches the class name. Even better, I didn’t need a looping syntax: if you inject a list of items into an element, the element is simply repeated for each item in the list. This simplified a lot of the syntax I’ve had to use in the past with JSP or Smarty.

I also wrote out how substitution should work, using a syntax derived from JSP. Leaning on JSP allowed me to answer a lot of little questions easily. I would try to avoid the use of functions in the substitution syntax, because functions make the document messier and force more programming tasks onto the designer; but I conceded that some functions would likely be unavoidable.

When I felt like I had most of the details ironed out, a guiding principle in mind, and a set of rules of thumb to help guide me through questions down the road, I was ready for a prototype. Stay tuned for Part 1!

Zen Templates Development Journal, Part 1

Once my concept was well documented (see Part 0), I was ready to start developing my prototype. I had many questions I needed to answer:

  • Is the concept feasible, useful, and superior in some meaningful way to existing solutions?
  • What kind of performance could I expect?
  • How would derivation work in real-world scenarios? What shortcomings are there in the simple system described in my concept?
  • Ditto for content injection and substitution.
  • How would I handle model data scoping?
  • Would it be better to parse the template file into a DOM Document, or to parse it as a stream?
I started with an extremely simple use case: two templates, one deriving from the other; a couple of model data fields, one of them a list; use of basic derivation, injection, and substitution, with no scope concerns. I built the template files and dummy model data such that I could quickly tell what was working and what wasn’t (“this text is from the parent template”, “this text is from the child template”, “this text shouldn’t appear in the final output”, etc.) I also built a dead-simple servlet that did nothing but build the model, run the template renderer, and dump the output to the HttpServletResponse.
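
The servlet really was dead simple – roughly the following, where the ZenTemplateRenderer class name, its put() accessor, and the render(resp) signature are stand-ins for the renderer API described below, and the template path and model keys are made up for the example:

```java
import java.io.IOException;
import java.util.Arrays;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative test harness: build the model, run the renderer, dump the output.
public class PrototypeTestServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        ZenTemplateRenderer renderer =
                new ZenTemplateRenderer("/WEB-INF/templates/child.html", getServletContext());
        renderer.put("title", "this text is from the model");
        renderer.put("items", Arrays.asList("first item", "second item", "third item"));
        renderer.render(resp); // writes the rendered HTML to the response
    }
}
```
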
With this most basic use case in place, I started to work on the template renderer. I started with the state, initialization, and entry point. For the state, I knew I needed a reference to the template file, and I needed a Map for the model data. For initialization, I needed to take in a template file, and initialize the empty model. For convenience, I allowed initialization with a relative file path and a ServletContext, to allow referencing template files located under WEB-INF, so that they couldn’t be accessed directly (a good practice borrowed from JSP.) I created accessors for adding data to the model.
The entry point was a method simply named “render”. It was five lines long, each line calling an as-yet-unimplemented protected method: loadTemplate, handleDerivation, handleInjection, handleSubstitution, and writeOut. These were the five steps needed for my basic use case.
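
In sketch form it was roughly this – treat the exact signature as a guess rather than the original source:

```java
// Roughly the shape of the entry point (a sketch; a method of the renderer class).
public void render(HttpServletResponse response) throws IOException {
    loadTemplate();          // parse the template file
    handleDerivation();      // resolve the parent chain and build the output document
    handleInjection();       // inject model data by class name
    handleSubstitution();    // replace ${...} placeholders
    writeOut(response);      // write the result to the response
}
```
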
I then went to work on building out each of the steps. The easiest was loading the template file from disk into a DOM Document using Jsoup (since XML handlers don’t deal well with HTML content). At this point I added two Documents to the renderer’s state, inDoc and outDoc. inDoc was the Document parsed from the template file, outDoc was the Document in memory being prepared for output. I followed a basic Applications Hungarian Notation, prefixing references to the input document with “in” and references to the output document with “out”.
Since I needed to be able to execute derivation recursively, I decided to do it by creating a new renderer, passing it the parent template, and running only the loadTemplate and handleDerivation methods; then the parent renderer’s outDoc became the child’s starting outDoc. In this way, if the parent derived from another template, the nested derivation would happen automagically. I would then scan the parent document for IDs that matched elements in the child document, and replace them accordingly. Derivation was done.
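
A sketch of that step (a method of the renderer; the findParentTemplate helper, the servletContext field, and the ZenTemplateRenderer class name are illustrative, and error handling is omitted):

```java
// Illustrative recursive derivation step; uses org.jsoup.nodes.Document/Element.
protected void handleDerivation() throws IOException {
    String parentPath = findParentTemplate(inDoc); // read the derivation marker (helper name is made up)
    if (parentPath == null) {
        outDoc = inDoc.clone(); // no parent: the output starts as a copy of the input
        return;
    }
    // Render the parent's derivation first; if it derives from yet another
    // template, this recursion handles the nesting automatically.
    ZenTemplateRenderer parent = new ZenTemplateRenderer(parentPath, servletContext);
    parent.loadTemplate();
    parent.handleDerivation();
    outDoc = parent.outDoc;

    // Any child element whose ID matches an element in the parent overrides it.
    for (Element inEl : inDoc.select("[id]")) {
        Element outEl = outDoc.getElementById(inEl.id());
        if (outEl != null) {
            outEl.replaceWith(inEl.clone());
        }
    }
}
```
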
Next up was injection: I started out by iterating over the keys in my model Map, scanning the document for matching class names. Where I found them, I simply replaced the innerHtml of the found element with the toString() value of the model data; if the model data was an array or collection, I would instead iterate the data, duplicating the matched element for each value, and replacing the cloned element’s innerHtml with the list item’s toString() value. This was enough functionality for my simple test case.
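
Again in sketch form (a method of the renderer, using the model map and outDoc fields; not the actual code):

```java
// Illustrative injection pass: model keys are matched against class names.
// Uses java.util.Map, java.util.Collection, and org.jsoup.nodes.Element.
protected void handleInjection() {
    for (Map.Entry<String, Object> entry : model.entrySet()) {
        for (Element el : outDoc.select("." + entry.getKey())) {
            Object value = entry.getValue();
            if (value instanceof Collection) {
                // Duplicate the matched element once per item in the collection
                for (Object item : (Collection<?>) value) {
                    Element copy = el.clone();
                    copy.html(String.valueOf(item));
                    el.before(copy); // insert copies in order, ahead of the original
                }
                el.remove(); // drop the original placeholder element
            } else {
                el.html(String.valueOf(value)); // simple case: replace the inner HTML
            }
        }
    }
}
```
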
Reaching the home stretch, I did substitution ridiculously simply, using a very basic regex to find substitution placeholders (${somevariable}) and replacing each with the appropriate value from the model. I knew this solution wouldn’t last, but it was enough for this early prototype.
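
The whole thing fit in a few lines, along these lines (a sketch; the class name is made up, only the ${...} placeholder form comes from the design):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the throwaway regex-based substitution pass.
public class RegexSubstitution {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    public static String substitute(String html, Map<String, Object> model) {
        Matcher matcher = PLACEHOLDER.matcher(html);
        StringBuffer out = new StringBuffer();
        while (matcher.find()) {
            Object value = model.get(matcher.group(1));
            String replacement = value == null ? "" : value.toString();
            matcher.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        matcher.appendTail(out);
        return out.toString();
    }
}
```
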
Last up was writing the rendered output, and in this case, I allowed passing in an HttpServletResponse to write to. I would set the content type of the response, and dump the HTML of my final Document to the response stream.
I ran it, and somehow, it actually worked. I was shocked, but excited: in the course of a little over an hour, I had a fully working prototype of the most basic functions of my template engine. Not a complete or usable product by any means, but an excellent sign. I made a few tweaks here and there, correcting some minor issues (collection items were being inserted in reverse order, for example), but it was pretty much solid. I also replaced my RegEx-based substitution mechanism with the StrSubstitutor from commons-lang; this was pretty much a direct swap that worked perfectly.
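
The swap really was nearly a one-liner over the same model map (commons-lang3 shown here; the commons-lang 2.x class is equivalent):

```java
import java.util.Map;
import org.apache.commons.lang3.text.StrSubstitutor;

// The commons-lang replacement for the regex pass above.
public class CommonsSubstitution {
    public static String substitute(String html, Map<String, Object> model) {
        return new StrSubstitutor(model).replace(html);
    }
}
```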

Time for the next test, my moderate complexity test case.

The Development Stream

I was reading today about GitHub’s use of chat bots to handle releases and continuous integration, and I think this is absolutely brilliant. In fact, it occurs to me that using a chat bot, or a set of chat bots, can provide an extremely effective workflow for any continuous-deployment project. Of course, it doesn’t necessarily have to be a chat room with chat bots; it can be any sort of stream that can be updated in real-time – it could be a Twitter feed, or a web page, or anything. The sort of setup I envision would work something like this:

Everyone on the engineering team – developers, testers, managers, the whole lot – stays signed in to the stream as long as they’re “on duty”. Every time code is committed to a global branch – that is, a general-use preproduction or production branch – it shows up in the stream. Then the automated integration tests run, and the results are output to the stream. The commit is deployed to the appropriate environment, and the deployment status is output to the stream. Any issues that occur after deployment are output to the stream as well, for immediate investigation; this includes logged errors, crashes, alerts, assertion failures, and so on. Any time QA opens a defect against a branch, the ticket summary is output to the stream. The stream history (if it’s not already compiled from some set of persistent-storage sources) should be logged and archived for a period of time, maybe 7 to 30 days.

It’s very important that the stream be as sparse as possible: no full stack traces with error messages, no full commit messages, just enough information to keep developers informed of what they will need to look into further elsewhere. This sort of live, real-time information stream is crucial to the success of any continuous-deployment environment, in order to keep the whole team abreast of any issues that might be introduced into production, along with when and how they were introduced.

Now, what I’ve described is a read-only stream: you can’t do anything with it. GitHub’s system of using an IRC bot allows them to issue commands to the bot to trigger deployments and the like. That could be part of the stream, or it could be part of another tool, as long as the deployment and its results are output to the shared stream for all to see. This is part of having the operational awareness necessary to quickly identify and fix issues, and to maintain maximum uptime.

There are a lot of possible solutions for this sort of thing; Campfire looks particularly promising because of its integration with other tools for aggregating instrumentation data. If you have experience with this sort of setup, please post in the comments, I’d love to hear about it!

Truly Agile Software Development

Truly agile software development has to, by nature, allow for experimentation. In order to quickly assess the best option among a number of choices, the most effective method is empirical evidence: build a proof of concept for each option and use the experience of creating the proof, as well as the results, to determine which option is the best for the given situation.

While unit tests are valuable for regression testing, a test harness that supports progression testing is at least as useful. Agile development methodologies tend to focus on the idea of iterating continuously toward a goal along a known path; but what happens when there’s a fork in the road? Is it up to the architect to choose a path? There’s no reason to do so when you can take both roads and decide afterward which you prefer.

Any large development project should always start with a proof of concept: a bare-bones, quick-and-dirty working implementation of the key functionality using the proposed backing technologies. It doesn’t need to be pretty, or scalable, or extensible, or even maintainable. It just has to work.

Write it, demo it, document what you’ve learned, and then throw the code away. Then you can write the real thing.

It may seem like a waste of time and effort at first.  You’ll be tempted to over-engineer, you’ll be tempted to refactor, you’ll be tempted to keep some or all of the code. Resist the urge.

Why would you do such a thing? If you’re practicing agile development, you might think your regular development is fast enough that you don’t need a proof. But that’s not the point; the point is to learn as much as you can about what you’re proposing to do before you go all-in and build an architecture that doesn’t fit and that will be a pain to refactor later.

Even if it takes you longer to build the proof, it’s still worth it. For one thing, it probably took longer because of the learning curve and the mistakes made along the way, which can be avoided in the final version; for another, you’ve learned what you really need and how the architecture should work, so that when you build the production version you can do it right the first time, with greater awareness of the situation.

This approach allows much greater confidence in the solutions chosen, requiring less abstraction to be built into the application, which allows for leaner, cleaner code in less time. Add to that the value of building a framework that is flexible enough to allow for progression testing, and you’ve got the kind of flexibility that Agile is really all about.

Note: Yes, I understand that Scrum calls prototypes “spikes”. I think this is rather silly – there are already terms for prototypes, namely, “prototype” or “proof of concept”. I’m all for new terms for things that don’t have names, but giving new names to things that already have well-known names just seems unnecessary.

HTML5 Grid Layouts

I have to take issue with the swarm of “responsive grid layout” systems that have been cropping up lately. Yes, they’re great for wireframes and prototypes. No argument there. And yes, they take care of a lot of the legwork involved in producing a responsive layout. Great. But in the process, they throw semantic markup and separation of concerns out the window.

The idea of semantic markup is that your document structure, IDs, and classes should describe the content of the document. Separation of concerns, in HTML and CSS, means using classes and IDs to identify what something is (not how it should appear), and using CSS to select that content and determine how it should appear; this allows you to change content without having to change appearance, and vice versa: the concerns of document structure and appearance are kept separate.

That means, as far as I’m concerned, as soon as you put a 'class="two column"' into your HTML, you’ve lost the game. You’ve chained your structure to your presentation. Can you change your presentation without modifying the markup? Not anymore. All we’ve achieved with this is bringing back the days of nested tables for layout, with a pretty CSS face on it. With one dose of “clever” we’ve traveled back in time 15 years. Only this time, there *are* other ways to do it. There’s no excuse. It’s just plain laziness.

Is building a truly semantic, responsive, attractive layout possible? Absolutely. Difficult? Yes. Is it worth the effort? In the long run, I think it is – except for those cases mentioned above, prototypes and wireframes, code that’s meant to be disposable. But any code that has to be maintained in the future will be hamstrung by these systems.

Web development has made tremendous strides over the last 10 years. It’s amazing how far we’ve come in terms of what can be done and how. Don’t take all those advances and use them to regress all the way back to clunky old table-based layouts. Try using them to do something new and interesting instead. There’s no reason the idea of software craftsmanship should be missing from the web design world.

Assumptions

We all make assumptions. It’s the only way we can get anything done. If every time you found a bug you started at the iron – testing the CPU to make sure every operation returns an expected result – it’d take you months to troubleshoot the simplest issue. So we make assumptions to save time, when the likelihood of something being the cause of a problem is too low to justify the time it would take to verify it.

We also make assumptions out of sheer bloody-mindedness. You can spot these assumptions by phrases like “that couldn’t possibly be it” or “it’s never been a problem before” or “I wrote that code, I know it works”. These are the kinds of assumptions that can get us into trouble, and they’re the exact reason why it’s important to have developers from different backgrounds, with different perspectives, who make different assumptions.

Since we all make assumptions, the best way to challenge those assumptions is to have someone who makes different assumptions look at the issue. They’ll bring their perspective and experience to the matter, challenge your assumptions where they don’t make sense, and make you prove those assumptions to be accurate or not. This stands as a strong incentive to hire a team with diverse backgrounds and areas of expertise. They bring not just talent to your team, but a different perspective.

It’s also a good reason to invest the time in learning different technologies, languages, and development philosophies. Getting outside of your comfort zone can open your eyes to things you might not otherwise have considered, and help you to gain new perspective on your work – helping you to challenge your own assumptions.

The Semantic Web: Practical Semantic Markup

There’s been a lot of talk, for many years, about the coming of “the semantic web”: all markup will include semantics that automated systems can read and understand for as-yet-undefined purposes, though prognosticators speculate on all manner of technical advances that could come from those semantics. What about the here and now, though? Right now, today, semantic markup can help you. Semantic markup does one very useful thing: it makes building and styling web pages a heck of a lot easier, if done right.

So, how do you do it right and reap those benefits? Start from a blank slate and fill in your content. Don’t even think about layout or styling – worry only about organizing the content in a clean and sensible way. Your headings should be in h* tags, with lower-level headings using lower-level heading tags. Your content should be in p tags, with no br’s. Enclose the main content in an article tag, enclose sidebars in aside tags, and so on. Enclose your header, navigation, and footer in the appropriate tags.

Load the page. It’ll look like crap. All you’re looking for right now is a sensible document flow. The page should read cleanly from top to bottom with no styling. If not, reorganize your markup until it does.

Now that you have a well-organized, semantically-tagged document, start identifying the parts of the page that are specific to your document. Add IDs to unique elements on the page. Add classes to just about every container on the page to identify its purpose – even if it already has an ID (more on this later.) Name your IDs and classes based on what they identify, not how they’re supposed to look. For example, don’t use “small” or “bold” as class names; if you want your copyright footer to be small, name it “copyright” and worry about the appearance later. If you want text to be bold, use the strong tag if it’s appropriate (e.g. a bold segment of body text), or use a class name that says what the thing is that you want to be bold (e.g. class="announcement" or class="specialOffer".)

Try to use a consistent naming scheme. I use CamelCase for all classes and IDs, with IDs starting with a capital letter and classes starting with a lowercase letter. This is just what makes sense to me personally; it doesn’t matter what your standard is, as long as you find it intuitive and you stick to it.

After all this, your page looks exactly like it did before. Excellent. Now that you’ve got semantic tags identified with semantic classes and IDs, you’re ready to start styling your document. It doesn’t really matter what you start with, but I tend to start with typographic styling. The reason behind this is that typographic styling will change the font metrics, and many parts of a responsive design will be relative to your font metrics, so starting with typography gives you a solid foundation on which to build your layout.

For typography, start at the bottom and work your way up: start by applying your default font style to body, and then use that as a base to style any other elements you need to style – headers, paragraphs, strong/emphasis, a, blockquote, and so on. Start with the most generic styles, where you can apply the style to the tag globally, with no class name or ID specified. Work your way in deeper, first with those cases where you can still identify with only tag names, but based on ancestry; for example, you may want list elements inside nav to look one way, list elements inside article to look another way, and list elements inside an aside to have a third, different styling. This is still global based on document structure, not based on classes or IDs.

View your document again; the layout still sucks, but the document should be readable, and your typography should be pretty close to what you want in the finished product. Identify the places where certain uses of an element – which should already be identified by semantic classes and IDs – should be styled a certain way, and start defining those styles in CSS. Avoid using IDs in your CSS; identifying elements by class rather than by ID lends more flexibility to your code.

Once you have your typography more or less like you want it (at least the font families and sizes), start thinking about layout. Your document is already well-organized, but the layout is very 1995. Now is the time to fix that. Presumably you already have a final design in mind, but if not, take the time to quickly sketch out a rough layout for the page, where you want everything to be, and how you want the document to flow in its final incarnation.

You should conveniently already have all of the blocks that you want to lay out in appropriate tags with appropriate classes, so it should be easy to identify them in CSS. If not, review your markup and clean it up. Again, start with the big chunks and work your way deeper from there. Adjust the layout of the main page elements: header, footer, body, columns/grid. View your page, and start tweaking the layout of the elements within those main containers; adjust the layout of inline elements like sidebars and images, adjust the layout of your navigation items, and so on.

Now that your typography is set, and your layout is looking good, you can start on the fancy stuff, like borders, backgrounds, rounded corners, drop shadows, spriting, and so on and so forth: all of the interface fluff that takes a site from usable to beautiful. We’re on the home stretch now!

If you’re building a modern website, you’re probably going to be implementing some fancy UI behaviors using something like jQuery. Depending on the complexity of what you want to achieve, this may be quick and easy, or it may be weeks worth of iteration. Regardless, you’ve already given yourself a significant advantage: all of that semantic markup, the careful selection and classing of elements, gives you a huge boost using a tool like jQuery, for a couple of reasons. First, it makes it easier to identify the elements you’re trying to control in your scripts. Second, it makes your code more readable automatically, because you can quickly tell from the way you’ve identified an element what you’re trying to do. “$(‘p.boldRed’)” doesn’t tell you much, but “$(‘p.callToAction’)” tells anyone reading the code that you’re manipulating the call to action paragraph on the page. They know what to look for in the HTML, they know what to look for when they’re looking at the page in the browser, it’s all immediately clear from the identifier used.

This is the basic process for building a semantic web page. This doesn’t cover the finer points of responsive design, which is a whole can of worms that I look forward to opening in a future post.

A Programmer’s Comparison of Java and C#/.NET

I’ve been developing in Java for almost ten years, and in C# for only a few months, so this comparison may not be as thorough as it could be. If I’ve missed something please feel free to comment.

Java is far more portable than C#, for one. I’ve built applications on Windows and ported them to Linux and Mac with minimal effort. With Android running on a Java-based platform, porting to Android is made far easier. There’s no fussing with Mono and the like, no making sure you don’t use any API functions that are only available in .NET.

Java also has a wide array of available IDEs with excellent features, and a huge, active community. There are libraries available for just about any technology you can think of, usually more than one, such that one of the options will likely fit your specific situation.

Java’s erasure-based generics seem to be easier to learn and work with than C#’s reified generics; however, reified generics likely perform better. Java’s overridable-by-default scheme also makes development a lot easier in many cases. While I do understand the idea behind C#’s final-by-default scheme, I prefer a language that leaves those kinds of choices up to the developer rather than enforcing good development practices through language features.

The JVM is also now expanding to support other languages, including PHP, Python, Ruby, Scala, and others.

C# has some excellent language features that I would like to see in Java, however. Extension methods are extremely useful for adding functionality to classes without having to subclass, particularly useful in adding functionality to library classes. C#’s delegate features are really excellent, and beat any workaround Java has for its lack of closures for callbacks and event handlers. The upcoming Java closure functionality looks like it will still pale in comparison to C#’s delegates.

LINQ is something I would love to see come to Java; the ability to query collections like databases is extraordinarily useful and eliminates a lot of tedious code iterating over collections. I’ve yet to use it for querying against a database, but it seems more than adequate for that purpose and likely much friendlier than JDBC. And while porting is more complicated, Mono is a very strong platform, and there’s even a Mono module for hosting web applications through Apache.

Speaking of web applications, I have no experience so far with building web applications in C# .NET, but I have done some research. Based on that research, I have to say I significantly prefer JSP/JSTL/EL over ASP.NET. I prefer the syntax, the workflow, and JSP’s tag library system over ASP.NET, which reminds me a little too much of PHP or old-school JSP (pre-JSTL/EL) for my tastes.

All in all, I can’t say one is superior to the other; like any development decision, it comes down to which option is best suited to the situation and the developer. If you’ve had the opportunity to make that choice, please leave a note in the comments and let me know what you chose and why, I’d be happy to hear it!

Video Game Business Models

I see an opportunity, particularly for indie game developers, in developing new business models for selling games. There are currently three predominant business models in the gaming industry:

  1. The major retail model: release a game for $60 in major retail outlets, with a huge marketing push, looking for a big launch-week payout. Steadily lower the retail price by $5 or $10 a couple of times a year as it ages, until it eventually ends up in the $10 bargain bin. In the meantime, release DLC or expansions to try to get more money out of existing players, and to raise the total cost for those who buy the game late at $20 retail up to or above the original $60 price tag.
  2. The subscription model: the game itself is cheap or free, but players must pay a monthly fee (usually around $15) to play the game. This is most common in the MMO genre, but can be seen elsewhere as well.
  3. The “freemium” model: the game itself is free, but players pay for in-game items, bonuses, avatars, skins, or other unlockable content, on a per-item basis. This is most commonly done with a points system, where players buy points with cash, and then spend the points on in-game items. This is particularly popular with mobile games, but is fairly widespread in general.
All three have found great success with the big game publishing houses, and the last one has found a good deal of success for indie game developers. But that last option doesn’t work with all game types, and has two possible outcomes: either all the purchasable content is purely aesthetic, and doesn’t seem worth paying for, or it offers real in-game advantages, and gives players the option to “pay to win”, leaving those who can’t or don’t pay feeling unfairly handicapped.
I think there’s another option waiting in the wings, however; I call it the value model, for lack of a better term, and it works something like this: release a game at a very low price point, and do the exact opposite of the major retail model. Players can purchase the game at any time and gain access to all content, past, present, and future. As content is added through updates and expansions, the price goes up accordingly with value. This has several effects on the sales dynamic:
  • For indie developers, releasing at an initial low price point can help to boost sales when a large marketing budget is unavailable, and help to fund further development. It’s also easier to sell a game at a lower price point before it gets popular, and easier to set a higher price point as popularity increases.
  • For players, it helps them avoid feeling like they’re being swindled or continuously squeezed for more money; they know up front what they’re paying, they know what they’re getting right away and whether it’s worth it, and any content added later (which is free for them) is a welcome bonus.
  • From a marketing perspective, it gives the opportunity for a reverse discount: if you announce ahead of time that new content will be released (and therefore the price will be going up), it can push people to make the purchase (to lock in the lower price while guaranteeing the upcoming content) the same way a true discount would, without actually having to lower the price. The price is effectively reduced because prospective buyers are aware that the price is about to increase.
Does anyone know of any examples of such a model being used for games? I’ve seen it occasionally in game content (e.g. Unity assets and the like), but I don’t think I’ve seen it for a public game release. I’d be happy to hear thoughts on the subject in the comments!