JComboBox with Custom ComboBoxModel Not Updating Value on setSelectedItem()

I wrestled with this issue for some time before figuring out the cause, so I hope this helps someone out there. I had a JComboBox with a custom ComboBoxModel. Everything was working fine, except that a call to setSelectedItem() would do everything it should (fire events, update the selected item property) except update the displayed value in the JComboBox itself. Clicking the drop-down would even show the correct item selected, and getSelectedItem() returned the correct result; it was just the box itself that was wrong.

Apparently the displayed value in the JComboBox isn’t based on getSelectedItem(), but rather on the top item in the list. I don’t know how or why this is, or if it’s due to some intricacy of my GUI code, but bringing the selected item to the top of the ComboBoxModel’s item list when calling setSelectedItem fixed the issue. Go figure.
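For what it’s worth, here’s a minimal sketch of the workaround, assuming a simple String-based model; the class name and the reorder-on-select approach are just my illustration of the fix described above, not a general-purpose solution:

import java.util.ArrayList;
import java.util.List;
import javax.swing.AbstractListModel;
import javax.swing.ComboBoxModel;

// Hypothetical custom model illustrating the workaround: move the selected
// item to the top of the list so the combo box displays it correctly.
public class ReorderingComboBoxModel extends AbstractListModel<String>
        implements ComboBoxModel<String> {

    private final List<String> items = new ArrayList<>();
    private String selected;

    public ReorderingComboBoxModel(List<String> initialItems) {
        items.addAll(initialItems);
    }

    @Override
    public void setSelectedItem(Object anItem) {
        selected = (String) anItem;
        // The workaround: bring the selected item to the top of the item list.
        if (items.remove(selected)) {
            items.add(0, selected);
        }
        // Notify listeners (including the JComboBox) that the contents changed.
        fireContentsChanged(this, -1, -1);
    }

    @Override
    public Object getSelectedItem() {
        return selected;
    }

    @Override
    public int getSize() {
        return items.size();
    }

    @Override
    public String getElementAt(int index) {
        return items.get(index);
    }
}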

If anyone has any insight into what causes this, please drop a comment!

Building a Foundation

It’s been said that pharmaceutical companies produce drugs for pennies per pill – except the first pill, which costs millions. Things aren’t so different in the land of software development: the first use of some new piece of functionality might take hours while you build the foundation and related pieces, but that foundation can then be reused a hundred times trivially, and usually expanded or modified with little effort as well (assuming it was well written to start with).

This is precisely what you should be aiming for: take the time to build a foundation that will turn complex tasks into trivial ones as you progress. This is the main purpose behind design concepts like the single responsibility principle, the Hollywood principle, encapsulation, DRY, and so on.

This isn’t to be confused with big upfront design; in fact, it’s especially important to keep these concepts in mind in an agile process, where you’re building the architecture as you go. It can be tempting to just hack together what you need at the moment. That’s exactly what you should be doing for a prototype, but not for real development. For lasting functionality, you should assemble a foundation to support the functionality you’re adding now, and similar functionality in the future.

It can be difficult to balance this against YAGNI – you don’t want to build what you don’t need, but you want to build what you do need in such a way that it will be reusable. You want to save yourself time in the future, without wasting time now.

To achieve a perfect balance would require an extraordinary fortune teller, of course. Experience will help you get better at determining what foundation will be helpful, though. The more experience you have and the more projects you work on, the better sense you’ll have of what can be done now to help out future you.

The Importance of Logging

Add more logging. I’m serious.
Logging is what separates an impossible bug report from an easy one. Logging lets you replace comments with functionality. I’d even go so far as to say good logging separates good developers from great ones.
Try this: replace your inline comments with equivalent logging statements. Run your program and tail the log file. Suddenly, you don’t need a step-through debugger for the vast majority of situations, because you can see, in the log, exactly what the program is doing, what execution path it’s taking, where in the source each logging statement is coming from, and where execution stopped in the event of a crash.
My general development process focuses on clean, readable, maintainable, refactorable, self-documenting code. The process is roughly like this (a short sketch of the end result follows the list):
  1. Block out the overall process, step by step, in comments.
  2. For any complex step (more than five or ten lines of code), replace the comment with a clearly-named method or function call, and create a stub method/function.
  3. Replace comments with equivalent logging statements.
  4. Implement functionality.
    • Give all functions, methods, classes, parameters, properties, and variables clear, concise names, so that the code ends up in some semblance of readable English.
    • Use thorough sanity checking, by means of assertions or simple if blocks. When using if blocks, include logging for any failed checks, including what was expected and what was found. These should be warnings.
    • Include logging in any error/exception handling code. These should be errors if recoverable, or fatal if not. This is all too often the only logging a developer includes!
  5. Replace inline comments with equivalent logging statements. These should be debug or info/trace level; major section starts should be higher level, while mid-process statements should be lower level.
  6. Add logging statements to the start of each method/function. These should also be debug or info/trace level. Use higher-level logging statements for higher-level procedures, and lower-level logging statements for more deeply-nested calls.
  7. For long-running or resource-intensive processes, particularly long loops, add logging statements at regular intervals to provide progress and resource utilization details.
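As promised above, here’s a minimal sketch of what a method might look like once steps 3 through 7 have been applied, using java.util.logging; the class, methods, and log messages are hypothetical:

import java.util.List;
import java.util.logging.Logger;

public class OrderImporter {

    private static final Logger LOG = Logger.getLogger(OrderImporter.class.getName());

    public void importOrders(String filePath) {
        // Step 6: log method entry at a lower level.
        LOG.fine("importOrders: starting import from " + filePath);

        // Steps 3 and 5: the comment "load the order file" becomes a logging statement.
        LOG.info("Loading order file " + filePath);
        List<String> orders = loadOrderFile(filePath);

        // Step 4: sanity check with a logged warning, stating what was expected and what was found.
        if (orders.isEmpty()) {
            LOG.warning("Expected at least one order in " + filePath + ", found none");
            return;
        }

        // Step 7: progress logging at regular intervals in a long loop.
        int processed = 0;
        for (String order : orders) {
            processOrder(order);
            processed++;
            if (processed % 1000 == 0) {
                LOG.fine("Processed " + processed + " of " + orders.size() + " orders");
            }
        }

        LOG.info("Import complete: " + processed + " orders processed");
    }

    // Hypothetical stubs standing in for the real work (step 2).
    private List<String> loadOrderFile(String filePath) {
        return List.of("order-1", "order-2");
    }

    private void processOrder(String order) {
        // Error handling here would log at SEVERE if unrecoverable (step 4).
    }
}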
Make good use of logging levels! Production systems should only output warnings and higher by default, but it should always be possible to enable deeper logging in order to troubleshoot any issues that arise. However, keep the defaults in mind, and ensure that any logging you have in place to catch defects will provide enough information in the production logs to at least begin an investigation.
Your logging messages should be crafted with dual purpose in mind: first, to provide useful, meaningful outputs to the log files during execution (obviously), but also to provide useful, meaningful information to a developer reading the source – i.e., the same purpose served by comments. After a short time with this method you’ll find it’s very easy to craft a message that serves both purposes well.
Good logging is especially useful in an agile environment employing fast iteration and/or continuous integration. It may not be obvious why at first, but all the advantages of good logging (self-documenting code, ease of maintenance, transparency in execution) do a lot to facilitate agile development by making code easier to work with and easier to troubleshoot.
But wait, there’s more! Good logging also makes it a lot easier for new developers to get up to speed on a project. Instead of slogging through code, developers can execute the program with full logging, and see exactly how it runs. They can then review the source code, using the logging statements as waypoints, to see exactly how the code relates to the execution.

If you need a tool for tailing log files, allow me a shameless plug: try out my free log monitor, Rogue Informant. It’s been in development for several years now, it’s stable, it’s cross-platform, and it’s completely free to use privately or commercially. It allows you to monitor multiple logs at once, filter and search logs, and float a log-monitoring window on top of other applications, making it easier to watch the log while you use the program and see exactly what’s going on behind the scenes. Give it a try, and if you find any issues or have feature suggestions, feel free to let me know!

Sanity Checks: Assumptions and Expectations

Assertions and unit tests are all well and good, but they’re too narrow-minded in my eyes. Unit tests are great for, well, testing small units of code to ensure they meet the basic requirements of a software contract – maybe a couple of typical cases, a couple of edge cases, and then additional cases as bugs arise and new test cases are created for them. No matter how many cases you create, however, you’ll never have a test case for every possible scenario.

Assertions are excellent for testing in-situ; you can ensure that unacceptable values aren’t given to or by a piece of code, even in production (though there is a performance penalty to enabling assertions in production, of course). I think assertions are excellent, but not specific enough: any assertion that fails is automatically a fatal error, which is great, unless it’s not really a fatal error.

That’s where the concepts of assumptions and expectations come in. What assertions and unit tests really do is test assumptions and expectations. A unit test says “does this code behave correctly when given this data, all assumptions considered?” An assertion says “this code assumes this thing, and will not behave correctly if it gets another, so throw an error.”

When documenting an API, it’s important to document assumptions and expectations, so users of the API know how to work with your code. Before I go any further, let me define what I mean by these very similar terms: to me, code that assumes something operates as if its assumptions are correct, and will likely fail if its assumptions turn out to be incorrect. Code that expects something operates as if its expectations are met, but will likely still operate correctly even if they aren’t. It’s not guaranteed to work, or guaranteed to fail; it’s likely to work, but someone should probably know about it and look into it.

Therein lies the rub: these are basically two types of assertions, one fatal, one not. What we need is an assertion framework that allows for warning-level assertion failures. What’s more, we need an assertion framework that is performant enough to be regularly enabled in production.

So, any code that’s happily humming along in production, that says:

Assume.that(percentage).isBetween(0,100);

will fail immediately if percentage is outside those bounds. It’s assuming that percentage is between zero and one hundred, and if that assumption is wrong, it will likely fail. Since it’s always better to fail fast, any case where percentage is outside that range should trigger a fatal error – preferably even if it’s running in production.

On the other hand, code that says:

Expect.that(numRows).isLessThan(1000);

will trigger a warning if numRows is over a thousand. It expects numRows to be under a thousand; if it isn’t, the code can still complete correctly, but it may take longer than normal or use more memory than normal – or getting that many rows may simply be a sign that something is amiss with the query that fetched them or with the dataset they came from. It’s not a critical failure, but it’s cause for investigation.
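To make the idea concrete, here’s a minimal sketch of what such a framework might look like; the Assume, Expect, and Check classes below mirror the examples above and aren’t from any real library:

import java.util.logging.Logger;

// Failed assumptions are fatal; failed expectations only log a warning.
final class Assume {
    static Check that(long value) {
        return new Check(value, true);
    }
}

final class Expect {
    static Check that(long value) {
        return new Check(value, false);
    }
}

final class Check {
    private static final Logger LOG = Logger.getLogger(Check.class.getName());

    private final long value;
    private final boolean fatal;

    Check(long value, boolean fatal) {
        this.value = value;
        this.fatal = fatal;
    }

    Check isBetween(long min, long max) {
        if (value < min || value > max) {
            fail("expected a value between " + min + " and " + max + ", got " + value);
        }
        return this;
    }

    Check isLessThan(long limit) {
        if (value >= limit) {
            fail("expected a value less than " + limit + ", got " + value);
        }
        return this;
    }

    private void fail(String message) {
        if (fatal) {
            // A failed assumption: fail fast, even in production.
            throw new IllegalStateException("Assumption failed: " + message);
        }
        // A failed expectation: recoverable, so log a warning and carry on.
        LOG.warning("Expectation failed: " + message);
    }
}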

Any assumption or expectation that fails should of course be automatically and immediately reported to the development team for investigation. Naturally a failed assumption, being fatal, should take priority over a failed expectation, which is recoverable.

This not only provides greater flexibility than a simple assertion framework, it also provides more explicit self-documenting code.

Be Maxwell’s Demon

Source code tends to follow the second law of thermodynamics, with some small differences. In software, as in thermodynamics, systems tend toward entropy: as you continue to develop an application, the source will increase in complexity. In software, as well as in thermodynamics, connected systems tend toward equilibrium: in development, this is known as the “broken windows” theory, and is generally considered to mean that bad code begets bad code. People often discount the fact that good code also begets good code, but this effect is often hidden by the fact that the overall system, as mentioned earlier, tends toward entropy. That means that the effect of broken windows is magnified, and the effect of good examples is diminished.

In thermodynamics, Maxwell’s demon is impossible in reality – it is purely a thought experiment. In software development, however, we’re in luck: any developer can play the demon, and should, at every available opportunity.

Maxwell’s demon stands between two connected systems, defeating the second law of thermodynamics by selectively allowing less-energetic particles through only in one direction, and more-energetic particles through only in the other direction, causing the two systems to tend toward opposite ends of the spectrum, rather than naturally tending toward entropy.

By doing peer reviews, you’re doing exactly that; you’re reducing the natural entropy in the system and preventing it from reaching its natural equilibrium by only letting the good code through, and keeping the bad code out. Over time, rather than tending toward a system where all code is average, you tend toward a system where all code is at the lowest end of the entropic spectrum.

Refactoring serves a similar, but more active role; rather than simply “only letting the good code through”, you’re actively seeking out the worse code and bringing it to a level that makes it acceptable to the demon. In effect, you’re reducing the overall entropy of the system.

If you combine these two effects, you can achieve clean, efficient, effective source. If your review process only allows code through that is as good or better than the average, and your refactoring process is constantly improving the average, then your final code will, over time, tend toward excellence.

Without a demon, any project will be on a continuous slide toward greater and greater entropy. If you’re on a development project, and it doesn’t have a demon, it needs one. Why not you?

Real Sprints

Agile methodologies talk about “sprints” – workloads organized into one- to four-week blocks. You schedule tasks for each sprint, you endeavour to complete them all by the end of the sprint, and then you look back and see how close your expectations (the schedule) were to reality (what actually got done).

Wait, wait, back up. When I think of a sprint, I think short and fast. That’s what sprinting means. You can’t sprint for a month straight; you’ll die. That’s a marathon, not a sprint.

There are numerous coding competitions out there. Generally, you get around 48 hours, give or take, to build an entire, working, functional game or application. Think about that. You get two days to build a complete piece of software from scratch. Now that’s what I call sprinting.

Of course, a 48-hour push is a lot to ask for on a regular basis; sure, your application isn’t in a competition, this is the real world, and you need to get real work done on an ongoing basis. You can’t expect your developers to camp out in sleeping bags under their desks. But that doesn’t mean turning a sprint into a marathon.

The key is instilling urgency, while moderating burnout. This is entirely achievable, and can even make development more fun and engaging for the whole team. Since the term sprint has already been thoroughly corrupted, I’ll use the term “dash”. Consider this weekly schedule:

  • Monday: Demo last week’s accomplishments for stakeholders, and plan this week’s dash. This is also a good day to schedule any unavoidable meetings.
  • Tuesday and Wednesday: Your 48 hours to get it done and working. These are crunch days, and they will probably be pretty exhausting. These don’t need to be 18-hour days, but 10 hours wouldn’t be unreasonable. Let people get in the zone and stay there as long as they can.
  • Thursday: Refactoring and peer reviews. After a run, athletes don’t just take a seat and rest; they slow to a jog, then a walk. They stretch. They cool off slowly. Developers, as mental athletes, should do the same.
  • Friday: Testing. QA goes through the application with a fine-toothed comb. The developers are browsing the web, playing games, reading books, propping their feet up, and generally being lazy bums, with one exception: they’re available at a moment’s notice if a QA has any questions or finds any issues. Friday is a good day for your development book club to meet.
  • By the end of the week, your application should be ready again for Monday’s demo, and by Tuesday, everyone should be well-rested and ready for the next dash.
Ouch. That’s a tough sell. The developers are only going to spend two days a week implementing features? And one basically slacking off? Balderdash! Poppycock!

Think about it, though. Developers aren’t factory workers; they can’t churn out X lines of code per hour, 40 hours per week. That’s not how it works. A really talented developer might achieve 5 or 6 truly productive hours per day, but at that rate, they’ll rapidly burn out. 4 hours a day might be sustainable for longer. Now, mind you, in those four hours a day, they’ll get more done, better, with fewer defects, than an army of incompetent developers could do in a whole week. But the point stands: you can’t run your brain at maximum capacity eight hours straight, five days a week. You just can’t – not for long, anyway.

The solution is to plan to push yourself, to plan to relax, and to keep the cycle going to maximize the effectiveness of those productive hours. It’s also crucial not to dismiss refactoring as unproductive; it sets up the following weeks’ work, and it reduces the effort required for development over the rest of the application’s life. It’s a critical investment in the future.

Spending a third of your development time on refactoring may seem excessive, and if it were that simple, I’d agree. But if you really push yourself for two days, you can get a lot done – and write a lot of code to be reviewed and refactored. In one day of refactoring, you can learn a lot, get important work done, and still start to cool off from the big dash.

That lazy Friday really lets you relax, improve your craft, and get your product ready for next week, when you get to do it all over again.

The Development Stream

I was reading today about GitHub’s use of chat bots to handle releases and continuous integration, and I think this is absolutely brilliant. In fact, it occurs to me that using a chat bot, or a set of chat bots, can provide an extremely effective workflow for any continuous-deployment project. Of course, it doesn’t necessarily have to be a chat room with chat bots; it can be any sort of stream that can be updated in real-time – it could be a Twitter feed, or a web page, or anything. The sort of setup I envision would work something like this:

Everyone on the engineering team – developers, testers, managers, the whole lot – stays signed in to the stream as long as they’re “on duty”. Every time code is committed to a global branch – that is, a general-use preproduction or production branch – it shows up in the stream. Then the automated integration tests run, and the results are output to the stream. The commit is deployed to the appropriate environment, and the deployment status is output to the stream. Any issues that occur after deployment are output to the stream as well, for immediate investigation; this includes logged errors, crashes, alerts, assertion failures, and so on. Any time a QA opens a defect against a branch, the ticket summary is output to the stream. The stream history (if it’s not already compiled from some set of persistent-storage sources) should be logged and archived for a period of time, maybe 7 to 30 days.

It’s very important that the stream be as sparse as possible: no full stack traces with error messages, no full commit messages, just enough information to keep developers informed of what they will need to look into further elsewhere. This sort of live, real-time information stream is crucial to the success of any continuous-deployment environment, in order to keep the whole team abreast of any issues that might be introduced into production, along with when and how they were introduced.
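As a rough illustration, here’s a minimal sketch of how a build or deployment step might push a one-line event to such a stream over HTTP; the endpoint and the message format are entirely hypothetical, and the same idea applies equally to a chat-room or IRC bot:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StreamNotifier {

    // Hypothetical endpoint for the shared development stream.
    private static final URI STREAM_ENDPOINT = URI.create("https://stream.example.com/events");

    private final HttpClient client = HttpClient.newHttpClient();

    public void post(String event) {
        try {
            HttpRequest request = HttpRequest.newBuilder(STREAM_ENDPOINT)
                    .header("Content-Type", "text/plain")
                    .POST(HttpRequest.BodyPublishers.ofString(event))
                    .build();
            client.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            // The stream is informational; a notification failure should never break a build or deploy.
            System.err.println("Could not post to stream: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        StreamNotifier notifier = new StreamNotifier();
        // Keep events sparse: just enough to know where to look for details.
        notifier.post("deploy: build 1432 -> staging OK (3m12s)");
        notifier.post("tests: integration suite FAILED (2 of 118) on branch release/2.4");
    }
}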

Now, what I’ve described is a read-only stream: you can’t do anything with it. GitHub’s system of using an IRC bot allows them to issue commands to the bot to trigger deployments and the like. That could be part of the stream, or it could be part of another tool, as long as the deployment and its results are output to the shared stream for all to see. This is part of having the operational awareness necessary to quickly identify and fix issues, and to maintain maximum uptime.

There are a lot of possible solutions for this sort of thing; Campfire looks particularly promising because of its integration with other tools for aggregating instrumentation data. If you have experience with this sort of setup, please post in the comments – I’d love to hear about it!