New Laptop! ASUS ROG G750JW

I recently received the generous gift of an ASUS ROG (Republic of Gamers) G750JW laptop, and let me tell you, the thing is a beast. Seriously, it’s huge.

It’s a 17″ widescreen laptop (1920×1080 TN panel, no touch thankyouverymuch), with an extra two inches or so of chassis behind the hinge. It also weighs just short of ten pounds.

But I wasn’t looking for an ultraportable. I wanted something that I could use around the house and on the road, primarily for software development, but also for occasional gaming. That meant I needed a comfortably sized keyboard, trackpad, and display – in other words, a 17″ laptop. I wanted decent battery life and decent performance, which meant it would be heavy for its size. And I got exactly what I asked for.

The G750JW has a Core i7 at 3.2GHz, 12GB of RAM, an NVIDIA GeForce GTX 765M, and a 750GB HDD. Step one was replacing the HDD with a 240GB Crucial M500 SSD I picked up for $135 on Amazon – less than half what I paid for a nearly identical drive just over a year ago. The difference in speed is truly staggering, going from a 5400 RPM laptop hard drive to a full-tilt SSD. It also cut a few ounces off the weight, and added a good half hour to an hour of working time on the battery, so a win across the board.

I tried installing Windows 7 on it as I despise Windows 8, but kept running into an error during the “extracting files” stage of the installation. I found numerous posts online from people with the same problem, some of them with solutions, but none of those solutions worked for me; from what I can tell, it appears to be some conflict between the latest-and-greatest UEFI in the G750’s motherboard and the aging Windows 7 OS. It’s a shame, but I suppose being forced to gain more familiarity with Windows 8 isn’t all bad; I just wish I had the option to use something more, well… usable.

Other than the OS, though, it’s been a joy. It performs extremely well, it has all the features and specs I need, and it’s a beast for gaming – more horsepower than I really need, considering gaming wasn’t the primary purpose of the laptop to begin with. Part of its bulk comes from the two huge rear-venting fans, which do a good job of keeping it cool – something I’ve had problems with on other laptops, and which was the ultimate bane of my wife’s old MacBook Air. I don’t think I need to worry about this machine overheating and locking up while playing video the way the MBA did on a regular basis.

My only gripe at the moment is that it seems to be impossible to find a decent Bluetooth mouse. Sure, the market is flooded with wireless laptop mice, but 95% of them use a proprietary receiver rather than native Bluetooth (I’m looking at you, Logitech!), which means you have to keep the provided USB dongle plugged in. That seems like an utter waste considering the laptop has a built-in transceiver capable of handling a mouse without any dongle at all.

All I really want is a decent-sized (I have large hands) Bluetooth wireless mouse, with a clickable scroll wheel and back/forward thumb buttons. That doesn’t seem like too much to ask, but as far as I can tell, it just doesn’t exist. Thankfully the laptop has a very generous touchpad with multi-touch, and clicking both the left and right buttons together generates a middle-click. Still, I really hope Logitech gives up on the proprietary wireless idea and gets on board with the Bluetooth standard, because I’d like to have a decent mouse to use with it.

It’s telling that, on Amazon, you can find a discontinued Logitech Bluetooth mouse that meets my requirements – selling in new condition for a mere three hundred dollars. That’s three times what Logitech’s finest current proprietary wireless mouse costs, for an outdated, basic mouse. That’s how much standard Bluetooth wireless is worth to people. Wake up Logitech!

Any suggestions on a suitable mouse in the comments would be greatly appreciated…

Optimizing Entity Framework Using View-Backed Entities

I was profiling a Web application built on Entity Framework 6 and MVC 5, using the excellent Glimpse. I found that a page with three lists of five entities each was causing over a hundred query executions, eventually loading a huge object graph with hundreds of entities. I could eliminate the round trips using Include(), but that still left me loading way too much data when all I needed was aggregate/summary data.
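
For context, the eager-loading version looked something like this (a minimal sketch with invented entity and property names, not the actual code); Include() collapses the round trips into one query, but it still materializes the whole graph just to display a few summary numbers:

using System.Data.Entity;   // EF6 lambda-based Include()
using System.Linq;

// Inside an action method, given a DbContext named context (names are
// hypothetical). One query, but every phase and work item comes back with it.
var projects = context.Projects
    .Include(p => p.Phases.Select(ph => ph.WorkItems))
    .OrderByDescending(p => p.UpdatedDate)
    .Take(5)
    .ToList();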

The problem was that the aggregates I needed were complex and involved calculated properties, some of which were based on aggregates of navigation collection properties: a parent had sums of its children’s properties, which in turn had sums of their children’s properties, and in some cases parents had properties that were calculated partly based on aggregates of children’s properties. You can see how this quickly spun out of control.
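
To make that concrete, the app-side calculations had roughly this shape (names invented for illustration); every access to a parent’s aggregate walks a lazy-loaded collection, and each child’s aggregate walks its own children in turn:

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the problem: each calculated property forces its
// navigation collection to load, and the parent's total recursively triggers
// the same loads on every child.
public class WorkItem
{
    public decimal EstimatedHours { get; set; }
}

public class Phase
{
    public virtual ICollection<WorkItem> WorkItems { get; set; }

    public decimal EstimatedHours
    {
        get { return WorkItems.Sum(w => w.EstimatedHours); }
    }
}

public class Project
{
    public virtual ICollection<Phase> Phases { get; set; }

    public decimal EstimatedHours
    {
        get { return Phases.Sum(ph => ph.EstimatedHours); }
    }
}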

My requirements were that the solution had to perform better while returning the same data, and still let me use standard Entity Framework, code-first, with migrations. My solution was to calculate this data on the database side, using entities backed by views that did the joining, grouping, and aggregation. I also found a neat trick for backward-compatible view deployments:

IF NOT EXISTS (SELECT Table_Name FROM INFORMATION_SCHEMA.VIEWS WHERE Table_Name = 'MyView')
EXEC sp_executesql N'create view [dbo].[MyView] as select test = 1'
GO
ALTER VIEW [dbo].[MyView] AS
SELECT ...

It’s effectively an upsert for views – it’s safe to run whether or not the view already exists, it never drops an existing view (so there’s no window where a missing view might cause an error), and it doesn’t require keeping separate create and alter scripts in sync when changes are made.

I then created the entities that would represent the views, using unit tests to ensure that the properties now calculated on the server matched the same expected values as the original, app-calculated properties. Creating entities backed by views is fairly straightforward; they behave just like tables, but obviously can’t be modified – I made the property setters protected to enforce this at compile time. Because my view includes a row for every “real” entity, any query against the entity type can be cast to the view-backed type and it will pull full statistics (there is no possibility of an entity existing in the base table but not in the view).
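
A first cut of a view-backed entity looked roughly like this (view and property names are invented); it maps just like a table-backed entity, and the protected setters make it read-only at compile time:

// Sketch of an entity mapped over a view. EF treats it like any other entity
// type; the protected setters keep application code from trying to modify
// what is really a read-only, aggregated projection. It is exposed from the
// DbContext like any table:
//     public DbSet<ProjectStatistics> ProjectStatistics { get; set; }
public class ProjectStatistics
{
    public int Id { get; protected set; }
    public decimal EstimatedHours { get; protected set; }
    public decimal CompletedHours { get; protected set; }
}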

Next I had to create a one-to-one association between the now-bare entity type and the view type holding the aggregate statistics. The only ID I had for the view was the ID of the raw entity it was connected to. This turned out to be easier said than done – Entity Framework expects that, in a one-to-one relationship, it will be managing the ID at one end of the relationship; in my case, the IDs at both ends were DB-generated, even though they were guaranteed to match (since the ID in the view was pulled directly from the ID in the entity table).

I ended up abandoning the one-to-one mapping idea after a couple of days’ struggle, instead opting to map the statistics objects as subclasses of the real types in a table-per-type (TPT) structure. This wound up being relatively easy to accomplish – I added a Table attribute to the subtype, giving the name of the view, and it was off to the races. I then went through and updated references to the statistics throughout LINQ queries, views, and unit tests. The unit and integration tests proved very helpful in validating the output of the views and offering confidence in the changes.
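
The final mapping looked roughly like this (again, the names are invented): the statistics type is a TPT subclass of the now-bare entity, with a Table attribute pointing at the view, so Entity Framework joins the base table to the view on the shared primary key:

using System.ComponentModel.DataAnnotations.Schema;

// Sketch of the table-per-type mapping: Project keeps only its "real" columns
// (the app-calculated properties are gone), and the subclass adds the
// aggregate columns computed by the view.
[Table("ProjectStatisticsView")]
public class ProjectStatistics : Project
{
    public decimal EstimatedHours { get; protected set; }
    public decimal CompletedHours { get; protected set; }
}

// Any query over Projects can be narrowed to the view-backed type; since the
// view has a row for every project, nothing gets filtered out:
//     var stats = db.Projects.OfType<ProjectStatistics>()
//                            .OrderByDescending(p => p.UpdatedDate)
//                            .Take(5)
//                            .ToList();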

I then ran my benchmarks again and found that pages that had required over a hundred queries to generate now used only ten to twenty, and were rendering in a half to a third of the time – a 100 to 200 percent improvement, using views designed purely to mimic the existing functionality – I hadn’t even set about optimizing them for performance yet!

After benchmarking, it looks even better (times are in milliseconds, min/avg/max):
                                             EF + LINQ         EF + Views
3 lists of 5 entities (3 types)              360/785/1675      60/105/675
2 lists of 6 entities (1 type)               325/790/1935      90/140/740
1 entity’s details + 1 list of 50 entities   465/975/2685      90/140/650

These tests were conducted by running Apache JMeter on my own machine against the application running on Windows Azure, across a sampling of 500 requests per page per run. That’s a phenomenal 450 to 650 percent improvement across the board on the most intensive pages in the application, and has them all responding to 100% of requests in under 1 second. The performance gap will only widen as data sets grow; using views will make the scaling much more linear.

I’m very pleased with the performance improvement I’ve gotten. Calculating fields on the app side works for prototyping, but it just can’t meet the efficiency requirements of a production application. View-backed entities came to the rescue in a big way. Give it a try!

You’re Being Held Hostage and You May Not Even Know It

To me, net neutrality isn’t about fair business practices between businesses. That’s certainly part of it, but it’s not the crux of the issue. To me, net neutrality is about consumer protection.

Your broadband provider would like to charge companies – particularly content companies – extra in order to bring you their content. Setting aside the utterly delirious reasoning behind this for the moment, let’s think about this from the consumer’s perspective. You’re paying your ISP to provide you access to the internet – the whole thing. When you sign up for service, you’re signing up for just that: access to the internet. Period. What your ISP fails to disclose, at least in any useful detail, is how they intend to shape that access.

For your $40, $50, $60 or more each month, you might get high-speed access to some things, and not to others. You don’t get to know what ahead of time, or even after you sign up – the last thing your ISP wants is for you to be well-informed about your purchase in this regard. They’ll do whatever they can to convince you that your service is plain, simple, high-speed access to the whole internet.

Then, in negotiations behind closed doors, they’re using you as a hostage to extort money from the businesses you’re a customer of. Take Netflix as an example: you pay your ISP for internet service. Netflix also has an ISP, or several, that they pay for internet service. Those ISPs have what are called “peering arrangements” that determine who pays whom, if anyone, and how much, when traffic travels between their networks on behalf of their customers. This is part and parcel of the service you and Netflix are each already paying for. You pay Netflix a monthly fee to receive Netflix service, which you access using your ISP. Netflix uses some part of that monthly fee to pay for its own internet service.

Your ISP has gone to Netflix and said “hey, if you want to deliver high-definition video to your customers who are also my customers, you have to pay me extra, otherwise my customers who are also your customers will receive a sub-par experience, and they might cancel their Netflix account.” They’re using you as a bargaining chip without your knowledge or consent, in order to demand money they never earned to begin with; everyone involved is already paying their fair share for their connection to the global network, and for the interconnections between parts of that global network.

To me, when a company I do business with uses me, and degrades my experience of their product, without my knowledge or consent, that’s fraud from a consumer standpoint. Whatever Netflix might think about the deal, whether Netflix is right or wrong in the matter, doesn’t enter into it; I’m paying for broadband so that I can watch Netflix movies, I’m paying for Netflix so that I can watch movies over my broadband connection, and my ISP is going behind my back and threatening to make my experience worse if Netflix doesn’t do what they want. Nobody asked me how I feel about it.

Of course, they could give full disclosure to their customers (though they never would), and it wouldn’t matter a whole lot, because your options as a broadband consumer are extremely limited; in the majority of cases, the only viable option is cable, and when there is competition, it comes from exactly one place: the phone company. The cable companies and phone companies are alike in their use of their customers as hostages in negotiations.

What about fiber broadband? It’s a red herring – it’s provided by the phone company anyway. Calling fiber “competition” is like saying Coke in cans competes with Coke in bottles – it’s all Coke, and whichever one you buy, your money goes into Coke’s pocket.

What about wireless? Wireless will never, ever be able to compete with wired service, due to simple physics. The bandwidth just isn’t there, the spectrum isn’t there, there’s noise to contend with, and usage caps make wireless broadband a non-starter for many cases, especially streaming HD video. Besides, the majority of truly high-speed wireless service is provided by the phone companies anyway; see the previous paragraph.

Why aren’t they regulated? The FCC is trying, in its own way, but there’s little traction; the cable and telephone companies have the government in their collective pockets with millions of dollars of lobbying money, and We The People haven’t convinced Our Government that we care enough for them to even consider turning down that money.

In the United States, we pay many, many times what people pay in much of the developed world, and we get many, many times less for what we spend. On top of that, our ISPs are using us as bargaining chips, threatening to make our already overpriced, underpowered service even worse if the companies we actually chose in a competitive market – unlike our ISPs – don’t pay up. This is absolutely preposterous, it’s bordering on consumer fraud, and you should be angry about it. You should be angry enough to write your congressman, your senator, the president, the FCC, and your ISP (not that the last will do you much good, but it can’t hurt.)


Engineers, Hours, Perks, and Pride

This started as a Google+ post about an article on getting top engineering talent and got way too long, so I’m posting here instead.

I wholeheartedly agree that 18-hour days are just not sustainable. It might work for a brand-new startup cranking out an initial release, understaffed and desperate to be first to market. At that stage, you can expect a small team to have the kind of passion and dedication it takes to put in those hours and give up their lives to build something new.

Once you’ve built it, though, the hours become an issue, and the playpen becomes a nuisance. You can’t expect people to work 18-hour days forever, or even 12-hour days. People far smarter than I have posited that the most productive time an intellectual worker can put in on a regular basis is 4 to 6 hours per day; after that, productivity and effectiveness plummet, and it only gets worse the longer it goes on.

Foosball isn’t a magical sigil protecting engineers from burn-out. Paintball with your coworkers isn’t a substitute for drinks with your friends or a night in with your family. An in-house chef sounds great on paper, until you realize that the only reason they’d need to provide breakfast and dinner is if you’re expected to be there basically every waking moment of your day.

Burn-out isn’t the only concern, either. Engineering is both an art and a science, and like any art, it requires inspiration. Inspiration, in turn, requires experience. The same experience, day-in, day-out – interacting with the same people, in the same place, doing the same things – leaves one’s mind stale, devoid of inspiration. Developers get tunnel-vision, and stop bringing new ideas to the table, because they have no source for them. Thinking outside the box isn’t possible if you haven’t been outside the box in months.

Give your people free coffee. Give them lunch. Give them great benefits. Pay them well. Treat them with dignity and respect. Let them go home and have lives. Let them get the most out of their day, both at work and at home. You’ll keep people longer, and those people will be more productive while they’re there. And you’ll attract more mature engineers, who are more likely to stick around rather than hopping to the next hip startup as soon as the mood strikes them.

There’s a certain pride in being up until sunrise cranking out code. There’s a certain macho attitude, a one-upmanship, a competition to see who worked the longest and got the least sleep and still came back the next morning. I worked from 8am until 4am yesterday, and I’m here – see how tough I am? It’s the geek’s equivalent of fitness nuts talking about their morning 10-mile run. The human ego balloons when given the opportunity to endure self-inflicted tortures.

But I’m inclined to prefer an engineer who takes pride in the output, not the struggle to achieve it. I want someone who is stoked that they achieved so much progress, and still left the office at four in the afternoon. Are they slackers, compared to the guy who stayed another twelve hours, glued to his desk? Not if the output is there. It’s the product that matters, and if the product is good, and gets done in time, then I’d rather have the engineer that can get it done without killing themselves in the process.

“I did this really cool thing! I had to work late into the night, but caffeine is all I need to keep me going. I kept having to put in hacks and work-arounds, but the important thing is that it’s done and it works. I’m a coding GOD!” Your typical young, proud engineer. They’re proud of the battle, not the victory; they’re proud of how difficult it was.

“I did this really cool thing! Because I had set myself up well with past code, it was a breeze. I was amazed how little time it took. I’m a coding GOD!” That’s my kind of developer. That’s pride I can agree with. They’re proud because of how easy it was.

This might sound like an unfair comparison at first, but think about it. When you’re on a 20-hour coding bender, you aren’t writing your best code. You’re frantically trying to write working code, because you’re trying to get it done as fast as you can. Every cut corner, every hack, every workaround makes the next task take that much longer. Long hours breed technical debt, and technical debt slows development, and slow development demands longer hours. It’s a vicious cycle that can be extremely difficult to escape, especially once it’s been institutionalized and turned into a badge of honor.

My Personal Project Workflow/Toolset

I do a lot of side projects, and my personal workflow and tooling is something that’s constantly evolving. Right now, it looks something like this:

  • Prognosticator for tracking features/improvements, measuring the iceberg, and tracking progress
  • WorkFlowy for tracking non-development tasks (the most recent addition to the toolset)
  • Trac for project documentation, and theoretically for defect tracking, though I’ve not been good about entering defects in Trac recently; it doesn’t seem worth the effort on a one-person project, though with multiple people I think it would be a must
  • Trello for cross-cutting all the above and indicating what’s next/in progress/recently completed, and for quickly jotting down ideas/defects. Most of the defect tracking actually goes in here on one-man projects right now. This is a lot of duplication and the main source of waste in my current process.
  • Bitbucket for source control (I also use Atlassian’s excellent SourceTree as a Git/Hg client.)

It’s been working well for me; the only issues I have are the duplication between the tools and my failure to consistently use Trac for defect tracking. What keeps me in Trello is how quick and easy it is to add items to it, and the fact that I’m using it as a catch-all – I can put a defect or an idea or a task into it in a couple of seconds; I just have to replicate it to the appropriate place later, which is the problem.

I think the issue boils down to being torn between convenience and specialization: a centralized repository for “stuff to be done” (Trello) versus dedicated repositories catered to each type of thing to be done (Prognosticator, Trac, and WorkFlowy). Trello is excellent for jotting something down quickly, but it lacks the specific utility that each of the dedicated tools provides for its purpose.

I think what I’ll end up doing is creating a “whiteboard” list in WorkFlowy, and using that instead of Trello to jot down quick notes when I don’t have time to use the individual tools; then I can copy from there to the other tools when I need to. That will let me cut Trello down to basically being a Kanban board.

Behold, My Newest Creation!

I’m entirely too proud to announce my latest creation, Rogue Prognosticator. This is a web-based application for project estimation and scheduling for software development. I’ve written about these topics before, and rest assured I will again; you can count on the concepts you see discussed here being taken into account in the software.

Right now the site is in open beta, free for public use. As features are added, some may be subscriber-only, or may start out being subscriber-only.

This project breaks a lot of new ground for me, and I’ve learned a lot already.

  • It’s my first project from scratch using ASP.NET MVC or Entity Framework.
  • It’s my first personal project in production using C# or .NET.
  • It’s the first time I’ve used Windows Azure.
  • It’s the first time I’ve used UserVoice.
  • It’s the first time I’ve used continuous deployment from Git on a personal project in production.
  • It’s the first time I’ve used SQL Server on a personal project in production.
  • It’s the first time I’ve used WordPress in production.

This project was, as you may have guessed, the source for my post on Entity Framework model queries, as well as my post on value-based prioritization.

I’ve been using the project as I’ve been building it, and it’s already been an excellent tool for me. Prioritizing features by estimated return was a particularly enlightening experience; it really helped me get an objective look at the near-term plan and organize development more effectively.

I’ll still talk here about the nitty-gritty of development, but official product announcements will be coming through the product blog, Rogue Prognostications. I hope that others will find this project as useful as I have. Please feel free to drop any comments, questions, suggestions, or other feedback on the Rogue UserVoice.

More to come – watch this space!

Simple Entity Framework Model Structure

I’ll say right up front, I don’t have a lot of experience with Entity Framework, and this could either be a well-known solution or a completely foolish one. All I know is that, so far, it has worked extremely well for my purposes.

Coming from the Java world, I’m used to using DAOs to serve as an abstraction layer between the controllers and the database, with the basic CRUD methods, plus additional methods for any specific queries needed for a given entity type.

Conveniently, Entity Framework provides a fairly complete DAO in the form of DbSet. DbSet is very easy to work with: it provides full CRUD functionality and serves as the starting point for more complex queries. I wanted to keep queries out of my logic, however, and in the model.

Looking at it, I didn’t want to write an entire wrapper for DbSet, and subclassing it seemed like asking for trouble. That’s when it occurred to me to use extension methods for queries. It turns out you can define extension methods against a generic type with a type argument specified (e.g. this IEnumerable&lt;Tag&gt;). This not only let me abstract out the queries and keep them in the model without having to wrap or subclass anything; because the extensions are defined on IEnumerable&lt;Tag&gt; instead of DbSet&lt;Tag&gt;, I also have access to my queries on any collection of the appropriate entity type, not just the DbSet. I can then chain my custom queries in a very intuitive and fluid way, keeping all of the code clean, simple, and separate.

For example, I have a table of tags. I’ve created extension methods on IEnumerable&lt;Tag&gt; to filter to tags used by a given user, and to filter to tags starting with a given string. I can chain these to get tags used by a given user and starting with a given string. I can also use these queries on the list of tags associated with another entity, since IList&lt;Tag&gt; implements IEnumerable&lt;Tag&gt; and thus picks up my query extension methods.
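
The tag queries look roughly like this (a sketch with invented property and method names, not the production code):

using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the real entity.
public class Tag
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public string Name { get; set; }
}

// Query extension methods defined against IEnumerable<Tag>. Because they
// compose, they work on a DbSet<Tag>, on another entity's IList<Tag>, or on
// the result of one another.
public static class TagQueries
{
    public static IEnumerable<Tag> UsedBy(this IEnumerable<Tag> tags, int userId)
    {
        return tags.Where(t => t.UserId == userId);
    }

    public static IEnumerable<Tag> StartingWith(this IEnumerable<Tag> tags, string prefix)
    {
        return tags.Where(t => t.Name.StartsWith(prefix));
    }
}

// Chained against the DbSet, or against any other collection of tags:
//     var suggestions = db.Tags.UsedBy(currentUserId).StartingWith("ent").ToList();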

I don’t know if this is the best way – or even a good way – but it’s worked for me so far. I do see some possible shortcomings; mainly, the extensions don’t have access to the context, so they can’t query any other DbSets, only the collection they’re called against. This means that only explicit relationships can be queried, which hasn’t been a roadblock so far in my (admittedly simple) application. I’m not sure this is really a drawback though – you can still add a parameter to pass in an IEnumerable to query against, which again offers the flexibility to pass a DbSet or anything else.