Why more incompatible software might be easier to learn

July 14, 2010

I’ve noticed an interesting phenomenon during this development cycle at work. I’m working on a product that has 97% of its functionality in common with one of our older, more established products. The differences, few though they may be, are necessary and not insignificant. What I’ve noticed is that because there are so few differences, people assume they can understand the product without learning them, and then they get frustrated and confused when those differences don’t behave exactly like the older product.

The interesting part is that I’ve worked with these same people on other projects with much more extensive changes, and in general they were able to follow them easily. In other words, 50% changes are no problem, but 3% changes throw them all off.

It reminds me somewhat of the uncanny valley. I wonder if it means certain open source software would have more success by completely breaking from its commercial counterpart’s interface rather than being a 97% clone.

Where else have you observed that principle in action?

Categories: User Interfaces

Why the tip should always compile

July 12, 2010

Having a source control tip that always compiles is one of those software truths I thought was self-evident, but today I was reminded that it isn’t self-evident for everyone. I was ready to check in an important change and, as per our process, updated to the latest to make sure everything still worked before checking in. As it turns out, the guys one floor up were not so courteous. The build was broken in several places.

It took a few minutes to identify the offender, and I shot him a quick email in case he wasn’t aware of the issue. “Sorry for the inconvenience,” came the reply. “We’re not done checking it in yet.”

Coordinating check-ins among multiple people can be complicated, so I gave them another hour. The build still didn’t compile, though for slightly different reasons. I then had a four-hour training class. I came back from that expecting to be able to check in before I went home; they had made progress, but the build still didn’t compile.

At this point it was clear this wasn’t just a matter of coordinating check-ins. They were using the main branch the entire building shares to integrate and debug their changes with each other. In case it’s not clear by now why that is bad: we will probably not have a working daily build tomorrow. If we do have one, a lot of important changes will not have made it in. Everyone’s testing will be set back by a day because a small group thought they would save a little effort by circumventing the process.

Early in the development cycle this might not be a big deal, but we are very close to release. If there’s one thing I’ve learned about software, it’s that integration and debugging time is hard to predict. Don’t ever think that “just this once” you’ll check in something broken because you “know” it will only take a few minutes to fix. Dealing with accidental breakage is difficult enough. Having an unusable repository for six hours cannot be classified as an “inconvenience.”

On the other hand, it’s difficult to blame them for being tempted, given the source control tool we’re using. We’re evaluating alternatives, but in the meantime we’re stuck with what we’ve got. Take a look at the following characteristics of our version control process. If they look familiar, you might want to consider a version control change of your own.

  • Merging is difficult, so we have one big branch that everyone checks in to.
  • We have rules like “don’t check in things that don’t compile,” but no technical means of enforcing them.
  • We can either push our code to everyone in the building or to no one. There is no in-between without a lot of manual work.
  • We can check in even if our local copy isn’t updated to the latest.
  • We have no easy way to cherry-pick only the code that is known to work.
  • There is no easy way to collect related changes and then commit them all to the main branch in one operation once they are integrated.

If that list sounds familiar and you haven’t looked at distributed version control yet, now is a good time to do so. At this point, we are simply struggling to maintain what I consider a bare minimum standard: a tip that always compiles error-free. I haven’t even touched on the ideal of having a tip that always passes a test suite. If your tests are automatable, you should be able to set up your tools to automatically reject changes that make them fail, just as with a broken compile. If, like us, circumstances force your tests to be run manually by a human, distributed version control can help with that too, given the right branching model. More on that later.

Why globals should be avoided

July 9, 2010

I thought avoiding globals was so widely accepted by now that people just did it without thinking. Then I came across a new one during a code review and marked it as a defect, assuming the reason was fairly self-explanatory. To my surprise, the author disagreed strongly, defending his code like it was his firstborn child. The other option would have meant putting it in a header file, and since so many other files included that header, he didn’t think that would be safe this close to release.

Well, yeah. What part of “global” made you think a lot of code wouldn’t be affected? And the idea that adding a global is safe, but a one-line change to a header file is dangerous, frankly scares me a little. Okay, that’s what I wanted to say, but I was stymied: I had accepted avoiding globals as a best practice for so long that I had forgotten why. Then I came across this reddit post and thought this was probably something we could all use a refresher on, so here goes.

  • Namespace pollution. Inside our home, everyone knows “Michael” refers to my son. Anywhere else, we have to be more specific. Only use a global if you’re absolutely sure no one else in 500,000 lines of code will ever want to use the same name for a different purpose, such as for system-wide exception handling.
  • Errors don’t get caught until linking. If you’re using dynamic linking, that could be a big problem, and even with static linking, linker errors aren’t as easy to track down as compiler errors.
  • A bigger chance of a semantic mistake, like thinking you’re using the same global in two places when you’re actually using two with slightly different spellings (see the sketch after this list).
  • A bigger chance of multi-threading issues, with two threads accessing the same variable at the same time.
  • It’s harder to determine where the variable is defined and to find everywhere it is used.
  • It’s harder to change implementation details, because a global could be used anywhere rather than only within well-defined boundaries.
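
To make a few of those concrete, here is a minimal C++ sketch of one alternative. All the names are hypothetical, not from the code review in question; the point is only to contrast a raw global with state hidden behind a small, greppable interface.

    #include <iostream>

    // The "before" picture would be a raw global in a shared header:
    //
    //     int g_retryCount = 3;
    //
    // Every one of 500,000 lines can read or write it, a typo'd twin like
    // g_retry_count can coexist and silently split the state in two, and if
    // the definition lands in a header included by many files, the mistake
    // doesn't surface until the linker complains about duplicate symbols.

    // The "after" picture: the state lives in one translation unit, behind
    // a named interface.
    namespace retry_config {
        namespace {            // unnamed namespace: internal linkage, so
            int count = 3;     // `count` is invisible to every other file
        }

        int get_count() { return count; }
        void set_count(int value) { count = value; }
    }

    int main() {
        retry_config::set_count(5);
        std::cout << retry_config::get_count() << "\n";  // prints 5
    }

This doesn’t make the state any less shared, but it addresses the last two bullets directly: every access is a function call you can search for, and if the variable later needs a mutex or has to be loaded from a config file, the change happens behind get_count() and set_count() instead of at every use site.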

Anything I missed?  Ideas on circumstances where globals are acceptable?  Let me know in the comments.

Categories: C++, programming

Why committing early is anti-social

July 8, 2010

This article expresses concern that distributed version control encourages the anti-social behavior of developing code without community input. However, the proposed remedy, committing early and often to a central repository, is itself anti-social, albeit in the opposite direction.

Think about that situation for a moment; everyone who has worked on a relatively large code base has experienced it. What happens when someone commits to a central repository? Everyone who wants any changes is forced to take all of the changes, whether they want them or not, and whether the changes work with their own local changes or not. For me, these problems always seem to strike when I’m rushing to check in before going home and do a quick update to get in sync.

Yes, it’s possible to back those changes out, but it isn’t trivial to separate them, and one way or another you have to deal with them, on the submitter’s timetable. It’s the equivalent of shouting in a movie theater because you want to know what the friend sitting next to you thinks. It’s disruptive, and it’s only tolerated because it’s better than having everyone develop “in a cave.”

Notice that it’s the committing of the code that’s disruptive, not the sharing of it. With centralized version control there isn’t much difference between the two, and we’ve slowly come to accept that as the way it is. This leads to policies like holding code reviews before code is checked in, passing the code around outside version control entirely, which means reviews are usually conducted without the benefit of the reviewers actually running the new code.

The polite thing to do is, instead of foisting your changes on everyone, to invite others to take them, and let them accept when it is convenient for them. You might share several times a day with a colleague or mentor you are working closely with, less frequently with others working on the same feature, then with others working in the same subsystem, then perhaps with testers, and then with everyone else.

It turns out that distributed version control is ideally suited to this approach of “politely share early and share often, but don’t push to trunk until it’s solid.” When something as paradigm shifting as distributed version control comes along, you have to reevaluate all your best practices to see if the fundamentals still make sense. You don’t compare a car to a bicycle by seeing how easy both are to push.