The Software Development Toolbox

Today we have more tools than ever to build software.  In fact, we have more tools than ever that I would consider required – as a basic minimum toolbox – to develop software.  At the start of my career, the toolbox used by my team was minimal: an IDE/compiler, a version control system, and some spreadsheets and documents.  If we needed something automated, we wrote a batch file.  We hijacked an old PC and made it our build server; when we needed to do a build, we'd log into it, get the latest from source control, and run a batch file.  Deployment?  Manually copy files, run some database scripts.  The tool landscape is much larger today…but what does a team minimally need?  It falls into six categories, and below I've described why you need each of these, along with some potential options for your team.

(Note: I’m leaving out the obvious like a compiler/IDE)

Configuration management/version control

If you are doing software development, you of course need this.  You need it even if you are a solo developer.  If you are doing development and don't have a version control system, stop what you are doing, go download one (there are free, open source options), and start using it.  Why?  Because in software development, the most important thing you create is source code, and you need to treat it well: version it, replicate it, and keep a history log.  Use a version control system with an "edit/merge/commit" model, meaning that you and multiple other people can work on the same file at the same time.

Options:  For the most part, two front runners seem to be emerging, especially in the corporate development space: Git and Microsoft's Team Foundation Server (the latter mostly preferred by Microsoft devs).  Interestingly, TFS now supports Git, so Git may become the de facto standard sooner rather than later.  Be aware that Git has a learning curve.  Side note – if you want to learn about Git for TFS, check out Esteban Garcia's Pluralsight course on that topic.
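
To make the "edit/merge/commit" model concrete, here is a minimal sketch of the day-to-day flow in Git, assuming a shared remote named origin with a master branch:

    # edit files locally, then review and record your change
    git status
    git add .
    git commit -m "Describe the change"

    # pull teammates' commits and merge them with yours
    git pull origin master

    # resolve any conflicts, commit the merge, then share your work
    git push origin master

Notice that nothing is locked: two people can edit the same file at once, and the merge step reconciles their changes.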

Build Automation

Next on the list is a tool that can automate your build process.  As I mentioned above, the "old school" way of building your software was to do it by hand.  That approach is error-prone and not repeatable.  If your build isn't automated, you are doing it wrong.  Even a simple script that automates your build is better than doing it by hand and risking a mistake, a forgotten step, a corrupt environment, or a typo.  Luckily, if you don't have a build automation system and need one, your options are plentiful:

For Microsoft devs, TFS/MSBuild is the typical go-to build system because it works so well right out of the box.  For everyone else, or those with more advanced or cross-platform needs, tools like Jenkins, TeamCity, and CruiseControl are popular options that support lots of different build engines.
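
Even the humble batch file mentioned earlier can be a starting point.  Here is a bare-bones sketch of a scripted build; the solution and test assembly names are hypothetical, invented purely for illustration:

    rem build.bat - a minimal scripted build (names are illustrative)
    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release
    if errorlevel 1 exit /b 1

    rem run the unit tests the build produced
    mstest /testcontainer:MyTests\bin\Release\MyTests.dll

A build server like Jenkins or TFS Build then adds the important parts around a script like this: triggering on every check-in, labeling sources, and publishing results.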

Test-driven Development

If you don't know exactly what Test-driven Development (TDD) is, or you aren't sure how to use it, first go buy this book by Kent Beck: http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530  Read it, absorb it, believe it, practice it, and make it part of how you write code.  It's critical, whether you are building a huge system or a small one.  Small systems sometimes become huge systems, and if you wait to adopt TDD until after you have a mass of code, you'll be fighting an uphill battle.

The most common TDD frameworks (at least for C#/Java) are JUnit, NUnit, MSTest, and xUnit.  TDD frameworks exist for many different languages and toolkits, although frankly, some proprietary app dev tools (take BizTalk, for example) make it very difficult to practice TDD effectively.
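
As a minimal sketch of the TDD rhythm using JUnit: write the test first, watch it fail, then write just enough code to pass.  The PriceCalculator class and its discount rule below are hypothetical, invented purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Step 1 (red): this test is written before PriceCalculator exists,
        // so at first it does not even compile.
        @Test
        public void appliesTenPercentDiscountAtOneHundred() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
        }
    }

    // Step 2 (green): write just enough code to make the test pass, then refactor.
    class PriceCalculator {
        double discountedPrice(double price) {
            return price >= 100.0 ? price * 0.9 : price;
        }
    }

The same red/green/refactor loop applies in NUnit, MSTest, or xUnit; only the attribute and assertion syntax changes.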

Automated functional testing tools

Let's face it – manually testing your application is not only boring, it's a waste of time.  Especially when you make some fundamental change and need to go back and re-test the entire thing.  Or you have to test it on multiple browsers.  Or on multiple platforms.  Or both.  While we have TDD tools for creating unit tests, those are not targeted at the functionality of your app, and almost never at the user interface.  This is where automated functional testing tools come into play.

For web apps, Selenium is a popular choice, and on the Microsoft stack you have Coded UI tests.  If you are a big corporate shop, you might be using HP tools (surprisingly, the company that makes your LaserJet printer also makes high-end software testing tools).  TestComplete from SmartBear is also a compelling tool in this category.
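
Here is a minimal sketch of what a Selenium WebDriver test looks like in Java; the URL and element IDs are placeholders, not a real application:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();  // swap in another driver to cover other browsers
            try {
                driver.get("http://example.com/login");                   // placeholder URL
                driver.findElement(By.id("username")).sendKeys("demo");   // placeholder element IDs
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();
                // a real test would assert on the resulting page title or a welcome element
                System.out.println("Title after login: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }

Because the script drives the browser itself, re-running the whole suite after a fundamental change costs you nothing but machine time.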

Continuous deployment / continuous delivery tool

I'm classifying this tool as separate from a build automation tool.  While a build automation tool may help you implement continuous integration, it often isn't good at continuous delivery.  I see the distinction primarily being that a continuous delivery tool will actually automate the deployment, configuration, and promotion of your application from one deployment pipeline stage to the next.  Why is this valuable?  As with build automation tools, it's about risk reduction and time management (especially for complex applications).

This is more of an emerging space in the industry, and it's an interesting one.  There are tools like Puppet Enterprise and Chef, which are heavily focused on enterprise IT.  Tools like AnthillPro and ThoughtWorks' Go are focused on the deployment pipeline scenario, which is what software teams are more likely to need.  Team Foundation Server has lacked this capability for a while, but with Microsoft's acquisition of InRelease, it now has a product to fill that gap.  There are also tools like Deployment Manager from RedGate, which has deep database integrations.  You'll want to do some significant research before selecting a tool if you don't already have one.  These tools tend to be complicated to install, configure, and use, but they are extremely powerful and will provide massive benefits over the long run.
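
To make the stage-promotion idea concrete, here is a heavily simplified sketch of what such a tool automates for you.  Every name, path, and server below is hypothetical, and real tools add everything this script lacks: configuration management, approvals, audit trails, and rollback:

    #!/bin/sh
    # promote.sh <build-id> <stage>  -- illustrative only
    BUILD=$1   # e.g. 1.4.0.123, an artifact already produced by the CI build
    STAGE=$2   # e.g. qa, staging, production

    # push the exact, already-built artifact to the target stage (never rebuild)
    scp "artifacts/$BUILD.zip" "deploy@$STAGE.example.com:/opt/app/"

    # apply stage-specific configuration and run database scripts
    ssh "deploy@$STAGE.example.com" "/opt/app/install.sh $BUILD --config $STAGE"

The key property is that the same artifact moves through every stage of the pipeline – which is exactly what manual deployments fail to guarantee.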

Backlog tracking

Finally, in your toolbox, you need some mechanism for tracking what you are building.  All the tools above are about how you build your software; you also need a tool that tells you what you are working on.

At a bare minimum – and an option I often recommend to teams – get yourself a stack of index cards, some pins, and a cork board.  Write stories on the cards, acceptance criteria on the back, and move them across the board from "To Do", to "In Progress", to "Done".  It's simple, it works, it has worked for many teams, and some teams actually prefer it.

But for more complicated scenarios, there are a huge number of tools to choose from for your toolbox.  I'm focused on tools that live in the agile/lean space.  Honestly, I'm not even sure what tools exist for "waterfall requirements tracking"; I'm sure there are some very pricey "enterprise" tools with an incredible number of features you'd never want to use.  Among the ones you might actually want to use, there is Team Foundation Server (whose agile tools are growing organically and quickly), Rally and VersionOne in the "big enterprise" space, and dozens of tools like Pivotal Tracker, TargetProcess, GreenHopper, and many more.  There are tons of commercial, cloud, and open source options to choose from.  Do your homework to find the one that works best for you.  A key criterion is a tool that doesn't get in the way: it should be intuitive, easy to use, and fit your workflow.  You should never find yourself fighting against it.

There are many other tools I could mention as well, such as code review, code analysis, task board tracking, burndown tracking, and load/performance testing tools, but I'm setting the categories above as the minimum baseline.  To do high-quality custom software development in today's environment, you need a tool that fits into each of these categories.  Take a look in your toolbox – what's missing?

Detailed written requirements

I see our teams at AgileThought do the following every day, all without up-front detailed written requirements:

(in no particular order)

  • Craft well-tested code
  • Build flexible software architectures
  • Create beautiful user experiences
  • Estimate project efforts
  • Write comprehensive (and automated) test cases
  • Amaze our customers
  • and most importantly…ship high-quality, valuable software.

It’s fantastic.

Writing Great User Stories

It’s been a while since I’ve done a public talk at a user group or conference, but I’m going to be doing a talk at the Tampa Bay Agile Meetup group on August 27th.  We are holding the meeting at the AgileThought Tampa office.  (See the meetup.com link to RSVP: http://www.meetup.com/Tampa-Agile-Software-Developers/events/133369272/)

User stories are a specific agile topic that I enjoy talking about.  I think people don't fully understand their power, and they get misused.  I get frustrated when I see user stories that start with "As a developer…" and go on to describe work a developer needs to do to dig themselves out of technical debt.  Work to reduce technical debt is fine and all, but it shouldn't be captured as a user story – it's a technical debt reduction activity.  I think part of the difficulty people have with user stories is that they are a bit of an art form.  At their simplest, a story is a sentence on an index card.  There aren't volumes of books on them.

My hope is that this talk will impart some good information that others can use to help them when working with user stories.  See you there!

Incomplete stories

Over the years, I've heard a recurring belief or practice related to incomplete stories in a Scrum sprint that I want to weigh in on.  Here's the scenario, as I've heard it described by others:

(upon asking someone how they handle it when a team cannot complete a story in a sprint)

"Well, if we run into trouble in a sprint, and we can't finish one or more stories for whatever reason, we pull them out of the sprint backlog for this sprint.  Then they go into the next sprint backlog so we can finish them."

I inevitably stop the person right there and say, "Well, I think what you actually mean is that you put those items back on the product backlog for reordering…right?"  "Oh yeah, that's right" is the response I most often hear, but I'm not sure it's genuine, because that's not how it was described to me in the first place.

Continual reordering is critical to ensuring that the software being delivered is of the highest possible value.  By taking an incomplete story and automatically pushing it into the next iteration, you are effectively saying "this is the most important thing we should work on next."  Often, I find that's not the case – if that thing were that important, it probably would have been completed in the prior iteration.  The mere fact that it's not yet done signals to me that it might not be that important.

Often, though, because you have sunk some amount of work into an incomplete story, you want to see some return on that investment.  Say that across your entire team, you've spent 30 hours of effort building a story that was not completed in a sprint – that represents real money that your employer or customer has invested.  Say you believe you have less than 10 hours of effort left across your team to complete that story.  It is awfully compelling to invest those 10 additional hours and see that value realized.  You have to balance that decision against not investing those 10 hours to start another story.

The next time you have an incomplete story in a sprint, I encourage you to think about the implications of completing that story over starting something new (which may not have existed when you started the last iteration).  Balance the remaining work and value to be delivered against the next highest-ordered items on the backlog.

Respect the Sprint

You may have seen an AgileThought person around town wearing a nice black “Respect the Sprint” t-shirt…but what does “Respect the Sprint” mean?

Well, it has several meanings, and can be interpreted a few different ways, but here’s what it means to me:

Stick to the time box you set for yourself

There will be no "adding a few days" at the end of a sprint.  Believe it or not, I've seen teams do this.  "Hey, we are almost done, so let's just call next Tuesday the end of the sprint instead of this Friday."  This breaks a basic Scrum rule and should not be done.  Period.

Take the “don’t change the sprint goal” rule seriously

Once you are in a sprint, do not give in to pressures like "hey, can you do this story too?" and "can't you just squeeze that story in?"  Or, even worse, "forget those 4 stories you are in the middle of, drop everything and do these 3 stories you've never heard of instead."  One reason for short time boxes is to resist changes to the goal during the time box and focus on completing that goal.  At some point, we have to have something solid to shoot for, and we adjust our sprint length to the shortest time over which we can resist change.

Play like you are behind

If you are working towards a major release, and that release consists of multiple sprints, you should have a sense of urgency even in early sprints.  Make sure you are hitting the goals you set each sprint and getting things to done – not half-done, and not "not even started."  Don't assume those early sprints don't count; if you get behind early, you aren't going to make it up later by working harder.  Don't let your team fall into the trap of looking at the release plan for a minimally viable product and thinking, "It's only sprint 2 – we have tons more time before we can release anyway – if we don't meet our goal this sprint, we'll make it up later."  It's easy to believe you have some huge cushion and then miss your early goals because you don't feel the pressure of delivery.  You must Respect the Sprint – get those early sprint goals done!

That’s what Respect the Sprint means to me – it might mean something different to you. 

Branching is easy, merging is hard

David Rubinstein of SD Times recently published an article on branching and merging titled “Branching and Merging: the heart of version control” at http://www.sdtimes.com/content/article.aspx?ArticleID=36328&page=1 

In this article, David discusses the rise of distributed version control systems (DVCS) within the software development community at large.  While I cannot disagree that DVCS systems are gaining in popularity, I personally have not yet seen them dig deep roots into the enterprise space.  I think there are multiple reasons for that, some of which may never be overcome.

First, take the CTO/CIO perspective.  Your responsibility is the software systems of your company, and the source code of those systems is the most critical asset.  Are you going to store that critical asset in a system where there is no master centralized version?  Would you run an accounting system where there are many, many copies of the General Ledger?  (Well, maybe you would if you were doing something illegal!)  Especially for software systems that run financial transactions, or that have compliance requirements, a DVCS solution for source code control is going to face a tough battle to get adopted, because there is no central source.

From a configuration and release management perspective, there is a firm need to understand what code shipped when.  Companies need to be able to answer the question "what code did you ship to production" in a definitive and accurate way.  DVCSs need to be configured and governed so that that question can be answered, especially when the auditors come around asking questions.  The DVCS tools available today do not provide much help here, as they do not enforce a strict promotion model; they rely on the people using them to enforce that model.
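
For what it's worth, teams that do adopt a DVCS usually answer the "what shipped" question by convention, for example by tagging the deployed commit in Git (the tag name below is illustrative):

    # record the exact commit that went to production
    git tag -a v2.3.1-prod -m "Deployed to production"
    git push origin v2.3.1-prod

    # later, recover exactly what shipped
    git checkout v2.3.1-prod

Note that this only works if every release actually gets tagged; the tool can record the answer, but it does not enforce the discipline – which is exactly the governance gap I'm describing.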

From a developer perspective, I more often than not see enterprise developers struggle with merging.  Even simple merging, in my experience, is seen as a major overhead and a scary thing that can break code bases badly.  You simply cannot use a DVCS without having a firm grasp on merging.  If developers are struggling with straightforward merging in a centralized version control system (CVCS), it is not going to get any better for them with a DVCS.

As noted in David’s article regarding CVCS tools, there are several complaints that I’d like to address.

• Broken builds: With everyone ultimately working in the main line, code reconciliation on large projects is extremely difficult.

If code reconciliation is a concern, then it must be at least as big a concern with DVCSs, right?  With large, active projects, this is just a fact of life, and no tool can solve it, centralized or distributed.  If you have a lot of code to merge, then you have a lot of code to merge.  Period.

• Branching limitations: This prevents parallel and agile development from taking place.

Maybe this is just my experience, but I have not seen any modern CVCS tools whose branching limitations prevent parallel or agile development.  When you need to create a branch, you create it, use it, and merge back regularly.  Once the branch isn't needed anymore, get rid of it.  Have a plan for creating branches and stick with it, as the sketch below shows.
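
As a sketch of that create/use/merge/retire cycle in a CVCS, here is roughly what it looks like with TFS's tf command line; the server paths are hypothetical, and each merge still needs to be checked in:

    rem create a branch from the main line (paths are illustrative)
    tf branch $/Project/Main $/Project/Feature-X

    rem ...work in the branch, merging from Main regularly to stay current
    tf merge $/Project/Main $/Project/Feature-X /recursive

    rem when the work is done, merge back, then retire the branch
    tf merge $/Project/Feature-X $/Project/Main /recursive

Nothing about that flow blocks parallel or agile development; what hurts teams is branching without a plan, not the tool.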

• Arcane branching patterns: Users are locked in to their tool of choice.

Again, I'm not aware of any good CVCS tools that strictly enforce "arcane" branching patterns; you define the branching pattern you put in place in a CVCS tool.  I'm also not sure what is meant by "locked into their tool of choice."  Doesn't every version control system lock you into it?

• Not distributed: Makes remote development difficult in terms of access to code.

Generally, yes, but CVCS vendors are addressing this.  If you have a centralized repository sitting in Atlanta and developers in India working against it, access could be slow.  Then again, the DVCS answer is to replicate the entire repository locally, making access fast – but you would think that at some point you'd want to push code back to Atlanta?

A great example here is the upcoming release of Microsoft Team Foundation Server ("TFS 11").  In these blog posts, Brian Harry of Microsoft talks about merging enhancements and offline workspaces that bring DVCS-like functionality to Microsoft's CVCS product:

http://blogs.msdn.com/b/bharry/archive/2011/08/31/merge-enhancements-in-tfs-11.aspx

http://blogs.msdn.com/b/bharry/archive/2011/08/02/version-control-model-enhancements-in-tfs-11.aspx

• No flexible release cycle: Does not enable continuous integration and agile deployment of code.

I definitely have not seen this.  At AgileThought, we use a CVCS (TFS) and do extensive continuous integration and agile deployment of code through a structured promotion model.  I actually see this as a disadvantage for DVCSs – since there is no central repository, where do you do continuous integration?  The answer I've often seen is to define a single "blessed" repository, have someone take on the role of "integration manager," and routinely push to that repository so a continuous integration build can be run.  All in all, a very manual process, whereas CVCS tools support this concept very well.

As pointed out in Mr. Rubinstein's article, distributed systems have their drawbacks as well.  Git has a steep learning curve, and right now it requires knowledge of a command line.  While "command prompt only" should never be a blocker for professional software developers, in my experience it can stall adoption if developers are forced up a steep learning curve.  Also, the lack of a central source repository may not sit well with those responsible for the safekeeping of their company's source code.

DVCS systems are extraordinarily powerful, and CVCS vendors have sat up and taken notice.  DVCS will likely always remain the de facto standard for hugely distributed, contributory projects (such as open source projects).  However, I don't believe that a pure DVCS solution will be the future in the enterprise space, but rather a hybrid of CVCS with DVCS-like features.  Tools like Git will continue to innovate DVCS features, and CVCS vendors will likely adopt the best of them to address the needs of enterprises.

Microsoft ALM v.Next

At this week's TechEd, there were many announcements around the next version of the tools under the Visual Studio brand, which make up Microsoft's Application Lifecycle Management (ALM) platform.  As a Microsoft ALM partner, many of these were not surprises to me, as I had been working with the product teams over the past year to help vet some of these feature ideas.  I can tell you that a LOT of thought and good debate went into what you see, as well as a fair bit of engineering.  These guys take customer feedback very seriously – more than any major company I've ever seen.  In particular, watch from 15:00 to 21:00 in this video from TechEd 2011 to see the new agile project management tools.  I'm pleased to say that what they are producing is on par with what many of the popular purpose-built agile tools provide.  It's a seamless experience now – while you are ultimately updating work items, it doesn't feel like you are just managing work items via work item queries.  (If you don't have time to watch the video, check out Jason Zander's post – key screenshots included.)

One of the major scenarios and experiences this vision will deliver on is collaboration.  In my mind, this is the foundation of an ALM platform.  If your ALM platform or tools do not let your teams collaborate, then you are doing it wrong.  I don't think anyone would disagree with me in saying that software development is a collaborative exercise – so why do we put up with tools that don't allow it, or even discourage it?  So many times we see organizations with very distinct silos between the groups involved in a large-scale ALM process.  To me, this is an organizational issue first, but the tools aren't helping.  A common scenario I see is that developers have no visibility into the test cases QA is going to execute.  It's not necessarily that they aren't allowed to see them; it's that the tools aren't enabling that instant visibility.  Sometimes it's hard-to-find Excel spreadsheets, or organizations that have deployed HP Quality Center but didn't buy licenses for anyone but testers.  Regardless, if developers can't see what testers will be testing, how can they write code well enough to make sure all the test cases are covered?  This is one small example, and there are many others that compound the problem.  A collaborative ALM platform, supporting a unified repository where everyone can see everything, is the answer.  Microsoft's vision of ALM has been heading in that direction, and will keep doing so.