A draft of the blog post I’m writing for my current employer’s Development Team Blog:
The key to Rapid Development at Fortigent is our streamlined Software Factory. By “software factory” I mean the whole set of mechanisms, tools and processes involved in taking (Jamie’s) ideas from inception to production. A streamlined software factory is one that creates no obstacles, first and foremost for developers.
The point is to make the software easy to improve. If this most fundamental of all qualities is present, all the other -ilities (usability, scalability, performance, resource utilization, functional richness) will catch up. Optimizing our Software Factory for iterative, incremental improvements has allowed Fortigent to reduce the cost of mistakes and minimize the risk associated with innovation.
Out of the endless variety of Best Practice memes floating around the Web, here are some of the main points we have come to appreciate:
Continuous Integration
At the heart of Fortigent’s Software Factory is our Continuous Integration loop. Its purpose is to create the shortest possible feedback cycle for development to feed on. The traditional selling point of CI is its ability to catch conflicting changes early, but its real benefits are far more fundamental than that.
At Fortigent, CI is not just a common development environment; it is a set of automated processes designed to break easily. Continuously rebuilding the entire software stack (including the database!) from source code creates constant pressure to keep our software in a known, functional state. The result is software that is always Ready-For-Release, allowing our team to react quickly to any new requirement or change in priorities. This is the very definition of Agile.
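The “designed to break easily” idea can be sketched as a fail-fast pipeline. This is a minimal, hypothetical illustration (the step names are invented; our real pipeline is driven by our build tooling, not a script like this):

```python
def run_pipeline(steps):
    """Fail-fast CI loop sketch: run each named build step in order and
    stop at the first failure, so a broken state surfaces immediately
    instead of being papered over. `steps` is a list of
    (name, zero-argument callable returning bool) pairs.
    Returns (overall_success, names_of_steps_that_passed)."""
    completed = []
    for name, step in steps:
        if not step():
            return False, completed  # break easily: report what got done
        completed.append(name)
    return True, completed
```

The point of the structure is that nothing downstream of a failed step runs, so the first red light is also the only one you have to investigate.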
Ad-Hoc Environment Allocation
We have 5 identical environments closely mimicking our production setup, named ENV1 through ENV5. Why make them identical? So we can install the exact same binary package to any one of them! This saves a huge amount of energy otherwise spent managing all those installations and config files.
Different versions of the same application may be automatically deployed to multiple ENVs. For example, an actively developed application may have its Trunk build pushed to ENV3, while its more stable Production Maintenance Branch build is deployed to ENV4 and ENV5 to facilitate integration testing of other projects.
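Conceptually, this routing is just a lookup table the deployment job consults. Here is a hypothetical sketch (the application name and branch paths are invented for illustration):

```python
# Hypothetical routing table: which environments each (application,
# branch) build should be deployed to. Real names would come from our
# Subversion layout; these are made up.
DEPLOYMENT_MAP = {
    ("PortfolioApp", "trunk"): ["ENV3"],
    ("PortfolioApp", "branches/prod-maintenance"): ["ENV4", "ENV5"],
}

def targets_for(app, branch):
    """Return the list of environments a given app/branch build goes to;
    an unmapped build is simply not auto-deployed anywhere."""
    return DEPLOYMENT_MAP.get((app, branch), [])
```

Because the environments are interchangeable, changing where a build lands is a one-line edit to the table rather than a reinstallation exercise.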
Database Change Management
Since our databases are always in flux, we had to devise a process that keeps all those changes under control without restricting the developers’ freedom to write arbitrary SQL scripts to solve their data- and schema-migration problems. With this in mind, you can appreciate our DB Change Management process, which revolves around an idea I call “Delta Script Queues”.
A Delta Script Queue is really just a directory in our Subversion repository, but here is what makes it a queue: every time a new DB change is required, it is coded up in SQL, and the script is added at the end of the queue. Before a change appears in Production, it is deployed to several development databases, and because those get reset on a weekly basis, all SQL scripts run again and again, always in the same sequence. To avoid surprises, a script already in the queue is considered immutable, but its effects can always be offset by a subsequent script. When the changeset is finally ready to go, the queue is flushed by executing it against Production, and a new queue is started for the next changeset.
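The mechanics reduce to “enumerate the directory in order, run every script”. A minimal sketch, assuming scripts are named so that lexical order matches queue order (e.g. 001_add_column.sql, 002_backfill.sql); the actual SQL execution is left as a callable, since the real process targets SQL Server:

```python
import os

def queued_scripts(queue_dir):
    """Return the delta scripts in a queue directory, in execution order.
    Assumes file names sort in queue order, e.g. numeric prefixes."""
    names = sorted(n for n in os.listdir(queue_dir) if n.endswith(".sql"))
    return [os.path.join(queue_dir, n) for n in names]

def apply_queue(queue_dir, run_sql):
    """Apply every script in the queue, in order, via the supplied
    run_sql(path) callable. Returns the number of scripts applied.
    Scripts already in the queue are immutable; a mistake is corrected
    by appending a new script, never by editing an old one."""
    scripts = queued_scripts(queue_dir)
    for path in scripts:
        run_sql(path)
    return len(scripts)
```

Determinism is the whole point: the same directory listing produces the same sequence on every development database and, finally, on Production.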
One of our most important CI processes is the so-called “BoatSync/DbBuildBase”. The BoatSync process scripts the Production database schema daily and checks any changes into source control. The DbBuildBase process rebuilds fresh copies of all 5 of our production databases from these scripts, then applies any dev-in-progress scripts from the Delta Script Queue folders. The resulting database files are published to a network share. Test builds that rely on a database start by downloading those files and attaching them to a local SQL Server instance. This allows every test to run against a fresh copy of the database, with all Production and Development changes synced up!
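At its core, BoatSync amounts to “script the schema, commit only if it changed”. Here is a deliberately abstract sketch, with the SQL Server scripting and the Subversion commit stubbed out as callables (the real tooling is outside the scope of this post):

```python
def sync_schema(script_schema, read_committed, commit):
    """Daily schema sync sketch: script the production schema, compare
    it to what is already in source control, and commit only on change.
    All three arguments are callables standing in for the real
    SQL Server / Subversion tooling. Returns True if a commit was made."""
    fresh = script_schema()        # e.g. text of the scripted prod schema
    if fresh == read_committed():  # identical to yesterday's checkin?
        return False               # nothing to record today
    commit(fresh)                  # check today's schema change in
    return True
```

The history this produces is what makes DbBuildBase possible: any database can be rebuilt from the committed schema plus the pending delta scripts.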
Test Driven Development
Nested within the CI loop is the Red-Green-Refactor cycle of TDD, running on each individual developer’s workstation. We strive to keep both loops as short as possible, and to keep each iteration down to as few changes as possible. Minimizing the number of “balls in the air” (i.e., broken code) at any given time helps our developers stay in control, avoiding mental “stack overflows”, the panic they bring, and the cowboy-coding episodes that follow.
Our software project mechanics are designed to encourage the test-first coding style. Whenever possible, the unit tests run against a local database, reconstructed from the production schema with recent changes applied. Every application under development is configured to run locally, without having to push a build to a common “development environment”.
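To make the shape of such a test concrete, here is an illustrative test-first example. Our real suite runs against a local SQL Server copy of the production schema; SQLite stands in here purely for portability, and the table and column names are invented:

```python
import sqlite3
import unittest

class AccountBalanceTest(unittest.TestCase):
    """Illustrative test written before the feature it specifies.
    SQLite stands in for the local SQL Server copy; schema is invented."""

    def setUp(self):
        # Each test gets a fresh database, mirroring the CI practice of
        # rebuilding from scratch rather than sharing a mutable one.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        self.db.execute("INSERT INTO accounts VALUES (1, 100.0)")

    def test_deposit_increases_balance(self):
        self.db.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (25.0, 1))
        (balance,) = self.db.execute(
            "SELECT balance FROM accounts WHERE id = 1").fetchone()
        self.assertEqual(balance, 125.0)
```

Because the database is rebuilt per test, there is no shared state to clean up and no ordering dependency between tests.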
Challenges and Next Steps
It took us years to get from where we were to where we are, but we are still far from where we are heading.
Here are some of the challenges we could use help with:
- Our legacy projects still rely on Project References for dependency management. We need to finish the process of NuGet-ization, switch everything to Binary References, and stop committing binaries to source control.
- Our Subversion repository is huge. Perhaps we should migrate to a repository-per-project model? Should we adopt Git or Mercurial for new projects?
- Our RDBMS-centric architecture is reaching its limits. We are thinking along CQRS lines, with primed caches backing the reads and messaging-driven worker services handling the cache misses and the writes. This should make our logic less query-heavy, which would eliminate the need for an ORM and reduce the number of unit tests requiring a live DB connection.
- Our backlog-management and work-initiation processes are still pretty immature, despite our partial success with Kanban.
- We need a lot more automated regression/integration testing at the UI level.
- In general, we need to increase our test coverage and tighten up our TDD. While a few of us have experimented with BDD, that whole area remains largely unexplored.
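To make the CQRS direction above a little more concrete, here is a hypothetical sketch of the read side as we currently imagine it: reads are served from a primed cache, and a miss becomes a message for a worker service rather than an ad-hoc query. Nothing here exists yet; class and message names are invented.

```python
class ReadThroughStore:
    """Speculative sketch of the planned read side: a primed cache
    serves reads, and a miss is turned into a message for a worker
    service instead of a direct database query."""

    def __init__(self, cache, enqueue):
        self.cache = cache      # primed and refreshed by worker services
        self.enqueue = enqueue  # sends a message to a worker queue

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        self.enqueue(("prime", key))  # ask a worker to populate the key
        return None                   # caller sees a miss, not a DB hit
```

The appeal is that the synchronous path never touches the database, so the query-heavy logic (and the ORM that serves it) moves entirely behind the message queue.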
At this point I guess I need to fan out back to the abstract…