Every time the code changes, it traces back to some task being done. Think about it: who’s feeding the process? How are tasks being created, prioritized and executed?
When a code change cannot be tied to a specific task, it’s likely something that has bypassed the normal chain of command. Somebody is working outside protocol.
Make all code changes part of an overall plan - simply pair commits with tasks. Besides generating valuable traces that can later be used for audits or as documentation of your trail, it also enables you to track the team’s pace and maintain a burn-down chart.
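As a minimal sketch of that pairing, assuming a Git workflow and a hypothetical tracker key format like "PROJ-123", a commit-msg hook can reject any commit that doesn’t reference a task:

```python
#!/usr/bin/env python3
"""commit-msg hook: reject commits that are not tied to a task.

A minimal sketch; the issue-key pattern (e.g. "PROJ-123") is an assumption
and should match whatever your task tracker actually uses.
"""
import re
import sys

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # hypothetical tracker key

def main(message_file: str) -> int:
    with open(message_file, encoding="utf-8") as fh:
        message = fh.read()
    if ISSUE_KEY.search(message):
        return 0  # the commit is tied to a task; accept it
    sys.stderr.write("Commit message must reference a task, e.g. 'PROJ-123: ...'\n")
    return 1  # reject the commit and keep the trail intact

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```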
To integrate means to merge your code onto the same branch your colleagues are working on.
So obviously, if your code breaks something, you are potentially jeopardizing the workspace - and pace - of your teammates as well.
To have a pristine integration branch means that it is buildable at all times.
Code should be verified through some kind of toll-gate criteria before it’s accepted onto the integration branch. Anything that doesn’t meet the toll-gate criteria is rejected and will not enter the mainline. This makes it simply impossible for a developer to break the build.
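A toll gate can be as simple as a script that runs every required check and refuses the change on the first failure. This is a sketch under assumptions: the concrete commands and the src layout are placeholders for whatever your project’s definition of done actually requires.

```python
"""Toll-gate sketch: the checks a change must pass before it may enter the
integration branch. The commands below are placeholders."""
import subprocess
import sys

TOLL_GATE = [
    ["python", "-m", "pytest", "-q"],   # unit tests must pass
    ["python", "-m", "flake8", "."],    # style check must pass
    ["python", "-m", "mypy", "src"],    # type check must pass (assumed layout)
]

def verify() -> bool:
    for command in TOLL_GATE:
        if subprocess.run(command).returncode != 0:
            print(f"Rejected: {' '.join(command)} failed", file=sys.stderr)
            return False  # the change never reaches the mainline
    return True

if __name__ == "__main__":
    sys.exit(0 if verify() else 1)
```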
The release train branching strategy is similar to what is sometimes referred to as late branching or trunk based development. Essentially it implies that in your entire branch tree there is only one branch that is meant to be long-lived.
Consequently, there is only one way you can contribute to a product and have your code released, and that is to deliver your code to the mainline.
In a release train strategy, the mainline is on its way to production all the time, and therefore anything that isn’t meant for production as soon as possible shouldn’t be delivered to the mainline.
Distributed version control systems are a faster, modern alternative with a healthy community. Due to its distributed nature, switching to git opens up many doors to automation. For example, it allows automation tools to work in a local repository without jeopardizing the mainline.
Keep your project’s history clean and understandable. Make it easier to find specific commits and for others to review. When finishing up work in your short-lived branches, clean up your local commit history before merging back into the integration branch.
Versioning schemes are a powerful tool.
They give you a quick and accurate reference to when, where and by whom something was made, what it’s compatible with, etc.
It is the name of your release, the identifier of your component, the passport of your product.
Implement a well-defined versioning scheme for your components and releases.
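As one possible scheme, here is a minimal semantic-versioning sketch (MAJOR.MINOR.PATCH). The scheme and the compatibility rule are illustrative assumptions; the point is that whatever scheme you pick is well defined and applied consistently.

```python
"""Minimal semantic-versioning sketch: parse, compare and check compatibility."""
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Version:
    major: int
    minor: int
    patch: int

    @classmethod
    def parse(cls, text: str) -> "Version":
        major, minor, patch = (int(part) for part in text.split("."))
        return cls(major, minor, patch)

    def is_compatible_with(self, other: "Version") -> bool:
        # Convention assumed here: same major version means no breaking changes.
        return self.major == other.major

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.patch}"

# Example: 2.3.1 is compatible with 2.0.0 but not with 3.0.0.
assert Version.parse("2.3.1").is_compatible_with(Version.parse("2.0.0"))
assert not Version.parse("2.3.1").is_compatible_with(Version.parse("3.0.0"))
```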
An artifact is the output derived from your build process.
Sadly, artifacts are often built whenever they’re needed, and many builds simply rebuild what has already been built before.
Even though this could be framed as doing your bit for the environment, it is also justifiable to save and manage artifacts simply to avoid wait states and bottlenecks in the software development process.
Stop building things that haven’t changed and start reusing colleagues’ artifacts - install an artifact management system.
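The core idea can be sketched in a few lines: key each artifact by a fingerprint of its inputs and only build when no artifact exists for that key. The local cache directory below is a stand-in assumption for a real artifact management system.

```python
"""Artifact-reuse sketch: rebuild only when the inputs have changed."""
import hashlib
import pathlib
import shutil

CACHE = pathlib.Path(".artifact-cache")  # stand-in for an artifact repository

def input_fingerprint(sources: list[pathlib.Path]) -> str:
    digest = hashlib.sha256()
    for source in sorted(sources):
        digest.update(source.read_bytes())
    return digest.hexdigest()

def build_or_reuse(sources: list[pathlib.Path], build) -> pathlib.Path:
    CACHE.mkdir(exist_ok=True)
    cached = CACHE / input_fingerprint(sources)
    if cached.exists():
        return cached             # nothing changed: reuse the existing artifact
    artifact = build(sources)     # only build when the inputs actually changed
    shutil.copy(artifact, cached)
    return cached
```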
When code changes are committed to the repository, your CI server kicks in automatically and starts a build.
It might not even be an actual build; it can be any kind of automated action that is part of a verification process implied in the project’s “definition of done”.
If a build step fails, the developer is notified directly so they can start fixing the issue immediately. The shorter the feedback loop, the better.
Whenever you ship a new release you probably need a release note, a report listing the new version number, fixed issues, new features…
Why write all that manually? By building up your traces and recording your trails as you work, you can pull all this information out of your backend automatically.
You’ve written your last release note. From here on out it’s release notes as code.
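A sketch of that idea, assuming the only backend is the Git history and the same hypothetical "PROJ-123" issue keys in commit messages; a real setup would also query the task tracker for titles and states.

```python
"""Release notes as code: derive the note from traces you already have."""
import re
import subprocess

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def release_note(previous_tag: str, new_version: str) -> str:
    log = subprocess.run(
        ["git", "log", f"{previous_tag}..HEAD", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    issues = sorted({key for key in ISSUE_KEY.findall(log)})
    lines = [f"Release {new_version}", "", "Resolved issues:"]
    lines += [f"  - {issue}" for issue in issues]
    return "\n".join(lines)

if __name__ == "__main__":
    print(release_note("v1.2.0", "1.3.0"))  # tag and version are examples
```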
Split up your builds and verifications into a pipeline consisting of multiple stages. Use this approach to keep your builds as fast as possible, your feedback loop as short as possible and your developers notified as quickly as possible despite having long-running builds.
In your pipeline, each step provides more confidence in your code than the previous one.
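A pipeline of that shape can be sketched as an ordered list of stages, run cheapest first and failing fast so feedback reaches the developer quickly. The stage names and commands below are placeholders.

```python
"""Pipeline sketch: fast, cheap stages first; each later stage adds confidence."""
import subprocess
import sys

STAGES = [
    ("commit",      ["python", "-m", "pytest", "tests/unit", "-q"]),
    ("integration", ["python", "-m", "pytest", "tests/integration", "-q"]),
    ("acceptance",  ["python", "-m", "pytest", "tests/acceptance", "-q"]),
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"== stage: {name} ==")
        if subprocess.run(command).returncode != 0:
            # Fail fast: stop here and notify, keeping the feedback loop short.
            print(f"Stage '{name}' failed", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```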
Take an arbitrary piece of hardware: as part of Product Lifecycle Management, the manufacturer has a complete trace of the individual versions of all the components that went into it.
Application Lifecycle Management means that you apply the same approach to software. You trace everything.
The requirements that created the tasks, the commits that resolved them, the compiler that built them, the tests you ran, the environment you ran them in, the test results and so on. When the software reaches production, you know the specifics of everything that created it.
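One way to keep such traces is to emit a small manifest alongside every build and store it with the artifact. The fields below are examples of traces worth keeping, not a fixed schema.

```python
"""Traceability sketch: record what went into a build as a manifest."""
import json
import platform
import subprocess
from datetime import datetime, timezone

def build_manifest(version: str, test_report: str) -> dict:
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    return {
        "version": version,
        "commit": commit,
        "toolchain": f"CPython {platform.python_version()}",
        "environment": platform.platform(),
        "test_report": test_report,   # path to the stored test results
        "built_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    print(json.dumps(build_manifest("1.3.0", "reports/tests.xml"), indent=2))
```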
Analyzing your code is cheap but valuable. Scouring your code and producing interesting metrics helps you keep a check on all kinds of creeping issues.
Static code analysis, style checkers, cyclomatic complexity, code coverage and scanning for FIXMEs and TODOs are all examples of metrics that help you keep a watchful eye on codebase evolution. Adding thresholds to these metrics as part of your verification protects the codebase from quietly degrading.
Monitor improvements but don’t waste time reaching arbitrary targets.
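A threshold check can be a short verification step of its own. In this sketch the metric names, the 80% coverage limit and the TODO count are illustrative values, not targets worth chasing for their own sake.

```python
"""Threshold sketch: fail verification when a metric regresses past an agreed limit."""
import json
import sys

THRESHOLDS = {"coverage_percent": 80.0, "max_todo_count": 25}

def check(metrics: dict) -> list[str]:
    failures = []
    if metrics["coverage_percent"] < THRESHOLDS["coverage_percent"]:
        failures.append("coverage below threshold")
    if metrics["todo_count"] > THRESHOLDS["max_todo_count"]:
        failures.append("too many TODO/FIXME markers")
    return failures

if __name__ == "__main__":
    # Expects a metrics report, e.g. {"coverage_percent": 83.1, "todo_count": 12}
    problems = check(json.load(open(sys.argv[1])))
    for problem in problems:
        print(problem, file=sys.stderr)
    sys.exit(1 if problems else 0)
```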
All software has dependencies; you may be using third-party technology, or you may have a lot of individually released microservices, frameworks or libraries in your system architecture.
Make sure there are no moving targets and don’t rely on someone else’s master, latest or stable release. Cache everything you need in your own registry.
Optimize your link processes to use cached dependencies when available, and optimize your compile processes to feed the registry when new versions are created, so others can benefit from them.
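A small verification step can catch moving targets before they reach the pipeline. This sketch assumes a pip-style requirements.txt and applies one rule: every dependency must pin an exact version with `==`.

```python
"""Sketch: reject floating dependencies in a pip-style requirements file."""
import pathlib
import re
import sys

PINNED = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9.+!_-]+")

def unpinned(requirements: pathlib.Path) -> list[str]:
    offenders = []
    for line in requirements.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if not PINNED.match(line):
            offenders.append(line)  # e.g. "requests" or "requests>=2.0" floats
    return offenders

if __name__ == "__main__":
    bad = unpinned(pathlib.Path("requirements.txt"))
    for entry in bad:
        print(f"unpinned dependency: {entry}", file=sys.stderr)
    sys.exit(1 if bad else 0)
```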
When something goes south, it’s usually in the production environment, where you don’t have access to debugging or profiling information.
Design your code so it can produce an audit trail - a complete profile of states, sequences, and data in and out. That should give you clues when you do your code-scene forensics.
At least you’ll get some clues on how to reproduce the error in your development environment.
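One lightweight way to produce such a trail is to log inputs, outputs and exceptions at the boundaries of your code. The decorator and the `price_order` example below are hypothetical; adapt the logging configuration to your own stack.

```python
"""Audit-trail sketch: record data in/out at boundary functions."""
import functools
import json
import logging

audit = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")

def audited(func):
    """Record inputs, outputs and exceptions for each call of a boundary function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        audit.info(json.dumps({"call": func.__name__, "args": repr(args), "kwargs": repr(kwargs)}))
        try:
            result = func(*args, **kwargs)
        except Exception as error:
            audit.info(json.dumps({"call": func.__name__, "error": repr(error)}))
            raise
        audit.info(json.dumps({"call": func.__name__, "result": repr(result)}))
        return result
    return wrapper

@audited
def price_order(quantity: int, unit_price: float) -> float:  # hypothetical boundary function
    return quantity * unit_price
```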
This principle lends itself to common coding standards of high cohesion and low coupling.
Break down your monolith and identify all the nuts and bolts in your architecture that produce an actual artifact - like a binary executable from compilation or any other kind of package.
Make these components self-contained, each with its own definition of done, its own pipeline, its own interface and its own release process.
Treat them as inventory and manage your dependencies.
Whether or not a particular code snippet gets tested is often a matter of how easy it is to test.
Organize your code in easily accessible features. Make each feature available through one interface only.
Since the feature is only available through one interface, it’s safe to consider it tested once you have exercised that interface.
At the end of the day, more code gets tested.
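A sketch of a feature exposed through one interface only: callers, and tests, go through `register_user` and never through the helpers behind it. The feature itself is a made-up example.

```python
"""Single-interface sketch: one entry point for the 'register user' feature."""

def _normalize_email(email: str) -> str:   # internal helper, not part of the interface
    return email.strip().lower()

def _validate(email: str) -> None:         # internal helper, not part of the interface
    if "@" not in email:
        raise ValueError(f"invalid email: {email!r}")

def register_user(email: str) -> dict:
    """The single entry point for the feature."""
    normalized = _normalize_email(email)
    _validate(normalized)
    return {"email": normalized, "active": True}

# Exercising the one interface exercises the whole feature:
def test_register_user():
    assert register_user("  Alice@Example.COM ") == {"email": "alice@example.com", "active": True}
```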
The ability of a release to be deployed is such an essential part of the delivery that the developer is expected to take full responsibility for this process.
The deployment should be automated because it’s a task that needs to be carried out often and is not necessarily trivial.
You want your deployment as code.
Every aspect of your entire development and release process can be traced back to some kind of version controlled code.
This can range from the versions of your dependencies to the configuration of your CI server pipeline.
In this context, as code means that it’s persisted in files, its syntax can be checked, it has semantic meaning, it’s version controlled and it can be executed.
Your software is in production, but how is it doing? You want to have insight into the runtime health of your system.
This includes easy access to runtime statistics such as feature usage, transaction throughput and error situations to ensure the service level, as well as access to environment health such as disk and memory usage and CPU load.
Bonus points if your system can alert you before an error occurs.
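As a standard-library-only sketch, a service can expose such numbers over a simple HTTP endpoint. The metric names and the /health path are assumptions; a real system would more likely use a metrics library with alerting rules on top.

```python
"""Health-insight sketch: expose runtime and environment metrics over HTTP."""
import json
import os
import shutil
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = {"total": 0, "errors": 0}  # incremented by the application

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        disk = shutil.disk_usage("/")
        body = json.dumps({
            "requests_total": REQUEST_COUNT["total"],
            "requests_failed": REQUEST_COUNT["errors"],
            "disk_free_bytes": disk.free,
            "load_average": os.getloadavg(),  # CPU load (POSIX only)
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```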
Both successful deployment and functional testing are often considered part of the definition of done.
When we’re building continuous delivery pipelines, it’s because we want our developers to have access to, and to execute, the full definition of done.
Use simulators, emulators, containers and VMs to verify your definition of done and ensure stable releases. The closer to production your testing environment, the more confidence you can have in your changes.
Groups of professionals who contribute to the same product but work in isolated silos, not talking to each other, are not helping your project one bit.
Break down these silos by having contributors talk to each other and involving them in the bigger picture.
The term full-stack developer is used to describe a developer who is fully capable of doing whatever is required. This is the essence of not working in silos.
Agile processes defy phases in the software development process. The first principle in the Agile Manifesto refers to continuously delivering software.
An agile approach is not a small waterfall or an iterative and incremental process sped up to 14-day intervals. It literally has no phases - only continuous integration and continuous delivery, with a focus on minimizing work in progress and getting the right thing done at a constant pace.
Initiatives, new tools and approaches are often of natural interest to developers and they might experiment and research without explicitly being told to do so.
But Continuous Delivery is a paradigm that strives to build quality into the product rather than gluing it on afterwards.
Transitioning is going to take time. It needs planning. It needs prioritization. It needs funding. You need a road map. Continuous Delivery is not a quilted patchwork. Be sure to make it a corporate thing, not just a neat idea.
Shared responsibility often leads to misunderstandings, with each person involved relying on the others to manage that responsibility.
Even if the responsibility is assigned to a role, and that role is given to one person only, it’s often the case that the person hasn’t allocated time to actually perform the duties.
Every required process needs to be assigned to a role, and that role needs to be assigned to a person who is actually expected to spend time responsibly performing these duties.
The bus factor measures how many people in your corporation need to be run over by a bus before you go out of business. If you have a key player who’s indispensable, then your bus factor is 1.
To raise the bus factor, you must make sure that important knowledge is shared and accessible to whoever needs it.
Don’t document your processes to the brink of boredom or maintain an internal wiki the size of Wikipedia itself. Build a learning organization that encourages people to share with colleagues, allocates time for research, designs for change and accepts automation as documentation.
Assignments should be prepared and made ready to work on before they qualify as actual tasks. The goal of a task must be known to the person who is implementing it.
If a task is ambiguous it can not be estimated, and if it can not be estimated, it can not be prioritized.
If a task doesn’t have a clear definition of done, then it should be time-boxed.
When your test cases are self-contained with individual setups and tear-downs and they trace to related functions and features, you are able to analyze a given change set, place it in the context of a limited number of features and derive its relevant test cases.
Then you can construct an adaptive test suite on the fly and execute that on a production-like environment.
By running a small and relevant subset of functional tests, you can add functional testing to the short feedback loop.
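A sketch of such adaptive selection: map changed files to features and features to test cases, then run only the relevant subset. The mappings here are made-up examples; in practice they would come from your traceability data.

```python
"""Adaptive test-suite sketch: select functional tests relevant to a change set."""
import subprocess

FILE_TO_FEATURE = {               # assumed mapping, normally derived from traces
    "src/cart.py": "checkout",
    "src/payment.py": "checkout",
    "src/search.py": "search",
}
FEATURE_TO_TESTS = {
    "checkout": ["tests/functional/test_checkout.py"],
    "search": ["tests/functional/test_search.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    diff = subprocess.run(["git", "diff", "--name-only", base],
                          capture_output=True, text=True, check=True).stdout
    return [line for line in diff.splitlines() if line]

def relevant_tests() -> list[str]:
    features = {FILE_TO_FEATURE[f] for f in changed_files() if f in FILE_TO_FEATURE}
    return sorted({test for feature in features for test in FEATURE_TO_TESTS[feature]})

if __name__ == "__main__":
    tests = relevant_tests()
    if tests:
        subprocess.run(["python", "-m", "pytest", *tests], check=True)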
In a functional test, you test the features that the system offers as a whole, seen from the end user’s perspective.
In the previous millennium such a test would be planned by a person with domain knowledge, then executed prior to every release by testers performing manual operations based on written instructions.
A more contemporary strategy is to have the person with domain knowledge manage a team of developers who actually implement the tests as code, and then give the software developers access to execute these tests in their production-like environments.
Management and maintenance of your test data is considered part of your Quality Assurance strategy. Your test data is versioned and stored as an artifact.
This implies that you separate your test data from the actual tests, which in turn comes with the benefit of easily running test suites with different, versioned data sets.
Test suites become self-contained, each with its own easily reproducible setup and tear-down steps, something that will later enable you to run your test suites independently of each other - maybe even selected based on the output of previous verification steps in your pipeline.
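A sketch of test data kept separate from the tests and selected by version. The data layout (testdata/&lt;version&gt;/orders.json) and the pytest fixture are assumptions; the point is that the suite is self-contained and can run against different, versioned data sets.

```python
"""Versioned test data sketch: the data set is an artifact selected by version."""
import json
import pathlib
import pytest

TEST_DATA_VERSION = "1.4.0"   # the data set is a versioned artifact

@pytest.fixture
def orders():
    data_file = pathlib.Path("testdata") / TEST_DATA_VERSION / "orders.json"
    records = json.loads(data_file.read_text())   # setup: load the versioned set
    yield records
    records.clear()                                # tear-down: leave nothing behind

def test_every_order_has_a_customer(orders):
    assert all("customer" in order for order in orders)
```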
A word of caution: testing in production is not to be confused with releasing untested code.
It starts with acknowledging that all serious problems are discovered in production and occur because unforeseen things happen.
Deliberately go to your production environment and do unforeseen things: turn off a server, kill a process, pour coffee on your keyboard, upgrade a service during high load.
If your system is built to survive it, then it should! You’re only sure it will if you dare to test it.
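As a minimal, hedged sketch of such a controlled experiment: pick one of a list of services and stop it, then watch whether the system recovers as designed. The service names are hypothetical, and this should only ever be run against a system that is built and verified to tolerate the loss.

```python
"""Chaos sketch: deliberately stop one service and observe recovery."""
import random
import subprocess

SERVICES = ["orders-api", "search-api", "payments-api"]   # hypothetical services

def stop_one_service() -> str:
    victim = random.choice(SERVICES)
    subprocess.run(["systemctl", "stop", victim], check=True)  # simulated outage
    return victim

if __name__ == "__main__":
    print(f"Stopped {stop_one_service()}; now watch your health metrics.")
```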
Unit tests are used to test the semantics of your code: to verify that it works as expected and keeps working as expected through changes.
Unit tests are lightweight and fast. Don’t get tangled up in hard-to-handle dependencies such as loading databases or instantiating long sequences of objects before you get to the actual testing; use mocks and stubs to simulate your first-order dependencies, or use proxies to have non-local collaborators contribute to your test.
A unit test is quick to execute and it should be executable in the context of your development environment.
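A sketch of a unit test with a mocked first-order dependency: the database gateway is replaced by a Mock so the test stays fast and self-contained. The `OrderService` and its gateway are made-up names for illustration.

```python
"""Unit-test sketch: mock the first-order dependency, test the behaviour."""
from unittest.mock import Mock

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway          # first-order dependency (e.g. a database)

    def total_for(self, customer_id: str) -> float:
        orders = self.gateway.orders_for(customer_id)
        return sum(order["amount"] for order in orders)

def test_total_for_sums_order_amounts():
    gateway = Mock()
    gateway.orders_for.return_value = [{"amount": 10.0}, {"amount": 2.5}]
    assert OrderService(gateway).total_for("c-42") == 12.5
    gateway.orders_for.assert_called_once_with("c-42")
```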