Advanced

Embrace the full power of modern versioning

Distributed version control systems are a faster, modern alternative with a healthy community. Because Git is distributed, switching to it opens many doors to automation: for example, automation tools can work in a local repository without jeopardizing the mainline.
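As a minimal sketch of such local automation, the following pre-commit hook (saved as .git/hooks/pre-commit and made executable) runs the test suite in the local clone before every commit; the use of pytest is an assumption - substitute your own test runner.

```python
#!/usr/bin/env python3
"""Local pre-commit hook: run the test suite before every commit.

This automation lives entirely in the local repository, so a failing
run stops the commit long before it can reach the mainline."""
import subprocess
import sys

result = subprocess.run(["pytest", "-q"])  # assumed test runner
if result.returncode != 0:
    sys.exit("Tests failed - commit aborted locally.")
```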

Keep your project’s history clean and understandable; it makes specific commits easier to find and the work easier for others to review. When finishing up work in a short-lived branch, clean up your local commit history before merging back into the integration branch.
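One hedged sketch of how that cleanup could be prompted, assuming “main” is your integration branch (adjust to your own naming): list the local commits that have not yet reached the mainline and suggest an interactive rebase when there is more than one.

```python
"""Sketch: spot short-lived-branch commits worth squashing before merging.
Assumes "main" as the integration branch - adjust to your own naming."""
import subprocess

INTEGRATION_BRANCH = "main"  # assumption

def local_commits() -> list[str]:
    """Commits on the current branch that are not yet on the mainline."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{INTEGRATION_BRANCH}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

commits = local_commits()
for line in commits:
    print(" ", line)
if len(commits) > 1:
    print(f"Consider squashing: git rebase -i {INTEGRATION_BRANCH}")
```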

The best way to define, document and deliver your IT is with code

Every aspect of your entire development and release process can be traced back to some kind of version-controlled code.

This can range from the versions of your dependencies to the configuration of your CI server pipeline.

In this context, “as code” means that it is persisted in files, its syntax can be checked, it has semantic meaning, it is version controlled, and it can be executed.
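As an illustration, here is a minimal sketch of what “syntax can be checked” and “has semantic meaning” might look like for a CI pipeline definition kept as code; the file name pipeline.yml and the required keys are assumptions, not any particular CI server’s schema.

```python
"""Sketch: checking a pipeline definition that is kept 'as code'.
The file name and the required keys are illustrative assumptions."""
import sys
import yaml  # pip install pyyaml

REQUIRED_KEYS = {"stages", "jobs"}  # hypothetical schema

with open("pipeline.yml") as fh:
    try:
        config = yaml.safe_load(fh)  # the syntax check
    except yaml.YAMLError as err:
        sys.exit(f"pipeline.yml is not valid YAML: {err}")

missing = REQUIRED_KEYS - set(config or {})  # the semantic check
if missing:
    sys.exit(f"pipeline.yml is missing required keys: {missing}")
print("pipeline.yml passes syntax and semantic checks")
```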

A designated driver brings everybody home safe

Shared responsibility often leads to misunderstandings, with each of the people involved relying on the others to manage that responsibility.

Even if the responsibility is assigned to a role, and that role is given to one person only, it’s often the case that the person hasn’t allocated time to actually perform the duties.

Every required process needs to be assigned to a role, and that role needs to be assigned to a person who is actually expected to spend time responsibly performing these duties.

Run a tight ship - control software from cradle to grave

Take an arbitrary piece of hardware: as part of Product Lifecycle Management, the manufacturer has a complete trace of the individual versions of all the components that went into it.

Application Lifecycle Management means that you apply the same approach to software. You trace everything.

The requirements that created the tasks, the commits that resolved them, the compiler that built them, the tests you ran, the environment you ran them in, the test results, and so on. When the software reaches production, you know the specifics of everything that created it.
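A minimal sketch of what recording such a trace at build time could look like, assuming a Git repository and an already-written test report; the manifest fields and file names are illustrative, not a standard.

```python
"""Sketch: write a traceability manifest next to the build artifact,
so production software can be traced back to what produced it.
Field names and file paths are illustrative assumptions."""
import json
import platform
import subprocess
from datetime import datetime, timezone

def git_sha() -> str:
    """The commit the artifact was built from."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

manifest = {
    "commit": git_sha(),                     # the commits that resolved the tasks
    "built_at": datetime.now(timezone.utc).isoformat(),
    "toolchain": platform.python_version(),  # stand-in for the compiler version
    "environment": platform.platform(),      # where the build ran
    "test_report": "test-report.xml",        # assumed path to the test results
}
with open("build-manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```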

Components should have low coupling and be self-contained

This principle mirrors the common coding standards of high cohesion and low coupling.

Break down your monolith: identify all the nuts and bolts in your architecture that produce an actual artifact, like a binary executable from compilation or any other kind of package.

Make these components self-contained, each with its own definition of done, its own pipeline, its own interface - its own release process.

Treat these components as inventory and manage your dependencies.
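As one small, hedged example of treating dependencies as managed inventory, the check below verifies that every dependency in a Python requirements file is pinned to an exact version; the file name requirements.txt is an assumption.

```python
"""Sketch: verify that every dependency in the component's inventory
is pinned to an exact version. The file name is an assumption."""
import sys

unpinned = []
with open("requirements.txt") as fh:
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:  # anything not pinned exactly
            unpinned.append(line)

if unpinned:
    sys.exit(f"Unpinned dependencies: {unpinned}")
print("All dependencies are pinned - inventory under control")
```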

Only run the required tests - you know which ones I'm talking about! Right?

When your test cases are self-contained, with individual setups and teardowns, and they trace to related functions and features, you can analyze a given change set, place it in the context of a limited set of features, and derive its relevant test cases.

Then you can construct an adaptive test suite on the fly and execute it in a production-like environment, as sketched below.
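A minimal sketch of that derivation, assuming you maintain a mapping from source paths to features and from features to test cases; the mappings, paths, and the use of pytest here are all illustrative assumptions.

```python
"""Sketch: derive an adaptive test suite from a change set.
The path-to-feature and feature-to-test mappings are illustrative;
in practice they would be maintained alongside the code."""
import subprocess

PATH_TO_FEATURE = {"src/billing/": "billing", "src/auth/": "auth"}
FEATURE_TO_TESTS = {
    "billing": ["tests/test_invoices.py", "tests/test_rates.py"],
    "auth": ["tests/test_login.py"],
}

def changed_files(base: str = "main") -> list[str]:
    """Files touched by the change set relative to the integration branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

# Map the change set to features, then to their relevant test cases.
features = {
    feature
    for path in changed_files()
    for prefix, feature in PATH_TO_FEATURE.items()
    if path.startswith(prefix)
}
suite = sorted({test for f in features for test in FEATURE_TO_TESTS[f]})
print("Adaptive suite:", suite)
# e.g. subprocess.run(["pytest", *suite], check=True)
```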

By running a small and relevant subset of functional tests, you can add functional testing to the short feedback loop.