DevSecOps — What can we automate? — leave the machine-age thinking behind!

Matin Mavaddat
Nationwide Technology
6 min read · Nov 24, 2020

--

Introduction

Automation has always been a hot topic in almost every industry, but nowadays it has resurfaced in the software industry due to the attention and significance DevSecOps is gaining.

In this article, I don’t want to praise DevSecOps and its principles, nor do I want to criticise its flaws, but I would like to focus solely on automation and discuss what can and can’t be automated.

On both extremes of the spectrum are so-called experts who either (as automation enthusiasts) believe everything can be mechanised, or (as machine-foes) distrust machines completely and think no critical decision should be left to these soulless devices.

Automation enthusiasts are drunk on the power that these efficient automatons have given them, while machine-foes worry about the socio-cultural impact of the machines. And if you are wondering: yes, machine-foes exist even in the software development world!

Background

To better understand the viewpoint of the automation enthusiasts (or, better said, "maniacs"), we need to go a few centuries back to the Renaissance era, when humans, officially, for the first time, were given permission to think!

Joyous with this newfound freedom, we started exploring, trying to understand and comprehend the world around us, only to find that its complexities are greater than what we can deal with in their entirety at once.

Intoxicated with the power of thinking, we assumed that understanding the universe was achievable if, following our childhood intuition, we broke every concept down into smaller ones and aggregated our understanding of the parts into an understanding of the whole. As a result, we invented science: partly, the crusade to find the indivisible parts we called the elements.

We even broke down the relationships between phenomena to the most elementary one, called "cause and effect": a "cause" is necessary and sufficient for its "effect". The "cause" itself is an "effect" of another "cause", and we kept going up until we reached the cause of all effects, which we named "God".

For many years, causal thinking, and the determinism that flowed from it, was the only scientific way of thinking, and it generated a great many achievements for science and humanity. So great was its success that Newton described the world as a "hermetically sealed clock": a deterministic machine that could be understood and explained fully by causal relationships (universal laws!), ignoring the concept of the environment completely. This was machine-age thinking.

Machine-age thinking eventually led humanity to the Industrial Revolution, when we thought that, like our God, we needed to build machines to do our work. We created factory lines and manufacturing processes, and wherever we lacked the required technology, or labour was cheaper than machines, we replaced machines with people! For many years, therefore, people in organisations were treated either as advanced machines or as cheap, easily replaceable ones.

Modern-age findings have shown us that mechanistic thinking is very limiting. We have learnt to question the understandability of the universe. We now know that causal relationships are not the only possible type of relationship, and that universal laws do not exist. We also discovered a phenomenon called "a system", which broke the back of reductionist machine-age thinking and led us into a new, more comprehensive thinking model called "systems thinking".

A system's defining properties are the ones that cannot be attributed to any single part; they emerge from the interaction of the parts. They are the "product" of the interaction of the parts. These properties cannot be measured directly, only through their manifestations.

Understanding a system also relies heavily on understanding the environment in which it operates and its defining properties.

Systems can be classified into the following types:

  • State-maintaining systems: These systems react to change in order to maintain a state. They usually do not learn or improve, and both their means and their ends are defined externally.
  • Goal-seeking systems: These systems respond differently to different events, in the same or different environments, until they produce a particular outcome (state). They can choose among different means, but their ends are defined external to the system.
  • Purposeful systems: These systems can not only produce the same outcome in different ways in the same environment, but also produce different outcomes in both the same and different environments.

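The distinction between the first two types can be made concrete with a minimal sketch. The thermostat scenario and all function names below are my own illustration, not from the article: a state-maintaining controller has one fixed response per deviation, while a goal-seeking one chooses among different means towards the same externally given end.

```python
def state_maintaining(temperature: float, setpoint: float = 20.0) -> str:
    """Reacts to change only to restore a fixed, externally defined state
    (a simple thermostat): one fixed response per deviation, no learning."""
    if temperature < setpoint:
        return "heat"
    if temperature > setpoint:
        return "cool"
    return "idle"

def goal_seeking(temperature: float, setpoint: float = 20.0) -> str:
    """Chooses among different means, but the end (the setpoint) is
    still defined outside the system."""
    deviation = setpoint - temperature
    if abs(deviation) > 5:
        return "heat fast" if deviation > 0 else "cool fast"
    if deviation != 0:
        return "heat slowly" if deviation > 0 else "cool slowly"
    return "idle"

# A purposeful system cannot be written as a pure function of its inputs:
# it chooses its own ends, so there is no externally fixed setpoint to
# pass in -- which is exactly why it resists automation.
```

Note that both functions take the setpoint as a parameter: in neither case does the system itself decide what state or goal is worth pursuing.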
It can be seen that mechanistic thinking was a special case of systems thinking: one in which causality was considered the only possible type of relationship and the environment didn't matter, so determinism ruled, and machines were only one type of system among many.

A "machine" is a system that has no purpose of its own. It has a function, which is to serve the purposes of something external to it. Machines are therefore either state-maintaining or goal-seeking systems.

Automation — or not!

Now we have all the required conceptual foundations to try to answer our question once again. What can be automated?

Automation means replacing a human activity with a machine, and a machine is at best a goal-seeking system, which means it can improve efficiency but not efficacy.

Suppose you are sure you are doing the right thing. In that case, a machine can help you do it more efficiently, as it does not have some of the limitations of humans (it does not get tired easily, it does not need to go to the restroom as often, and so on). But if you are doing the wrong thing, it will enable you to do that more efficiently too, and drive you off the cliff more rapidly. As Russell Ackoff put it: "The righter you do the wrong thing, the wronger you become."

Now, going back to DevSecOps, one thing is clear: the efforts that are about "doing the right things" (efficacy) cannot be automated. Purposeful systems with free will (human beings!) are needed to decide what the right thing to do is. The more you invest in your automated pipelines (in "efficiency"), the more you must also invest in "efficacy", in making sure you are "doing the right thing", from business and solution decisions down to technical designs.

You should also remember that "the right thing" for a purposeful system is constantly changing, due to constant change in the environment, in the system's internal state, and in its interactions with other systems (purposeful or not). The machine (the automated pipeline) must therefore be flexible enough to cater for these constant changes in the external goal to which it must adapt.
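One common way to get that flexibility is to keep the externally defined goal (the policy) as data, separate from the pipeline's mechanism, so humans can change "the right thing" without rewriting the automation itself. The following is a hypothetical sketch; the policy names and thresholds are invented for illustration, not taken from any real pipeline.

```python
from typing import Dict

# The policy is decided by purposeful humans and versioned alongside the
# code; in practice it would live in a config file, not a constant.
POLICY: Dict[str, int] = {
    "min_test_coverage_pct": 80,
    "max_critical_vulns": 0,
}

def gate(metrics: Dict[str, int], policy: Dict[str, int]) -> bool:
    """The pipeline only checks measured metrics against the current
    policy; it never decides what the policy should be."""
    return (
        metrics["test_coverage_pct"] >= policy["min_test_coverage_pct"]
        and metrics["critical_vulns"] <= policy["max_critical_vulns"]
    )

print(gate({"test_coverage_pct": 85, "critical_vulns": 0}, POLICY))  # True
```

When the environment shifts and the goal changes, only the policy data is edited; the gate mechanism stays untouched.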

There is another angle to the story of automation, and that is automated testing for assurance (e.g. security, resilience, etc.).

As mentioned previously, the defining (emergent) properties of a system, such as security and resilience, cannot be measured directly; these are sometimes called the type II properties of the system, and because they cannot be measured directly, they cannot be tested directly either. We can only test for the existence of some of their manifestations. Automating these tests obviously increases the efficiency of checking for these indicators or manifestations, and can give experts invaluable insights, but relying too heavily on the results of these tests has two major drawbacks:

  1. These manifestations can be faked. All systems that have these properties exhibit these manifestations, but not all systems that exhibit these manifestations necessarily have these properties.
  2. These manifestations are very context- (environment-) sensitive. Something that was a good indicator of a specific type II property may no longer be a correct sign once the environment in which the system operates gradually changes.

As an example of the first point: a secure system in a specific domain will pass all the regulatory security requirements, but not all systems that pass those requirements are necessarily secure.

As an example of the second point: a specific key length for a specific cryptographic algorithm might be tested as one of the criteria indicating the security of a system. But key length is very context-sensitive. A key length that was once considered secure can no longer be treated as a correct metric as processing power increases, or as flaws in the algorithm are identified. So what we test against must be constantly updated to reflect the constant changes in the environment.
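A minimal sketch makes the point: the check itself (the mechanism) is trivial to automate; what must keep changing is the policy it checks against. The year thresholds below are rough, illustrative assumptions in the spirit of published key-length guidance, not current security advice.

```python
# Assumed, illustrative policy: minimum acceptable RSA key length by the
# year the policy took effect. Real values should come from current
# guidance (e.g. NIST recommendations), not be hard-coded like this.
MIN_RSA_BITS_BY_YEAR = {
    2000: 1024,   # once widely considered adequate
    2015: 2048,   # 1024-bit keys no longer acceptable
    2030: 3072,   # expected to rise as computing power grows
}

def rsa_key_acceptable(key_bits: int, year: int) -> bool:
    """A key length that passed yesterday's test may fail today's:
    apply the most recent threshold in force for the given year."""
    threshold = max(v for y, v in MIN_RSA_BITS_BY_YEAR.items() if y <= year)
    return key_bits >= threshold

print(rsa_key_acceptable(1024, 2000))  # True
print(rsa_key_acceptable(1024, 2020))  # False: the environment changed
```

The same automated test, run against a stale policy table, would happily certify an insecure system, which is exactly the second drawback above.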

Conclusion

In conclusion, we must embrace the new thinking tools we have been given and leave mechanistic thinking behind. Not everything can be automated, and automation is not the devil's work. We must automate to increase efficiency, and invest in skilled, expert people to increase efficacy.

Machines were not invented to do our jobs. They were invented to free us from mundane, inhuman tasks so that we can focus on what we are the only known species capable of doing: thinking, innovating, and creating purpose and meaning.
