Don’t Have a DevOps Team? This Is Why You’re Wrong

Automating Software Delivery

We are now entering the realm of continuous integration (CI) and continuous delivery/deployment (CD) at the heart of DevOps. I will speak about them both separately below.

Continuous Integration

Technically speaking, CI is not part of DevOps but a technique that is part of agile software development (although DevOps engineers can contribute, for example, by automating the running of static analysis or unit tests as part of a CI pipeline). CI essentially means that developers quickly and often commit their changes to the main code branch.

In the past, developers often spent weeks or months working separately on different features. When the time came to release the software, they would need to merge all their changes. Usually, the differences would be substantial and lead to the dreaded “big bang merge,” where teams of developers would sometimes spend days trying to make each other’s code work together.

The main advantage of CI is that it prevents individual pieces of work from diverging too much and becoming difficult to merge. If the CI pipeline includes unit tests, static analysis, and other such checks, it provides quick feedback to developers, letting them fix issues before those issues cause further damage or block other developers.
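As a toy illustration of this fail-fast behavior, a CI pipeline runner can be sketched in a few lines of Python. The stage names and commands below are placeholders, not real tooling:

```python
import subprocess

# Placeholder stages: in a real pipeline these would invoke a linter,
# a unit-test runner, etc. Here we use trivial commands that succeed.
CHECKS = {
    "static-analysis": ["echo", "lint ok"],
    "unit-tests": ["echo", "tests ok"],
}

def run_pipeline(checks):
    """Run each check in order, stopping at the first failure so that
    developers get feedback as quickly as possible."""
    results = {}
    for name, cmd in checks.items():
        ok = subprocess.run(cmd, capture_output=True).returncode == 0
        results[name] = ok
        if not ok:
            break  # fail fast: skip the remaining stages
    return results
```

A real CI server (Jenkins, GitHub Actions, GitLab CI, and so on) works on the same principle, triggering such a sequence of checks on every commit to the main branch.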

Continuous Delivery/Deployment

CD can be considered part of DevOps and builds on CI. A CD pipeline automates software delivery by automatically building software whenever changes are committed to a code repository and making the artifacts available in the form of a software release. When the pipeline stops at this stage, we call it “Continuous Delivery.” Additionally, a CD pipeline can automatically deploy artifacts, in which case it is called “Continuous Deployment.”

In the past, building and deploying software were typically manual processes, tasks that were time-consuming and prone to errors.

The main advantage of CD is that it automatically builds deliverables using a sanitized (and thus entirely controlled) environment, thus freeing up valuable time for engineers to work on more productive endeavors. Of course, the ability to automatically deploy software is attractive too, but this may be one step outside the comfort zone for some engineers and managers. CD pipelines can also include high-level tests, such as integration tests, functional and non-functional tests, etc.
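To make the delivery/deployment distinction concrete, here is a minimal Python sketch. The stage functions are deliberately simplified stand-ins for real build, publish, and deployment steps:

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    version: str
    deployed: bool = False

def build(version):
    # Stand-in for compiling and packaging the committed code
    return Artifact(version)

def publish(artifact, registry):
    # Continuous Delivery stops here: the release is available on demand
    registry[artifact.version] = artifact
    return artifact

def deploy(artifact):
    # Continuous Deployment adds this final automated step
    artifact.deployed = True
    return artifact

def run_cd(version, registry, continuous_deployment=False):
    artifact = publish(build(version), registry)
    if continuous_deployment:
        deploy(artifact)
    return artifact
```

With `continuous_deployment=False` the pipeline stops once the artifact is published (Continuous Delivery); with it enabled, every successful build goes straight to deployment (Continuous Deployment).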

Automating Software Security

This sub-branch of DevOps is sometimes called DevSecOps. Its goal is to automate security best practices throughout software development and delivery. It also makes it easier to comply with security standards and to produce and retain the evidence required to prove adherence to them.

Often, in software development, security is an afterthought: something that has to be done at some point but is frequently left to the last moment, when there is no time to do it properly. Developers are under pressure to deliver within typically tight timeframes. Introducing a DevSecOps team can thus be a positive contribution: it will establish which security requirements must be met and use various tools to enforce them.

DevSecOps can operate at all levels of the software lifecycle, for example:

  • Static analysis of code
  • Automatic running of tests
  • Vulnerability scanning of the produced artifacts
  • Threat detection (and possible automated mitigation) when the software is running
  • Auditing
  • Automatically checking that specific security standards are followed
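As one small example from the list above, vulnerability scanning of dependencies boils down to checking pinned versions against an advisory database. The advisory data below is fabricated purely for illustration; a real scanner would query a live vulnerability feed:

```python
# Fabricated advisory data for illustration only (not real CVEs)
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-ADVISORY-0001",
}

def scan_dependencies(dependencies):
    """Flag any pinned dependency that matches a known advisory.
    `dependencies` maps package names to pinned version strings."""
    return {
        name: advisory
        for (name, version), advisory in KNOWN_VULNERABLE.items()
        if dependencies.get(name) == version
    }
```

Wired into a CI/CD pipeline, such a check runs on every commit, so a vulnerable dependency is flagged minutes after it is introduced rather than during a last-minute security review.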

Automating Reliability

DevOps is often tasked with ensuring that a given system is highly available, which is achieved using load balancers, service meshes, and other tools that automatically detect failed instances and take remedial action. Autoscaling is also an important aspect and is often implemented as an automated process by DevOps engineers.

The key to all of this is that the whole system must be designed so that each of its components is ephemeral. In this way, any component can instantly be replaced by a new, healthy one, rendering a self-healing system. Designing such a system is usually not the remit of developers but that of the DevOps team.
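The self-healing loop described above can be sketched as a reconciliation function: unhealthy instances are discarded, and fresh, ephemeral replacements are launched until the desired count is restored. The instance representation here is a deliberately simplified assumption:

```python
import itertools

_instance_ids = itertools.count(1)

def launch_instance():
    # Stand-in for starting a fresh, ephemeral replacement instance
    return {"id": next(_instance_ids), "healthy": True}

def reconcile(fleet, desired_count):
    """Drop unhealthy instances and launch replacements until the
    fleet again holds `desired_count` healthy members."""
    healthy = [inst for inst in fleet if inst["healthy"]]
    while len(healthy) < desired_count:
        healthy.append(launch_instance())
    return healthy
```

This is essentially what orchestrators and autoscaling groups do continuously: because instances are ephemeral, "fix the broken one" is replaced by "throw it away and launch a new one."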

Traditionally, organizations used snowflake servers running monolithic software stacks, with everything on that single server. Such a design is very fragile, with everyone living in fear of the next breakdown and engineers on duty 24/7. Admittedly, you also need engineers on duty in an automated system, just in case, but they would typically seldom be used.

Automating Reproducibility

Various tools let you automate the configuration of servers and systems and the provisioning of infrastructure elements (networks, databases, servers, containers). Examples of these are configuration management and infrastructure-as-code (IaC) tools.

Leveraging these, you can ensure that an exact mirror of a given system can be instantly instantiated at a button’s press. They also let you deploy new software versions or keep the configuration of servers or serverless services up to date.

IaC often integrates with CD. Indeed, one of the final stages of a CD pipeline can be deploying a software release in a production environment using IaC.
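The essence of IaC tools is declarative, idempotent reconciliation: you describe the desired state, and the tool computes what to create or delete. Here is a minimal sketch of that idea (real tools such as Terraform or CloudFormation perform this comparison against actual cloud APIs):

```python
def plan_and_apply(desired, current):
    """Compare the desired infrastructure description with the current
    state and return the new state plus the changes that were needed.
    Applying the same description twice is a no-op (idempotence)."""
    to_create = {k: v for k, v in desired.items() if current.get(k) != v}
    to_delete = [k for k in current if k not in desired]
    return dict(desired), to_create, to_delete
```

Running it a second time against its own output produces an empty change set, which is exactly why an IaC-managed environment can be re-created identically at the press of a button.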

When to Avoid DevOps Practices

Compared to traditional, manual software development, DevOps practices require considerable work upfront. This initial investment usually pays for itself over the long term, but if your project is short-lived, it is probably the wrong business decision.

So, in any situation where you want to achieve “good enough” software that won’t be used in production, blindly applying DevOps practices isn’t likely a great idea and will only increase your development time for little added benefit. Typical examples include:

  • Minimum viable product
  • Demonstration
  • Experiments
  • Proof of concept

In any of the above cases, moving to a production-ready product would usually require re-writing the software from scratch, in which case the DevOps practices can then be planned as part of the overall effort.


The most recurring word in the DevOps world is “automation,” as you probably noticed in this article. As a DevOps engineer, my motto is: “If you can’t reproduce it, you don’t own it.”

Compared to traditional development, DevOps usually requires more work upfront to establish the automation patterns. After this initial period, developers’ productivity is improved, and the effort needed by the operations team is significantly reduced.

Perhaps you have also noticed that I didn’t mention the cloud. This is intentional because DevOps practices apply to both cloud and on-premises environments. However, for cloud-based workloads, DevOps practices are pretty much mandatory for software teams today, because manually provisioning and managing cloud resources is cumbersome and prone to human error. Many aspects of cloud engineering are also intrinsically tied to DevOps practices.

In conclusion, it is fair to assume that unless you’re rushing to develop a minimum viable product, a DevOps team will allow you to structure your workloads more efficiently for both your developers and your operations team, and make both groups happier. Remember: “DevOps” is a philosophy that encompasses both your development and operations teams, so “just” introducing a DevOps team won’t be enough. You also need to implement the necessary cultural changes across your company to make both the philosophy and your cloud environment work.

