
What does an application need in order to do DevOps?

Is it possible to implement DevOps concepts in any application? Or only in those built with modern technologies, such as public-cloud web systems or mobile applications? And what about legacy systems?

To answer these questions, let's review the basic engineering practices for the application lifecycle: development, delivery and support. We will also talk about the importance of portability in building resilient, portable and “cloud-ready” services. Finally, we will compare some scenarios and their degrees of complexity, suggesting strategies for successful implementation.

Code Versioning

Code versioning is more than keeping the code in a centralized repository with version history. It means allowing one or more teams to collaborate on the same code base without one person's work conflicting with another's, in such a way that features can be discarded, reverted, delivered in a different order than planned, or even put temporarily on stand-by, all of this efficiently and without technical complications.

For that, the code base needs to be at least complete, that is, to have at least one stable branch from which the team can develop, and portable: able to be downloaded, modified and executed on a compatible workstation without much friction, such as a difficult initial workstation setup or the need to modify the code to make it work.

Code versioning goes far beyond that, but here we will focus on these two aspects: portability and code integrity.

Every system has software, and sometimes hardware, prerequisites before it can be developed. Nowadays development is increasingly multi-platform: the same code can be developed on Linux, macOS or Windows, for example, while older technologies only work in environments with specific characteristics. Most systems require some type of SDK (Software Development Kit), for example the JDK (Java), Node or the .Net Framework. Some need libraries and drivers installed on the machine, for example an Oracle or SQL Server driver, or shared services (COM+, GAC, etc.). Others are able to fetch and install the application's dependencies in the project directory itself, without prior configuration on the machine (npm install, nuget restore). In any case, some kind of setup will have to be done in the work environment before a person can start developing.

To make this setup simpler, you can create a script responsible for checking the compatibility of the environment (operating system, minimum hardware requirements) and for downloading, installing and configuring software dependencies, if they are not already available. This can be done with any scripting language (bash, Python), or with some utility developed for this purpose, for example psake, in PowerShell, or Cake, in C#.
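
As an illustration, here is a minimal sketch of such a check script in Python. The entries in REQUIRED_TOOLS are hypothetical placeholders for whatever the project actually depends on:

```python
import platform
import shutil
import sys

# Hypothetical list of command-line tools this project needs on the PATH.
REQUIRED_TOOLS = ["git", "node"]

def check_environment(required_tools=REQUIRED_TOOLS, min_python=(3, 8)):
    """Return a list of problems found; an empty list means the environment is ready."""
    problems = []
    if sys.version_info < min_python:
        problems.append(f"Python {min_python[0]}.{min_python[1]}+ required")
    for tool in required_tools:
        # shutil.which returns None when the executable is not on the PATH.
        if shutil.which(tool) is None:
            problems.append(f"missing tool: {tool}")
    return problems

if __name__ == "__main__":
    issues = check_environment()
    if issues:
        print(f"Setup incomplete on {platform.system()}:")
        for issue in issues:
            print(f"  - {issue}")
        sys.exit(1)
    print("Environment OK - ready to develop.")
```

A real script would go on to install the missing dependencies; this sketch only reports them, which is already enough to save a new team member from guessing why the build fails.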

Now consider that the developer already has all the environment prerequisites properly configured and downloaded the latest version of the code, but ran into compilation or execution problems. The cause was probably one of the following:

  • There were dependency references that worked on the machine of the last person who developed, but that point to different locations on the current machine. For example, a reference to a directory on the “D:” partition on the other person's machine that sits on the “C:” partition on the current one. To avoid this type of problem, it is important that all dependencies are obtained from a centralized repository built for this purpose, such as a NuGet server or Nexus.
  • There were user-specific settings that do not apply to the current machine, such as local preferences, IDE (Integrated Development Environment) options or execution options that will not necessarily be compatible with all environments. Version control systems usually have a control file defining what should and shouldn't be versioned, such as .gitignore (Git) or .tfignore (TFVC). By making good use of these files it is possible to avoid this problem.
  • The code had a bug. It is important that each person validates their code, ensuring that it at least compiles, before submitting changes to the server. At the same time, it is important that changes are sent frequently to the server even if the functionality is not complete, because something can happen to the workstation, or the person may want to continue development from another machine. Creating a branch for each person working on the code allows them to publish their code frequently without compromising the integrity of the main branch; once it is more stable, those changes are merged into the main branch. Still, this process of merging what the team is developing into the main branch needs to be validated. Hence the build automation process.

Build automation

Build automation allows you to guarantee code integrity automatically. It is a service that performs validation procedures, which can run on a scheduled, periodic basis (every night, for example) or continuously, whenever there is a change in the code base, in which case it is also called Continuous Integration (CI). Normally, the validation procedures follow a flow similar to the one described below:

  1. Obtain the source code in a "clean" environment;
  2. Configure the software prerequisites;
  3. Download the application's dependencies;
  4. Compile the code;
  5. Perform automated tests (unit, integration, acceptance, etc.);
  6. Perform the static analysis of the code (check if the code has the quality standards defined by the team);
  7. Package the output of the compilation (usually binary and static files generated from the compilation);
  8. Tag the package so that it can be traced back to the integration process and to the version of the code;
  9. Apply a checksum or other way of ensuring that the package has not been tampered with or corrupted;
  10. Store the package in an appropriate artifact repository.
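
To make steps 7 to 10 concrete, here is an illustrative sketch in Python. It is not a replacement for a CI platform's own packaging tasks, and the manifest layout is an assumption:

```python
import hashlib
import json
import pathlib
import zipfile

def package_build(output_dir: pathlib.Path, build_number: str, commit: str,
                  artifact_dir: pathlib.Path) -> pathlib.Path:
    """Package the build output, tag it, checksum it and store it (steps 7-10)."""
    artifact_dir.mkdir(parents=True, exist_ok=True)  # step 10: the artifact repository
    package = artifact_dir / f"app-{build_number}.zip"
    with zipfile.ZipFile(package, "w") as zf:
        # Step 7: package the compilation output (binaries, static files).
        for f in sorted(output_dir.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(output_dir))
        # Step 8: embed a manifest so the package can be traced back to the
        # integration process and the exact code version that produced it.
        zf.writestr("manifest.json", json.dumps({"build": build_number, "commit": commit}))
    # Step 9: publish a SHA-256 checksum so consumers can verify the package
    # has not been tampered with or corrupted.
    checksum_file = package.parent / (package.name + ".sha256")
    checksum_file.write_text(hashlib.sha256(package.read_bytes()).hexdigest())
    return package
```

The deploy side can recompute the hash and compare it against the stored `.sha256` file before installing anything.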

There are several platforms for build automation, for example Azure Pipelines, Jenkins and Bamboo. The procedure is configured on the platform, and several agent services may be connected to it to execute the procedure. These agents are installed on servers whose environment is suited to the job, that is, servers that meet the hardware and software prerequisites for compiling the solution (operating system, SDKs, etc.).

For this reason, it is important that the recommendations from the previous topic are followed: setting up a successful build automation process is much simpler if the code base is portable, free of environment-specific references and unnecessary files, and, if it depends on a more complex setup, has a script that performs it automatically.

With a well-configured build automation procedure, we can go further and set up a procedure to automate delivery (deploy): preferably, for each new package in the artifact repository, perform the delivery to a specific environment. This procedure is called Continuous Delivery.

Deployment automation

It is a delivery automation procedure configured on a platform (Azure Pipelines, Jenkins, Bamboo) connected to the artifact repository. It is called Continuous Delivery (CD) when, for each new package in the artifact repository, it performs the delivery to a given environment.

This procedure also has services that respond to it, responsible either for obtaining the package and sending it to the target environments, or for running in the target environment itself and fetching the package. Normally the flow consists of:

  1. Get the package;
  2. Install / Publish at destination;
  3. Validate the installation;
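
The three steps above can be sketched in Python. This is an illustrative outline that assumes the build published a SHA-256 checksum alongside the package, and it represents "install" as a simple copy:

```python
import hashlib
import pathlib
import shutil
from typing import Callable, Optional

def deploy(package: pathlib.Path, expected_sha256: str, target: pathlib.Path,
           validate: Optional[Callable[[], bool]] = None) -> None:
    """Minimal deploy flow: get the package, install it, validate the installation."""
    # 1. Get the package, confirming it is the exact artifact the build produced.
    actual = hashlib.sha256(package.read_bytes()).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError("package corrupted or tampered with; aborting deploy")
    # 2. Install / publish at the destination (here, simply copying the package).
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(package, target / package.name)
    # 3. Validate the installation, e.g. by calling the application's healthcheck.
    if validate is not None and not validate():
        raise RuntimeError("installation validation failed")
```

Because the checksum is verified before installation, a package that was modified on the way between environments is rejected instead of silently deployed.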

Normally, the flow contemplates installation in several validation environments until reaching the final environment (production). It is common for the deploy to happen continuously in the development environment (DEV), then upon approval to the homologation environment (HML), and then upon approval to the production environment (PRD), as shown in the figure below:

Flow example connecting versioning, integration and continuous delivery

Note that the same package (#3) is promoted to each of the environments. A new package is not generated for the homologation environment and another for production.

For this to happen, the package must be able to obtain its settings from the environment without having to be modified. One of the simplest ways is to prepare the application to read its settings from environment variables, that is, values pre-configured in the execution environment, such as the connections to databases or services the application needs to run. The address of the development database, for example, is likely to be different from the homologation and production addresses. If the value is hard-coded, and consequently baked into the package, the package must be modified to work in different environments, and this makes it lose its integrity: once it is modified, there is no guarantee that the features validated and approved in development will keep working in homologation, for example.

An alternative is to keep the settings of each environment (DEV, HML and PRD) versioned along with the code, but (1) this does not help with confidential data, which must not be versioned under any circumstances, and (2) whenever a configuration changes, the code must be updated, a new build triggered, a new package generated and a new deploy performed just to update that configuration. If the data rarely changed this would not be such a problem, but addresses, users and settings in general do change. Configuring applications so that the package is portable means coding in such a way that these settings can be obtained from the environment, from a service or from an appropriate configuration repository.
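
A minimal sketch of this idea in Python (the variable names DATABASE_URL and LOG_LEVEL are hypothetical): reading settings from the environment keeps the package identical across DEV, HML and PRD.

```python
import os

def load_settings(env=os.environ):
    """Read settings from the execution environment, so the same package
    runs unchanged in DEV, HML and PRD; only the environment differs."""
    return {
        # Required setting: fail fast at startup if it is missing.
        "database_url": env["DATABASE_URL"],
        # Optional setting with a sensible default.
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

Failing fast on a missing required variable is deliberate: a package that starts with a half-configured environment is harder to diagnose than one that refuses to start at all.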

It is important that the application provides some way of indicating that the installation was successful. This procedure is often called a healthcheck. For an API, imagine a route (e.g. /ready) indicating that the application is live and fulfilling all the prerequisites for its operation (connectivity and permissions to the services it depends on, for example). This greatly helps the validation process. When a package works in DEV but not in HML, you need to quickly identify whether the failure is a development bug, an incorrect configuration, or a service the application depends on that is down or lacking permissions. With this type of verification in the application itself, it is possible to rule out some kinds of infrastructure problems, contributing to a faster diagnosis. This type of verification can be integrated into the deploy automation flow.
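
As a sketch of the idea in Python, a /ready handler could aggregate one check per dependency (the dependency names here are hypothetical) and report ready only when all of them pass:

```python
def readiness(checks):
    """Build a /ready-style response from named dependency checks.

    `checks` maps a dependency name (e.g. "database") to a zero-argument
    callable returning True when that dependency is reachable and authorized.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            # A failing dependency must not crash the healthcheck itself.
            results[name] = False
    status = "ready" if all(results.values()) else "unavailable"
    return {"status": status, "checks": results}
```

An HTTP route would then return 200 when the status is "ready" and 503 otherwise, letting the deploy automation, and operators, distinguish an application bug from a missing dependency at a glance.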


With the recommendations of the previous topics, we were able to guarantee versioning, integrity and delivery. But what do we need to keep in mind for the application to be sustainable?


Efficient scaling

The objective of most software solutions is to grow, that is, to serve a greater number of users.

In the case of web applications, supporting a greater number of requests requires scaling the application. When it is not portable, that is, not easily replicable across several load-balanced instances, scaling is usually vertical. This means increasing hardware resources, such as memory and CPU, on the server that hosts the application.

Vertical scaling is very common for legacy applications, which usually have a very complex environment configuration, with shared dependencies, specific prerequisites and other complications, making the configuration of a new environment a real challenge. Sometimes the application design does not even allow more than one instance to run on the same server, so scaling necessarily implies either increasing the capacity of the current server or configuring a new environment.

Running more instances of the same application, either side by side or in isolated environments, is horizontal scaling. It is a more interesting approach because it allows elasticity: more instances in periods of high demand, fewer instances when demand is low. On the large public cloud platforms this can be configured automatically, with pay-per-use billing, so only the resources actually used are charged. Vertical scaling, on the other hand, tends to be permanent: the infrastructure resources you purchase are charged whether you use them or not, and often cannot be used efficiently.

To make horizontal scaling viable, some premises must hold. The application must be prepared for the load to be distributed: if it uses session data, that data needs to be managed properly, because a transaction started on instance A may not be able to complete on instance B if the data is tied to instance A, for example. The application should preferably not use dependencies shared with other applications in the same environment, to avoid conflicts. Some level of isolation must be guaranteed even if the instances run on the same server.

Several current technologies make it easy to configure self-hosted applications. In these scenarios, the web server is a dependency of the application itself, which runs as a process, instead of the web server containing the application. Running the application as a process makes it easier to manage multiple instances side by side.

Another technology that favors horizontal scaling and elasticity is containers. They work like lighter-weight virtual machines, and promote dependency isolation, execution as a process and load distribution by design.

In short, one aspect of efficient application support is scaling. Vertical scaling tends to be permanent, and is an alternative when configuring new instances of the application is a challenge. Horizontal scaling works well when the application is portable, that is, when adding and removing instances and distributing load are easily configurable. Portability enables elasticity: increasing or decreasing resources and instances according to demand, allowing a more efficient use of resources and even billing according to use.

Efficient diagnostics

Another very important aspect of supporting applications is logging. Applications need to provide quality logs throughout their life cycle, so that the logs can be collected, centralized and made available to the team on the most appropriate platform. Often, when the development and infrastructure areas are separated, developers do not have direct access to production, but they should at least have access to the logs.

When the application does not provide logs of its own execution, or the monitoring platforms are rudimentary (logs saved to disk in the execution environment, for example), diagnosis is compromised. An operator with privileges in the execution environment will probably need to log in remotely, obtain the logs, access the terminal, observe server events and pass this information on to the development team. This takes time, and in the case of a critical bug in production, time can mean financial loss.
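
One common way to avoid disk-bound logs is to write them to standard output as a stream and let the platform collect and centralize them. A minimal Python sketch:

```python
import logging
import sys

def configure_logging(level=logging.INFO):
    """Send logs to stdout as a stream, so the execution platform, not the
    application, decides where they are collected and centralized."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))
    root = logging.getLogger()
    root.handlers[:] = [handler]  # replace any file-based handlers
    root.setLevel(level)
    return root
```

With logs flowing to stdout, a container runtime or log agent can ship them to a central platform without the developers ever needing remote access to the server.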

Any task related to administering the running application should be contained within the application itself. The healthcheck (mentioned in the Deployment automation topic), or other routines, such as clearing caches and restarting services, should be accessible without an operator having to connect remotely to the server on which the application is running. This speeds up maintenance, so that application administrators do not necessarily need to be server administrators, reducing one of the barriers between development and operations.

Why these practices are important

So far, we have made a series of recommendations on code versioning, build, deploy and application support. But why are they so important?

Basically, most application design failures are compensated for with infrastructure resources and/or operational effort. Some examples:

  • When a code base depends on a complex environment setup in order to compile, automating the build is difficult. Without build automation, packaging the solution for delivery is done manually. It may be done on the developer's machine, which is already prepared with the prerequisites, and made available in a network directory. This procedure can hardly guarantee the integrity of the package or its traceability to the code version.
  • When the delivery procedure is not automated, an operator is responsible for copying the package and installing it in the target environment. Every manual procedure is subject to human error, and is usually slower than an automated one;
  • When the application is not portable, scaling is usually vertical, which is not always cost-effective;
  • If the application does not provide quality logs, healthchecks and other administrative tasks accessible remotely, it is necessary to access the environment in which the application is running to perform a diagnosis. The development team does not always have direct access to it.

All of this directly implies time, cost or both. Everything that is not within the direct reach of the development team needs to be requested from the operations team. This may involve opening a ticket, subject to a service queue, and sometimes it can only be solved by acquiring more infrastructure resources. If the company runs an on-premises (local) infrastructure, purchasing new servers and platforms can take months.

The problem is that these concerns are often not prioritized. Legacy applications are kept as-is because “they will one day be replaced”. Technical debts in existing applications are not prioritized because they “do not add direct value to the user”. New applications do not meet these requirements because they have an “emerging architecture”.

All of these arguments are fallacies. Legacy applications are typically critical, vital applications in the company's ecosystem of solutions, and often take years to migrate, that is, IF they are ever migrated. Some technical debts hinder the maintenance and support of the product, and the user perceives this as delays in bug fixes and as unavailability. And an “emerging architecture” is no excuse to stop worrying about portability, resilience and monitoring in the first versions. What managers and product owners do not prioritize comes back as cost, delays in service, friction between the development and operations areas, and impact on the end user.

12 Factor Apps

One way to guide the team to develop portable and sustainable applications from the first versions is to study the 12 Factor Apps, a methodology for building resilient, portable and “cloud-ready” services. It was created by the Heroku platform team, based on lessons learned supporting their platform, and consists of a set of guidelines to ensure that services are minimally portable, elastic, resilient and sustainable, regardless of technology.

The 12-factor methodology is widely used to put DevOps concepts into practice. With it, it is possible to comply with all the recommendations mentioned so far, and more. In addition, most modern technologies, such as .NET Core and containers, encourage compliance with the factors by design.

Implementation strategies

For new applications to be designed in a portable way, it is enough that the developers are aware of these concepts and pay attention to the 12 factors from the beginning of development.
When applications are already in production, this technical debt must be treated by managers or product owners as a priority, because it also translates into costs: if not in development, then in operations and infrastructure resources. If it is prioritized, the applications can be adapted with little effort, not least because modern technologies already favor compliance with the 12 factors.

Legacy applications not only have configurations scattered throughout the code, but also shared dependencies. As they tend to be “fragile” solutions and, at the same time, vital to the company, it is recommended to start with specific actions that enable automated build and deploy with as little impact on the code as possible. These actions consist of making the code base include:

  • Centralized configurations, mapped by a file that obtains the values of the environment in which it is running.
  • Dependencies well mapped with their versions. Even if they use shared dependencies on the same server, if there is an appropriate versioning, there will be no conflict.
  • Scripts that configure the software prerequisites, so that it can be used both by developers and by the processes of build

Locally installed applications and closed platforms, such as mainframes, can each be challenging in their own right.
By platforms, imagine cases where the platform code is not open for versioning as a whole, but specific areas can be changed, for example SAP, SharePoint, Dynamics CRM, or even mainframes. In such cases, any change that can be made via a script should have that script versioned in a parameterized way, that is, so that it can be executed against different instances of the platform. Automated deploy may be feasible in some scenarios, when the platform allows integration; in others it is not.

Locally installed applications may have proper versioning and configuration management, but their distribution is rarely handled by a centralized automated deploy; the flow is reversed: instead of being deployed centrally, the application is responsible for updating itself. It periodically checks whether updates are available on the server, requests the user's permission, then downloads and installs the update. Monitoring happens by collecting data in the execution environments and periodically sending it to the management system, if the user allows it.

Final considerations

  • Just as companies can implement one or more DevOps disciplines depending on the organization's constraints, applications can adopt one or more disciplines as well.
  • The key for versioning, build and deploy to work is portability and proper configuration management.
  • Other factors, such as logs via stream and administrative tasks embedded in the application itself, make application support easier.
  • The 12-factor methodology is a good starting guide for applications to achieve a minimum of portability, efficient operational management and resilience.
  • Modern applications can easily fulfill the 12 factors, because current technologies, such as containers and PaaS, favor this.

Recommended content


PowerShell build automation language.


Build automation language in C#.

12 Factor Apps

This is the official website of the 12 Factor Apps methodology. There you can see how the Heroku team solved all the problems presented here.

12 Factor Apps & .Net Core

In this article, Ben Morris shows how to build services that fulfill the 12 factors using .NET Core resources.

And if you want to check out the “video version” of this content, click on the link below!


Published by

Grazi Bonizi

I lead the .Net Architecture track at The Developers Conference, share code on GitHub, write on Lambda3 Medium and Blog, and participate in Meetups and PodCasts typically on DevOps, Azure, .Net, Docker / Kubernetes, and DDD
