When thinking about improving the quality of a development process, the mind naturally heads in certain directions: requirements gathering and tracking, design, coding practices, and testing and quality assurance. All of these are vital components, but without a solid release management process, they can be insufficient.
Excellent code that is poorly delivered will be perceived as poor code. Faulty release management also causes problems before go-live: time spent correcting release issues eats into time scheduled for testing.
I won’t attempt to create the definitive work on release management and environments, but I will outline the system I helped create and have used over the last eleven years. It’s appropriate for development groups creating in-house applications, both internal and customer-facing. It works equally well with traditional desktop applications, smart clients, and web applications. It does not, however, encompass performance testing, which is outside the scope of this post. Performance testing will require its own dedicated environment that mirrors the production environment.
First and foremost, understand that beyond the development environment, administrative access to both production and non-production environments should be restricted. Even if you don’t have a dedicated release management team, at least two people should fill the release management role (one primary and a backup). Those performing the release management function should not be involved in coding.
Access restrictions should not be viewed as a matter of trust, but of accountability and control. Even as lead architect and manager of a development team, I lacked (and didn’t want) the ability to make changes to environments under the control of the release management team. Aside from making the auditors happy, not having access to make changes outside of the process insulates you from accusations that you made changes outside the process. People sometimes forget to document changes, but if they lack the ability to make a change in the first place, then that consideration can be eliminated when troubleshooting a deployment.
The purpose of forcing all changes into a controlled framework is to promote repeatability. Automated build and deployment tools help in this regard as well. Each environment that a build must be promoted through provides another chance to get the deployment process perfect before go-live. The first environment should catch almost all possible deployment errors, with only configuration and/or data errors left for the succeeding environments.
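As a rough illustration of what forcing all changes through a controlled framework buys you, here is a minimal sketch of a deployment driver. The environment names, hosts, and step contents are placeholders I invented for this example; real steps would invoke your own migration and packaging tools. The point is that the same ordered steps run against every environment, so each promotion is a rehearsal of the production release.

```python
# Minimal sketch of a repeatable deployment driver. Environment names,
# hosts, and step contents are placeholders; real steps would invoke your
# migration and packaging tools instead of printing.

ENVIRONMENTS = {
    "devtest": {"web_hosts": ["devtest-web01"],  "db": "app_devtest"},
    "test":    {"web_hosts": ["test-web01"],     "db": "app_test"},
    "uat":     {"web_hosts": ["uat-web01"],      "db": "app_uat"},
    "prod":    {"web_hosts": ["web01", "web02"], "db": "app_prod"},
}

def run_step(description: str) -> None:
    # Placeholder for a real, documented release step (script, tool, etc.).
    print(f"  - {description}")

def deploy(build_id: str, env_name: str) -> None:
    """Apply the same documented steps, in order, to one environment."""
    cfg = ENVIRONMENTS[env_name]
    print(f"Deploying build {build_id} to {env_name}")
    run_step(f"apply database migrations for {build_id} to {cfg['db']}")
    for host in cfg["web_hosts"]:
        run_step(f"push packaged build {build_id} to {host}")
    run_step(f"run post-deployment smoke tests against {env_name}")

# Each promotion (DevTest, Test, UAT, then production) runs the identical
# steps, so every pre-production release rehearses the production one.
for env in ("devtest", "test", "uat", "prod"):
    deploy("1.4.0-build37", env)
```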
The next step is to construct a set of environments around your development process and the number of versions you need to support. In our case, we only deal with two versions, so we have two pre-production environment branches: current, which is the same version as production and is used for any hotfixes that may be required, and future, which hosts the release currently under development. To support our process, we have three to four (depending on the application) pre-production environments per branch, as follows (a configuration sketch appears after the list):
- Development: Used for coding, this environment typically consists of a shared database server and the virtual machines on the developers’ workstations that are used for web/application servers. As noted above, coders have unrestricted access to all components of this environment. All changes to code and database objects must originate in this environment and be promoted through the succeeding ones in order to be deployed to production.
- DevTest: This is the first controlled-access environment and is used for integration testing of code by the development staff (for all applications with more than one developer assigned, we use a “no one tests their own code” rule). In addition to giving the development team a chance to shake down the build as a whole, it verifies that the deployment instructions are complete. As noted previously, developers have no administrative access to the servers and have only read access to the database(s). This ensures that only documented changes made via the release process take place.
- Test: This environment is used for functional testing by the test staff. As with all controlled environments, developers have no administrative access to the servers and have only read access to the database(s). Since the deployment has been verified in the previous environment (with the exception of environment-specific configuration and data changes), the chance that testing will be delayed due to a bad release should be greatly minimized.
- UAT/Training: This environment is optional, based on the application and the preferences of the business owner(s). For those applications that use it, it allows for User Acceptance Testing and/or training to take place without impacting any functional testing that may still be under way.
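To make the branch/environment structure above concrete, here is an illustrative sketch of how the matrix and its access rules might be recorded. The names, access labels, and format are examples of my own, not a prescribed tool or file layout.

```python
# Illustrative sketch of the branch/environment matrix described above.
# Names and access labels are examples, not a prescribed format.

BRANCHES = {
    "current": "same version as production; used for hotfixes",
    "future":  "release currently under development",
}

ENVIRONMENTS = [
    # (name, controlled access?, developer database access)
    ("Development",  False, "full"),       # coding; developers unrestricted
    ("DevTest",      True,  "read-only"),  # integration testing by developers
    ("Test",         True,  "read-only"),  # functional testing by test staff
    ("UAT/Training", True,  "read-only"),  # optional; UAT and/or training
]

def pipeline(branch: str) -> list[str]:
    """Every change originates in Development and is promoted in order."""
    assert branch in BRANCHES, f"unknown branch: {branch}"
    return [f"{name} ({branch})" for name, _, _ in ENVIRONMENTS]

print(pipeline("future"))
# ['Development (future)', 'DevTest (future)', 'Test (future)', 'UAT/Training (future)']
```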
These environments should share the same hardware architecture as the production environment, but need not be exact clones. For example, an application that consists of two web farms (one internal, one in the DMZ) and a common database server can have its pre-production needs adequately served by a single database server and ten (fourteen if you include UAT/Training) web servers. Ideally, the database server should run a separate instance for each of the six (or eight) environments, but as long as the database name(s) are configurable, they could all be handled by a single instance if absolutely necessary (see the configuration sketch after the table). The environments would look as follows:
| Environment | Current | Future |
| --- | --- | --- |
| Development | database instance only | database instance only |
| DevTest | internal web server, external web server, database instance | internal web server, external web server, database instance |
| Test | internal web server, external web server, database instance | two internal web servers, two external web servers, database instance |
| UAT/Training | internal web server, external web server, database instance | internal web server, external web server, database instance |
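Since the single-instance option above hinges on the database name(s) being configurable, here is a hedged sketch of per-environment database configuration. The hosts, database names, and connection-string format are invented for illustration; the point is that the same build resolves its target from configuration, so one shared instance with distinct database names or one instance per environment both work without code changes.

```python
# Hypothetical per-environment database configuration. The same build reads
# its target from configuration, so a shared instance (distinct database
# names) or one instance per environment both work without code changes.

import os

DB_CONFIG = {
    # environment          host (instance)         database name
    "devtest-current": {"host": "db-preprod", "name": "app_devtest_cur"},
    "devtest-future":  {"host": "db-preprod", "name": "app_devtest_fut"},
    "test-current":    {"host": "db-preprod", "name": "app_test_cur"},
    "test-future":     {"host": "db-preprod", "name": "app_test_fut"},
    "uat-current":     {"host": "db-preprod", "name": "app_uat_cur"},
    "uat-future":      {"host": "db-preprod", "name": "app_uat_fut"},
    "production":      {"host": "db-prod",    "name": "app"},
}

def connection_string(env: str) -> str:
    """Build a connection string for the named environment (driver syntax varies)."""
    cfg = DB_CONFIG[env]
    return f"Server={cfg['host']};Database={cfg['name']}"

# The environment name comes from deployment-time configuration, never from
# code, so promoting a build never requires editing the build itself.
print(connection_string(os.environ.get("APP_ENV", "devtest-future")))
```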
If the production environment is load balanced, then that must be accounted for in at least one environment since it can lead to functional issues (losing web session state if the balancing isn’t set up properly is a classic one). My practice is to do so in the Test Future environment since the most comprehensive functional testing occurs there and it is on the branch where new functionality is introduced.
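To see why load balancing deserves its own test coverage, here is a toy sketch, with invented server names and an in-memory round-robin “balancer,” of the classic lost-session symptom: per-server session state vanishes when the next request lands on a different server.

```python
# Toy illustration (not a real load balancer): two web servers that keep
# session state in local memory, fronted by a round-robin balancer.
# The second request lands on the other server, which has never seen the
# session -- the classic "lost session" symptom mentioned above.

import itertools

class WebServer:
    def __init__(self, name: str):
        self.name = name
        self.sessions: dict[str, dict] = {}   # in-process session store

    def handle(self, session_id: str, action: str) -> str:
        if action == "login":
            self.sessions[session_id] = {"user": "alice"}
            return f"{self.name}: logged in"
        user = self.sessions.get(session_id)
        return f"{self.name}: hello {user['user']}" if user else f"{self.name}: session not found"

servers = itertools.cycle([WebServer("web01"), WebServer("web02")])

print(next(servers).handle("abc123", "login"))      # web01: logged in
print(next(servers).handle("abc123", "dashboard"))  # web02: session not found

# Sticky sessions or an out-of-process session store (database, cache) avoids
# this; a load-balanced Test environment is where such issues get caught.
```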
I would imagine that some might have choked on the 10-14 web servers. Remember, however, that absent conflicting dependencies, these environments can be shared across multiple applications, and virtualization technology can drastically reduce the number of physical boxes needed. Cloud computing (infrastructure as a service) could also be used to reduce infrastructure costs significantly.
The last step is to make the process as smooth as possible. Practice makes perfect; automation makes it more so. Releases should be boring.