

Network Continuous Deployment

By Ethan Mick • June 22, 2017

As a founding engineer of Virtyx, I have watched us pursue, from day one, software integration and development paradigms such as continuous integration (CI) and Extreme Programming (XP), in which unit and system testing are automated and a primary part of the development pipeline. This test-driven approach has spawned a plethora of products and tools that integrate and automate the strategy, such as Jenkins, which gives us the automated build, test, and deploy cycle we rely on at Virtyx.

The real power of these continuous integration cycles comes from automating unit and integration tests, which depends on the availability of software environments that are substantially similar to the actual deployment environments. Server virtualization, software orchestration, and deployment tools such as Salt, Puppet, Chef, Ansible, Docker, and Vagrant make this possible for anyone moving to a cloud development model. Using the same tools in both test and production lets the automation built for testing be reused directly in production.

However, as an environment scales and becomes more geographically diverse, it becomes significantly more complex and expensive to provide a development and test environment which is substantially similar to the production environment.

Most of the time the biggest divergence is in fact geographic, typically in the bandwidth and delay to remote branch offices.

There is also a significant effort to manage the network infrastructure that supports this distributed environment with the same tools used to create and manage the software-defined datacenter. This means network changes should be driven by the same test-driven approach applied to software changes.

Herein lies the problem: how do you create network unit and system tests that measure the effects of network changes as seen from the end user's perspective?

Testing from a single centralized location, such as an SNMP manager, only exercises the network in one direction, from the manager to the SNMP devices, and with a single protocol, SNMP. The same kind of testing is often done with ICMP ping. Neither method captures the complexity of modern web application delivery environments, which may place middleboxes such as NATs, firewalls, load balancers, and proxies in different paths, and those devices are not well exercised by ICMP or SNMP.

These delivery paths need to be tested with real HTTP and HTTPS traffic coming from all over the network.

So how would the link between network changes, deployment, and network unit testing work? Each web application would need a specifically crafted URL for testing its availability. Luckily, this already exists for both application unit testing and for health checking by things like elastic load balancers. Those same health check URLs can be reused, and the reuse assures that developers and operators (DevOps) have a common language for saying exactly what is not working correctly.
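
To make that concrete, here is a minimal sketch of what reusing a health check URL as a network test might look like; the endpoint URL and the expectation of an HTTP 200 are assumptions for illustration, not a description of the Virtyx agent.

    import time
    import urllib.request

    HEALTH_URL = "https://app.example.com/healthz"  # hypothetical health check endpoint

    def check_health(url: str, timeout: float = 5.0) -> dict:
        """Fetch a health URL once, recording success and latency much like a unit test would."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = resp.read()
                return {
                    "ok": resp.status == 200,
                    "latency_ms": (time.monotonic() - start) * 1000,
                    "body": body,
                }
        except OSError as exc:  # URLError, timeouts, and connection resets all land here
            return {"ok": False, "latency_ms": None, "error": str(exc)}

    if __name__ == "__main__":
        print(check_health(HEALTH_URL))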

The next step in network deployment testing is to distribute the validation of these health URLs across the network. Because of middleboxes and security segmentation, the behavior of a URL changes based on where it is accessed from, and while the network is changing this effect can be even larger. So the URLs must be tested from locations across the network. With network security segmentation, two different ports on the same LAN switch may even have different access to a specific URL!
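
Building on the probe above, distributed checking can be pictured as running the same check from an agent at each vantage point and tagging the result with its location. The location names below are made up, and in a real deployment each agent would execute the probe from its own site rather than one machine fanning out threads.

    from concurrent.futures import ThreadPoolExecutor

    LOCATIONS = ["hq-lan", "branch-nyc", "branch-sfo", "external-isp"]  # assumed vantage points

    def check_from(location: str, url: str) -> dict:
        # A real deployment dispatches this to the agent running at `location`; here the
        # probe (check_health from the sketch above) runs locally so the per-site result
        # shape is clear.
        return {"location": location, **check_health(url)}

    def check_everywhere(url: str) -> list[dict]:
        # Fan the same health check out to every vantage point and collect the results.
        with ThreadPoolExecutor(max_workers=len(LOCATIONS)) as pool:
            return list(pool.map(lambda loc: check_from(loc, url), LOCATIONS))

The interesting signal is disagreement between locations: a URL that passes from headquarters but fails from a branch office is exactly the kind of regression a centralized SNMP or ping check would miss.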

The last part of the continuous network testing puzzle is a way to distinguish a successful test from a failed one. The challenge is that most networks are constantly changing due to conditions such as failed carrier links, time-of-day congestion, or other factors that have nothing to do with configuration changes. So the pass/fail criterion needs to be a little more subtle: compare measurements before and after the network change against a baseline, tolerating normal variation in delay and jitter while flagging real regressions, though even this is often more than many environments can automate today.
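
One way to express that fuzzier pass/fail criterion is a baseline comparison along these lines; the tolerance factors are arbitrary placeholders rather than recommended values.

    import statistics

    def compare_to_baseline(samples_ms: list[float], baseline_ms: list[float],
                            delay_tolerance: float = 1.25, jitter_tolerance: float = 1.5) -> dict:
        """Fuzzy check: delay and jitter may drift a little, but not beyond tolerance."""
        delay, base_delay = statistics.mean(samples_ms), statistics.mean(baseline_ms)
        jitter, base_jitter = statistics.pstdev(samples_ms), statistics.pstdev(baseline_ms)
        return {
            "delay_ok": delay <= base_delay * delay_tolerance,
            "jitter_ok": base_jitter == 0 or jitter <= base_jitter * jitter_tolerance,
            "delay_ms": delay,
            "jitter_ms": jitter,
        }

    # Example: post-change latency samples compared against the pre-change baseline.
    print(compare_to_baseline([42.0, 44.5, 41.2], [40.1, 41.0, 39.8]))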

So how do we implement network testing tied to software deployment testing?

  1. The same URLs for software unit tests, ELB health checks, and network tests.
  2. Distributed URL health checking across the internal and external networks.
  3. Fuzzy comparison of URL health check responses against a baseline for delay and jitter; the content of a response must stay the same for a given location, though it may vary between locations.
  4. Continuous testing, with test status communicated to all stakeholders (help desk, DevOps) as a common language for what is working and what is not (a minimal sketch of this loop follows the list).
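
Tying the sketches above together, the continuous loop might re-run the distributed checks on a schedule, compare each location against its own baseline, and publish the result. The reporting function is a stand-in for whatever channel the help desk and DevOps teams already watch, and the code reuses check_everywhere and compare_to_baseline from the earlier sketches.

    import time

    CHECK_INTERVAL_S = 60  # arbitrary schedule, not a recommended value

    def report(result: dict) -> None:
        # Placeholder: publish to a dashboard, chat, or ticketing channel stakeholders already use.
        status = "PASS" if result.get("ok") and result.get("delay_ok", True) else "FAIL"
        print(f"[{result['location']}] {status} {result}")

    def run_forever(url: str, baselines: dict[str, list[float]]) -> None:
        # Continuously re-run the distributed checks and compare each site to its own baseline.
        while True:
            for result in check_everywhere(url):
                if result.get("latency_ms") and result["location"] in baselines:
                    result.update(compare_to_baseline([result["latency_ms"]],
                                                      baselines[result["location"]]))
                report(result)
            time.sleep(CHECK_INTERVAL_S)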

By extending the DevOps practice of unit and integration testing to network-scale deployment, we get the benefits of flexible change and continuous updates within a manageable risk model. The legacy practice of fixed change windows, with outages accepted during those windows, does not match, or more accurately cannot keep up with, the requirements for continuous availability and continuous deployment. By adopting continuous testing in the network, the application development lifecycle and the lifecycle of the network that supports it stay in step. That alignment assures the network can provide the same "speed to market" and availability that today's business environment demands.

