
Canary Deployment with Kubernetes – The Best Practice for Safe and Efficient Software Rollouts

Canary deployment is a technique used in software development and deployment, particularly with container orchestration platforms like Kubernetes. It allows teams to test new features, updates, or changes in a controlled and gradual manner, reducing the risk of introducing bugs or instability to production environments.

In a canary deployment, a small subset of users or traffic is directed to the new version of an application, while the majority continues to use the older version. This approach is similar to using canaries in mines to detect the presence of dangerous gases; if the canary remains unharmed, it is safe to proceed further. Similarly, if the canary deployment shows successful results, the new version can be promoted to the wider audience.

Kubernetes, an open-source container orchestration platform, provides the building blocks for implementing canary deployment strategies. Using primitives such as Deployments, Services, and Ingress, developers can configure canary rollouts, defining the traffic split, success metrics, and monitoring for the new version.

During a canary deployment in Kubernetes, traffic routing is commonly handled through an Ingress resource, which controls the flow of requests to different versions of an application. By configuring the appropriate Ingress rules, the infrastructure can direct a percentage of traffic to the canary version, while the remaining traffic continues to the stable version. This allows for real-time testing and monitoring of the new version’s performance and stability before rolling it out to all users.
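For example, the ingress-nginx controller supports canary routing through annotations. Here is a minimal sketch, assuming ingress-nginx is installed, a stable Ingress for the same host already exists, and the names and host are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    # Mark this Ingress as a canary and send ~10% of matching traffic to it;
    # the remaining 90% continues to the Ingress serving the stable Service.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-canary
                port:
                  number: 80
```

Raising the canary-weight annotation shifts more traffic to the new version without touching the stable Ingress.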

Overall, canary deployment with Kubernetes enables organizations to release software updates with increased confidence and reduced risks. By gradually rolling out changes and closely monitoring the impact, teams can quickly detect and address any issues, ensuring a smooth and successful deployment process.

What is Canary Deployment?

In the context of Kubernetes, Canary Deployment is a technique used to release new software updates or features gradually and in a controlled manner. It involves creating a new version of an application and gradually routing a small percentage of users or traffic to the new version, while the majority of users continue to use the existing stable version.

This technique is called “Canary Deployment” as it is inspired by the practice of using canaries in coal mines to detect toxic gases. Just as canaries were used to warn miners of danger, Canary Deployment helps to detect issues or bugs in new software updates before they affect a larger user base.

Canary Deployment typically involves the following steps:

  1. Create a new version of the application.
  2. Deploy the new version alongside the existing version.
  3. Gradually route a small percentage of traffic to the new version.
  4. Monitor the new version and collect data.
  5. Roll back or continue the deployment based on the collected data.

Canary Deployment is beneficial for several reasons. It allows for faster feedback loops, as any issues or regressions can be detected early on in the deployment process. It also reduces the blast radius of potential issues, as only a small percentage of users are affected initially. Additionally, Canary Deployment provides a smoother transition for users, as they are gradually introduced to new features or changes.

In Kubernetes, Canary Deployment can be implemented using various tools and techniques, such as service mesh, ingress controllers, or deployment strategies like rolling updates or traffic splitting.
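The simplest of these approaches needs no extra tooling: run the stable and canary versions as two separate Deployments behind a single Service, and let the replica ratio approximate the traffic split. A minimal sketch, where app: myapp is a placeholder label shared by both versions:

```yaml
# With, say, 9 stable replicas and 1 canary replica, roughly 10% of
# requests land on the canary, because the Service load-balances
# across all pods matching its selector.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp        # matches pods from both the stable and canary Deployments
  ports:
    - port: 80
      targetPort: 8080
```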

Why Choose Kubernetes for Canary Deployment?

Kubernetes is a powerful container orchestration platform that provides several benefits for canary deployment.

Firstly, Kubernetes allows for easy management of containerized applications. With Kubernetes, you can easily package your application into containers, which provide a lightweight and portable way to distribute and run your code. This makes it simple to create replicas of your application and scale it up or down as needed.

Secondly, Kubernetes provides powerful tools for deploying and managing applications. The built-in deployment resources allow you to define complex rollout strategies, such as canary deployments, with ease. Kubernetes also provides robust monitoring and metrics capabilities, which are crucial for canary deployments as they enable you to closely monitor the performance and behavior of your new version.

Benefits of using Kubernetes for canary deployment:
1. Easy management of containerized applications
2. Flexible deployment strategies
3. Robust monitoring and metrics capabilities
4. Scalability and resource optimization
5. Seamless integration with other Kubernetes features

In conclusion, Kubernetes provides a reliable and feature-rich platform for canary deployment. Its containerization capabilities, deployment resources, monitoring tools, and integration options make it an excellent choice for implementing canary deployments in your applications.

Benefits of Canary Deployment

A canary deployment is a technique used in Kubernetes to test new versions of an application in a controlled manner before deploying them to the entire cluster. This deployment strategy brings several benefits to the development process.

1. Risk Mitigation: Canary deployment reduces the risk of rolling out new versions by gradually exposing them to a small subset of users or nodes. If any issues or bugs are detected, the impact is limited to a smaller audience, minimizing the potential damage.

2. Early Feedback: By releasing a new version to a small group of users or nodes, developers can gather early feedback and identify any issues or performance bottlenecks. This feedback can be used to make necessary improvements before deploying the new version to the entire cluster.

3. Regression Testing: Canary deployment allows for thorough regression testing, where the new version is compared against the existing version for functionality, performance, and reliability. This ensures that the new version meets the desired requirements and performs as expected.

4. Controlled Rollout: With canary deployment, developers have full control over the release process. They can monitor the new version closely and make adjustments if needed before rolling out to the entire cluster. This level of control helps prevent potential disruptions and enables a smoother transition.

5. Optimal Resource Utilization: Since canary deployment targets a subset of users or nodes, it optimizes resource utilization. Only a fraction of the cluster is used to test the new version, reducing the overall resource consumption and allowing more efficient allocation of resources to other applications.

In conclusion, canary deployment in Kubernetes provides numerous benefits, including risk mitigation, early feedback, regression testing, controlled rollout, and optimal resource utilization. By leveraging these advantages, organizations can ensure smoother and more reliable deployments of their applications.

Step-by-Step Guide to Implement Canary Deployment

In this step-by-step guide, we will walk you through the process of implementing a canary deployment with Kubernetes. A canary deployment is a release management technique that allows you to test a new version of your application in a controlled manner before rolling it out to all users.

Step 1: Set Up a Kubernetes Cluster

The first step is to set up a Kubernetes cluster where you will deploy your canary and production versions of the application. If you don’t have a cluster already, you can use a cloud provider like Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS) to create one.

Step 2: Deploy the Canary Version

Next, you need to deploy the canary version of your application to the Kubernetes cluster. This can be done by creating a new deployment object with the desired configuration for the canary version.

Note: It is recommended to use a separate namespace or label for the canary deployment to keep it isolated from the production environment.
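As an illustration, a canary Deployment might look like the following sketch. The track: canary label provides the isolation mentioned in the note; the names and image tag are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  labels:
    app: myapp
    track: canary          # label-based isolation; a dedicated namespace also works
spec:
  replicas: 1              # start with a single replica
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0.0   # the candidate version
          ports:
            - containerPort: 8080
```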

Step 3: Gradually Increase Traffic to the Canary

Once the canary version is deployed, you can start directing a small portion of your user traffic to it. This can be done by using a service mesh like Istio or by configuring a load balancer to distribute traffic between the canary and production versions.
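With Istio, for instance, the split can be expressed declaratively. A sketch assuming a matching DestinationRule already defines the stable and canary subsets:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: stable
          weight: 90       # 90% of requests stay on the stable version
        - destination:
            host: myapp
            subset: canary
          weight: 10       # 10% are directed to the canary
```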

Tip: It is important to monitor the canary version closely during this phase to catch any potential issues before they affect all users.

Step 4: Analyze Performance and User Feedback

While the canary version is receiving traffic, it is crucial to analyze its performance and gather user feedback. This can be done by monitoring metrics like response times, error rates, and user satisfaction surveys.

Tip: It is a good practice to set up alerts and notifications to quickly identify any performance degradation or critical issues.
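If you run the Prometheus Operator, such an alert can be declared as a PrometheusRule. A minimal sketch; the metric name http_requests_total and its labels are placeholders that depend on how your application is instrumented:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: canary-alerts
spec:
  groups:
    - name: canary
      rules:
        - alert: CanaryHighErrorRate
          # Fires when more than 5% of canary requests return a 5xx
          # status over a five-minute window.
          expr: |
            sum(rate(http_requests_total{track="canary",code=~"5.."}[5m]))
              / sum(rate(http_requests_total{track="canary"}[5m])) > 0.05
          for: 5m
          labels:
            severity: critical
```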

Step 5: Rollback or Roll Forward

Based on the analysis of performance and user feedback, you can make an informed decision to either roll back the canary version if issues are detected or roll forward and promote it to the production environment if it performs well.

Tip: It is recommended to have a rollback plan in place before deploying the canary version to minimize downtime and disruption in case rollback is necessary.

Congratulations! You have successfully implemented a canary deployment with Kubernetes. By gradually introducing the new version to a subset of users, you can ensure a smooth transition and minimize the impact of any potential issues.

Preparing the Environment

Before starting with a canary deployment in Kubernetes, it is important to properly prepare the environment. This includes setting up the necessary infrastructure and configuring the Kubernetes cluster.

Here are the key steps to prepare the environment:

  1. Set up the Kubernetes cluster.
  2. Install the necessary tools and dependencies.
  3. Create a namespace for canary deployments.
  4. Configure the desired canary deployment strategy.
  5. Prepare the container images for canary deployment.

By following these steps, you will ensure that your environment is ready for canary deployments with Kubernetes.
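For step 3, a dedicated namespace is a one-line manifest, applied with kubectl apply -f. A minimal sketch in which the name and label are placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: canary
  labels:
    purpose: canary-testing   # optional label, useful for policies or cleanup
```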

Deploying Initial Version

When working with Kubernetes, one common approach to deploy a new version of an application is using canary deployments. A canary deployment is a technique that allows you to gradually roll out a new version of your application to a subset of users or nodes, while keeping the previous version running for the rest.

Before starting with the canary deployment strategy, you need to first deploy the initial version of your application onto the Kubernetes cluster. This initial version will serve as your baseline for the canary deployment.

Creating Kubernetes Resources

To deploy the initial version of your application with Kubernetes, you need to create the necessary Kubernetes resources such as pods, services, and deployments. These resources define how your application is structured and how it will run in the cluster.

Start by defining the configuration for your application in a YAML file. This configuration should include details such as the image name, port mappings, environment variables, and any other specific requirements for your application.

Once you have defined the configuration file, you can use the kubectl apply command to create the Kubernetes resources. This command will read the configuration file and deploy the resources onto the cluster.
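For example, a baseline Deployment might look like the following sketch; the names, image tag, and environment variable are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  labels:
    app: myapp
    track: stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # the current stable version
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"
```

Running kubectl apply -f against this file creates the Deployment and the pods it manages.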

Verifying Deployment

After deploying the initial version of your application, you should verify that it has been successfully deployed and is running as expected. You can use the kubectl command to check the status of the pods, services, and deployments.

For example, you can use the kubectl get pods command to list all the pods in the cluster and check their status. You can also use the kubectl describe command to get more detailed information about a specific pod or deployment.

Monitoring and Scaling

Once the initial version of your application is deployed, you can start monitoring its performance and scale it accordingly. Kubernetes provides various tools and metrics that can help you monitor the health and performance of your application.

You can use the Kubernetes dashboard or CLI commands to monitor the CPU and memory usage of your pods, as well as other relevant metrics. Based on the monitoring data, you can scale your application by adjusting the number of replicas or resource limits for the deployments.
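Scaling can also be automated. A minimal HorizontalPodAutoscaler sketch that keeps average CPU utilization around 70%; the target name is a placeholder:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-stable
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-stable
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```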

Summary
  • Creating Kubernetes Resources: Create the necessary Kubernetes resources for your application using a YAML configuration file.
  • Verifying Deployment: Use kubectl commands to check the status and details of the deployed resources.
  • Monitoring and Scaling: Monitor the performance of your application and scale it based on the metrics.

Creating Canary Release

With canary deployment, you can easily perform controlled rollouts of new versions of your applications in a Kubernetes cluster. This allows you to test the new release in a small subset of your production environment to ensure its stability and performance before rolling it out to all users.

When creating a canary release, you need to:

  1. Define a canary version: Decide which version of the application you want to test in a canary deployment.
  2. Identify a subset of users: Choose a small group of users or a specific segment of your production environment to receive the canary version.
  3. Route traffic to canary: Use Kubernetes features like Ingress or Service Mesh to route a portion of the traffic to the canary version.
  4. Monitor and analyze metrics: Monitor the performance and health of the canary version using metrics and observability tools. Compare it with the baseline version to detect any issues or regressions.
  5. Gradual rollout: Once you are confident in the canary version, gradually increase the percentage of traffic routed to it, until it replaces the baseline version completely.

By following these steps, you can create a canary release with ease using Kubernetes. This allows you to minimize the risks associated with deploying new versions and ensure a smooth transition for your users.
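To illustrate step 3 above, the Gateway API can express a weighted split directly on the route. A sketch assuming a Gateway API implementation is installed, a Gateway named myapp-gateway exists, and the Service names are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
    - name: myapp-gateway
  rules:
    - backendRefs:
        - name: myapp-stable    # Service for the baseline version
          port: 80
          weight: 95
        - name: myapp-canary    # Service for the canary version
          port: 80
          weight: 5
```

For the gradual rollout in step 5, you adjust the two weights until the canary carries all of the traffic.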

Routing Traffic

When using canary deployment with Kubernetes, it is important to have a mechanism in place to route traffic between the canary and stable deployments. This allows for a controlled release of the canary deployment to a subset of users or traffic while ensuring that the majority of users still have access to the stable deployment.

In Kubernetes, there are several ways to route traffic to different deployments. One common approach is to use a service, such as a load balancer, to distribute traffic between the canary and stable deployments based on a set of rules or weights.

Service and Ingress

Kubernetes provides a service abstraction that can be used to route traffic to different deployments. A service is assigned a unique IP address, and it acts as a load balancer, distributing traffic to pods based on labels or selectors.

Using the service’s selector, it is possible to target specific pods in the canary or stable deployment. The service can then route traffic to these pods based on various routing rules or weights.

To expose the service externally, an ingress can be used. An ingress is a Kubernetes resource that defines rules for routing external traffic to the internal services. It acts as a layer 7 load balancer and allows for more fine-grained routing based on rules, such as URL paths or headers.

Canary Analysis Tools

Another approach to routing traffic during canary deployment is to use canary analysis tools. These tools provide more advanced traffic routing capabilities and can help in automatically analyzing and validating the canary deployment before routing traffic to it.

Tools like Istio, Flagger, and Linkerd offer canary analysis capabilities that allow for features such as automated rollbacks, gradual traffic shifting, and automated canary testing. These tools integrate with Kubernetes and provide additional control and visibility over the canary deployment.
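As an example of what these tools automate, Flagger describes the whole canary process in a single Canary resource. A minimal sketch, assuming Flagger is installed with a supported mesh or ingress provider, and the target Deployment name is a placeholder:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
  analysis:
    interval: 1m          # run the analysis checks every minute
    threshold: 5          # roll back after 5 failed checks
    maxWeight: 50         # never send more than 50% of traffic to the canary
    stepWeight: 10        # shift traffic in 10% increments
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99         # require a 99% request success rate
        interval: 1m
```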

Monitoring and Observability

Regardless of the approach used for routing traffic during canary deployment, it is important to have good monitoring and observability in place. This allows for real-time visibility into the canary deployment and helps identify any issues or anomalies.

Tools like Prometheus and Grafana can be used to monitor the performance and health of the canary deployment. Additionally, logging and tracing tools can help in debugging and understanding the behavior of the canary deployment.

In summary:

  • Service: provides a load-balancing mechanism for routing traffic, using selectors to target specific pods in the canary and stable deployments.
  • Ingress: allows for more fine-grained routing of external traffic, with rules based on URL paths or headers.
  • Canary analysis tools: offer advanced traffic routing capabilities, automating the analysis and validation of the canary deployment.
  • Monitoring and observability: enable real-time visibility into the canary deployment, helping identify and debug any issues or anomalies.

Making Observations and Decisions

When using Kubernetes for deployment, it is essential to make observations and decisions along the way to ensure a smooth and successful process. By closely monitoring the canary deployment, you can gain valuable insights into its performance and make informed decisions regarding its future.

Monitoring the Canary Deployment

Monitoring the canary deployment involves analyzing various metrics and observing its behavior in the live environment. This can help you determine whether the new version of your application is functioning as expected, or if there are any issues that need to be addressed.

Some key metrics to consider when monitoring a canary deployment with Kubernetes include:

  • Response time: measures how long it takes for the application to respond to requests. High response times may indicate performance issues.
  • Error rate: tracks the percentage of requests that result in errors. A high error rate could indicate bugs or issues with the new version.
  • Throughput: measures the number of requests the application can handle per second. Low throughput may suggest scalability problems.
  • Resource utilization: tracks the utilization of CPU, memory, and other resources. High utilization could lead to performance bottlenecks.

Making Informed Decisions

Based on the observations made during the canary deployment, you can make informed decisions about the future of your application. If the canary version is performing well and meeting the desired benchmarks, you may choose to gradually shift more traffic towards it until it becomes the primary version.

On the other hand, if the canary version exhibits issues or doesn’t meet the desired performance criteria, you may decide to roll back to the previous version or make further changes to address the issues before proceeding.

By carefully analyzing the metrics and making informed decisions, you can ensure that your canary deployment with Kubernetes is successful and ultimately improves the overall reliability and performance of your application.

Scaling Up

Once the canary deployment with Kubernetes is successfully tested and verified, it’s time to scale up and roll out the changes to a larger portion of the deployment.

Scaling up refers to increasing the number of instances of the new version of the application in the deployment. This allows for more users to be redirected to the canary deployment and ensures that the new version can handle the increased load.

Before scaling up, it’s important to monitor the performance and metrics of the canary deployment. This can help identify any issues or bottlenecks that may arise when scaling up. It also provides valuable insights into the impact of the new version on the overall system and helps in optimizing the performance.

Steps for scaling up a canary deployment:

  1. Monitor the canary deployment’s performance and metrics.
  2. Gradually increase the number of instances of the new version.
  3. Continuously monitor the performance and metrics during the scaling process.
  4. Observe the impact of the new version on the system’s performance and user experience.
  5. If the performance and metrics are satisfactory, continue scaling up until the canary deployment becomes the primary deployment.

It’s important to note that scaling up should be done gradually and carefully to ensure that any issues or bottlenecks are detected and addressed before the canary deployment becomes the primary deployment. This helps minimize the impact on users and ensures a smooth transition to the new version.

By following these steps, the canary deployment with Kubernetes can be effectively scaled up, allowing for a controlled and monitored rollout of the new version of the application.
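When a shared Service splits traffic by replica ratio, step 2 of the list above amounts to raising the canary's replica count. A minimal patch sketch, with placeholder names, applied with kubectl patch deployment myapp-canary --patch-file canary-scale.yaml:

```yaml
# canary-scale.yaml: raise the canary from 1 to 3 replicas;
# alongside 9 stable replicas this moves the canary's share to ~25%.
spec:
  replicas: 3
```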

Handling Errors

When using Kubernetes for deployment, it’s essential to have a robust error handling mechanism in place. Errors can occur at various stages of the deployment process, and it’s crucial to handle them effectively to ensure the smooth operation of your application.

One common error that can occur during the deployment process is an unsuccessful container creation. This can happen due to various reasons, such as incorrect configuration settings or resource constraints. To handle this type of error, Kubernetes provides a mechanism to automatically restart the containers in case of failure.
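A liveness probe makes that restart behavior explicit. A minimal sketch; the /healthz endpoint, port, and image are placeholders for whatever health check your application exposes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-canary
spec:
  restartPolicy: Always        # the kubelet restarts containers that exit
  containers:
    - name: myapp
      image: registry.example.com/myapp:2.0.0
      livenessProbe:           # restart the container when this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
```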

Another type of error that can occur is related to the communication between containers. In a Kubernetes deployment, multiple containers often interact with each other to perform various tasks. If there is an error in this communication, it can lead to application failures. To handle this type of error, it’s important to implement proper error handling and logging mechanisms within your application code.

In addition to handling errors during deployment, it’s also important to monitor your application continuously to detect and address any errors that may occur in real-time. Kubernetes provides various built-in tools and features, such as monitoring and logging, to help you track and handle errors effectively.

In summary, when deploying applications with Kubernetes, it’s essential to have a robust error handling mechanism in place. This includes handling container creation failures, implementing proper error handling and logging within your application code, and continuously monitoring your application for errors. By doing so, you can ensure the smooth operation of your application in a Kubernetes environment.

Managing Rollbacks

Rollbacks are an essential part of the canary deployment process in Kubernetes. They allow you to revert to a previous version of your application if any issues arise during the canary deployment.

To manage rollbacks effectively, follow these steps:

1. Monitor the Canary Deployment

During the canary deployment, closely monitor the metrics and logs of the new version to identify any issues. Keep an eye on key performance indicators (KPIs) and user feedback to ensure the new version is running smoothly.

2. Define Rollback Criteria

Before starting the canary deployment, clearly establish the criteria that will trigger a rollback. This can include specific error rates, high latency, or any other metrics that indicate the new version is not performing as expected. Having predefined criteria will make the rollback decision objective and consistent.

3. Automate Rollback Process

Automating the rollback process is crucial to ensure a quick and seamless transition back to the previous version. Use orchestration tools like Kubernetes controllers or deployment strategies that support automatic rollbacks.
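Argo Rollouts is one such tool: it replaces the Deployment with a Rollout resource whose canary steps can be paused, promoted, or aborted, and aborting reverts traffic to the stable version. A minimal sketch, assuming the Argo Rollouts controller is installed and the names and image are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  replicas: 5
  strategy:
    canary:
      steps:
        - setWeight: 10            # send 10% of traffic to the canary
        - pause: {duration: 10m}   # observe; aborting here rolls back
        - setWeight: 50
        - pause: {duration: 10m}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:2.0.0
```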

4. Perform a Controlled Rollback

Instead of immediately rolling back all user traffic, consider gradually redirecting traffic back to the previous version. This approach allows you to analyze the impact on performance and user experience, and make adjustments if necessary.

  • Start by redirecting a small percentage of traffic to the previous version.
  • Monitor the metrics and user feedback to assess the impact.
  • If the rollback is successful, gradually increase the percentage of traffic directed to the previous version.

This controlled rollback method minimizes the impact on users and provides a smoother transition back to the previous version.

5. Communicate with Stakeholders

During the rollback process, it is important to communicate with stakeholders, including users, product owners, and development teams. Provide regular updates on the status of the rollback, reasons for the decision, and any actions being taken to address the issues.

By effectively managing rollbacks in canary deployments with Kubernetes, you can mitigate risks and ensure a reliable and stable application environment.

Continuous Integration and Canary Deployment

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository. This helps to detect any integration issues as early as possible. With CI, developers can quickly identify and fix any problems before the changes are fully integrated into the production environment.

Canary deployment is a strategy that allows you to test new versions of your application in a controlled and incremental manner. With canary deployment, you can release a new version to a small subset of users or servers and monitor its performance. If everything goes well, you can gradually roll out the new version to all users or servers.

Kubernetes is an open-source container orchestration platform that simplifies the management and deployment of containerized applications. Kubernetes provides various features and functionalities for deploying and scaling applications in a distributed environment.

By combining CI with canary deployment and utilizing Kubernetes, you can achieve a seamless and efficient deployment process. With CI, you can continuously test and build your application, ensuring that all code changes are properly integrated. Canary deployment allows you to release new features or versions of your application gradually, minimizing the impact of any potential issues.

Using Kubernetes for canary deployment provides additional benefits such as automated scaling, traffic routing, and rollback capabilities. Kubernetes allows you to easily create and manage multiple instances of your application, making it convenient to test and roll out changes in a controlled manner.

In summary, continuous integration and canary deployment, when combined with Kubernetes, offer a powerful approach to software development and deployment. This combination ensures that code changes are thoroughly tested and integrated, and new versions of the application are released in a controlled and gradual manner.
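As one possible wiring of CI to a canary rollout, here is a GitHub Actions sketch; the registry URL, deployment name, and cluster credential setup are assumptions, not prescriptions:

```yaml
name: build-and-canary
on:
  push:
    branches: [main]
jobs:
  deploy-canary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image
        # Assumes registry credentials are already configured on the runner.
        run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker push registry.example.com/myapp:${{ github.sha }}
      - name: Roll the canary to the new image
        # Assumes kubectl is configured with cluster credentials.
        run: |
          kubectl set image deployment/myapp-canary \
            myapp=registry.example.com/myapp:${{ github.sha }}
```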

Monitoring and Metrics for Canary Deployment

Monitoring and metrics are crucial for ensuring the success and stability of a canary deployment in Kubernetes. By closely monitoring the performance and behavior of the canary release, you can quickly identify and address any issues that may arise, ultimately reducing the impact on your production environment.

Metrics to Monitor

When monitoring a canary deployment, it’s important to gather data on various metrics to ensure the health and performance of the new release. Some key metrics to monitor include:

  • Request latency: measures the time it takes for requests to be processed by the canary release compared to the stable release. Higher latency may indicate performance issues.
  • Error rate: monitors the rate of errors encountered by the canary release. An increase in errors may signal issues with the new release.
  • Resource utilization: tracks the utilization of CPU, memory, and other resources by the canary release. High resource usage may impact overall system performance.
  • Throughput: measures the number of requests successfully processed by the canary release. A decrease in throughput may indicate issues with the new release.

Monitoring Tools

To effectively monitor the canary deployment, you can use various tools and technologies. Here are some popular choices:

  • Prometheus: A widely used open-source monitoring and alerting toolkit specifically designed for Kubernetes.
  • Grafana: A visualization and analytics platform that works seamlessly with Prometheus, providing interactive and customizable dashboards.
  • Elasticsearch: A powerful search and analytics engine that can be used to collect and analyze logs generated by the canary deployment.
  • Kibana: A data visualization tool that integrates with Elasticsearch, allowing you to explore and visualize the logs in a user-friendly manner.

By leveraging these tools, you can gather and analyze the necessary metrics to gain insights into the performance and behavior of your canary deployment. This data can then be used to make informed decisions and quickly address any issues that may arise.

Common Challenges in Canary Deployment

When implementing a canary deployment with Kubernetes, there are several common challenges that can arise. These challenges include:

1. Configuration Management: Managing the configuration of the canary deployment can be complex, especially when it involves multiple replicas or different environments. It’s important to ensure that the configuration is consistent across all replicas and environments to avoid inconsistencies or unexpected behavior.

2. Monitoring and Metrics: Monitoring the performance and metrics of the canary deployment can be challenging. It’s important to have proper monitoring in place to detect any issues or abnormalities in the canary deployment, such as increased error rates or decreased response times.

3. Rollback Strategy: Having a reliable rollback strategy is crucial in case the canary deployment encounters any issues or failures. This includes being able to quickly revert to the previous stable version and having a well-defined plan for handling rollbacks.

4. Traffic Splitting: Splitting traffic between the canary deployment and the stable version can be tricky, especially when dealing with different user groups or different regions. It’s important to ensure that the traffic splitting is done in a controlled and gradual manner to minimize any negative impact on users.

5. Versioning and Compatibility: Ensuring compatibility between different versions of the application can be challenging, especially when introducing new features or making significant changes. It’s important to have a clear versioning strategy and to thoroughly test the compatibility of the canary deployment with other components or services.

6. Team Communication: Effective communication and collaboration between teams involved in the canary deployment process is crucial. This includes clear communication of goals, expectations, and timelines, as well as feedback and support from different teams, such as development, operations, and testing.

7. Security and Compliance: Ensuring the security and compliance of the canary deployment can be challenging, especially when dealing with sensitive data or regulatory requirements. It’s important to follow best practices for security and compliance and to regularly audit and monitor the canary deployment to identify and address any potential vulnerabilities.

In conclusion, while canary deployments have many benefits, they also come with their own set of challenges. By addressing these common challenges and following best practices, organizations can successfully implement canary deployments with Kubernetes.

Questions and Answers

What is a canary deployment?

A canary deployment is a technique used in software development and release management. It involves deploying a new version of an application to a small subset of users or servers, and then gradually rolling it out to more users or servers over time.

How does canary deployment work in Kubernetes?

In Kubernetes, canary deployment can be achieved using various techniques, such as running multiple Deployments or ReplicaSets, routing traffic with ingress controllers, or using specialized tools like Istio or Flagger. These techniques allow you to gradually direct a small portion of the traffic to the new version while the majority of the traffic still goes to the previous version.

What are the benefits of canary deployment?

Canary deployment has several benefits, including minimizing the risk of introducing bugs or performance issues to all users at once, allowing for faster rollbacks in case of issues, and gathering feedback and metrics on the new version before rolling it out to all users. It also allows for easier A/B testing and gradual feature rollout.

What are some best practices for canary deployment in Kubernetes?

Some best practices for canary deployment in Kubernetes include using automated testing and monitoring to detect issues early, setting up proper rollback mechanisms, monitoring the performance and metrics of the canary version, and gradually increasing the traffic to the canary version based on the observed metrics and feedback.

Are there any tools or frameworks available for canary deployment in Kubernetes?

Yes, there are several tools and frameworks available for canary deployment in Kubernetes. Some popular ones include Istio, Linkerd, Flagger, and Argo Rollouts. These tools provide additional features and automation for managing canary deployments, such as automatic rollback, canary analysis, and traffic shifting.

What is Canary deployment?

Canary deployment is a deployment technique used in software development and release management. It involves rolling out a new version of an application or service to a small subset of users or infrastructure, and then gradually increasing the exposure to a larger audience if everything goes smoothly.

Why would I use Canary deployment?

Canary deployment allows you to test the new version of your application in a controlled manner before rolling it out to all users. By gradually increasing the exposure, you can monitor the performance and stability of the new version, and rollback if any issues arise. This approach reduces the risk of deploying a faulty version that could impact all users.

How does Canary deployment work with Kubernetes?

With Kubernetes, Canary deployment can be achieved by using features like traffic splitting and ingress controllers. Traffic splitting allows you to direct a percentage of traffic to the new version while the majority still goes to the stable version. Ingress controllers handle the routing of the traffic based on rules you define.