Showing posts with label netdevops. Show all posts

Tuesday, 12 January 2021

Network Security and Containers – Same, but Different


Introduction

Network and security teams seem to have had a love-hate relationship with each other since the early days of IT. Having worked extensively with both and built expertise over the past few decades, we often notice how similar their goals are: both seek to provide connectivity and bring value to the business. At the same time, there are certainly notable differences. Network teams tend to focus on building architectures that scale and provide universal connectivity, while security teams tend to focus more on limiting that connectivity to prevent unwanted access.

Often, these teams work together — sometimes on the same hardware — where network teams configure connectivity (BGP/OSPF/STP/VLANs/VxLANs/etc.) while security teams configure access controls (ACLs/Dot1x/Snooping/etc.). Other times, we find that Security defines rules and hands them off to Networking to implement. In larger organizations, we often find InfoSec in the mix as well, defining somewhat abstract policy and handing it down to Security to render into rulesets, which then either get implemented directly in routers, switches, and firewalls, or are handed off to Networking to implement in those devices. These days, Cloud teams play an increasingly large part in those roles as well.

All in all, each team contributes important pieces to the larger puzzle, albeit speaking slightly different languages, so to speak. The key to organizational success is for these teams to come together, communicate using a common language and framework, and work to decrease the complexity surrounding security controls while increasing the level of security provided, all of which minimizes risk and adds value to the business.

As container-based development continues to rapidly expand, both the roles of who provides security and where those security enforcement points live are quickly changing, as well.

The challenge

For the past few years, organizations have been significantly enhancing their security postures, moving from enforcing security only at the perimeter, in a North-to-South fashion, to enforcement throughout their internal data centers and clouds alike, in an East-to-West fashion. Granular control at the workload level is typically referred to as microsegmentation. This move toward distributed enforcement points has great advantages, but it also presents unique new challenges: where will those enforcement points be located, and how will rulesets be created, updated, and deprecated, all with precise accuracy and at the same speed at which the business, and thus its developers, move?

At the same time, orchestration systems running container pods, such as Kubernetes (K8s), perpetuate that shift toward new security constructs through methods such as the CNI, or Container Networking Interface. CNI provides exactly what it sounds like: an interface through which networking can be provided to a Kubernetes cluster. A plugin, if you will. There are many CNI plugins for K8s. Some are pure software overlays, like Flannel (leveraging VxLAN) and Calico (leveraging BGP), while others tie the worker nodes running the containers directly into the hardware switches they are connected to, shifting the responsibility for connectivity back into dedicated hardware.

Regardless of which CNI is used, the instantiation of networking constructs shifts from traditional CLI on a switch to structured text as code, in the form of YAML or JSON, which is sent to the Kubernetes cluster via its API server.

With the groundwork laid, we can begin to see how things start to get interesting.

Scale and precision are key

As we can see, we are talking about placing a firewall in front of every single workload and ensuring that those firewalls are always up to date with the latest rules.

Say we have a relatively small operation with only 500 workloads, some of which have been migrated into containers with more planned migrations every day.

This means that in the traditional environment we would need 500 firewalls to deploy and maintain, minus the workloads already migrated to containers, which need their own way to enforce the necessary rules. Now, imagine that a new Active Directory server has just been added to the forest and holds the role of serving LDAP. A slew of new rules must be added to nearly every single firewall, allowing each protected workload to talk to the new AD server over a range of ports – TCP 88, 389, 636, etc. If the workload is Windows-based, it likely also needs MS-RPC open, which means TCP 49152-65535; whereas if it is not a Windows box, it most certainly should not have those ports opened.
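To get a feel for the sprawl, here is a rough Python sketch of the per-workload rules such a change generates. The AD server address and the rule syntax are purely illustrative, not any particular firewall's format:

```python
# Rough sketch (hypothetical address and rule syntax) of the per-workload
# rules a new AD/LDAP server forces onto every firewall.
LDAP_PORTS = [88, 389, 636]   # Kerberos, LDAP, LDAPS
MSRPC_RANGE = "49152-65535"   # Windows dynamic RPC port range

def rules_for_workload(os_family, ad_server="10.0.0.10"):
    """Return the firewall rules a workload needs to reach the new AD server."""
    rules = [f"permit tcp any host {ad_server} eq {p}" for p in LDAP_PORTS]
    if os_family == "windows":
        # Only Windows workloads should have the MS-RPC ephemeral range opened
        rules.append(f"permit tcp any host {ad_server} range {MSRPC_RANGE}")
    return rules

# 500 workloads means regenerating and pushing roughly 500 of these rule sets
print(len(rules_for_workload("windows")))  # 4
print(len(rules_for_workload("linux")))    # 3
```

Multiply that by every firewall in the environment and the operational burden of one new server becomes obvious.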

It quickly becomes noticeable how physical firewalls become untenable at this scale in traditional environments, and how even dedicated virtual firewalls still present the complex challenge of requiring centralized policy with distributed enforcement. Neither does much to secure East-to-West traffic within the Kubernetes cluster, between containers. One might accurately surmise, however, that any solution business leaders are likely to consider must handle all of these scenarios equally well from a policy creation and management perspective.

It seems apparent that this centralized policy must be hierarchical in nature, defined using natural human language such as “dev cannot talk to prod” rather than the archaic and unmanageable method of IP/CIDR addressing like “deny ip 10.4.20.0/24 10.27.8.0/24”; yet the system must still translate that natural language into machine-understandable CIDR addressing.
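That translation step can be sketched in a few lines of Python. The label-to-CIDR mapping below is hypothetical, reusing the example subnets from the sentence above:

```python
# Minimal sketch of how a policy engine might expand a label-based intent
# like "dev cannot talk to prod" into concrete CIDR deny rules.
# The label-to-CIDR mapping is illustrative, not a real inventory.
label_to_cidrs = {
    "dev":  ["10.4.20.0/24"],
    "prod": ["10.27.8.0/24"],
}

def render_deny_rules(src_label, dst_label):
    """Expand one human-readable deny intent into machine-readable rules."""
    return [
        f"deny ip {src} {dst}"
        for src in label_to_cidrs[src_label]
        for dst in label_to_cidrs[dst_label]
    ]

print(render_deny_rules("dev", "prod"))  # ['deny ip 10.4.20.0/24 10.27.8.0/24']
```

The point is that humans author the top line (labels) and the system derives the bottom line (CIDRs), never the other way around.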

The only way this works at any scale is to distribute those rules into every single workload running in every environment, leveraging the native and powerful built-in firewall co-located with each. For containers, this means the firewalls running on the worker nodes must secure traffic between containers (pods) within the node, as well as between nodes.

Business speed and agility

Back to our developers.

Businesses must move at the speed of market change, which can be dizzying at times. They must be able to write code, check it in to an SCM like Git, and have it pulled, automatically built, tested and, if it passes, pushed into production. If everything works properly, we’re talking between five minutes and a few hours depending on complexity.

Whether five minutes or five hours, I have personally never witnessed a corporate environment where a ticket could be submitted to have security policies updated to reflect new code requirements and be completed within a single day, setting aside for a moment input accuracy and the remediation of incorrectly entered rules. It is usually a two-day to two-week process.

This is absolutely unacceptable given the rapid development process just described, not to mention the dissonance experienced by disaggregated people and systems. This method is rife with problems and is the reason security is so difficult, cumbersome, and error-prone in most organizations. As we shift to a more remote workforce, the problem is compounded further, as the relevant parties cannot easily congregate in “war rooms” to collaborate through the decision-making process.

The simple fact is that policy must accompany code and be implemented directly by the build process itself, and this has never been truer than with container-based development.

Simplicity of automating policy

With Cisco Secure Workload (Tetration), automating policy is easier than you might imagine.

Think with me for a moment about how developers work today when deploying applications on Kubernetes. They create a deployment.yml file, in which they are required to input, at a minimum, the L4 port on which their containers can be reached. Developers have thus become familiar with networking and security policy for provisioning connectivity to their applications, but they may not be fully aware of how their application fits into the wider scope of the organization’s security posture and risk tolerance.

This is illustrated below with a simple example of deploying a frontend load balancer and a simple webapp that’s reachable on port 80 and will have some connections to both a production database (PROD_DB) and a dev database (DEV_DB). The sample policy for this deployment can be seen below in this `deploy-dev.yml` file:

[Screenshot: deploy-dev.yml]
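Since the original screenshot is not reproduced here, the following is a hypothetical sketch of what such a deploy-dev.yml might contain, based on the description above. All names, labels, and image references are illustrative, not the original file’s contents:

```yaml
# Hypothetical deploy-dev.yml: a webapp on port 80 behind a LoadBalancer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:latest   # illustrative image name
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
```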

Now think of the minimal effort it would take to code one additional small YAML file, specified as kind: NetworkPolicy, and have our CI/CD pipeline automatically deploy it at build time to our Secure Workload policy engine. The policy engine is integrated with the Kubernetes cluster, exchanging label information that we can use to specify source or destination traffic, even down to the single LDAP user allowed to reach the frontend app. A sample policy for the above deployment can be seen below in this ‘policy-dev.yml’ file:

[Screenshot: policy-dev.yml]
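Again, the original screenshot is not reproduced; below is a hypothetical sketch of what a policy-dev.yml might look like, written as a native Kubernetes NetworkPolicy. The actual Secure Workload policy format may differ, and all names and labels are illustrative:

```yaml
# Hypothetical policy-dev.yml: allow ingress to the webapp on TCP/80
# and egress only to the PROD_DB and DEV_DB tiers
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-policy
spec:
  podSelector:
    matchLabels:
      app: webapp
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: PROD_DB
    - podSelector:
        matchLabels:
          app: DEV_DB
```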

As we can see, the effort required of our development teams is minimal and essentially in line with the toolsets they already use. Yet it yields immense value for our organizations, because the policy will be automatically combined with, and checked against, all existing security and compliance policy defined by the security and networking teams.

Key takeaways


Enabling developers to include policy co-located with the software code it is meant to protect, and automating the deployment of that policy with the same CI/CD pipelines that deploy the code, gives businesses speed, agility, versioning, and policy ubiquity in every environment, and ultimately a strong strategic competitive advantage over legacy methods.

Saturday, 16 May 2020

A Mindset Shift for Digitizing Software Development and Delivery


At Cisco, my teams—which are part of the Intent-Based Networking Group—focus on the core network layers that are used by enterprise, data center, and service provider network engineering. We develop tools and processes that digitize and automate the Cisco Software Development Lifecycle (CSDL). We have been travelling the digitization journey for over two years now and are seeing significant benefits. This post will explain why we are working diligently and creatively to digitize software development across the spectrum of Cisco solutions, some of our innovations, and where we are headed next.

Why Cisco Customers Should Care About Digitization of Software Development and Delivery


Cisco customers should consider what digitization of software development means to them. Because many of our customers are also software developers—whether they are creating applications to sell or for internal digital transformation projects—the same principles we are applying to Cisco development can be of use to a broader audience.

Digitization of development improves total customer experience by moving beyond just the technical aspects of development and thinking in terms of complete solutions that include accurate and timely documentation, implementation examples, and analytics that recommend which release is best for a particular organization’s network. Digitization of development:

◉ Leads to improvements in the quality, serviceability, and security of solutions in the field.

◉ Delivers predictive analytics to assist customers to understand, for example, the impact an upgrade, security patches, or new functionality will have on existing systems, with increased assurance about how the network will perform after changes are applied. 

◉ Automates the documentation of each handoff along the development lifecycle to improve traceability from concept and design to coding and testing.

These capabilities will be increasingly important as we continue to focus on developing solutions for software subscriptions, which shift the emphasis from long cycles creating feature-filled releases to shorter development cycles delivering new functionality and customer-requested innovations in accelerated timeframes.

Software Developers Thrive with Digital Development Workflows


For professionals who build software solutions, the digitization of software development focuses on improving productivity, consistency, and efficiency. It democratizes team-based development—that is, everyone is a developer: solution architects, designers, coders, and testers. Teams are configured to bring the appropriate expertise to every stage of solution development. Test developers, for example, should not only develop test plans and specific tests, but also provide functional specifications and code reviews, build test automation frameworks, and represent customer views for validating solutions at every stage of development. Case in point: when customer-specific use cases are incorporated early into the architecture and design phases, the functionality of the intended features is built into test suites as the code is being written.

A primary focus of digitization of development is creating new toolsets for measuring progress and eliminating friction points. Our home-grown Qualex (Quality Index) platform provides an automated method of measuring and interpreting quality metrics for digitized processes. The goal is to eliminate human bias by using data-driven techniques and self-learning mechanisms. In the past 2 years, Qualex has standardized most of our internal development practices and is saving the engineering organization a considerable amount of time and expense for software management.

Labs as a Service (LaaS) is another example of applying digitization to transform the development cycle that also helps to efficiently manage CAPEX. Within Cisco, LaaS is a ready-to-use environment for sharing networking hardware, spinning up virtual routers, and providing on-demand testbed provisioning. Developers can quickly and cost-effectively design and set up hardware and software environments to simulate various customer use cases, including public and private cloud implementations.

Digitization Reduces Development Workflow Frictions

A major goal of the digitization of software development is to reduce the friction points during solution development. We are accomplishing this by applying AI and machine learning against extensive data lakes of code, documentation, customer requests, bug reports, and previous test cycle results. The resulting contextual analytics will be available via a dashboard at every stage of the development process, reducing the friction of multi-phase development processes. This will make it possible for every developer to have a scorecard that tracks technical debt, security holes, serviceability, and quality. The real-time feedback increases performance and augments skillsets, leading to greater developer satisfaction. 


Workflow friction points inhibit both creativity and productivity. Using analytics to pinpoint aberrations in code as it is being developed reduces the back and forth cycles of pinpointing flaws and reproducing them for remediation. Imagine a developer writing new code for a solution which includes historical code. The developer is not initially familiar with the process or the tests that the inherited code went through. With contextual analytics presenting relevant historical data, the developer can quickly come up to speed and avoid previous mistakes in the coding process. We call this defect foreshadowing. The result is cleaner code produced in less time, reduced testing cycles, and better integration of new features with existing code base.

Digitizing Development Influences Training and Hiring

Enabling a solution view of a project—rather than narrow silos of tasks—also expands creativity and enhances opportunities to learn and upskill, opening career paths. The cross-pollination of expertise makes everyone involved in solution development more knowledgeable and more responsive to changes in customer requirements. In turn everyone gains a more satisfying work experience and a chance to expand their career.

◉ Training becomes continuous learning by breaking down the silos of the development lifecycle so that individuals can work across phases and be exposed to all aspects of the development process.

◉ Automating tracking and analysis of development progress and mistakes enables teams to pinpoint areas in which people need retraining or upskilling.

◉ Enhancing the ability to hire the right talent gets a boost from digitization as data is continuously gathered and analyzed to pinpoint the skillsets that contribute the most to the successful completion of projects, thus refining the focus on the search for talent.

Join Our Journey to Transform Software Development


At Cisco we have the responsibility of carrying the massive technical debt created since the Internet was born while continuously adding new functionality for distributed data centers, multi-cloud connectivity, software-defined WANs, ubiquitous wireless connectivity, and security. To manage this workload, we are fundamentally changing how Cisco builds and tests software to develop products at web-scale speeds. These tools, which shape our work as we shape them, provide the ability to make newly-trained and veteran engineers capable of consistently producing extraordinary results.

Cisco is transforming the solution conception to development to consumption journey. We have made significant progress, but there is still much to accomplish. We invite you to join us on this exciting transformation. As a Cisco Network Engineer, you have the opportunity to create innovative solutions using transformative toolsets that make work exciting and rewarding as you help build the future of the internet. As a Cisco DevX Engineer, you can choose to focus on enhancing the evolving toolset with development analytics and hyper-efficient workflows that enable your co-developers to do their very best work. Whichever path you choose, you’ll be an integral member of an exclusive team dedicated to customer success.

Tuesday, 21 January 2020

CLEUR Preview! Source of Truth Driven Network Automation

It’s a new year and a new decade, so it’s time for a NEW BLOG about network automation. I am getting ready for Cisco Live Europe 2020 and want to give everyone a preview of some of what I’ll be talking about in my session How DevNet Sandbox Built an Automated Data Center Network in Less than 6 Months – DEVNET-1488. If you’ll be in Barcelona, please register and join me for a look at how we approached this project with a goal to “innovate up to the point of panic” and brought in a lot of new NetDevOps and network automation processes.  But don’t worry if you’re reading this from far off in the future, or aren’t attending CLEUR, this blog will still be wildly entertaining and useful!

In today’s blog I want to build on those posts by showing where the details that provide the INPUT to the automation come from – something often called the Source of Truth.


What is a Source of Truth?


If you’ve been kicking around the network automation space for a while, you may have run across this term… or maybe you haven’t, so let me break it down for you.

Let’s say you are diving into a project to automate the interface configuration for your routers.  You’re going to need 2 things (well likely more than 2, but for this argument we’ll tackle 2).

First, you’ll need to develop the “code” that applies your desired standards for interface configuration.  This often includes building some configuration template and logic to apply that template to a device.  This code might be Python, Ansible, or some other tool/language.

Second, you need to know what interfaces need to be configured, and the specifics for each interface.  This often includes the interface name (e.g., Ethernet1/1), IP address info, descriptions, speed/duplex, switch port mode, and so on.  This information could be stored in a YAML file (a common case for Ansible), a CSV file, dictionaries and lists in Python, or somewhere else.
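For example, a minimal YAML Source of Truth for this interface data might look like the following. Every name and address here is purely illustrative:

```yaml
# Hypothetical interface Source of Truth (names/addresses are examples only)
interfaces:
  - name: Ethernet1/1
    description: Uplink to core
    ip_address: 10.0.0.1/30
    speed: 10000
    duplex: full
  - name: Ethernet1/2
    description: Access port - users VLAN
    mode: access
    vlan: 10
```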

Then your code from the “first” will read in the details from the “second” to complete the project.

The “Source of Truth” is that second thing.  It is simply the details that are the desired state of the network.  Every project has a “Source of Truth”, even if you don’t think of it that way.  There are many different tools/formats that your source of truth might take.

Simple Sources of Truth include YAML and CSV files, which are great for small projects and when you are first getting started with automation.  However, many engineers and organizations reach a point in their automation journey where these simple options no longer meet their needs.  It could be because the sheer amount of data becomes unwieldy.  Or maybe it’s the relationships between different types of data.  Or it could be that the entire team just isn’t comfortable working with their information in raw text.

When text-based options no longer meet the need, organizations might turn to more feature-rich applications to act as their Source of Truth.  Applications like Data Center Infrastructure Management (DCIM) and IP Address Management (IPAM) tools can definitely fill the role of the Source of Truth.  But there is a key difference between using a DCIM/IPAM tool as an automation Source of Truth and how we’ve traditionally used them.

How a DCIM or IPAM becomes a Source of Truth


In the past (before automation), the data and information in these tools was often entered after a network was designed, built, and configured.  The DCIM and IPAM data was a “best effort” representation of the network, typically entered begrudgingly by engineers eager to move on to the next project.  And if we are honest with ourselves, we likely never trusted the data in there anyway.  The only real “Source of Truth” for how the network was configured was the network itself.  Want to know the desired configuration for a particular switch?  Go log into it and look.

With Source of Truth driven network automation, we turn the old way on its head.  The first place details about the network are entered isn’t at the CLI of a router, but rather in the IPAM/DCIM tool.  Planning the IP addresses for each router interface?  Go update the Source of Truth.  Creating the list of VLANs for a particular site?  Go update the Source of Truth.  Adding a new network to an existing site?  Go update the Source of Truth.

The reason for the change is that the code you run to build the network configuration will read in the data for each specific device from the Source of Truth at execution time.  If the data isn’t in your DCIM/IPAM tool, then the automation can’t work.  Or if the data isn’t correct in the DCIM/IPAM tool, then the automation will generate incorrect configuration.

It’s also worth noting now that a Source of Truth can be used as part of network validation and health tests as well as for configuration.  Writing a network test case with pyATS and Genie to verify all your interfaces are configured correctly?  Well how do you know what is “correct”?  You’d read it from your Source of Truth.  I call that “Source of Truth driven Network Validation” and I’ll tackle it more specifically in a future blog post.
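As a hypothetical illustration of that idea (plain Python here rather than actual pyATS/Genie code, and with hard-coded data standing in for the SoT and the device), validation boils down to comparing operational state against the desired state read from the Source of Truth:

```python
# Sketch: validate actual interface state against the desired state pulled
# from a Source of Truth. Data is hard-coded for illustration; in practice
# 'desired' would come from the SoT and 'actual' from the device.
desired = {"Ethernet1/3": {"mode": "access", "vlan": 26}}
actual  = {"Ethernet1/3": {"mode": "access", "vlan": 26}}

def validate(desired, actual):
    """Return a list of (interface, wanted, found) mismatches."""
    failures = []
    for intf, want in desired.items():
        have = actual.get(intf, {})
        if have != want:
            failures.append((intf, want, have))
    return failures

print(validate(desired, actual))  # [] means the network matches the SoT
```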

Source of Truth Driven Automation in Action!


Enough exposition, let’s see this in action.

The Source of Truth that we use in DevNet Sandbox for most information is Netbox.  Netbox is an open source DCIM/IPAM tool originally developed by the network automation team at DigitalOcean for their own needs, and it has become popular with many engineers and enterprises looking for a tool of their own.

Let’s suppose we need to add a new network to our main internal admin environment in the Sandbox with the following basic information.

◉ The name of the network will be demo-sourceoftruth
◉ It will need an IP prefix assigned to it from the appropriate IP space
◉ Ethernet 1/3 on the switch sjcpp-leaf01-1 needs to be configured as an access port for this network

The automation to do the actual configuration of the VLAN, SVI, interface config, etc is already done, what I need to do is update the Source of Truth that will drive the automation. This involves the following steps:

1. Creating a new VLAN object

2. Allocating an available prefix and assigning to the new VLAN

3. Updating the details for port Ethernet 1/3 to be an access port on this VLAN


Step 1: Creating a new VLAN object


I start in Netbox at the Tenant view for our Admin environment. From here I can easily view all the DCIM and IPAM details for this environment.


I click on VLANs to see the current list of VLANs.


The “Group” column represents the VLAN Group in Netbox – which is a way to organize VLANs that are part of the same switching domain where a particular VLAN id has significance.  This new network will be in the “Internal” group.  I click on “Internal” on any of the VLANs to quickly jump to that part of Netbox so I can find an available VLAN id to use.


I see that there are 4 VLANs available between 25 and 30, and I click on the green box to add a new one in that space.


I provide the new name for this network and indicate that its role will be “Sandbox Systems”.  As this new network will be part of the Admin Tenant, I select the proper Group and Tenant from the drop-downs.  Netbox supports creating custom fields for data that you need, and we’ve created a field called “Layer3 Enabled on Switched Fabric” to indicate whether SVIs should be set up for a network.  In this case it is True.  After providing the details, I click “Create” to create this new VLAN.

Step 2: Allocating an available prefix and assigning to the new VLAN


Netbox is a full-featured IPAM, so let’s walk through allocating a prefix for the VLAN.

I start at the Supernet for admin networks at this site, 10.101.0.0/21 to find an available prefix.


I click on the Available range, to jump to the “Add a New Prefix” interface.


I start by updating the Prefix to the proper size, picking the Role (this matches the VLAN role), and providing a good description so folks know what it is for.  I then choose the VLAN we just created and associate the prefix with it, using the drop-downs and search options provided in the UI.  Lastly, I pick the Admin tenant and click “Create”.

Now if I go back and look at the VLANs associated with the Admin Tenant, I can see our new VLAN in the list with the Prefix allocated.


Step 3: Updating the details for port Ethernet 1/3 to be an access port on this VLAN


The final step in Netbox is to indicate the physical switch interfaces that will have devices connected to this new VLAN.

I navigate in Netbox to the device details page for the relevant switch.  At the bottom of the page are all the interfaces on the device.  I find interface Ethernet 1/3 and click the “Edit” button.


I update the interface configuration with an appropriate Description, set the 802.1Q Mode to Access, and select our new VLAN as the Untagged VLAN for the port.  Then click “Update” to save the changes.


Applying the New Network Configuration


With our Source of Truth now updated with the new network information, we simply need our network automation to read this data in and configure the network.  There are many ways this could be done, including a fully automated option where a webhook from Netbox kicks off the automation.  In our environment we are adopting network automation in stages as we build experience and confidence.  Our current status is that we execute the automation to process the data from the Source of Truth to update the network configuration manually.

When I run the automation to update the network configuration with the new Source of Truth info, here are the changes to the vlan-tenant configuration for our admin environment.

hapresto@nso1-preprod(config)# load merge nso_generated_configs/vlan-tenant_admin.xml
Loading.
1.78 KiB parsed in 0.00 sec (297.66 KiB/sec)

hapresto@nso1-preprod(config)# show configuration
vlan-tenant admin
  network demo-sourceoftruth
    vlanid 26
    network 10.101.1.0/28
    layer3-on-fabric true
    connections switch-pair sjcpp-leaf01
      interface 1/3
      description "Demonstration VLAN for Blog - Interface Config"
      mode access

Here you can see the new network being created, along with the VLAN id, prefix, and even the physical interface configurations.  All this detail was pulled directly from Netbox by our automation process.

And if you’d like to see the final network configuration that will be applied to the network after processing the templates in our network service by NSO, here it is.

    device {
        name sjcpp-leaf01-1
        data vlan 26
              name demo-sourceoftruth
             !
             interface Vlan26
              no shutdown
              vrf member admin
              ip address 10.101.1.2/28
              no ip redirects
              ip router ospf 1 area 0.0.0.0
              no ipv6 redirects
              hsrp version 2
              hsrp 1 ipv4
               ip 10.101.1.1
               preempt
               priority 110
              exit
             exit
             interface Ethernet1/3
              switchport mode access
              switchport access vlan 26
              no shutdown
              description Demonstration VLAN for Blog - Interface Config
              mtu 9216
             exit
    }
    device {
        name sjcpp-spine01-1
        data vlan 26
              name demo-sourceoftruth
             !
    }

Note: The service also updates vCenter to create a new port group for the VLAN, as well as Cisco UCS, but I’m only showing the typical network configuration here.

Finishing Up!


Hopefully this gives you a better idea about how a Source of Truth fits into network automation projects, and how a tool like Netbox provides this important feature for enterprises.

Sunday, 29 July 2018

Render your first network configuration template using Python and Jinja2

We all know how painful it is to enter the same text into the CLI to program the same network VLANs, over and over, and over and over, and over…. We also know that a better way exists, with network programmability, but it could be a few years before your company adopts the newest network programmability standards.  What are you to do?

Cisco Guides, Cisco Study Material, Cisco Learning, Cisco Tutorial and Material

Using Python and Jinja2 to automate network configuration templates is a really useful way to simplify the repetitive network tasks that we, as engineers, often face on a daily basis. Using this method, we can remove the common errors that creep in when copying and pasting commands into the CLI (command line interface). If you are new to network automation, this is a fantastic way to get started with network programmability.

Firstly, let’s cover the basic concepts we will run over here.

◈ What are CLI Templates? CLI templates are a set of re-usable device configuration commands with the ability to parameterize select elements of the configuration as well as add control logic statements. This template is used to generate a device deployable configuration by replacing the parameterized elements (variables) with actual values and evaluating the control logic statements.
◈ What is Jinja2? Jinja2 is one of the most used template engines for Python. It is inspired by Django’s templating system but extends it with an expressive language that gives template authors a more powerful set of tools.

Prerequisites: 


Jinja2 works with Python 2.6.x, 2.7.x and >= 3.3. If you are using Python 3.2 you can use an older release of Jinja2 (2.6), as support for Python 3.2 was dropped in Jinja2 version 2.7. To install Jinja2, use pip.

pip install jinja2

Now that we have Jinja2 installed, let us take a quick look at it with a simple “Hello World” example in Python. To start, create a Jinja2 template file containing “Hello World” (I am saving this into the same directory where I will write my Python code). A quick way to create this file is with echo.

echo "Hello World" > ~/automation_fun/hello_world.j2

Now let us create our Python code. We import Environment and FileSystemLoader, which allow us to load templates from external files. Feel free to write your Python code in the way that works best for you. You can use the Python interpreter or an IDE such as PyCharm.

from jinja2 import Environment, FileSystemLoader

#This line uses the current directory
file_loader = FileSystemLoader('.')

env = Environment(loader=file_loader)
template = env.get_template('hello_world.j2')
output = template.render()
#Print the output
print(output)

Use the following command to run your Python program.

STUACLAR-M-R6EU:automation_fun stuaclar$ python hello_template.py
Hello World

Congratulations, your first template was a success!

Next, we will look at variables with Jinja2.

Variables With Jinja2


Template variables are defined by the context dictionary passed to the template. You can change and update the variables in templates provided they are passed in by the application. What attributes a variable has depends heavily on the application providing that variable. If a variable or attribute does not exist, you will get back an undefined value.
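That last point is easy to see with a quick sketch. By default, Jinja2 renders a missing variable as an empty string; the stricter StrictUndefined mode raises an error instead, which is often safer for network configurations than silently emitting blanks. The device_name variable here is just an illustration.

```python
from jinja2 import Template, StrictUndefined, UndefinedError

# Default behaviour: a missing variable renders as an empty string
default_out = Template("hostname {{ device_name }}").render()
print(repr(default_out))  # -> 'hostname '

# With StrictUndefined, referencing a missing variable raises an error
strict = Template("hostname {{ device_name }}", undefined=StrictUndefined)
try:
    strict.render()
except UndefinedError as err:
    print("undefined variable:", err)
```

Catching missing variables at render time, rather than discovering a blank line in a pushed configuration, is usually the behaviour you want in production templating.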

Cisco Guides, Cisco Study Material, Cisco Learning, Cisco Tutorial and Material

In this example, we will build a new BGP neighbor with a new peer. Let’s start by creating another Jinja2 file, this time using variables. The outer double curly braces are not part of the variable; what is inside them is what will be printed out.

router bgp {{local_asn}}
 neighbor {{bgp_neighbor}} remote-as {{remote_asn}}
!
 address-family ipv4
  neighbor {{bgp_neighbor}} activate
exit-address-family

This Python code will look similar to what we used before; however, this time we are passing in three variables.

from jinja2 import Environment, FileSystemLoader
#This line uses the current directory
file_loader = FileSystemLoader('.')
# Load the environment
env = Environment(loader=file_loader)
template = env.get_template('bgp_template.j2')
#Add the variables
output = template.render(local_asn='1111', bgp_neighbor='192.168.1.1', remote_asn='2222')
#Print the output
print(output)

This will then print the following output. Notice that because we have repetitive syntax (the neighbor IP address), the variable is used again.

STUACLAR-M-R6EU:automation_fun stuaclar$ python bgp_builder.py
router bgp 1111
 neighbor 192.168.1.1 remote-as 2222
!
 address-family ipv4
  neighbor 192.168.1.1 activate
exit-address-family
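As a small variation, the same three values can be collected into a dictionary and unpacked into render() with **, which keeps the call tidy as the number of variables grows. This sketch uses an inline template so it runs on its own; the bgp_vars name is just an illustration.

```python
from jinja2 import Template

# Inline version of the BGP template, so the example is self-contained
bgp_template = Template(
    "router bgp {{local_asn}}\n"
    " neighbor {{bgp_neighbor}} remote-as {{remote_asn}}\n"
)

# Collect the template variables in one dictionary...
bgp_vars = {
    'local_asn': '1111',
    'bgp_neighbor': '192.168.1.1',
    'remote_asn': '2222',
}

# ...and unpack it into render() with **
output = bgp_template.render(**bgp_vars)
print(output)
```

A dictionary like this is also a natural shape for data loaded from a YAML file or an inventory system, so the same render call works whether the values are hard-coded or pulled from a source of truth.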

If some syntax will appear multiple times throughout our configuration, we can use for loops to remove the redundant repetition.

For Loops with Jinja2


The for loop allows us to iterate over a sequence, in this case ‘vlans’. Here we use one curly brace and a percent symbol. We are also using some whitespace control with the minus sign on the first and last lines. Adding a minus sign to the start or end of a block removes the whitespace before or after that block. (You can try this and see the difference in the output once the Python code has been run.) The last line tells Jinja2 that the loop is finished, and to move on with the template.

Create another Jinja2 file with the following.

{% for vlan in vlans -%} 
    {{vlan}}
{% endfor -%}

In the python code, we add a list of vlans.

from jinja2 import Environment, FileSystemLoader

#This line uses the current directory
file_loader = FileSystemLoader('.')
# Load the environment
env = Environment(loader=file_loader)
template = env.get_template('vlan.j2')
vlans = ['vlan10', 'vlan20', 'vlan30']
output = template.render(vlans=vlans)
#Print the output
print(output)

Now we can run the Python code and see our result.

STUACLAR-M-R6EU:automation_fun stuaclar$ python vlan_builder.py
vlan10
vlan20
vlan30
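The whitespace control mentioned above is easiest to see side by side. This small sketch renders the same loop with and without the minus signs, using inline templates so it runs on its own.

```python
from jinja2 import Template

vlans = ['vlan10', 'vlan20', 'vlan30']

# Without whitespace control, the newline after each block tag is kept,
# so a blank line appears before every vlan
plain = Template("{% for vlan in vlans %}\n{{vlan}}\n{% endfor %}")

# With the minus signs, the whitespace following the tags is stripped,
# giving one vlan per line with no blanks
trimmed = Template("{% for vlan in vlans -%}\n{{vlan}}\n{% endfor -%}")

plain_out = plain.render(vlans=vlans)
trimmed_out = trimmed.render(vlans=vlans)
print(plain_out)
print(trimmed_out)
```

For generated device configurations, stray blank lines are usually harmless but ugly, which is why the templates in this post use the trimmed form.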

All of the code for these examples can be found on my GitHub https://github.com/bigevilbeard/jinja2-template

Wednesday, 14 February 2018

Continuous Integration and Deployment for the Network

The need for a Continuous Integration and Deployment (CI/CD) pipeline in the network is growing. CI/CD helps counteract the inaccuracies that creep into daily network deployments and changes, which makes it critically important. With CI/CD and automated network testing, change releases become simpler, faster, and more streamlined.