Tuesday, 12 January 2021

Network Security and Containers – Same, but Different


Introduction

Network and security teams seem to have had a love-hate relationship with each other since the early days of IT. Having worked extensively and built expertise with both for the past few decades, we often notice how each has similar goals: both seek to provide connectivity and bring value to the business. At the same time, there are also certainly notable differences. Network teams tend to focus on building architectures that scale and provide universal connectivity, while security teams tend to focus more on limiting that connectivity to prevent unwanted access.

Often, these teams work together — sometimes on the same hardware — where network teams will configure connectivity (BGP/OSPF/STP/VLANs/VxLANs/etc.) while security teams configure access controls (ACLs/Dot1x/Snooping/etc.). Other times, we find that Security defines rules and hands them off to Networking to implement. Many times, in larger organizations, we find InfoSec also in the mix, defining somewhat abstract policy, handing that down to Security to render into rulesets that then either get implemented in routers, switches, and firewalls directly, or else again handed off to Networking to implement in those devices. These days Cloud teams play an increasingly large part in those roles, as well.

All-in-all, each team contributes important pieces to the larger puzzle albeit speaking slightly different languages, so to speak. What’s key to organizational success is for these teams to come together, find and communicate using a common language and framework, and work to decrease the complexity surrounding security controls while increasing the level of security provided, which altogether minimizes risk and adds value to the business.

As container-based development continues to rapidly expand, both the roles of who provides security and where those security enforcement points live are quickly changing, as well.

The challenge

For the past few years, organizations have begun to significantly enhance their security postures, moving from only enforcing security at the perimeter in a North-to-South fashion to enforcement throughout their internal Data Centers and Clouds alike in an East-to-West fashion. Granular control at the workload level is typically referred to as microsegmentation. This move toward distributed enforcement points has great advantages, but it also presents unique new challenges: where those enforcement points will be located, and how rulesets will be created, updated, and deprecated when necessary, with precise accuracy and at the same speed at which the business and its developers move.

At the same time, orchestration systems running container pods, such as Kubernetes (K8S), perpetuate that shift toward new security constructs using methods such as the CNI, or Container Network Interface. CNI provides exactly what it sounds like: an interface with which networking can be provided to a Kubernetes cluster. A plugin, if you will. There are many CNI plugins for K8S. Some are pure software overlays, such as Flannel (leveraging VxLAN) and Calico (leveraging BGP), while others tie the worker nodes running the containers directly into the hardware switches they are connected to, shifting the responsibility for connectivity back into dedicated hardware.

Regardless of which CNI is utilized, the instantiation of networking constructs shifts from traditional CLI on a switch to a sort of structured text-code, in the form of YAML or JSON, which is sent to the Kubernetes cluster via its API server.

With that groundwork laid, we can begin to see how things start to get interesting.

Scale and precision are key

As we can see, we are talking about having a firewall in front of every single workload and ensuring that such firewalls are always up to date with the latest rules.

Say we have a relatively small operation with only 500 workloads, some of which have been migrated into containers with more planned migrations every day.

This means that in the traditional environment we would need 500 firewalls to deploy and maintain, minus the workloads migrated to containers, which need their own way of enforcing the necessary rules. Now, imagine that a new Active Directory server has just been added to the forest and holds the role of serving LDAP. This means that a slew of new rules must be added to nearly every single firewall, allowing the workload protected by it to talk to the new AD server via a range of ports – TCP 389, 636, 88, etc. If the workload is Windows-based it likely needs to have MS-RPC open – so that means 49152-65535; whereas if it is not a Windows box, it most certainly should not have those opened.

Quickly noticeable is how physical firewalls become untenable at this scale in the traditional environments, and even how dedicated virtual firewalls still present the complex challenge of requiring centralized policy with distributed enforcement. Neither does much to aid in our need to secure East-to-West traffic within the Kubernetes cluster, between containers. However, one might accurately surmise that any solution business leaders are likely to consider must be able to handle all scenarios equally from a policy creation and management perspective.

Seemingly apparent is how this centralized policy must be hierarchical in nature, requiring definition using natural human language such as “dev cannot talk to prod” rather than the archaic and unmanageable method using IP/CIDR addressing like “deny ip 10.4.20.0/24 10.27.8.0/24”, and yet the system must still translate that natural language into machine-understandable CIDR addressing.

The only way this works at any scale is to distribute those rules into every single workload running in every environment, leveraging the native and powerful built-in firewall co-located with each. For containers, this means the firewalls running on the worker nodes must secure traffic between containers (pods) within the node, as well as between nodes.

Business speed and agility

Back to our developers.

Businesses must move at the speed of market change, which can be dizzying at times. They must be able to code, check that code in to an SCM like Git, have it pulled and automatically built, tested, and, if it passes, pushed into production. If everything works properly, we’re talking between five minutes and a few hours depending on complexity.

Whether five minutes or five hours, I have personally never witnessed a corporate environment where a ticket could be submitted to have security policies updated to reflect the new code requirements, and even hope to have it completed within a single day, forgetting for a moment about input accuracy and possible remediation for incorrect rule entry. It is usually between a two-day and a two-week process.

This is absolutely unacceptable given the rapid development process we just described, not to mention the dissonance experienced across disaggregated people and systems. This method is rife with problems and is the reason security is so difficult, cumbersome, and error prone within most organizations. As we shift to a more remote workforce, the problem becomes even further compounded as relevant parties cannot so easily congregate into “war rooms” to collaborate through the decision-making process.

The simple fact is that policy must accompany code and be implemented directly by the build process itself, and this has never been truer than with container-based development.

Simplicity of automating policy

With Cisco Secure Workload (Tetration), automating policy is easier than you might imagine.

Think with me for a moment about how developers are working today when deploying applications on Kubernetes. They will create a deployment.yml file, in which they are required to input, at a minimum, the L4 port on which containers can be reached. Developers have become familiar with enough networking and security policy to provision connectivity for their applications, but they may not be fully aware of how their application fits into the wider scope of an organization's security posture and risk tolerance.

This is illustrated below with a simple example of deploying a frontend load balancer and a simple webapp that’s reachable on port 80 and will have some connections to both a production database (PROD_DB) and a dev database (DEV_DB). The sample manifest for this deployment can be seen below in this `deploy-dev.yml` file:

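What follows is a minimal sketch of such a `deploy-dev.yml`; the object names, labels, replica count, and container image are illustrative assumptions rather than the original file’s contents:

# Hypothetical deploy-dev.yml – a webapp Deployment plus a LoadBalancer Service in front of it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: webapp
    env: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
        env: dev
    spec:
      containers:
      - name: webapp
        image: example/webapp:latest    # placeholder image reference
        ports:
        - containerPort: 80             # the app is reachable on port 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-frontend
spec:
  type: LoadBalancer                    # the frontend load balancer from the example
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80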

Now think of the minimal effort it would take to code an additional small YAML file specified as kind: NetworkPolicy and have it deployed automatically by our CI/CD pipeline at build time to our Secure Workload policy engine, which is integrated with the Kubernetes cluster and exchanges the label information we use to specify source or destination traffic – indeed, even specifying the only LDAP user that can reach the frontend app. A sample policy for the above deployment can be seen below in this `policy-dev.yml` file:

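Below is a hedged sketch of such a `policy-dev.yml`, written as a plain Kubernetes kind: NetworkPolicy. The label values and database port are assumptions, and the LDAP-user restriction mentioned above is enforced through the Secure Workload integration rather than by anything expressible in this manifest:

# Hypothetical policy-dev.yml – allow ingress to the webapp on port 80 and egress only to the dev database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-policy
spec:
  podSelector:
    matchLabels:
      app: webapp
      env: dev
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 80                # only the webapp's published port is reachable
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: dev-db         # hypothetical label on the DEV_DB pods
    ports:
    - protocol: TCP
      port: 5432              # illustrative database port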

As we can see, the level of difficulty for our development teams is quite minimal, essentially in line with the existing toolsets they are familiar with, yet it yields immense value for our organizations because the policy will be automatically combined with, and checked against, all existing security and compliance policy as defined by the security and networking teams.

Key takeaways


Enabling developers with the ability to include policy co-located with the software code it’s meant to protect, and automating the deployment of that policy with the same CI/CD pipelines that deploy their code provides businesses with speed, agility, versioning, policy ubiquity in every environment, and ultimately gives them a strong strategic competitive advantage over legacy methods.

Monday, 11 January 2021

McMahons Builders Providers Deliver Exceptional Customer Experiences for Another 190 Years


McMahons Builders Providers is one of Ireland’s largest independent building providers, offering quality building supplies and do-it-yourself materials to the trade and public since 1830.

Reliable and secure WAN connectivity key to continued success

With 14 retail stores spread across the Republic and Northern Ireland plus a Roof Truss manufacturing plant, WAN connectivity is critical to McMahons Builders Providers’ operations. WAN outages result in orders that can’t be taken through its centralized point-of-sale system, disrupting sales and impacting customer experiences. McMahons needs WAN connectivity that is fully redundant, secure, and manageable. And for this, McMahons turned to Logicalis and Cisco.

Logicalis Managed Services provided a one-stop shop to assess, design, and build a new WAN and server environment, greatly improving McMahons’ network reliability and security while simplifying overall manageability.

McMahons was due to upgrade its aging connectivity and server environment. Its new IT manager requested a move to a more centralized environment. McMahons no longer wanted server infrastructure in its retail stores. Instead, management wanted a fully redundant and secure centralized system that would allow expansion while reducing cooling and power needs.  McMahons also wanted offsite backup and disaster recovery, all at an affordable price. Decision makers looked at cloud solutions but favored the Logicalis and Cisco design.

A solution, not boxes


In answer to these requests, Logicalis didn’t just sell McMahons a host of boxes, leaving it up to the company to figure out how to assemble and manage an optimized solution. Rather, Logicalis worked with McMahons Builders Providers to truly understand its business and technical challenges and then designed and implemented a Cisco-based solution to meet its needs.

For example, all site-to-site traffic flows through Cisco Meraki firewalls, adding high availability to the McMahons WAN; the Meraki solution decides whether to route traffic over MPLS or VPN at each site. The Meraki solution also helps to ensure that McMahons has improved visibility into what is happening across its network, including all store and corporate locations. Security and performance of the network have been greatly enhanced with the adoption of Meraki as the standard site firewall at McMahons.

As another example, Logicalis implemented a Cisco HyperFlex solution with offsite Cisco servers for backup and disaster recovery. This helped reduce rack space, cooling requirements and power consumption, while minimizing any day-to-day management overhead. With this onsite steady state environment, McMahons gained more control of its IT resources while also reducing overall costs.

Logicalis also leveraged Cisco’s broad network of partners to enhance the overall solution. Consider this scenario: A Cisco server running VMware Hypervisor is located in a remote disaster recovery site, providing offsite disaster recovery. In addition, a separate Cisco server provides all system backups, running Veeam Backup and Replication software. Together, the Cisco and Veeam solution helps keep McMahons applications and data available 24/7, giving the company a reliable backup and recovery solution that simply works, requiring limited IT staff intervention.

Finally, as part of its fully managed service offering, Logicalis also provides on-going Tier 3 support, helping to ensure the reliability and security of the infrastructure.

Laying the foundation for continued years of success

With its new connectivity and server environment, store associates don’t experience point of sales downtime that might inhibit their ability to process transactions. That means customers can buy merchandise any time the store is open. In addition, McMahons has experienced increased performance and reliability of its infrastructure, along with reduced cooling and power consumption.

The McMahons IT team manages the new Meraki- and HyperFlex-based environments on a day-to-day basis. They are much more easily managed than the earlier environment, which is important to McMahons because it has a small IT department. For example, in the past, IT staff would have had to come in over the weekend to do a firewall upgrade. Now, an IT staff member can perform a firewall upgrade remotely through an app on an iOS device.

IT staff productivity has increased as well. Trips to the computer room are now a rare occurrence, and visits to remote sites to manage IT infrastructure have significantly decreased. All these factors help to reduce ongoing IT costs and enable IT to focus on new projects and customer service improvements.

Staying in business for 190 years is no small feat. No doubt, continuing to satisfy customers day in and day out is key to this success. By partnering with Logicalis Managed Services and Cisco, McMahons Builders Providers is at the cutting edge of its digital journey to providing exceptional customer experiences.

Sunday, 10 January 2021

Security Outcomes Report: Top Findings from Around the World


The Security Outcomes Study has been out for a few weeks now and I’ve had time to sit back and read it over with coffee in hand. The report empirically measures what factors drive the best security outcomes. The part that really caught me from the outset was the fact that this was based on a survey wherein the respondents didn’t in fact know that it was for Cisco. I think this is a point that absolutely must be highlighted right from the beginning. It was interesting to look at how the respondents set themselves apart from each other when a geographic lens was focused on the collected data.

To be quite clear, there were many similarities between the different regions around the world. Whether in APJC, EMEAR, or the Americas, the data showed a significant push toward technology refresh in every region. The study shows a significant improvement in security when organizations have a proactive approach to refreshing their IT and security technology. This makes sense: rather than continuing to operate on systems and software that may be deprecated, organizations that create refresh projects can mitigate a significant number of security issues that have been lingering for a multitude of reasons. This helped organizations alleviate some of their accumulated security debt.

Now as we break out into different regions, we see that the priorities tend to diverge. When we look at the data collected from APJC, we see that some of the focal points (the squares in the matrix with the darkest shades of blue), such as building executive confidence in threat detection so as to secure more budget, are a challenge. This is the top-rated point from the survey’s respondents in Asia.


The data from EMEAR, however, shows an increased focus on proactive tech refresh with the goal of meeting compliance regulations. Here too, as in APJC, cost effectiveness is important. Timely incident response also ranks high for managing the top security risks facing organizations. The top-listed data point for EMEAR is, hands down, working to meet compliance regulations, at 11.2%.


Now as we shift our discussion to the Americas, we see that the priorities change again. In contrast to the APJC and EMEAR regions, threat detection and security budgeting don’t register in the Americas data. Two items leap off the page as priorities in the Americas. First is a focus on running a cost-effective shop with well-integrated technology. The second, which ranks highest overall, is the need to retain security talent to help manage those well-integrated technology deployments.


This survey was a bit of an eye opener for me personally, as I did not expect that a proactive technology refresh program would be as much of a focus for organizations as it is. However, it does make sense. A tech refresh program goes a long way toward managing the accrual of security debt and alleviating issues that risk management has not been able to close out.

This was really rather amazing reading for a survey driven study and my hat is off to the team who drove this project and the incredible insights that it provides, not only from a sheer statistical point of view but also from the perspective of a regional break out.

Saturday, 9 January 2021

Trustworthy Networking is Not Just Technological, It’s Cultural – Part 3


Part 3: Developing a Culture of Trust

In my two previous posts on the topic of trustworthy networking, I’ve focused on the multiple technologies Cisco designs and embeds into all our hardware and software and how they work together to defend the network against a variety of attacks. I explored how it’s not just about the trust technologies but also about the culture of trustworthy engineering that is the foundation of all that we do. In this post I’ll focus on how Cisco builds and maintains a culture of trustworthiness.

But first, what is culture? What does trustworthy mean? Just as there is a diversity of human societies, there are different characterizations of culture and trust.

Fusing several definitions, we can summarize culture as:

◉ The quality in a person or society that arises from a concern for what is regarded as excellent in arts, letters, manners, scholarly pursuits, etc. and provides important social and economic benefits.

◉ Culture enhances our quality of life and increases overall well-being for both individuals and communities.

Trustworthy is another word with a variety of implications:

◉ Trust describes something you can rely on, and the word worthy describes something that deserves respect.

◉ Trust is intangible – it is an intellectual asset, a skill, and an influencing power for leaders. Showing trustworthiness by competence, integrity, benevolence, and credibility makes a difference in daily leadership work.

◉ Trustworthy describes something you can believe in — it’s completely reliable.

Therefore, a culture of trustworthiness provides a consistent approach to designing, building, delivering, and supporting secure products and solutions that customers can rely on to “do what they are expected to do in a verifiable way”. When engineers approach product design and development with integrity, secure product functionality, and the safety of customer data in mind from day one of a project, the outcome has an excellent chance of being trustworthy. Let’s look at how security leadership permeates Cisco culture with reliability and credibility through education, social contracts, and strict adherence to the Cisco Secure Development Lifecycle (CSDL).

A Culture of Trustworthiness Starts with Continuous Security Education

Designing trustworthy networks requires a commitment to professional improvement, with deep learning into secure technologies, threat awareness, and industry-standard principles. At Cisco this education starts with the Cisco Security Space Center program, which every employee and contractor must complete to varying levels of proficiency depending on their job. To date, over 75,000 people in the Cisco workforce have completed the required levels of security training. This greatly increases security awareness throughout the organization. It also gives the workforce a common language to discuss the principles of trustworthy design and support.

Pervasive cultural security also requires a legion of advocates inclusive of Cisco employees, vendors, partners, and customers. For example, embedded in every aspect of engineering are Security Advocates who advise, monitor, and report on the implementation of trustworthy security processes. Advocates pride themselves on having a thorough understanding of Cisco Security Space Center training. Security and Vulnerability Audits provide assurance that CSDL is followed, and problems uncovered during the development and testing cycle cannot be ignored. Audit teams report not to engineering management but to the C-suite, ensuring that problems are completely fixed or a release is red-lighted until they are remediated. This is another example of a culture of trust that permeates across functional departments all the way to the C-level—all in service of protecting the customer.

Threat modeling is another skillset reinforced through training and applied consistently throughout the development lifecycle. It represents a repeatable process for identifying, understanding, and prioritizing solution security risks. Engineers analyze external interfaces, component interactions, and the flow of data through a system to identify potential weaknesses where solutions might be compromised by external threats.

Development security policies not only set the rules for protecting the organization, but also protect investments across people, processes, and technology.

◉ Employee and supplier codes of conduct are signed annually to keep people focused on the importance of trust and their promise to deliver secure products across the value chain and never intentionally do harm.

◉ Enterprise information security and data protection policies are aligned with security standards like ISO 27001.

◉ Using site audits to continuously monitor Cisco and partner development properties ensures that physical security policies—such as camera monitoring, security checkpoints, alarms and electronic or biometric access control—are being maintained.

◉ Data protection and incident response policies are available to customers to help them understand the processes Cisco has in place to protect their data privacy and the actions that will be taken should a data breach occur.

◉ The Product Security Incident Response Team (PSIRT) is independent from engineering and is critical to keeping an unbiased watchful eye on all internally and externally developed code. Anyone at Cisco, customers, and partners can report security issues in shipping code and be assured that they will be logged and addressed appropriately.

Tailoring Cisco Secure Development Lifecycle (CSDL) to Solution Type

We examined the Cisco Secure Development Lifecycle in Part 1 of this series, but considering how rapidly networks are evolving to accommodate “data and applications everywhere” and the dispersal of the workforce from campus environments, it deserves another look in relation to the culture of trust. Development techniques must constantly evolve to address the emerging security threats that come with this increasingly dispersed workplace, which means secure development processes must be adapted depending on the type of solution and where it is deployed:

◉ on-premises networking device

◉ appliance running application

◉ network controller/management

◉ application running in the cloud

◉ combination of on-prem and cloud; aka hybrid cloud.

During development, engineers are trained to approach each of these according to the end deployment. For example, standardized toolsets, such as Cisco Cloud Maturity Model (CCMM), provide a consistent method to assess the quality of all of Cisco’s SaaS offerings. It includes evaluations of many quality attributes, such as availability, reliability, security, scalability, etc. CCMM provides a quantitative and standardized method to gauge the health of all Cisco cloud offerings.

Infusing a Culture of Trust Throughout the Value Chain

If a trustworthy culture stopped at the walls of Cisco and the minds of our employees, there would still be room for bad actors and malicious code to wreak havoc. That’s why Cisco extends our trustworthy principles to partners and suppliers throughout the value chain. We strive to put the right security in the right place at the right time to continually assess, monitor, and improve the security of our value chain throughout the entire lifecycle of Cisco solutions.

Cisco Trust Value Chain

Cisco value chain security continually assesses, monitors, and improves the security of our partners who are third-party providers of hardware components, assembly, and open-source software that are an integral part of our solutions’ life cycles.

We strive to ensure that our solutions are genuine and not counterfeited or tainted during the manufacturing and shipment processes. The steps Cisco and our partners adhere to help ensure that our solutions operate as customers direct them to and are not controlled or accessed by unauthorized rogue agents or software threats.

These investments in our people and partners, along with services like Technology Verification, help Cisco provide a comprehensive plan that covers how and what we are doing to support the security, trust, privacy, and resiliency of our customers. Earning customer trust is about being transparent and accountable as we strive to connect everything securely.

To understand our complete Trustworthy Networking story, please refer to Part 1: The Technology of Trust and Part 2: How Trustworthy Networking Thwarts Security Attacks of this blog series, as well as The Cisco Trust Center web site.

Thursday, 7 January 2021

Network Automation with Cisco DNA Center SDK – Part 2


In Part 1 of this series we went through setting up the SDK, importing it into your Python project, and making your first call using the SDK to authenticate against an instance of the DevNet Sandbox.

Let’s roll up our sleeves and dig into the next part of utilizing the SDK. In this installment of the blog series, we will look at how simple it is to leverage python to programmatically run IOS commands throughout your entire infrastructure with DNAC’s command runner APIs. We will also assume you already have installed the SDK and understand how authentication works.

What is Command Runner?

Command Runner is a feature in Cisco DNA Center that allows you to execute a handful of read-only (for now) IOS commands on the devices managed by DNAC.

Here is how you can get a list of all supported IOS commands.

commands = dnac.command_runner.get_all_keywords_of_clis_accepted()

dnac – Our connection object we created from Part 1 of the series

command_runner – Command runner class. Calling it will allow us to access the underlying methods

get_all_keywords_of_clis_accepted() – This is the method we are after to display a list of all supported keywords.
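As a quick sanity check, you can print what comes back. The exact shape of the response object can vary, so treat this as a hedged sketch rather than canonical SDK output handling:

# Assuming `commands` holds the result of the call above and exposes a
# `response` list (the usual DNAC SDK shape), print each supported keyword.
for keyword in commands.response:
    print(keyword)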

Make sense? Now that we understand what Command Runner is, let’s dig into using the APIs to build a simple use case.

Command Runner flow

The use case we are about to build together is a simple configuration backup. In order to accomplish this task we will need to:

1. Retrieve a list of all managed devices

2. Execute a `show run` on each device using Command Runner APIs

3. Retrieve the results and save them to file

But before we do so, understanding Command Runner flow is prudent.


Cisco DNA Center API calls are asynchronous, which means a task id is created for each executed task. Upon task completion, the content can be retrieved from the /file endpoint.

Endpoints and methods used

◉ POST /dna/system/api/v1/auth/token
◉ GET /dna/intent/api/v1/network-device
◉ POST /dna/intent/api/v1/network-device-poller/cli/read-request
◉ GET /dna/intent/api/v1/task/{task_id}
◉ GET /dna/intent/api/v1/file/{file_Id}

This sounds too complex, right? Not really, with the help of our handy dandy SDK we are able to handle all of this very easily.

Let’s take it step by step


Authenticate

– Create a new connection object and assign it to a variable

dnac = DNACenterAPI(username=dnac_creds['username'], password=dnac_creds['password'], base_url=dnac_creds['url'])

Retrieve list of devices

– Use the devices class to call the get_device_list() method and retrieve a list of all managed devices.
– Upon a 200 OK, loop through the list of Switches and Hubs and extract each device id; we need it to programmatically run the command on each device
– Access each device id via device.id and pass it to the cmd_run() function

def get_device_list():
    devices = dnac.devices.get_device_list()
    devicesuid_list = []
    # Collect the UUID of every switch, then hand the list to cmd_run()
    for device in devices.response:
        if device.family == 'Switches and Hubs':
            devicesuid_list.append(device.id)
            print("Device Management IP {} ".format(device.managementIpAddress))
    cmd_run(devicesuid_list)

Execute Command Runner

– As we iterate over each device, we will need to execute the show run command. To do so, use the command_runner class and call the run_read_only_commands_on_devices() method. This method requires two inputs of type list: commands and deviceUuids
– Upon execution DNAC will return a taskId (asynchronous, remember?)
– Check its progress via task class by calling get_task_by_id() method. Once the task has been successfully executed (you can use the built-in error handling within the SDK to check but that’s for another blog post) grab the returned fileId
– Now simply access the file class and call the download_a_file_by_fileid() method et VOILA!

def cmd_run(device_list):
    for device in device_list:
        print("Executing Command on {}".format(device))
        run_cmd = dnac.command_runner.run_read_only_commands_on_devices(commands=["show run"], deviceUuids=[device])
        print("Task started! Task ID is {}".format(run_cmd.response.taskId))
        task_info = dnac.task.get_task_by_id(run_cmd.response.taskId)
        task_progress = task_info.response.progress
        print("Task Status : {}".format(task_progress))
        # Poll the task until CLI Runner has finished creating the request
        while task_progress == 'CLI Runner request creation':
            task_progress = dnac.task.get_task_by_id(run_cmd.response.taskId).response.progress
        # The completed task's progress field is a JSON string containing the fileId (requires `import json`)
        task_progress = json.loads(task_progress)
        cmd_output = dnac.file.download_a_file_by_fileid(task_progress['fileId'])
        print("Saving config for device ... \n")

Congratulations!

That was a lot! Luckily the SDK handled a lot of the heavy lifting for us here. This is a great example of configuration management. You could use this as a base to start building out a simple configuration drift monitoring tool given that the config is returned as JSON data. We can easily use JSON query to check for any configuration drift and automatically rebase it to the original config. This can be taken a step further even by leveraging Git for version control of your device config.

Wednesday, 6 January 2021

Network Automation with Cisco DNA Center SDK – Part 1


From orchestrating deployment automation to network management and assurance, Cisco DNA Center controller is the brain of network automation. With Cisco’s API-first approach in mind, Cisco DNA Center enables developers to build network applications on top of DNA Center.

Let’s start from the beginning

For this blog, I’m going to start from the very beginning:

◉ you don’t need prior knowledge of Cisco DNA Center

◉ you don’t need to know advanced coding and networking concepts

◉ you will need basic Python and API knowledge to understand the utilization of the SDK

If you do not have basic Python and API knowledge … don’t worry. DevNet has some great resources to get you started with the basics.

Start Now – Cisco DNA Center SDK

At this point you should have your developer environment ready to go; if you don’t, this DevNet module should help.

Now let’s make sure we install our SDK before we start using it.

The DNA Center SDK is available via pip from the Python Package Index (PyPI). To install it, simply run:

$ pip install dnacentersdk

Working with the DNA Center APIs directly, without the help of the Python SDK, is pretty straightforward; however, when you are looking to write network automation, the code can become rather repetitive.

import requests
from requests.auth import HTTPBasicAuth  # needed for the basic-auth call below

DNAC_URL = [DNA Center host url]
DNAC_USER = [username]
DNAC_PASS = [password]

def get_auth_token():
    """
    Building out Auth request. Using requests.post to make a call to the Auth Endpoint
    """
    url = 'https://{}/dna/system/api/v1/auth/token'.format(DNAC_URL)
    hdr = {'content-type': 'application/json'}
    resp = requests.post(url, auth=HTTPBasicAuth(DNAC_USER, DNAC_PASS), headers=hdr)
    token = resp.json()['Token']
    print("Token Retrieved: {}".format(token))
    return token

Using the `requests` library, we make a POST call to the /auth/token endpoint. If the result is 200 OK, it returns an authentication token that must be sent in the `X-Auth-Token` header of every subsequent API call, which means we have to call the get_auth_token() function each time the token needs refreshing. This is what we are trying to avoid, as you can see how repetitive this could get.
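To see just how repetitive this becomes, here is a hedged sketch of what a follow-on call looks like without the SDK, using the device-list endpoint that appears later in this series; the URL and header boilerplate is the part you end up copying into every function:

# Every subsequent request has to rebuild the URL and attach the token by hand.
def get_device_list(token):
    url = 'https://{}/dna/intent/api/v1/network-device'.format(DNAC_URL)
    hdr = {'x-auth-token': token, 'content-type': 'application/json'}
    resp = requests.get(url, headers=hdr)
    return resp.json()['response']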

Life with the Cisco DNA Center SDK

The DNA Center SDK saves you an insane amount of time by not requiring you to:

◉ Setup the environment every time

◉ Remember URLs, request parameters and JSON formats

◉ Parse the returned JSON and work with multiple layers of list and dictionary indexes

Enter dnacentersdk (cue The Next Episode and drop the shades 🕶 )

With dnacentersdk, the above Python code can be consolidated to the following:

from dnacentersdk import DNACenterAPI

DNAC_URL = [DNA Center host url]

DNAC_USER = [username]

DNAC_PASS = [password]

dnac = DNACenterAPI(username= DNAC_USER, password= DNAC_PASS, base_url= DNAC_URL)

Let’s dig into the code here to see how simple it is:

1. Make the SDK available in your code by importing it

from dnacentersdk import DNACenterAPI

2. Define your DNA Center’s host url, username and password

DNAC_URL = [DNA Center host url]

DNAC_USER = [username]

DNAC_PASS = [password]

3. Create a DNACenterAPI connection object and save it

dnac = DNACenterAPI(username= DNAC_USER, password= DNAC_PASS, base_url= DNAC_URL)

From this point on, in order to access the subsequent API calls, you don’t have to worry about managing your token validity, API headers or Rate-Limit handling. The SDK does that for you.

Another great feature of the SDK is that it represents all returned JSON objects as native Python objects, so you can access all of an object’s attributes using native dot syntax!
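As a quick illustration, here is a small sketch; the attribute names below (hostname, managementIpAddress) are typical DNA Center device fields and are shown for illustration only:

# Returned objects support attribute-style access instead of dictionary indexing.
devices = dnac.devices.get_device_list()
for device in devices.response:
    # equivalent to device['hostname'] and device['managementIpAddress']
    print(device.hostname, device.managementIpAddress)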

Congratulations!

At this point you have a working developer environment and a Python project that leverages the DNA Center SDK. In the next installment of this series, I’ll walk you through building on top of the code we started here and begin exploring how to leverage the SDK to automate some of the tasks within Cisco DNA Center. For future reference, everything I have mentioned here – from SDK documentation to code – can be found on Cisco DevNet.

Source: cisco.com

Tuesday, 5 January 2021

The Darkness and the Light


Introduction

The psychoanalyst Carl Jung once said, “One does not become enlightened by imagining figures of light, but by making the darkness conscious. The latter procedure, however, is disagreeable and therefore not popular.”

With a quote as profound as this, one feels obligated to start by saying that workload security isn’t nearly as important as the concept of personal enlightenment that Jung seems to point to. The two admittedly are worlds apart. Yet, if you’ll allow it, I believe there is wisdom here that might be applied to the situation we find ourselves faced with, namely reducing our business risk by securing our workloads.

The challenge

Many organizations seek an acceptable balance between the lowest spend and the highest possible value. Any business not following this general guideline may soon find itself out of cash. A common business practice is to perform a cost-benefit analysis (CBA). Many even take risk and uncertainty into account by adding sensitivity analysis to variables in their risk assessment as a component of the CBA. However, as well-meaning as many folks are, they may often focus on the wrong benefit. With security, the highest benefit is often finding the lowest risk, but again one must ask, “What are our potential risks?”

Often when risks are evaluated, folks tend toward asking questions from the perspective of outside looking in, determining whom they want to let through their perimeter defenses and where they want them to be able to go. These questions often get answered with something perhaps as nebulous as ‘our employees should be able to access our applications’. Even when they get answered in more detailed fashion, perhaps detailing groups of users needing access to specific applications, they often don’t realize that, while not entirely meaningless, they are fundamentally asking altogether the wrong questions.

The perspective those questions come from is where the failure begins. Deep Throat’s dying words to Scully are perhaps the most appropriate and in fact the very premise from which we shall begin: “Trust no one.”

Trust no one

Most reading this will be familiar with the industry push towards Zero Trust, and while that isn’t the focus of this article, certain aspects of the concept are quite pertinent to our topic of exposing potential darkness in our systems and policies: aspects such as not trusting yourself or the well-configured security constructs put in place.

The questions to start by asking yourself are:

◉ If your organization were compromised, how long would it take you to know?

◉ Would you even ever know?

◉ Do you trust your existing security systems and the team that put them in place?

◉ Do you trust them enough not to watch your systems closely and set triggers to alert you to undesired behavior?

Most folks believe they are quite secure, but like most beliefs, this comes from the amygdala, not the prefrontal cortex. Meaning this is based on feeling, not on rational, empirical data backed by penetration-tested proof.

I spent a decade helping folks understand the fundamentals necessary to take and pass the CCIE Voice (later Collaboration), CCIE Security, and CCIE Data Center exams. Often this would look like me and 15-20 students holed up in some hotel meeting room in some corner of the globe for 14 days straight. Often, during an otherwise quiet lab time, someone would ask me to help them troubleshoot an issue they were stuck on. Regardless of the platform, I’d ask them if they could go back and show me the basics of their configuration. Nearly every time the student would assure me that they had checked those bits, and everything was correct. They were certain the issue was some bug in the software. Early on in my teaching career I’d let them convince me and we’d both spend an hour or more troubleshooting the complex parts of the config together, only to at some point go back and see that, sure enough, there’d be some misconfiguration in the basics.

As time went on and I gained more experience, I found it was crucial to short-circuit this behavior and check their fundamentals to start. When they would inevitably push back saying their config was good, I’d reply with, “It’s not you that I don’t trust, it’s me. I don’t trust myself and with that, if you would just be so kind as to humor me and show me, I’d be truly grateful.” This sort of ‘assuming the blame’ would disarm even the most ardent detractor. After they’d humored me and gone back to the beginning to review, we’d both spot the simple mistake that anyone could have just as easily made and they’d sheepishly exclaim something such as, “How did that get there!?!? I swear I checked that, and it was correct!”. Then it would hit them that perhaps they actually did make a mistake, and they would go on to fix it. What was far more important to me than helping them fix this one issue was helping them learn not to trust themselves, and in so doing, begin a habit that would go on to benefit them in the exam and, I’d like to believe, in life. What they likely didn’t know was how much this benefitted me. It reinforced my belief in not trusting myself, but rather setting up alerts, triggers, and even other mnemonics that always forced me to go back and check the fundamentals.

Lighting up the darkness

So, how does all of this apply to workload protection?

Organizations have many applications, built by many different teams on many different platforms running on many different OSes, patch levels, having different runtimes and calling different libraries or classes. Surprisingly, many of these are often not well understood by those teams.

Crucial to business security is understanding the typical behavior in an organization’s workloads. Once understood, we can begin to create policy around each one. However, policy alone is not enough to be trusted. Beyond implementing L4 firewall rules in each workload, it’s important to closely monitor all activity happening. Watching the OS, the processes, the file system, users’ shell commands, privilege escalation from a user login or a process, and other similar workload behaviors is key to knowing what’s actually happening rather than trusting what should be.

An example might be someone cloning a git repo containing some post-exploitation framework, something such as Empire or PoshC2, to use once they gain initial access after exploiting some vulnerability, then testing different techniques to elevate their privileges, perhaps through a valid-account attack or by hijacking a software process using an exploitation-for-privilege-escalation attack.

This isn’t by any means a new sort of attack. Nor is the knowledge that workload behaviors must be actively monitored.

So why then does this remain such a problem?

The challenges have been in collecting logs at scale, parsing them in the context of every other workload’s actions, and garnering useful insights. While central syslog collection is necessary, there remain some substantial drawbacks, primarily with that last bit about context. Averting so-called zero-day attacks requires live, contextual monitoring, such as is achieved through this type of active forensic investigation.

A better source of light

How do we cast the proper light on only activity we’re interested in?

How do we help our workloads have a sort of -again, if you’ll allow the rough metaphor- collective conscious?


Cisco Secure Workload is based primarily on distributed agents installed on every workload, constantly sending telemetry back to a central cluster. Think of them as Varys’s informants: “My little birds are everywhere.” -The Master of Whisperers, GOT

These informants play a dual role: first, reporting back to the cluster what I like to call the 3 P’s: Packages (installed), Processes (running), and Packets (Tx/Rx’d); and second, obtaining from the cluster a set of firewall rules specific to each workload. They also gather the very type of forensic activity we’ve been discussing. This is done with the collective knowledge and context of every other workload’s behavior.

Cisco Secure Workload gives us great power in defining the behaviors we wish to monitor for, and we can draw from a comprehensive pre-defined list, as well as write our own.

Aggressive Disclosure

Some new regulations require that breaches of an organization be reported quickly, such as GDPR, where reporting is mandated within 72 hours of each occurrence. Most regulations don’t require that aggressiveness in reporting, but organizations are being taken to task over inadequate measures, as in the case of HBR’s report on a hotel chain breach.

Hackers were camping out for four years in the workloads of a smaller hotelier that the chain acquired. FOUR YEARS! That is an awfully long time to not know that you’ve been pwned. What I wonder is, how many more organizations have breached workloads today with no knowledge or insight into them. Complete darkness, one might say.

As Jung might have appreciated, it’s time to make that darkness conscious.

Key takeaways

1. Don’t rush security policies. Get key stakeholders in the same virtual room, discuss business, application, and workload behavior. Ask questions. Don’t ask with a grounding in known technological capabilities. Ask novel questions. Ask behavioral questions such as “how should good actors behave, who are those good actors, and what bad behavior should we be monitoring and alerting for.” Ensure wide participation with folks from infosec, governance, devops, app owners, cloud, security, and network teams, to name a few.

2. Evaluate carefully the metrics you are using for CBAs and, if you’re not sure whether you are using the best metrics, ask a trusted advisor, someone who has been down this path many times, about what you should be measuring.

3. Trust no one. Not yourself, not the security policies put in place. Test and monitor everything.

4. Cast a bright, powerful light into your workload behavior. Deploy little birds to every workload and have them report behavioral telemetry back to a central, AI-driven policy engine, such as Tetration. Turn all of your workloads, regardless of whether they live in a single data center or are spread out across 15 clouds and DCs, into a single consciousness.

5. Be sure you can meet current and future laws on aggressive reporting in less time than regulations call for. You want this knowledge for yourself in as short of time as possible so that you can take meaningful action to remediate, even if you aren’t subject to such regulations.

Be vigilant in monitoring and revisiting the basics often. By staying humble, questioning everything, and going back to the basics, you likely will find ways of tightening security while simplifying access.

Source: cisco.com