Thursday 4 March 2021

Enable Consistent Application Services for Containers


Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and even more powerful at helping enterprises manage their data centers, not just the cloud. While enterprises deal with the challenges of managing different types of modern applications (AI/ML, big data, and analytics) to process that data, they also face the challenge of maintaining top-level network and security policies and gaining better control of the workloads to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 Container Ingress Services (CIS) and Cisco ACI

Cisco ACI offers customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 CIS with Cisco ACI to bring L4-L7 services into the Kubernetes environment and further simplify the user experience of deploying, scaling, and managing containerized applications. This integration specifically enables:

◉ Unified networking: Containers, VMs, and bare metal

◉ Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies

◉ A single point of automation with enhanced visibility for ACI and BIG-IP.

◉ F5 Application Services natively integrated into container and Platform as a Service (PaaS) environments

One of the key benefits of this implementation is ACI encapsulation normalization. The ACI fabric, acting as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. Through a simple VLAN connection to ACI, and with no need for an additional gateway, BIG-IP can communicate with any service anywhere.



Solution Deployment


To integrate F5 CIS with the Cisco ACI for the Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.

Pre-requisites

The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:

◉ A working Cisco ACI installation

◉ ACI must be integrated with vCenter VDS

◉ Fabric tenant pre-provisioned with the required VRFs/EPGs/L3OUTs.

◉ BIG-IP already running for non-container workload

Deploying Kubernetes Clusters to ACI Fabrics

The following steps provide a complete cluster configuration:

Step 1. Run ACI provisioning tool to prepare Cisco ACI to work with Kubernetes

Cisco provides the acc-provision tool to provision the fabric for the Kubernetes VMM domain and to generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. If needed, download the provisioning tool.

Next, use the tool to generate a sample configuration file that you can edit:

$ acc-provision --sample > aci-containers-config.yaml
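For orientation, a trimmed excerpt of that sample file is shown below. The field names follow the general acc-provision format, but the exact keys and defaults vary by tool version, and every value here is a placeholder you would replace with your own fabric details.

aci_config:
  system_id: mykube                 # unique name for this Kubernetes cluster
  apic_hosts:
    - 10.1.1.101                    # APIC management address(es)
  vmm_domain:                       # Kubernetes VMM domain configuration
    encap_type: vxlan
    mcast_range:
      start: 225.20.1.1
      end: 225.20.255.255
  aep: kube-cluster                 # attachable entity profile for the cluster
  vrf:
    name: mykube-vrf
    tenant: common
  l3out:
    name: mykube-l3out              # pre-provisioned L3Out used for external traffic
    external_networks:
      - mykube-ext-epg
net_config:
  node_subnet: 10.1.0.1/16          # node network
  pod_subnet: 10.2.0.1/16           # pod network
  extern_dynamic: 10.3.0.1/24       # subnet for dynamically allocated external IPs
  kubeapi_vlan: 4001
  service_vlan: 4003
  infra_vlan: 4093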

You can now edit the sample configuration file to provide information about your network. With the edited configuration file in hand, run the following command to provision the Cisco ACI fabric:

acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI Plugin configuration File

The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.

Step 3. Preparing the Kubernetes Nodes – Set up networking for the node to support Kubernetes installation.

With ACI provisioned, you can start preparing the networking for the Kubernetes nodes. This includes steps such as configuring the VM interfaces toward the ACI fabric, configuring a static route for the multicast subnet, and configuring the DHCP client to work with ACI.

Step 4. Installing Kubernetes cluster

After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy Cisco ACI CNI plugin

When the Kubernetes cluster is up and running, copy the previously generated CNI configuration to the master node and install the CNI plug-in using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following:

◉ ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host

◉ Open vSwitch in a DaemonSet called aci-containers-openvswitch

◉ ACI Containers Controller in a deployment called aci-containers-controller.

◉ Other required configurations, including service accounts, roles, and security context
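Once applied, a quick way to confirm that these components are running is to list them with kubectl. The namespace is typically kube-system in this release, though it can differ by version:

kubectl get daemonset,deployment -n kube-system | grep aci-containers
kubectl get pods -n kube-system -o wide | grep aci-containers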


For the authoritative word on this specific implementation, refer to Cisco's published workflow for integrating Kubernetes into Cisco ACI, which has the latest guidance.

After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. The tenant has visibility of all the Kubernetes Pods.


Install the BIG-IP Controller


The F5 BIG-IP Controller (k8s-bigip-ctlr), or Container Ingress Services if you aren't familiar with it, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes and communicates them to BIG-IP delivered application services. These, in turn, keep up with the changes in container environments and enable the enforcement of security policies.

Once you have a running Kubernetes cluster deployed to the ACI fabric, you can follow the F5 CIS installation instructions to install the BIG-IP Controller.
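As a rough illustration of what that installation involves, the sketch below shows an abbreviated k8s-bigip-ctlr Deployment. The image tag, partition name, BIG-IP address, and secret names are placeholders, and the authoritative manifest and supported flags are in the F5 CIS documentation.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr              # service account with the required RBAC
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest  # pin a specific version in practice
          args:
            - --bigip-url=https://192.0.2.10       # BIG-IP management address (placeholder)
            - --bigip-username=$(BIGIP_USERNAME)
            - --bigip-password=$(BIGIP_PASSWORD)
            - --bigip-partition=kubernetes         # partition that CIS manages on BIG-IP
            - --pool-member-type=cluster           # pool members are Pod IPs, reachable via the ACI CNI
            - --insecure=true                      # lab only; use proper TLS verification in production
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef: { name: bigip-login, key: username }
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef: { name: bigip-login, key: password }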

Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.
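For example, assuming the controller was deployed to the kube-system namespace:

kubectl get deployment k8s-bigip-ctlr -n kube-system
kubectl get pods -n kube-system | grep k8s-bigip-ctlr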


BIG-IP as a north-south load balancer for External Services


For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing; the load-balancing network function is expected to be implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available on Cisco Nexus 9300-EX and FX leaf switches in ACI mode.

This is where BIG-IP Container Ingress Services (or CIS) comes into the picture, as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to BIG-IP for that particular service.


If a Kubernetes cluster contains more than one Pod for a particular service, BIG-IP load balances the traffic across all the Pods for that service. In addition, each new Pod is added to the BIG-IP pool dynamically.
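To see that dynamic pool behavior in action, you can scale a Deployment backing an exposed service and watch the BIG-IP pool membership follow. The Deployment name below is purely illustrative:

kubectl scale deployment myapp --replicas=5   # CIS detects the new Pods and adds them to the BIG-IP pool
kubectl scale deployment myapp --replicas=2   # removed Pods are likewise withdrawn from the pool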


Tuesday 2 March 2021

Machine Reasoning is the new AI/ML technology that will save you time and facilitate offsite NetOps


Machine reasoning is a new category of AI/ML technologies that can enable a computer to work through complex processes that would normally require a human. Common applications for machine reasoning are detail-driven workflows that are extremely time-consuming and tedious, like optimizing your tax return by selecting the best deductions from the many available options. Another example is the execution of workflows that require immediate attention and precise detail, like the shut-off protocols in a refinery following a fire alarm. What both examples have in common is that executing each process requires a clear understanding of the relationships between the variables, including order, location, timing, and rules, because in a workflow each decision can alter subsequent steps.

So how can we program a computer to perform these complex workflows? Let's start by understanding how the process of human reasoning works. A good example in everyday life is the front door to a coffee shop. As you approach the door, your brain goes into reasoning mode and looks for clues that tell you how to open the door. A vertical handle usually means pull, while a horizontal bar could mean push. If the building is older and the door has a knob, you might need to twist the knob and then push or pull depending on which side of the threshold the door is mounted. Your brain does all of this reasoning in an instant, because it's quite simple and based on having opened thousands of doors. We could program a computer to react to each of these variables in order, based on incoming data, and step through this same process.

Now let's apply these concepts to networking. A common task in most companies is compliance checking, where each network device (switch, access point, wireless controller, and router) is checked for software version, security patches, and consistent configuration. In small networks this is a full day of work; larger companies might have an IT administrator dedicated to this process full-time. A cloud-connected machine reasoning engine (MRE) can keep tabs on your device manufacturer's online software updates and security patches in real time. It can also identify identical configurations for device models and organize them into groups, so as to verify consistency for all devices in a group. In this example, the MRE is automating a very tedious and time-consuming process that is critical to network performance and security, but a task that nobody really enjoys doing.

Another good real-world example is troubleshooting an STP loop in your network. Spanning Tree Protocol (STP) loops often appear after upgrades or additions to a Layer 2 access network and can cause data storms that result in severe performance degradation. The process of diagnosing, locating, and resolving an STP loop can be time-consuming and stressful. It also requires a certain level of networking knowledge that newer IT staff members might not yet have. An AI-powered machine reasoning engine can scan your network, locate the source of the loop, and recommend the appropriate action in minutes.

Cisco DNA Center delivers some incredible machine reasoning workflows with the addition of a powerful cloud-connected Machine Reasoning Engine (MRE). The solution offers two ways to experience the usefulness of this new MRE. The first is something many of you are already aware of, because it's been part of our AI/ML insights in Cisco DNA Center for a while now: proactive insights. When Cisco DNA Center's assurance engine flags an issue, it may send that issue to the MRE for automated troubleshooting. If there is an MRE workflow to resolve the issue, you will be presented with a run button to execute that workflow and resolve it. Since we've already mentioned STP loops, let's take a look at how that would work.

When a broadcast storm is detected, AI/ML can look at the IP addresses and determine that it’s a good candidate for STP troubleshooting. You’ll get the following window when you click on the alert:

Image 1: Broadcast storm detected

When you click the "Start Automate Troubleshooting" button, you spin up the machine reasoning engine and it traces the host flaps. If it detects STP loops, you'll see this window:

Image 2: STP Loops Detected

Image 3: STP loops identified by device and VLAN

Now click on View Details and the MRE will present the specifics for the related VLANs as well as a logical map of the loop with the names of the relevant devices and the VLAN number. All you need to do now is prune your VLANs on those switches, and you've solved a complex issue in just a couple of minutes. The ease with which this problem is resolved shows how the MRE can bridge the skills gap and enable less experienced IT members to proactively resolve network issues. It also demonstrates that machines can discover, investigate, and resolve network issues much faster than a human can. Eliminating human latency in issue resolution can greatly improve the user experience on your network.

Another example of a proactive workflow is the PSIRT alert, which flags Cisco devices that have advisories for software patches addressing bugs or vulnerabilities. You will see this alert automatically any time Cisco has released a PSIRT advisory that is relevant to one of your devices. Simply click the PSIRT alert and the software patch will be displayed and ready to load. The Cisco DNA Center team is working hard to create more proactive MRE workflows, so you'll see more of these automated troubleshooting solutions in future upgrades.

The second way to experience machine reasoning in Cisco DNA Center, is in the new “Network Reasoner Dashboard,” which is located in the “Tools” menu. There you will find five new buttons that execute automated workflows through the MRE.

Image 4: Network Reasoner Dashboard

1. CPU Utilization: There are a number of reasons that the CPU in a networking device would be experiencing high utilization. If you have ever had to troubleshoot this, you know that the remediation list for this is quite long and the tasks involved are both time-consuming and require a seasoned IT engineer to perform. This button works through numerous tasks, such as IOS process, packets per second flow, broadcast storm, etc. It then returns a result with specific guided remediation to resolve the issue.

2. Interface Down: Understanding the reasons an interface doesn't come up requires deep knowledge of virtual routing and forwarding (VRF). This means that your less experienced team members will likely escalate this issue to a higher-level engineer to be resolved. Furthermore, unless your switch supports advanced telemetry, you would need physical access to the switch in order to rule out a Layer 1 problem such as an SFP, cable, connector, or patch panel. This button compares the interface link parameters at each end and runs a loopback, ping, traceroute, and other tests before returning a result for the most likely cause.

3. Power Supply: Cisco Catalyst switches can detect power issues related to inconsistent voltage, fluctuating input, no connection, and so on. This is generally done on site with a visual inspection of the interface and LEDs. The MRE workflow uses sensors and logical reasoning to determine the probable cause, so press this button if you want to skip a trip to the switch site.

4. Ping Device: I know what you’re thinking, it’s so simple to ping a device. But, it does take time to open a CLI window and it’s a distraction from the window you have open. Now all you need to do is push a button and enter the target IP address.

5. Fabric Data Collection: Moving to a software defined network with a fully layered fabric and micro-segmentation has tremendous benefits, but it does take some training to master. This button will collect show command outputs from network devices for complete visibility of your overlay (virtual) network. Having clear visibility can help troubleshoot issues in your fabric network.

Now that you know what machine reasoning is and what it can offer your team, let's take a look at how it works. It all starts with Cisco subject matter experts who have created a knowledge base of processes required to achieve certain outcomes, based on best practices, defect signatures, PSIRTs, and other data. Using a workflow editor, these processes are encapsulated into a central knowledge base located in the Cisco cloud. When the AI/ML assurance engine in Cisco DNA Center sees an issue, it sends the issue to the MRE, which then uses inference to select a relevant workflow from the knowledge base in the cloud. Cisco DNA Center can then present remediation steps or execute a complete workflow to resolve the issue. In the case of the on-demand workflows in the Network Reasoner dashboard, the MRE simply selects the workflow from the knowledge base and executes it.

Figure 1: MRE architecture

If you're following my description of the process in the image above, you'll notice I left out a couple of icons in the diagram: Community, Partners, and Governance. Cisco is inviting our DevNet community and fabulous Cisco partners to create and publish MRE workflows. In conjunction with Cisco CX, we have developed a governance process, which works inside of our software Early Field Trials (EFT) program. This allows us to grow the library of workflows in the Network Reasoner window with industry-specific as well as other interesting and time-saving workflows. What tedious networking tasks would you like to automate? Let me know in the comments below!

If you haven't yet installed the latest Cisco DNA Center software (version 2.1.2.x), the newly expanded machine reasoning engine is a great reason to do it. Look for continued development of our AI/ML machine reasoning engine in the coming releases, with features for compliance verification (HIPAA and PCI DSS), network consistency checks (DNS, DHCP, IPAM, and AAA), security vulnerabilities (PSIRTs), and more.

Source: cisco.com

Monday 1 March 2021

Get Ready to Crack Cisco CCNP Security 300-710 Certification Exam

Cisco SNCF Exam Description:

The Securing Networks with Cisco Firepower v1.0 (SNCF 300-710) exam is a 90-minute exam associated with the CCNP Security and Cisco Certified Specialist - Network Security Firepower certifications. This exam tests a candidate's knowledge of Cisco Firepower® Threat Defense and Firepower®, including policy configurations, integrations, deployments, management, and troubleshooting. The courses Securing Networks with Cisco Firepower and Securing Networks with Cisco Firepower Next-Generation Intrusion Prevention System help candidates prepare for this exam.

Cisco 300-710 Exam Overview:

Exam Name: Securing Networks with Cisco Firepower
Exam Number: 300-710 SNCF
Exam Price: $300 USD
Duration: 90 minutes
Number of Questions: 55-65
Passing Score: Variable (approximately 750-850 out of 1000)
Recommended Training:
Exam Registration: Pearson VUE

Saturday 27 February 2021

Optimize Real-World Throughput with Cisco Silicon One


Switches are the heart of a network data plane and at the heart of any switch is the buffering subsystem. Buffering is required to deal with transient oversubscription in the network. The size of the buffer determines how large of a burst can be accommodated before packets are dropped. The goodput of the network depends heavily on how many packets are dropped.

The amount of buffer needed for optimal performance is mainly dependent on the traffic pattern and network Round-Trip Time (RTT).

The applications running on the network drive the traffic pattern, and therefore what the switch experiences. Modern applications such as distributed storage systems, search, AI training, and many others employ partition and aggregate semantics, resulting in traffic patterns that are especially effective in creating large oversubscription bursts. For example, consider a search query where a server receives a packet to initiate a search request. The task of mining through the data is dispatched to many different servers in the network. Once each server finishes the search it sends the results back to the initiator, causing a large burst of traffic targeting a single server. This phenomenon is referred to as incast.

Round-Trip Time

The network RTT is the time it takes a packet to travel from a traffic source to a destination and back. This is important because it directly translates to the amount of data a transmitter must be allowed to send into the network before receiving acknowledgment for data it sent. The acknowledgments are necessary for congestion avoidance algorithms to work and in the case of Transmission Control Protocol (TCP), to guarantee packet delivery.

For example, a host attached by a 100Gbps link to a network with an RTT of 16us must be allowed to send at least 1.6Mb (16us * 100Gbps) of data before receiving an acknowledgment if it wants to be able to transmit at 100Gbps. In the TCP protocol this is referred to as the congestion window size, which for a flow is ideally equal to the bandwidth-delay product.
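As a quick check of that arithmetic, the bandwidth-delay product for the example works out as:

\[
\text{BDP} = R \times \text{RTT} = 100\,\text{Gb/s} \times 16\,\mu\text{s} = 1.6\,\text{Mb} \approx 200\,\text{KB}
\]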

The amount of buffer a switch needs to avoid packet drops is directly related to this bandwidth-delay product. Ideally, a queue within a switch should have enough buffer to accommodate the sum of the congestion windows of all the flows passing through it. This guarantees that a sudden incast will not cause the buffer to overflow. For Internet routers this dynamic has been translated into a widely used rule of thumb: each port needs a buffer of average RTT times the port rate. However, the datacenter presents a different environment than the Internet. Whereas an Internet router can expect to see tens of thousands of flows across a port, with the majority of bandwidth distributed across thousands of flows, a datacenter switch often sees most of the bandwidth distributed over a few high-bandwidth elephant flows. Thus, for a datacenter switch, the rule is that a port needs at most the entire switch bandwidth (not just the port bandwidth) times average RTT. In practice this can be relaxed by noting that it assumes an extremely pessimistic scenario in which all traffic happens to target one port. Regardless, a key observation is that the total switch buffer is also the entire switch bandwidth times average RTT, just as in the Internet router case. Therefore, the most efficient switch design is one where all the buffer in the switch can be dynamically available to any port.

Figure 1. Buffer requirement based on RTT

To help understand the round-trip times associated with a network, let's look at a simple example. The RTT is a function of the network's physical span, the delay of intermediate switches, and the end-node delay (that is, the network adapters and the software stack). Light travels through fiber at about 5us per kilometer, so the contribution of the physical span is easy to calculate. For example, communication between two hosts in a datacenter with a total fiber span of 500 meters per direction will contribute 5us to the RTT. The delay through switches is composed of pipeline (minimum) delay and buffering delay.

Low delay switches can provide below 1us of pipeline delay. However, this is an ideal number based on a single packet flowing through the device. In practice, switches have more packets flowing through them simultaneously, and with many flows from different sources some minimum buffering in the switches is needed. Even a small buffer of 10KB will add almost 1us to the delay through a 100Gbps link.

Finally, optimized network adapters will add a minimum of two microseconds of latency, and often this is much more. So, putting this all together we can see that even a small datacenter network with 500 meters of cable span and three switching hops will result in a minimum RTT of around 16us. In practice, networks are typically never this ideal, having more hops and covering greater distances, with even greater RTTs.

Figure 2. Simple Datacenter Network – minimum RTT

As can be seen from the figure above, supporting a modest RTT of 32us in a 25.6T switch requires up to 100MB. It’s important to notice at this point that this is both the total required buffer in the switch and the maximum required buffer for any one port. The worst-case oversubscription to any one port is when all incoming traffic happens to target one port. In this pathological incast case, all the buffer in the device is needed by the victim port to absorb the burst. Other oversubscribing traffic patterns involving multiple victim ports will require that the buffer be distributed in proportion to the oversubscription factor among the victim ports.
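The same bandwidth-delay reasoning applied to the whole switch gives the figure quoted above:

\[
B_{\text{total}} = 25.6\,\text{Tb/s} \times 32\,\mu\text{s} = 819.2\,\text{Mb} \approx 100\,\text{MB}
\]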

It's also important to note that other protocols, like the User Datagram Protocol (UDP) utilized by Remote Direct Memory Access (RDMA), don't have the congestion feedback schemes used in TCP and instead rely on flow control to prevent packet loss during bursts. In this case the buffer is critical, as it reduces the likelihood of triggering flow control, thereby reducing the likelihood of blocking and optimizing overall network throughput.

Traditional Buffering Architectures


Unfortunately, since the buffer must handle extremely high bandwidth, it needs to be integrated on the core silicon die; off-chip buffering that can keep up with the total I/O bandwidth is no longer possible, as we discussed in our white paper, "Converged Web Scale Switching And Routing Becomes A Reality: Cisco Silicon One and HBM Memory Change the Paradigm". On-die buffering in high-bandwidth switches consumes a significant amount of die area, and therefore it's important to use whatever buffer can be integrated on-die in the most efficient way.

Figure 3: Bandwidth growth in DDR memories and Ethernet switches

Oversubscription is an unpredictable transient condition that impacts different ports at different times. An efficient buffer architecture takes advantage of this by allowing the buffer to be dynamically shared between ports.

Most modern architectures support packet buffer sharing. However, not all claims of shared memory are equal, and not surprisingly this fact is usually not highlighted by the vendors. Often there are restrictions on how the buffer can be shared. Buffer sharing can be categorized according to the level and orientation of sharing, as depicted in the figures below:

Figure 4. Shared buffer per output port group

A group of output ports share a buffer pool. Each buffer pool absorbs traffic destined to a subset of the output ports.

Figure 5. Shared buffer per input port group

A group of input ports share a buffer pool. Each buffer pool absorbs traffic from a subset of the input ports.

Figure 6. Shared buffer per input-output port group

A group of input and output ports share a buffer pool. Each buffer pool absorbs traffic from a subset of input ports for a subset of output ports.

In all the cases where there are restrictions on the sharing, the amount of buffer available for burst absorption to a port is unpredictable since it depends on the traffic pattern.

With output buffer sharing, burst absorption to any port is restricted to the individual pool size. For example, an output buffer architecture with four pools means that any output port can consume at most 25 percent of the total memory. This restriction can be even more painful under more complex traffic patterns, as depicted in the figure below, where an output port is restricted to 1/16th of the total buffer. Such restriction makes buffering behavior under incast unpredictable.

Figure 7. Output buffered switch with 4 x 2:1 oversubscription traffic

With input buffer sharing, burst absorption depends on the traffic pattern. For example, in a 4:1 oversubscription traffic pattern with the buffer partitioned into four pools, the burst absorption capacity is anywhere between 25 and 100 percent of total memory.

Figure 8. Input buffer utilization with 4:1 oversubscription traffic

Input-output port group sharing, like input buffer sharing, by design limits an output port to a fraction of the total memory. In the example of four pools, any one port is limited by design to half the total device buffer. This architecture further limits buffer usage depending on traffic patterns, as in the example below where an output port can use only 12.5 percent of the device buffer instead of 50 percent.

Figure 9: input-output port group buffer architecture with 2 x 2:1 oversubscription traffic

Cisco Silicon One employs a fully shared buffer architecture as depicted in the figure below:

Figure 10. Cisco Silicon One Fully Shared Buffer

In a fully shared buffer architecture, all the packet buffer in the device is available for dynamic allocation to any port; the buffer is shared among ALL the input and output ports without any restrictions. This maximizes the efficiency of the available memory and makes burst absorption capacity predictable, as it's independent of the traffic pattern. In the examples presented above, the fully shared architecture yields an effective buffer size that is at least four times that of the alternatives. This means, for example, that a 25.6T switch that requires up to 100MB of total buffer per device and per port needs exactly 100MB of on-die buffer if it is implemented as a fully shared buffer. To achieve the same performance guarantee, a partially shared buffer design that breaks the buffer into four pools would need four times the memory.

The efficiency gains of a fully shared buffer also extend to RDMA protocol traffic. RDMA uses UDP which doesn’t rely on acknowledgments. Thus, RTT is not directly a driver of the buffer requirement. However, RDMA relies on Priority-based Flow Control (PFC) to prevent packet loss in the network. A big drawback of flow control is the fact that it’s blocking and can cause congestion to spread by stopping unrelated ports. A fully shared buffer helps to minimize the need to trigger flow control by virtue of supporting more buffering when and where it’s needed. Or in other words, it raises the bar of how much congestion needs to happen before flow control is triggered.

Friday 26 February 2021

Preparing for the Cisco 300-620 DCACI Exam: Hints and Tips


CCNP Data Center certification confirms your skills with data center solutions. To obtain the CCNP Data Center certification, you need to pass two exams: one that covers core data center technologies and one data center concentration exam of your choice. For the concentration exam, you can tailor the certification to your technical area of focus. In this article, we will discuss the concentration exam 300-620 DCACI: Implementing Cisco Application Centric Infrastructure.

IT professionals who earn the CCNP Data Center certification are prepared for major roles in complex data center environments, with expertise in technologies such as policy-driven infrastructure, virtualization, automation and orchestration, unified computing, data center security, and integration of cloud initiatives. CCNP Data Center certified professionals are well qualified for high-level roles focused on enabling digital business transformation initiatives.

Cisco 300-620 Exam Details

The Cisco DCACI 300-620 exam is a 90-minute exam of 55-65 questions associated with the CCNP Data Center and Cisco Certified Specialist – Data Center ACI Implementation certifications. The exam tests a candidate's understanding of Cisco switches in ACI mode, including configuration, implementation, and management.

Tips That Can Help You Succeed in Cisco 300-620 DCACI Exam

There are a lot of things that the applicants ought to keep in mind to score well. Here they are:

  • To score higher, you should know that practice tests are essential for this exam. Make a structured plan to take them on a daily basis. Practice tests will make you familiar with the gaps in your exam preparation, and you will also sharpen your time-management skills.
  • Take ample time for your Cisco 300-620 exam preparation. The CCNP Data Center certification exam may not appear difficult, but you will notice that the questions asked are generally very tricky. Thorough preparation will wipe out confusion, and you will be more composed during your exam. A calm and composed mind during the exam, without last-minute panic, will improve your odds of passing the Cisco 300-620 DCACI exam. Hence, it is essential to prepare yourself well before sitting the exam.
  • Get familiar with the Cisco 300-620 DCACI exam syllabus. The questions in the exam will come from the syllabus; without it, you may be studying from material that will not be evaluated in the exam. Make sure you get the syllabus from the official Cisco website so that you cover the essential areas, and study all the topics it lists so that you don't leave out any important details.
  • Apart from following the syllabus and reading the essential and relevant material, video training can also help enhance your knowledge and sharpen your skills. It is also worth reading a relevant Cisco 300-620 DCACI book to acquire mastery of the exam concepts. However good the video training course and the instructor may be, you will find that they cannot cover every important theoretical detail.
  • Participate in an online community. Such groups can be of great help in passing the Cisco 300-620 DCACI exam, because more heads are better than one. Studying together, you will be better positioned to grasp the concepts, since another member of the community might have understood them better than you.
  • When studying for the exam by yourself, you might always approach the study material from the same point of view. This is not necessarily a problem, but getting familiar with different views on the subject can help you learn more comprehensively. You will also be in a position to acquire distinct skills and share opinions with other people.

Conclusion

As an IT professional, it should be apparent that achieving a relevant certification is the surest way to strengthen your standing in the industry and climb the corporate ladder. Hopefully, the tips above will simplify your Cisco 300-620 DCACI certification journey and turn your career aspirations into success.

Thursday 25 February 2021

Cisco User Defined Network: Redefining Secure, Personal Networks


Connecting all your devices to a shared network environment, such as a dorm room, classroom, or multi-dwelling building unit, may not be desirable: there are too many users and devices on the shared network, and onboarding of devices is not secure. In addition, there is limited user control; that is, there is no easy way for users to deterministically discover and limit access to only the devices that belong to them. You can see all users' devices, and every user can see your devices. This not only results in a poor user experience but also brings security concerns, since users can knowingly or unknowingly take control of devices that belong to other users.

Cisco User Defined Network (UDN) changes the shared network experience by enabling simple, secure, and remote onboarding of wireless endpoints onto the shared network to give a personal-network-like experience. Cisco UDN gives end-users control to create their own personal network consisting of only their devices, and it also gives them the ability to invite other trusted users into that personal network. This provides security to end-users while still letting them collaborate and share their devices with other trusted users.

Solution Building Blocks

The following are the functional components required for the Cisco UDN solution. The solution is supported on Catalyst 9800 controllers in centrally switched mode.

Figure 1. Solution Building Blocks

Cisco UDN Mobile App: The mobile app is used for registering a user's devices onto the network from anywhere (on-prem or off-prem) and at any time. End-users log in to the mobile app using the credentials provided by the organization's network administrator. Device onboarding can be done in multiple ways. These include:

◉ Scanning the devices connected to the network and selecting devices required to be onboarded

◉ Manually entering the MAC address of the device

◉ Using the camera to capture the MAC address of the device, or using a picture of the MAC address to be added

In addition, using the mobile app, users can also invite other trusted users to be part of their private network segment. The mobile app is available for download on both the Apple App Store and Google Play Store.

Cisco UDN Cloud Service: Cloud service is responsible for ensuring the registered devices are authenticated with Active Directory through SAML 2.0 based SSO gateway or Azure AD. Cloud service is also responsible for assigning the end-users and their registered devices to a private network and provides rich insights about UDN service with the cloud dashboard.

Cisco DNA Center: Is an on-prem appliance which connects with Cisco UDN cloud service. It is the single point through which the on-prem network can be provisioned (automation) and provides visibility through telemetry and assurance data. 

Identity Services Engine (ISE): Provides authentication and authorization services for the end-users to connect to the network.

Catalyst 9800 Wireless Controller and Access Points: The network elements that enforce traffic containment within the personal network. UDN is supported on Wave 2 and Cisco Catalyst access points.

How does it work?


Cisco UDN solution focuses on simplicity and secure onboarding of devices. The solution gives flexibility to the end-users to invite other trusted users to be part of their personal network. The shared network can be segmented into smaller networks as defined by the users. Users from one segment will not be able to see traffic from another user segment. The solution ensures that broadcast, link-local multicast and discovery services (such as mDNS, UPnP) traffic from other user segments will not be seen within a private network segment. Optionally, unicast traffic from other segments can also be blocked. However, unicast traffic within a personal network and north-south traffic will be allowed. 

Workflows


There are three main workflows associated with UDN:

1. Endpoint registration workflow: A user's endpoint can register with the UDN cloud service through the mobile app from anywhere at any time (on-prem or off-prem). Upon registration, the cloud service ensures that the endpoint is authenticated against Active Directory. The cloud service then assigns a private segment/network to the authenticated user and assigns a unique identity, the User Defined Network ID (UDN-ID). This unique identity, along with the user and endpoint information (MAC address), is pushed from the cloud service to the on-prem network through DNAC. The private network identity along with the user/endpoint information is stored in ISE.

2. Endpoint on-boarding workflow: When the endpoint joins the wireless network using one of the UDN enabled WLANs, as part of the authorization policy, ISE will push the private network ID associated with the endpoint to the wireless controller. This mapping of endpoint to UDN-ID is retrieved from ISE. The network elements (wireless LAN controller and access point), will use the UDN-ID to enforce traffic containment for the traffic generated by that endpoint

3. Invitation workflow: A user can invite another trusted user to be part of their personal network. The invitation is initiated from the mobile app of the inviting user and triggers a notification to the invitee through the cloud service. The invitee has the option to either accept or reject the invitation. Once the invitee has accepted the request, the cloud service puts the invitee in the same personal network as the inviter and notifies the on-prem network (DNAC/ISE) about the change of personal room for the invitee. ISE then triggers a change of authorization for the invitee and notifies the wireless controller of the change. The network elements take the appropriate actions to ensure that the invitee belongs to the inviter's personal room and enforce traffic containment accordingly.

The following diagram highlights the various steps involved in each of the three workflows.

Figure 2. UDN Workflows

Traffic Containment


Traffic containment is enforced in the network elements, the wireless controller and access points. The UDN-ID, an identifier for a personal network segment, is received by the WLC from ISE as part of an access-accept RADIUS message during either client onboarding or change of authorization. Unicast traffic containment is not enabled by default. When it is enabled on a WLAN, unicast traffic between two different personal networks is blocked; unicast traffic within a personal network and north-south traffic is still allowed. The wireless controller enforces unicast traffic containment. The traffic containment logic in the AP ensures that link-local multicast and broadcast traffic is sent as unicast traffic over the air only to the clients belonging to a specific personal network. The table below summarizes the details of traffic containment enforced on the network elements.

Figure 3. UDN Traffic Containment

The WLAN on which UDN is enabled must either have MAC filtering enabled or be an 802.1X WLAN. The following are the possible authentication combinations with which UDN is supported on the wireless controller:


For an RLAN, only mDNS and unicast traffic can be contained through UDN. To support link-local multicast (LLM) and/or broadcast traffic containment, all clients on the RLAN need to be in the same UDN.

Monitor and Control


The end-to-end visibility into the UDN solution is enabled through both DNA cloud service dashboard and DNAC assurance. In addition, DNAC also enables configuring the UDN service through a single pane of glass. 

DNA Cloud Service provides rich insights with the cloud dashboard. It gives visibility into the devices registered, connected within a UDN and also information about the invitations sent to other trusted users etc. 

Figure 4. Insights and Cloud Dashboard

The on-prem DNAC enables UDN through an automation workflow and provides complete visibility of UDN through the Client 360 view in assurance.

Figure 5. UDN Client Visibility

Cisco UDN enriches the user experience in a shared network environment. Users can bring any device they want to the Enterprise network and benefit from home-like user experience while connected to the Enterprise network. It is simple, easy to use and provides security and control for the user’s personal network.

Tuesday 23 February 2021

Introduction to Terraform and ACI – Part 5

If you haven’t already seen the Introduction to Terraform posts, please have a read through. Here are links to the first four posts:

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

4. Terraform Remote State and Team Collaboration

So far we've seen how Terraform works, how it integrates with ACI, and how remote state works. Although not absolutely necessary, it's sometimes useful to understand how providers work in case you need to troubleshoot an issue. This section will cover, at a high level, how Terraform providers are built, using the ACI provider as an example.

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci-remote-backend.git

For explanation of the Terraform files see Part 3 of the series. The backend.tf file will be added in the current post.

Lab Infrastructure

You may already have your own ACI lab to follow along with; if you don't, you might want to use the ACI Simulator in the DevNet Sandbox.

ACI Simulator AlwaysOn – V4

Terraform File Structure

As we previously saw, Terraform is split into two main components, Core and Plugins. All providers and provisioners used in Terraform configurations are plugins.

Data sources and resources exist within a provider and are responsible for the lifecycle management of a specific endpoint. As an example, we will look at the resource "aci_bridge_domain", which is responsible for creating and managing our bridge domains.
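For reference, a minimal use of that resource in a Terraform configuration might look like the sketch below; the tenant and names are placeholders, and only a few of the resource's available arguments are shown.

resource "aci_tenant" "demo" {
  name = "demo_tenant"
}

resource "aci_bridge_domain" "bd_for_subnet" {
  tenant_dn   = aci_tenant.demo.id
  name        = "bd_for_subnet"
  description = "Bridge domain managed by Terraform"
}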


The code for the Terraform ACI Provider can be found on Github



There are a number of files in the root directory of this repo; however, the ones we are concerned with are "main.go", the "vendor" folder, and the "aci" folder.


◉ vendor: This folder contains all of the Go modules which will be imported and used by the provider plugin. The key module for this provider is the ACI Go SDK which is responsible for making the HTTP requests to APIC and returning the response.

◉ aci: This is the important directory where all the resources for the provider exist.


Within the aci folder you'll find a file called "provider.go". This is a standard file in Terraform providers and is responsible for setting the provider properties, which in this case are the username, password, and URL of the APIC.


It's also responsible for defining which resources are available to configure in the Terraform files, and for linking them to the functions that implement the Create, Read, Update, and Delete (CRUD) capability.
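Those same properties are what you supply in the provider block of your configuration. A minimal sketch with placeholder values is shown below; the provider also supports certificate-based authentication instead of a password.

provider "aci" {
  username = "admin"
  password = var.apic_password        # sensitive value, ideally supplied via a variable or environment
  url      = "https://apic.example.com"
  insecure = true                     # lab/sandbox only
}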


In the aci folder you'll also find all the data sources and resources available for this provider. Terraform has a specific structure for the filenames: they should start with data_source_ or resource_.


Let’s look at the resource, “resource_aci_fvbd“, used to create bridge domains.

◉ On lines 10 and 11 the ACI Go SDK is imported. 
◉ The main function starts on line 16 and is followed by a standard Terraform configuration
    ◉ Lines 18 – 21 define which operations are available for this resource and which function should be called. We will see these four further down the page.


◉ Lines 29 – 59 set the properties that will be available for the resource in the Terraform configuration files.

TROUBLESHOOTING TIP: This is an easy way to check exactly what is supported/configurable if you think the documentation for a provider is incorrect or incomplete. 


We’ve now reached the key functions in the file and these are responsible for implementing the changes. In our case creating, reading, updating, and destroying a bridge domain.

If you scroll up you can confirm that the function names match those configured on lines 18-21 

Whenever you run a command, e.g. “terraform destroy“, Terraform will call one of these functions. 

Let’s have a look at what it’s creating.


First, the ACI Go SDK client is set up on line 419.

Following on from that, the values from your configuration files are retrieved so Terraform can take the appropriate action. For example, the name we've configured, "bd_for_subnet", will be stored in the variable "name".

Likewise for the description, TenantDn, and all other bridge domain properties we’ve configured.


Further down in the file you’ll see the ACI Go SDK is called to create a NewBridgeDomain. This object is then passed to a Save function in the SDK which makes a call to the APIC to create the bridge domain


Continuing down towards the end of the create function, you'll see the ID being set on line 726. Remember that when Terraform manages a resource it keeps the state within the terraform.tfstate file. Terraform tracks the resource using the id, and in the case of ACI the id is the Distinguished Name.
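If you want to see this for yourself, inspecting the resource in the state shows the id holding the ACI Distinguished Name. The names below continue the earlier bd_for_subnet example and are illustrative only:

$ terraform state show aci_bridge_domain.bd_for_subnet
# ...
# id = "uni/tn-demo_tenant/BD-bd_for_subnet"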


It’s not only the id that Terraform tracks though, all the other properties for the resource should also be available in the state file. To save this data there is another function, setBridgeDomainAttributes, which sets the properties in the state file with the values that were returned after creating/updating the resource. 

So when Terraform creates our bridge domain, it saves the response properties into the state file using this function.

TROUBLESHOOTING TIP: If resources are always created/updated when you run a terraform apply even though  you haven’t changed any configuration, you might want to check the state file to ensure that all the properties are being set correctly.

Source: cisco.com