Saturday 27 February 2021

Optimize Real-World Throughput with Cisco Silicon One


Switches are the heart of a network data plane, and at the heart of any switch is the buffering subsystem. Buffering is required to deal with transient oversubscription in the network. The size of the buffer determines how large a burst can be accommodated before packets are dropped, and the goodput of the network depends heavily on how many packets are dropped.

The amount of buffer needed for optimal performance is mainly dependent on the traffic pattern and network Round-Trip Time (RTT).

The applications running on the network drive the traffic pattern, and therefore what the switch experiences. Modern applications such as distributed storage systems, search, AI training, and many others employ partition and aggregate semantics, resulting in traffic patterns that are especially effective in creating large oversubscription bursts. For example, consider a search query where a server receives a packet to initiate a search request. The task of mining through the data is dispatched to many different servers in the network. Once each server finishes the search it sends the results back to the initiator, causing a large burst of traffic targeting a single server. This phenomenon is referred to as incast.

Round-Trip Time

The network RTT is the time it takes a packet to travel from a traffic source to a destination and back. This is important because it directly translates to the amount of data a transmitter must be allowed to send into the network before receiving acknowledgment for data it sent. The acknowledgments are necessary for congestion avoidance algorithms to work and in the case of Transmission Control Protocol (TCP), to guarantee packet delivery.

For example, a host attached with a 100Gbps link to a network with an RTT of 16us must be allowed to send at least 1.6Mb (16us * 100Gbps) of data before receiving an acknowledgment if it wants to be able to transmit at 100Gbps. In TCP this is referred to as the congestion window, which for a flow is ideally equal to the bandwidth-delay product.

The amount of buffer a switch needs to avoid packet drops is directly related to this bandwidth-delay product. Ideally a queue within a switch should have enough buffer to accommodate the sum of the congestion windows of all the flows passing through it. This guarantees that a sudden incast will not cause the buffer to overflow. For Internet routers this dynamic has been translated into a widely used rule of thumb: each port needs a buffer of average RTT times the port rate. However, the datacenter presents a different environment than the Internet. Whereas an Internet router can expect to see tens of thousands of flows across a port, with the majority of bandwidth distributed across thousands of flows, a datacenter switch often sees most of the bandwidth distributed over a few high bandwidth elephant flows. Thus, for a datacenter switch, the rule is that a port needs at most the entire switch bandwidth (not just the port bandwidth) times average RTT. In practice this can be relaxed by noting that it assumes an extremely pessimistic scenario where all traffic happens to target one port. Regardless, a key observation is that the total switch buffer is also the entire switch bandwidth times average RTT, just as in the Internet router case. Therefore, the most efficient switch design is one where all the buffer in the switch can be dynamically available to any port.
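Working the arithmetic with the numbers shown in Figure 1: a 25.6Tbps switch on a network with a 32us average RTT needs 25.6Tbps * 32us ≈ 820Mb, or roughly 100MB, of total buffer.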

Figure 1. Buffer requirement based on RTT

To help understand the round-trip times associated with a network, let’s look at a simple example. The RTT is a function of the network’s physical span, the delay of intermediate switches, and the end-node delay (that is, the network adapters and the software stack). Light travels through fiber at about 5us per kilometer, so the contribution of the physical span is easy to calculate. For example, communication between two hosts in a datacenter with a total fiber span of 500 meters per direction will contribute 5us to the RTT. The delay through switches is composed of pipeline (minimum) delay and buffering delay.

Low delay switches can provide below 1us of pipeline delay. However, this is an ideal number based on a single packet flowing through the device. In practice, switches have more packets flowing through them simultaneously, and with many flows from different sources some minimum buffering in the switches is needed. Even a small buffer of 10KB will add almost 1us to the delay through a 100Gbps link.

Finally, optimized network adapters will add a minimum of two microseconds of latency, and often this is much more. So, putting this all together we can see that even a small datacenter network with 500 meters of cable span and three switching hops will result in a minimum RTT of around 16us. In practice, networks are typically never this ideal, having more hops and covering greater distances, with even greater RTTs.
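As a rough tally: 5us for the fiber (500 meters in each direction), a bit over 1us per switch traversal for pipeline and minimal buffering delay across three hops in each direction (roughly 7us), and about 2us per network adapter at each end (roughly 4us) add up to approximately 16us.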

Figure 2. Simple Datacenter Network – minimum RTT

As Figure 1 shows, supporting a modest RTT of 32us in a 25.6T switch requires up to 100MB. Note that this is both the total required buffer in the switch and the maximum required buffer for any one port. The worst-case oversubscription to any one port occurs when all incoming traffic happens to target that port. In this pathological incast case, all the buffer in the device is needed by the victim port to absorb the burst. Other oversubscribing traffic patterns involving multiple victim ports will require that the buffer be distributed among the victim ports in proportion to the oversubscription factor.

It’s also important to note that other protocols, such as the User Datagram Protocol (UDP) used by Remote Direct Memory Access (RDMA), don’t have the congestion feedback schemes used in TCP; they rely on flow control to prevent packet loss during bursts. In this case the buffer is critical, as it reduces the likelihood of triggering flow control, thus reducing the likelihood of blocking and improving overall network throughput.

Traditional Buffering Architectures


Unfortunately, since the buffer must handle extremely high bandwidth, it needs to be integrated on the core silicon die; off-chip buffering that can keep up with the total IO bandwidth is no longer possible, as we discussed in our white paper, “Converged Web Scale Switching And Routing Becomes A Reality: Cisco Silicon One and HBM Memory Change the Paradigm”. On-die buffering in high bandwidth switches consumes a significant amount of die area, so it’s important to use whatever buffer can be integrated on-die in the most efficient way.

Figure 3: Bandwidth growth in DDR memories and Ethernet switches

Oversubscription is an unpredictable transient condition that impacts different ports at different times. An efficient buffer architecture takes advantage of this by allowing the buffer to be dynamically shared between ports.

Most modern architectures support packet buffer sharing. However, not all claims of shared memory are equal, and not surprisingly this fact is usually not highlighted by the vendors. Often there are restrictions on how the buffer can be shared. Buffer sharing can be categorized according to the level and orientation of sharing, as depicted in the figures below:

Figure 4. Shared buffer per output port group

A group of output ports share a buffer pool. Each buffer pool absorbs traffic destined to a subset of the output ports.

Figure 5. Shared buffer per input port group

A group of input ports share a buffer pool. Each buffer pool absorbs traffic from a subset of the input ports.

Figure 6. Shared buffer per input-output port group

A group of input and output ports share a buffer pool. Each buffer pool absorbs traffic from a subset of input ports for a subset of output ports.

In all the cases where there are restrictions on the sharing, the amount of buffer available for burst absorption to a port is unpredictable since it depends on the traffic pattern.

With output buffer sharing, burst absorption to any port is restricted to the individual pool size. For example, an output buffer architecture with four pools means that any output port can consume at most 25 percent of the total memory. This restriction can be even more painful under more complex traffic patterns, as depicted in the figure below, where an output port is restricted to 1/16th of the total buffer. Such a restriction makes buffering behavior under incast unpredictable.

Figure 7. Output buffered switch with 4 x 2:1 oversubscription traffic

With input buffer sharing, burst absorption depends on the traffic pattern. For example, in a 4:1 oversubscription traffic pattern with the buffer partitioned into four pools, the burst absorption capacity is anywhere between 25 and 100 percent of total memory, depending on how the oversubscribing input ports are spread across the pools.

Figure 8. Input buffer utilization with 4:1 oversubscription traffic

Input-output port group sharing, like input buffer sharing, limits an output port to a fraction of the total memory by design. In the example of four pools (two input groups by two output groups), traffic to any one output port can land only in the two pools serving its output group, so that port is limited by design to half the total device buffer. Depending on the traffic pattern, this architecture limits buffer usage even further, as in the example below, where an output port can use only 12.5 percent of the device buffer instead of 50 percent.

Figure 9: Input-output port group buffer architecture with 2 x 2:1 oversubscription traffic

Cisco Silicon One employs a fully shared buffer architecture as depicted in the figure below:

Figure 10. Cisco Silicon One Fully Shared Buffer

In a fully shared buffer architecture, all the packet buffer in the device is available for dynamic allocation to any port; the buffer is shared among ALL the input-output ports without any restrictions. This maximizes the efficiency of the available memory and makes burst absorption capacity predictable, as it’s independent of the traffic pattern. In the examples presented above, the fully shared architecture yields an effective buffer size at least four times that of the alternatives. This means that, for example, a 25.6T switch that requires up to 100MB of total buffer per device and per port needs exactly 100MB of on-die buffer if it is implemented as a fully shared buffer. To achieve the same performance guarantee, a partially shared design that breaks the buffer into four pools would need four times the memory.

The efficiency gains of a fully shared buffer also extend to RDMA protocol traffic. RDMA uses UDP, which doesn’t rely on acknowledgments, so RTT is not directly a driver of the buffer requirement. However, RDMA relies on Priority-based Flow Control (PFC) to prevent packet loss in the network. A big drawback of flow control is that it’s blocking and can spread congestion by stopping unrelated ports. A fully shared buffer helps minimize the need to trigger flow control by supporting more buffering when and where it’s needed. In other words, it raises the bar for how much congestion must build before flow control is triggered.

Friday 26 February 2021

Preparing for the Cisco 300-620 DCACI Exam: Hints and Tips


CCNP Data Center certification confirms your skills with data center solutions. To obtain CCNP Data Center certification, you need to pass two exams: one covering core data center technologies and one data center concentration exam of your choice. The concentration exam lets you tailor your certification to your technical area of focus. In this article, we will discuss the concentration exam 300-620 DCACI: Implementing Cisco Application Centric Infrastructure.

IT professionals who earn the CCNP Data Center certification are prepared for major roles in complex data center environments, with expertise in technologies including policy-driven infrastructure, virtualization, automation and orchestration, unified computing, data center security, and integration of cloud initiatives. CCNP Data Center certified professionals are highly qualified for senior roles driving digital business transformation initiatives.

Cisco 300-620 Exam Details

The Cisco DCACI 300-620 exam is a 90-minute exam comprising 55-65 questions. It is associated with the CCNP Data Center and Cisco Certified Specialist – Data Center ACI Implementation certifications, and it tests an applicant's understanding of Cisco switches in ACI mode, covering configuration, implementation, and management.

Tips That Can Help You Succeed in Cisco 300-620 DCACI Exam

There are several things applicants ought to keep in mind to score well:

  • To get a higher score, recognize that practice tests are essential for this exam, and make a structured plan to work through them daily. Practice tests will reveal the gaps in your preparation and sharpen your time management skills.
  • Take ample time for your Cisco 300-620 exam preparation. The exam may not appear difficult, but the questions asked are often tricky. Thorough preparation will eliminate confusion and keep you composed during the exam; a calm mind, without last-minute cramming, improves your odds of passing. Hence, it is essential to prepare yourself well before sitting for the exam.
  • Get familiar with the Cisco 300-620 DCACI exam syllabus. The questions in the exam come from the syllabus; without it, you may be studying material that won't be evaluated. Get the syllabus from the official Cisco website to make sure you cover the essential areas, and study all the topics it lists so that you don't leave out any important details.
  • Apart from following the syllabus and reading the relevant material, video training can also help enhance your knowledge and sharpen your skills. It is also worthwhile to read a relevant Cisco 300-620 DCACI book to master the exam concepts; however good the video course and instructor are, they cannot cover every important theoretical detail.
  • Participate in an online community. Such groups can be of great help in passing the Cisco 300-620 DCACI exam; more heads are better than one. Studying together, you will be better positioned to grasp the concepts, because another member of the community may have understood them differently.
  • When studying for the exam by yourself, you might always view the study material from the same point of view. This might not be an issue, but getting familiar with different views on the subject can help you learn more comprehensively. You will be in a position to acquire distinct skills and share opinions with other people.

Conclusion

As an IT professional, it should be apparent that achieving a relevant certification is a sure way to strengthen your status in the industry and climb the corporate ladder. The tips above should simplify your Cisco 300-620 DCACI certification journey and help turn your career aspirations into success.

Thursday 25 February 2021

Cisco User Defined Network: Redefining Secure, Personal Networks


Connecting all your devices to a shared network environment such as a dorm room, classroom, or multi-dwelling building unit may not be desirable: there are too many users and devices on the shared network, and onboarding of devices is not secure. In addition, there is limited user control; that is, there is no easy way for users to deterministically discover and limit access to only the devices that belong to them. You can see all users’ devices and every user can see your device. This not only results in a poor user experience but also raises security concerns, where users can knowingly or unknowingly take control of devices that belong to other users.

Cisco User Defined Network (UDN) changes the shared network experience by enabling simple, secure, and remote onboarding of wireless endpoints onto the shared network to give a personal network-like experience. Cisco UDN gives end-users control to create their own personal network consisting of only their devices and also lets them invite other trusted users into their personal network. This provides security to end-users while giving them the ability to collaborate and share their devices with other trusted users.

Solution Building Blocks

The following are the functional components required for the Cisco UDN solution, which is supported on Catalyst 9800 controllers in centrally switched mode.

Figure 1. Solution Building Blocks

Cisco UDN Mobile App: The mobile app is used for registering a user’s devices onto the network from anywhere (on-prem or off-prem) and at any time. End-users can log in to the mobile app using the credentials provided by the organization’s network administrator. Device onboarding can be done in multiple ways. These include:

◉ Scanning the devices connected to the network and selecting devices required to be onboarded

◉ Manually entering the MAC address of the device

◉ Using a camera to capture the MAC address of the device or using a picture of the MAC address to be added

In addition, using the mobile app, users can also invite other trusted users to be part of their private network segment. The mobile app is available for download on both the Apple App Store and Google Play.

Cisco UDN Cloud Service: The cloud service is responsible for ensuring that registered devices are authenticated with Active Directory through a SAML 2.0-based SSO gateway or Azure AD. The cloud service is also responsible for assigning end-users and their registered devices to a private network, and it provides rich insights into the UDN service through the cloud dashboard.

Cisco DNA Center: An on-prem appliance which connects with the Cisco UDN cloud service. It is the single point through which the on-prem network is provisioned (automation), and it provides visibility through telemetry and assurance data.

Identity Services Engine (ISE): Provides authentication and authorization services for the end-users to connect to the network.

Catalyst 9800 Wireless Controller and Access Points: The network elements which enforce traffic containment within the personal network. UDN is supported on Wave 2 and Cisco Catalyst access points.

How does it work?


The Cisco UDN solution focuses on simplicity and secure onboarding of devices. The solution gives end-users the flexibility to invite other trusted users to be part of their personal network. The shared network can be segmented into smaller networks as defined by the users. Users in one segment will not be able to see traffic from another user’s segment. The solution ensures that broadcast, link-local multicast, and discovery-service (such as mDNS, UPnP) traffic from other user segments will not be seen within a private network segment. Optionally, unicast traffic from other segments can also be blocked; however, unicast traffic within a personal network and north-south traffic will be allowed.

Workflows


There are three main workflows associated with UDN:

1. Endpoint registration workflow: A user’s endpoint can register with the UDN cloud service through the mobile app from anywhere at any time (on-prem or off-prem). Upon registration, the cloud service ensures that the endpoint is authenticated with Active Directory. The cloud service then assigns a private segment/network to the authenticated user along with a unique identity, the User Defined Network ID (UDN-ID). This unique identity, together with the user and endpoint information (MAC address), is pushed from the cloud service to the on-prem network through DNAC, and the private network identity and user/endpoint information are stored in ISE.

2. Endpoint on-boarding workflow: When the endpoint joins the wireless network using one of the UDN-enabled WLANs, ISE, as part of the authorization policy, pushes the private network ID associated with the endpoint to the wireless controller. This mapping of endpoint to UDN-ID is retrieved from ISE. The network elements (wireless LAN controller and access point) use the UDN-ID to enforce traffic containment for the traffic generated by that endpoint.

3. Invitation workflow: A user can invite another trusted user to be part of their personal network. This is initiated from the mobile app of the inviting user. The invitation triggers a notification to the invitee through the cloud service. The invitee has the option to either accept or reject the invitation. Once the invitee has accepted, the cloud service puts the invitee in the same personal network as the inviter and notifies the on-prem network (DNAC/ISE) about the change of personal network for the invitee. ISE then triggers a change of authorization for the invitee and notifies the wireless controller of this change. The network elements take the appropriate actions to ensure that the invitee belongs to the inviter’s personal network and enforce traffic containment accordingly.

The following diagram highlights the various steps involved in each of the three workflows.

Figure 2. UDN Workflows

Traffic Containment


Traffic containment is enforced in the network elements: the wireless controller and access points. The UDN-ID, an identifier for a personal network segment, is received by the WLC from ISE as part of the RADIUS access-accept message during either client on-boarding or change-of-authorization. Unicast traffic containment is not enabled by default. When it is enabled on a WLAN, unicast traffic between two different personal networks is blocked; unicast traffic within a personal network and north-south traffic are still allowed. The wireless controller enforces unicast traffic containment, while the containment logic in the AP ensures that link-local multicast and broadcast traffic is sent as unicast over the air to only the clients belonging to a specific personal network. The table below summarizes the traffic containment enforced on the network elements.

Figure 3. UDN Traffic Containment

The WLAN on which UDN is enabled should either have MAC filtering enabled or be an 802.1x WLAN; the authentication combinations supported on the wireless controller are built from these two methods.

For RLAN, only mDNS and unicast traffic can be contained through UDN. To support LLM and/or broadcast traffic, all clients on the RLAN need to be in the same UDN.

Monitor and Control


End-to-end visibility into the UDN solution is enabled through both the DNA cloud service dashboard and DNAC assurance. In addition, DNAC enables configuration of the UDN service through a single pane of glass.

The DNA Cloud Service provides rich insights through the cloud dashboard. It gives visibility into the devices registered and connected within a UDN, as well as information about the invitations sent to other trusted users.

Figure 4. Insights and Cloud Dashboard

On-prem, DNAC enables UDN through an automation workflow and provides complete visibility of UDN through the Client 360 view in assurance.

Figure 5. UDN Client Visibility

Cisco UDN enriches the user experience in a shared network environment. Users can bring any device they want to the enterprise network and enjoy a home-like experience while connected. It is simple, easy to use, and provides security and control for the user’s personal network.

Tuesday 23 February 2021

Introduction to Terraform and ACI – Part 5

If you haven’t already seen the Introduction to Terraform posts, please have a read through. Here are links to the first four posts:

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

4. Terraform Remote State and Team Collaboration

So far we’ve seen how Terraform works, ACI integration, and remote state. Although not absolutely necessary, it’s sometimes useful to understand how providers work in case you need to troubleshoot an issue. This section will cover, at a high level, how Terraform Providers are built, using the ACI provider as an example.

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci-remote-backend.git

For an explanation of the Terraform files see Part 3 of the series, and for the backend.tf file see Part 4.

Lab Infrastructure

You may already have your own ACI lab to follow along with; however, if you don’t, you might want to use the ACI Simulator in the DevNet Sandbox.

ACI Simulator AlwaysOn – V4

Terraform File Structure

As we previously saw, Terraform is split into two main components: Core and Plugins. All Providers and Provisioners used in Terraform configurations are plugins.

Data sources and resources exist within a provider and are responsible for the lifecycle management of the specific endpoint. For example, we will have a look at the resource “aci_bridge_domain“, which is responsible for creating and managing our bridge domains.
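To ground this, here is roughly what consuming that resource looks like in a Terraform configuration (a minimal sketch; the tenant and the description are illustrative placeholders, not values from the provider code):

resource "aci_tenant" "demo" {
  name = "demo_tenant"
}

resource "aci_bridge_domain" "bd_for_subnet" {
  # tenant_dn references the tenant's Distinguished Name, which the
  # provider stores as the resource id
  tenant_dn   = aci_tenant.demo.id
  name        = "bd_for_subnet"
  description = "Bridge domain managed by Terraform"
}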


The code for the Terraform ACI Provider can be found on GitHub.



There are a number of files in the root directory of this repo, however the ones we are concerned with are “main.go“, the “vendor” folder, and the “aci” folder.


◉ vendor: This folder contains all of the Go modules which will be imported and used by the provider plugin. The key module for this provider is the ACI Go SDK which is responsible for making the HTTP requests to APIC and returning the response.

◉ aci: This is the important directory where all the resources for the provider exist.


Within the aci folder you’ll find a file called “provider.go“. This is a standard file in Terraform providers and is responsible for setting the provider properties, in this case the username, password, and URL of the APIC.


It’s also responsible for defining which resources are available to configure in the Terraform files, and linking them with the function which implements the Create, Read, Update, and Delete (CRUD) capability.


In the aci folder you’ll also find all the data sources and resources available for this provider. Terraform has a specific filename structure: files should start with data_source_ or resource_.


Let’s look at the resource, “resource_aci_fvbd“, used to create bridge domains.

◉ On lines 10 and 11 the ACI Go SDK is imported. 
◉ The main function starts on line 16 and is followed by a standard Terraform configuration.
    ◉ Lines 18 – 21 define which operations are available for this resource and which function should be called. We will see these four further down the page.


◉ Lines 29 – 59 in the screenshot are setting the properties which will be available for the resource in the Terraform configuration files.

TROUBLESHOOTING TIP: This is an easy way to check exactly what is supported/configurable if you think the documentation for a provider is incorrect or incomplete. 


We’ve now reached the key functions in the file; these are responsible for implementing the changes, in our case creating, reading, updating, and destroying a bridge domain.

If you scroll up you can confirm that the function names match those configured on lines 18-21.

Whenever you run a command, e.g. “terraform destroy“, Terraform will call one of these functions. 

Let’s have a look at what it’s creating.


First, the ACI Go SDK client is set up on line 419.

Following on from that, the values from your configuration files are retrieved so Terraform can take the appropriate action. For example, in this screenshot the name we’ve configured, “bd_for_subnet“, will be stored in the variable “name“.

Likewise for the description, TenantDn, and all other bridge domain properties we’ve configured.


Further down in the file you’ll see the ACI Go SDK is called to create a NewBridgeDomain object. This object is then passed to a Save function in the SDK, which makes a call to the APIC to create the bridge domain.


Continuing down towards the end of the create function you’ll see the ID being set on line 726. Remember that when Terraform manages a resource it keeps the state within the terraform.tfstate file. Terraform tracks each resource using its id, and in the case of ACI the id is the DistinguishedName.


It’s not only the id that Terraform tracks, though; all the other properties of the resource should also be available in the state file. To save this data there is another function, setBridgeDomainAttributes, which sets the properties in the state file to the values that were returned after creating/updating the resource.

So when Terraform creates our bridge domain, it saves the response properties into the state file using this function.
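For example (a small sketch building on the bridge domain configuration shown earlier), you can surface the tracked id with a standard output block; the value Terraform prints is the bridge domain’s DistinguishedName as held in the state file:

output "bd_dn" {
  # The ACI provider sets the resource id to the object's DN
  value = aci_bridge_domain.bd_for_subnet.id
}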

TROUBLESHOOTING TIP: If resources are always created/updated when you run terraform apply even though you haven’t changed any configuration, you might want to check the state file to ensure that all the properties are being set correctly.

Source: cisco.com

Monday 22 February 2021

The Best Kept Secret in Mobile Networks: A Million Saved is a Million Earned


I’m about to let you in on a little secret, actually it’s a big secret. What if I told you that you could save millions – even tens of millions – of dollars in two hours or less? If you were the CFO of a Mobile Operator, would this get your attention? What if I told you that the larger and busier your mobile network is, the more money you could save? What if the savings were in the hundreds of millions or even billions of dollars?

Now that I have your attention, let me tell you a bit about the Cisco Ultra Traffic Optimization (CUTO) solution. I could tell you that this is a vendor-agnostic solution for both the RAN and the mobile packet core. I could tell you how CUTO uses machine learning algorithms, or about proactive cross-traffic contention detection. I could tell you about elephant flows and how the CUTO software optimizes the packet scheduler in RAN networks. I could write a whole blog on the CUTO technology and how it works, but I won’t; I will leave the technical details for another time.

What I want to share today are two important facts about our CUTO solution:

1) We have helped multiple operators install and turn on CUTO networkwide in less than 2 hours.

2) Real-world deployments are demonstrating material savings, including a recent trial with a large Tier 1 operator that resulted in calculated savings of several billion dollars.

Out of all the segments of the network that operators are investing in, spectrum and RAN tend to be a top priority, both from a CAPEX and an OPEX standpoint. This is because mobile network operators have thousands, if not tens of thousands, of cell towers with accompanying RAN equipment. The amount of equipment required and the cost of a truck roll per site lead to enormous expenses when you want to upgrade or augment the RAN network. With network traffic growing faster every year, the challenge to stay ahead of demand also grows. Major RAN network augmentations can take months, if not years, to complete, especially when you factor in governmental regulations and permitting processes.

This is where one aspect of the CUTO solution stands out: its ease and speed of deployment. CUTO’s purpose is to optimize the RAN network and improve the efficiency of spectrum use. Instead of sending an army of service trucks and technicians to each and every cell tower, CUTO is deployed in the core of the network, which is a very small number of sites (data centers). Making things even easier, CUTO can be deployed on Commercial Off-The-Shelf (COTS) servers, or better yet on the existing Network Function Virtualization Infrastructure (NFVI) stack already in the mobile network core. Installing and deploying CUTO is as easy as spinning up a few virtual machines. In less than 2 hours, real operators on live networks have managed to install and deploy CUTO networkwide. Service providers talk a lot about MTTD (mean time to detect an issue) and MTTR (mean time to repair it), but with CUTO they can talk about MTTME – mean time to millions earned (saved).

If you look at the present mode of operation (PMO) for most mobile operators, there’s a very typical workflow that occurs in RAN networks. Customers’ consumption of video and an insatiable appetite for bandwidth eventually lead to a capacity trigger in the network, alerting the mobile network operator that a cell site is congested. To handle the alert, operators typically have three options:

1) They might be able to “re-farm” spectrum and transition 3G spectrum to 4G spectrum. If this is an option, it typically leads to about a 40% spectrum gain at a price tag of about $22K in CAPEX per site, with very nominal OPEX costs. This option is relatively quick, simple, and leads to a good capacity improvement.

2) They might be able to deploy a new spectrum band or increase the antenna sectorization density. This option leads to about a 20%-30% capacity gain but comes at a price tag of about $80K in CAPEX per site and about $20K in OPEX. This is a relatively long and costly process for a marginal capacity improvement, and finding high quality “beachfront” spectrum is impossible in many markets around the world.

3) If neither option 1 nor 2 is possible, the operator would need to build a new cell site and cell split (tighten reuse). A new site leads to about an 80% capacity gain depending on the site’s placement, user distribution, terrain, shadowing, etc., and comes at a cost of roughly $250K in CAPEX and about $65K/year in OPEX. This is an extremely long process (permitting, etc.) and very expensive.

Unfortunately, since spectrum is a finite resource, the opportunity for operators to choose option 1 or 2 is becoming scarce. Just five years ago, about 50% of congested cell sites were candidates for option 1, 30% were candidates for option 2, and only 20% required option 3. Five years from now, virtually no cell sites will be candidates for option 1, maybe 10% will be candidates for option 2, and over 90% will require the very time-consuming and costly option 3.

CUTO offers mobile operators an alternative, helping to optimize traffic and reduce congestion in cell sites, which can significantly reduce the number of sites requiring capacity upgrades. During a recent trial to measure the efficacy of CUTO in the real world and at scale, we deployed it with a Tier 1 operator. Almost immediately, we saw a 15% reduction in the number of cell sites triggering a capacity upgrade due to consistent congestion. Here are some real-world numbers showing how we calculated the savings based on that 15% reduction:

The operator in this use case has:

• ~40M subscribers

• 10,000 sites triggering a need for a capacity upgrade

• Annual data traffic growth rate of 25%

The assumptions we agreed to with the operator were:

• Blended Incremental CAPEX/Upgrade (options 1, 2, and 3) = $100K CAPEX

• Blended Incremental OPEX/Site/Year (options 1, 2, and 3) = $20K OPEX/year
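To make the compounding concrete (a rough sketch based on these inputs, not the operator’s exact financial model): a 15% reduction on 10,000 triggered sites avoids 1,500 upgrades in the first year alone, worth 1,500 * $100K = $150M in CAPEX plus 1,500 * $20K = $30M per year in recurring OPEX. With traffic, and therefore triggered sites, growing 25% annually, the avoided upgrades and their recurring OPEX accumulate year over year, which is how the five-year totals reach into the billions.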


In this real-world example, CUTO saves this operator over $1.8B of CAPEX and $837M of OPEX over 5 years. I like to consider that “adult money.” I recognize that these numbers are enormous and because of that, you may be skeptical. I was skeptical until I saw the results of the real-world trial for myself. I expect there to be plenty of FUD coming from the folks that are at risk of missing out on significant revenue because of your deployment of CUTO. Here’s my answer to those objections:

• Even if you cut these numbers in half or more, my guess is that you are looking at a material impact on your P&L.

• See it firsthand; don’t just take my word for it. Ask your local Cisco Account Manager for a trial of CUTO and look at the savings based on your network, your mix of options 1, 2, and 3, and your costing models.

We all know that 5G will drive new use cases and the need for more bandwidth, and that video will only become more ubiquitous. Mobile operators will require more and more cell sites, and those sites will continue to fill up over time. Why not get ahead of the problem and see what sort of MTTME your organization is capable of when it embraces a highly innovative software strategy?

Saturday 20 February 2021

Introduction to Terraform with ACI – Part 4


If you haven’t already seen the Introduction to Terraform posts, please have a read through. This section will cover the Terraform Remote Backend using Terraform Cloud.

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci-remote-backend.git

For an explanation of the Terraform files see Part 3 of the series. The backend.tf file will be added in the current post.

Lab Infrastructure

You may already have your own ACI lab to follow along with; however, if you don’t, you might want to use the ACI Simulator in the DevNet Sandbox.

ACI Simulator AlwaysOn – V4

Terraform Backends

An important part of using Terraform is understanding where and how state is managed. In the first post, Terraform was installed on my laptop when running the init, plan, and apply commands. A state file (terraform.tfstate) was also created in the folder in which I ran the commands.


This is fine when learning and testing concepts; however, it does not typically work well in a shared/production environment. What happens if my colleagues also want to run these commands? Do they have their own separate state files?

These questions can be answered with the concept of the Terraform Backend.

“A backend in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.


Here are some of the benefits of backends:

◉ Working in a team: Backends can store their state remotely and protect that state with locks to prevent corruption. Some backends such as Terraform Cloud even automatically store a history of all state revisions.

◉ Keeping sensitive information off disk: State is retrieved from backends on demand and only stored in memory. If you’re using a backend such as Amazon S3, the only location the state ever is persisted is in S3.

◉ Remote operations: For larger infrastructures or certain changes, terraform apply can take a long, long time. Some backends support remote operations which enable the operation to execute remotely. You can then turn off your computer and your operation will still complete. Paired with remote state storage and locking above, this also helps in team environments.”



As you can see from the Terraform documentation, there are many backend options to choose from.

In this post we’ll set up the Terraform Cloud remote backend.



We will use the same Terraform configuration files as we saw in the previous posts, with the addition of the “backend.tf“ file. See the code examples above for a post explaining the various files.

For this example you will need to create a free account on the Terraform Cloud platform.

◉ Create a new organization and give it a name


◉ Create a new CLI Driven workspace


◉ Once created, navigate to the “General” page under “Settings”


◉ Change the “Execution Mode” to “Local”


You have two options with Terraform Cloud:

◉ Remote Execution – Let Terraform Cloud maintain the state and run the plan and apply commands

◉ Local Execution – Let Terraform Cloud maintain the state but you run the plan and apply commands on your local machine

In order to have Terraform Cloud run the commands, you will either need public access to the endpoints or you will need to run an agent in your environment (similar to Intersight Assist configuring on-premises devices).

Agents are available as part of the Terraform Cloud business plan. For the purposes of this post Terraform Cloud will manage the state while we will run the commands locally.


◉ Navigate back to the production workspace and you should see that the queue and variables tabs have been removed.

◉ Copy the example Terraform code and update the backend.tf file as shown below (the Terraform files can be found in the GitHub repo above)
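For reference, the backend.tf contents look something like this (a minimal sketch; the organization and workspace names are placeholders you should replace with your own):

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      name = "production"
    }
  }
}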


◉ Navigate to the Settings link at the top of the page and then API Tokens


◉ Create an authentication token
◉ Copy the token
◉ On your local machine create a file (if it doesn’t already exist) in the home directory with the name .terraformrc
◉ Add the credentials/token information that was just created for your organization. Here is an example:

CONMURPH:~$ cat ~/.terraformrc
credentials "app.terraform.io" {
  token = "<ENTER THE TOKEN HERE>"
}

◉ You should now have the example Terraform files from the GitHub repo above, an updated backend.tf file with your organization/workspace, and a .terraformrc file with the token to access this organization
◉ Navigate to the folder containing the example Terraform files and your backend.tf file
◉ Run the terraform init command. If everything is correct you should see the remote backend initialised and the ACI plugin installed


◉ Run the terraform plan and terraform apply commands to apply the configuration changes.
◉ Once complete, if the apply is successful have a look at your Terraform Cloud organization.
◉ In the States tab you should now see the first version of your state file. When you look through this file you’ll see it’s exactly the same as the one you previously had on your local machine, however now it’s under the control of Terraform Cloud.
◉ Finally, if you want to collaborate with your colleagues, you can all run the commands locally and have Terraform Cloud manage a single state file. (You may need to investigate state locking depending on how you are managing the environment.)


Source: cisco.com