Thursday, 30 June 2022

Deployment Options Expand with New Cisco DNA Center Virtual Appliance

Virtualization technology has changed the world of IT and enabled cloud computing. It has also been embraced by Cisco customers due to its flexibility and cost benefits. That demand is behind our recent announcement at Cisco Live of the Cisco DNA Center Virtual Appliance, which gives customers new deployment options for our network controller, whether deployed within the company data center or in public and private clouds.

Why a Virtual Appliance?

A virtual appliance provides operational flexibility and choice. For new Cisco DNA Center customers, choosing a Cisco DNA Center Virtual Appliance for deployment in their data center eliminates additional capital expenditures, supply chain worries, long lead times for orders, and truck rolls.

A virtual appliance offers other benefits as well: it eliminates lengthy and expensive compliance and certification checks, deploys quickly and automatically, and delivers high availability using native platform features. A virtual appliance in the cloud can also scale out; with the Cisco DNA Center Virtual Appliance in the cloud, customers can manage up to 5,000 devices.

Multiple Options for New and Existing Cisco DNA Center Customers

The Cisco DNA Center Virtual Appliance is designed to be deployed in a public cloud service, starting with AWS (and later Microsoft Azure and Google Cloud Platform), or in a VMware ESXi (and later Hyper-V and KVM) virtual environment located on-premises or in a co-location facility (Figure 1).

Figure 1. On-premises and Cloud Versions

These virtual appliances from Cisco have feature parity with today’s physical Cisco DNA Center platform (Figure 2). Additionally, customers can take advantage of native high availability features from AWS and VMware to deliver quality performance and minimize downtime.

Figure 2. Feature Parity Across Physical and Virtual Appliance Versions

We’re providing our customers with options because some customers, especially government agencies with strict security requirements, don’t want to deploy management solutions in the cloud. They require physical Cisco DNA Center appliances and Cisco will continue to provide them. We fully support the air gap capability to ensure that networks can be physically isolated from unsecured networks like the public Internet or an unsecured LAN.

Cisco DNA Center Deployments, License Portability, Prime Migrations


Current DNA Center customers wanting to expand to the cloud can quickly, easily, and cost-effectively add a separate instance of Cisco DNA Center Virtual Appliance to remote offices or branches, maintaining a physical appliance in their central data center. This hybrid approach is seamless due to license portability and the choice of different platforms. You can easily deploy Cisco DNA Center in the data center or in a cloud, using the same license.

Cisco DNA Center Virtual Appliance is an option for customers migrating from Cisco Prime management infrastructure to Cisco DNA Center. Cisco Prime Infrastructure (current Release 3.10 Patch 1) includes a Cisco DNA Center coexistence and migration feature that allows users to easily export data from Cisco Prime Infrastructure to Cisco DNA Center. The two management and control systems can be operated in parallel so IT teams can train and get familiar with Cisco DNA Center before a complete system migration is performed. Teams can begin to migrate as soon as they are comfortable with the new paradigm for NetOps, AIOps, SecOps, and DevOps capabilities that Cisco DNA Center offers.

The Cisco DNA Center Virtual Appliance is here. Now you can manage and troubleshoot your network using Cisco DNA Center as a physical or a virtual appliance. Or deploy both types of appliances, on-premises or in the cloud. Then sit back and manage your network with a steady hand using guided Cisco workflows specific to job roles in NetOps, AIOps, SecOps, or DevOps.

Source: cisco.com

Tuesday, 28 June 2022

Cisco Catalyst 9200CX Series switches now in Compact size

Hybrid work has become prevalent everywhere and it is here to stay. It is important for your network to be able to handle business demands more efficiently and remotely. This is especially emphasized in extended small enterprise and campus locations. Cisco Catalyst 9200 Series switches offer trusted network capabilities, with more flexibility, energy efficiency, and ease.

Hybrid work extended

Just 2 or 3 years ago, you probably didn’t even know what it was like to do hybrid work outside of the office. Now, you cannot imagine doing your job without it. You can be working in the office, or hybrid working in a café, with a nice breeze, reviewing security anomalies.

Hybrid work is a reality, and connectivity needs are changing every day. In some cases, deployments are temporary installations or have smaller, more efficient requirements. You need a versatile, predictable network, not just in small branch offices but also extended across your campus and temporary settings. You need this network to make your hybrid work, work. All of these needs are addressed by the trusted and powerful backend infrastructure the Cisco Catalyst 9000 switches can deliver.

Connect with flexibility

Cisco has been focused on delivering products to support hybrid work. Cisco offers more flexibility in network deployments with more power density per size at a lower cost, efficient power options, and secure Zero Trust networks to simplify IT jobs.

Imagine a switch that offers PoE (Power over Ethernet) so you can connect more power-hungry devices like laptops, monitors, lighting, HVAC, and refrigerators to a previously siloed network, enabling more flexibility for secure hybrid networks. All of this can be supported with the Cisco Catalyst 9200 Series switches, allowing you to work more flexibly and confidently, whether remotely or in small business branches and campuses, extending your hybrid work environments.

Efficient Smart Buildings

Energy efficiency impacts the bottom line and is environmentally friendly – so it is a win-win in your operations. PoE ports bring switches closer to the endpoints while facilitating efficient power usage and consolidated networks. This is especially practical in smart buildings to provide sustainable and healthier spaces to meet the demands of hybrid work.

The Catalyst 9200 Series switches, supporting Class 6 PoE devices, can offer efficiencies ranging from lower power consumption to reduced power losses on some models.
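
To make the power-budget point concrete, here is a rough sketch (not a Cisco tool) of how a PoE power budget is consumed, using the per-port PSE output levels defined for IEEE 802.3at/bt power classes. The 370 W budget in the example is an assumed figure for illustration, not a specification of any particular Catalyst 9200 model.

```python
# Illustrative sketch: tracking a switch's PoE power budget using the
# per-port PSE maximums defined by IEEE 802.3af/at/bt power classes.
PSE_WATTS = {3: 15.4, 4: 30.0, 5: 45.0, 6: 60.0}  # Class 6 = 60 W (802.3bt)

def poe_budget_remaining(total_budget_w, connected_classes):
    """Return the remaining PoE budget after powering the given classes."""
    draw = sum(PSE_WATTS[c] for c in connected_classes)
    if draw > total_budget_w:
        raise ValueError(f"Oversubscribed by {draw - total_budget_w:.1f} W")
    return total_budget_w - draw

# Example: an assumed 370 W budget with four Class 6 devices attached
print(poe_budget_remaining(370, [6, 6, 6, 6]))  # 130.0 W left for more ports
```

The same arithmetic is what a planning tool would run per switch before attaching additional Class 6 endpoints.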

Lower energy bills AND help the planet without compromising your connectivity. Yes, please!

Connect with ease to ‘set it and forget it’

IT teams love the Catalyst 9200 switches because of features like Zero-Touch Provisioning (ZTP) and flexible power options. ZTP is a ‘must have’ feature for small branches where IT teams can automatically set up devices using a switch feature – and eliminate most of the manual labor and travel expenses associated with branch upkeep.
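
As a hedged illustration of what a ZTP day-zero script can look like: on Catalyst 9000 switches, ZTP can fetch a Python script over DHCP/HTTP and run it on first boot in the on-box Guest Shell, where a `cli` module exposes a `configure()` call. The hostname, VLAN, and port range below are invented for the example.

```python
# Hypothetical ZTP day-zero script sketch for a small branch switch.
# The baseline values (hostname, VLAN, port range) are illustrative only.

def baseline_config(hostname, vlan_id, vlan_name, access_ports):
    """Build the day-zero configuration lines for a small branch switch."""
    return [
        f"hostname {hostname}",
        f"vlan {vlan_id}",
        f" name {vlan_name}",
        f"interface range {access_ports}",
        f" switchport access vlan {vlan_id}",
        " spanning-tree portfast",
    ]

def provision():
    # Only available on-box in Guest Shell; imported lazily so this
    # module stays importable (and testable) off-box.
    from cli import configure
    configure(baseline_config("BRANCH-SW-01", 10, "USERS",
                              "GigabitEthernet1/0/1-8"))

if __name__ == "__main__":
    # Off-box, just show the configuration the script would push
    for line in baseline_config("BRANCH-SW-01", 10, "USERS",
                                "GigabitEthernet1/0/1-8"):
        print(line)
```

The point is the workflow, not the exact lines: the switch powers up, pulls the script, configures itself, and no truck roll is needed.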

Executive-level C-Suite decision makers love Catalyst 9200 Series switches because they are predictable and can help to reduce costs so lean IT teams can ‘set it and forget it’ when doing out-of-the-box installations at small branches and other sites.

What else is new?

The Cisco Catalyst 9200CX compact models extend Layer 3 network access with all the features of the Catalyst 9200 Series switches, plus even more flexibility thanks to their smaller size and energy-efficient fanless operation. The smaller footprint and quieter fanless design mean the switch can go in places where other switches cannot, such as under desks, in closets, on walls, and at the checkout counter for retail point-of-sale (POS) installations.

More use cases include locations that are easy to set up and easy to dismantle, such as ATM rentals, small office/home offices (SOHO), extended hospitals, mobile clinics, classrooms, cruise ships, sports games, festivals, events, and pop-up kiosks.

One quick look and you will notice something different about the Catalyst 9200CX models. The enclosure is designed to reduce the costs of cooling and be more environmentally friendly.

Additional Key Benefits of Catalyst 9200CX

◉ Naturally cooled fanless operation

◉ Multiple port choices with higher speeds; uplinks increase from 1G to 10G on some models

◉ AC/DC power convergence with increased power efficiencies and reduced conversion losses

◉ Zero Trust security with policy-based segmentation, for less downtime

◉ Built-in micro-SSD (Solid State Drive) card slot for offline setup

Source: cisco.com

Sunday, 26 June 2022

Autonomous Operations in Mining

Trend Overview

By the end of 2021, Caterpillar had hauled more than 4 billion tons of product and driven more than 145 million kilometers autonomously. As an aside, that's about the distance of a round trip to Mars. Autonomous technology is mature.

Perhaps haulage is the simplest of all autonomous problems to solve and has the most significant return on investment. In 2017, Rio Tinto identified that in one year, each of their autonomous trucks had 700 more production hours than an average conventional truck. Autonomous trucks are 15% less expensive to operate and generate up to 30% more productivity.

With these substantial benefits, you would think mines would be trending toward full autonomy. Here are two of the most significant challenges.

Reliable wireless coverage everywhere

Reliable and pervasive wireless access to the autonomous system is critical for all elements of an autonomous environment. For the haulage use case, the path of haul trucks is well defined and covers only a small percentage of a full mine. Coverage in that well-defined region is much less costly than full and reliable coverage of an entire mine.

Use cases like autonomous dozing, autonomous operation of service vehicles, and other systems, however, could be anywhere in the mine, which demands much broader coverage.

Reliable instrumentation and control software

In the Caterpillar example above, all the critical components are controlled by Caterpillar. Most new Caterpillar equipment can be bought with all the sensors and actuators required for autonomous operation. Cat Command is the autonomous system that coordinates all the vehicles in the autonomous zone. Even vehicles that are not made by Caterpillar must be fitted with Cat Command software so they are visible to the autonomous system.

In today’s mine there are numerous vehicles, gauges, valves, and measurement points that are not connected and may not even have the sensors required for autonomous operation.

Every mining company needs to make a business decision about which processes or activities should become autonomous in their mines.

Industry POV

Cisco’s infrastructure solutions are a critical part of an autonomous mining solution. Here are two practical ways that Cisco technology makes autonomous projects more successful.

Reliable wireless coverage

Cisco Wi-Fi was the early favorite for wireless connectivity to autonomous trucks. Caterpillar and Sandvik have done extensive testing with Cisco Wi-Fi and continue to support this technology. Since then, many mining companies have started testing and deploying LTE in hopes that it will provide broader mine coverage at a similar price point and more consistent connectivity. Now, Cisco is also seeing an increase in its Ultra-Reliable Wireless for autonomous use cases, mostly because of its price point and very high reliability.

The important consideration is that Cisco has solutions in all three of these technologies for autonomous operation in an integrated architecture.

Broad instrumentation partnerships

The culture at Cisco is one of partnership, and its partner ecosystem includes autonomous system providers, instrumentation vendors, analytics platforms, and numerous other technology companies that together provide a platform for autonomy.

Although haulage solutions are usually self-contained systems with very few outside elements, other autonomous systems will likely have far more diversity in their sensors, actuators, software, and analytics. That approach will require a rich ecosystem of partners like the one Cisco operates in.

Source: cisco.com

Saturday, 25 June 2022

Our future network: insights and automation

Insights and automation will power our future network. Think of it as a circular process: collect data from network infrastructure. Analyze it for insights. Share those insights with teams to help them improve service. Use the insights to automatically reprogram infrastructure where possible. Repeat. The aim is to quickly adapt to whatever the future brings—including new traffic patterns, new user habits, and new security threats.
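
The circular process above can be sketched in a few lines of Python; every function and metric name below is a stand-in for illustration, not a Cisco API.

```python
# Minimal sketch of the collect -> analyze -> share -> reprogram loop.
# Metrics and thresholds are invented sample values.

def collect():
    # Pull telemetry from the infrastructure (hard-coded sample here)
    return {"wan1_util_pct": 92, "wan2_util_pct": 35}

def analyze(metrics):
    # Turn raw data into an actionable insight: find congested links
    return [link for link, util in metrics.items() if util > 85]

def share(insights):
    # Surface the insight to the teams that can improve service
    for link in insights:
        print(f"ALERT: {link} is congested")

def reprogram(insights):
    # Automatically reprogram the infrastructure where possible
    for link in insights:
        print(f"Steering traffic away from {link}")

def run_cycle():
    """One pass of the loop; in production this repeats on a schedule."""
    hot_links = analyze(collect())
    share(hot_links)
    reprogram(hot_links)
    return hot_links

run_cycle()
```

In practice each step maps to real systems (telemetry collectors, analytics platforms, orchestrators), but the control-loop shape stays the same.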

Now I’ll dive into more detail on each block in the diagram.

Insights


Data foundation. Good insights can only happen with good data. We collect four types of data:

◉ Inventory data for compliance reporting and lifecycle management
◉ Configuration data for audits and to find out about configuration “drift”
◉ Operational data for network service health monitoring
◉ Threat data to see what parts of our infrastructure might be under attack—e.g., a DDoS attack on the DMZ, or a botnet attack on an authentication server

Today, some network data is duplicated, missing (e.g., who authorized a change), or irrelevant. To prepare for our future network, we’re working to improve data quality and store it in centralized repositories such as our configuration management database.

Analytics. With a trusted data foundation, we’ll be able to convert data to actionable insights. We’re starting by visualizing data—think color-coded dials—to make it easier to track key performance indicators (KPIs) and spot trends. Examples of what we track include latency and jitter for home VPN users, and bandwidth and capacity for hybrid cloud connections. We’re also investing in analytics for decision support. One plan is tracking the number of support tickets for different services so we can prioritize the work with the biggest impact. Another is monitoring load and capacity on our DNS infrastructure so that we can automatically scale up or down in different regions based on demand. Currently, we respond to performance issues manually—for instance, by re-routing traffic to avoid congestion. In our future network we’ll automate changes in response to analytics. Which leads me to our next topic: automation.
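
The DNS capacity example can be sketched as a simple decision rule: compare observed queries per second in a region against the deployed resolver capacity and recommend scaling out or in. All capacities, thresholds, and region names below are invented assumptions.

```python
# Illustrative autoscaling decision for regional DNS infrastructure.
CAPACITY_PER_NODE_QPS = 50_000   # assumed per-resolver capacity

def scaling_decision(region_qps, nodes, high=0.80, low=0.30):
    """Return 'scale-out', 'scale-in', or 'hold' for one region."""
    utilization = region_qps / (nodes * CAPACITY_PER_NODE_QPS)
    if utilization > high:
        return "scale-out"
    if utilization < low and nodes > 1:   # never drop below one node
        return "scale-in"
    return "hold"

# Example regional snapshot: (observed queries/sec, deployed nodes)
demand = {"amer": (130_000, 3), "emea": (20_000, 2), "apjc": (60_000, 2)}
for region, (qps, nodes) in demand.items():
    print(region, scaling_decision(qps, nodes))
```

Feeding a rule like this from live telemetry instead of manual observation is exactly the move from "respond manually" to "automate changes in response to analytics".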

Automation


Policy and orchestration. February 2022 marked a turning point: we now fulfill more change requests via automation than we do manually. As shown in the figure, we automatically fulfilled more than 7,500 change requests in May 2022, up from fewer than 5,000 just six months earlier. Examples include automated OS upgrades with Cisco DNA Center Software Image Management (SWIM), compliance audits with an internally developed tool, and daily configuration audits with an internal tool we’re about to swap out for Cisco Network Services Orchestrator. We have strong incentives to automate more and more tasks. Manual activities slow things down, and there’s also the risk that a typo or overlooked step will affect performance or security.

In our future network, automation will make infrastructure changes faster and more accurate. Our ultimate goal is a hands-off, AIOps approach. We’re building the foundation today with an orchestrator that can coordinate top-level business processes and drive change into all our domains. We are working closely with the Cisco Customer Experience (CX) group to deploy a Business Process Automation solution. We’re developing workflows that save time for staff by automating pre- and post-validation and configuration management. The workflows integrate with IT Service Management, helping us make sure that change requests comply with Cisco IT policy.

Release management. In the past, when someone submitted a change request one or more people manually validated that the change complied with policy and then tested the new configuration before putting it into production. This takes time, and errors can affect performance or security. Now we’re moving to automated release pipelines based on modern software development principles. We’re treating infrastructure as code (IaC), pulling device configurations from a single source of truth. We’ve already automated access control list (ACL) management and configuration audits. When someone submits a change to the source of truth (typically Git), the pipeline automatically checks for policy compliance and performs tests before handing off the change for deployment.
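
A minimal sketch of the automated policy-compliance step in such a pipeline might look like the following. The policy rules and regular expressions are invented examples, not Cisco IT's actual policy.

```python
# Sketch: validate a candidate device configuration (pulled from the Git
# source of truth) against policy rules before it can be deployed.
import re

POLICY_RULES = [
    ("no plaintext passwords", re.compile(r"^username \S+ password ")),
    ("no telnet on vty lines", re.compile(r"transport input .*telnet")),
]

def check_policy(config_text):
    """Return a list of (rule, offending line) violations in the config."""
    violations = []
    for line in config_text.splitlines():
        for name, pattern in POLICY_RULES:
            if pattern.search(line.strip()):
                violations.append((name, line.strip()))
    return violations

candidate = """\
hostname EDGE-RTR-07
username admin password letmein
line vty 0 4
 transport input ssh
"""
for name, line in check_policy(candidate):
    print(f"BLOCKED ({name}): {line}")
```

In a real pipeline, a non-empty violation list would fail the pre-deployment check and the change would never reach production.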

The Road Ahead


To sum up, in our future network, the only road to production is through an automated pipeline. Automation helps us adapt more quickly to unexpected change, keeps network configuration consistent worldwide, and reduces the risk of errors. We can’t anticipate what changes our business will face between now and 2025—but with insights and automation, we’ll be able to adapt quickly.

Source: cisco.com

Thursday, 23 June 2022

Get Brilliant Results by Using Cisco 700-760 ASAEAM Practice Test

Cisco 700-760 ASAEAM Exam Description:

The Advanced Security Architecture Express for Account Managers (ASAEAM 700-760) is a 90-minute exam that tests the knowledge of the Cisco Security portfolio required for a registered partner organization to obtain the Security specialization in the Account Manager (AM) role.

Cisco 700-760 Exam Topics:

  1. Threat Landscape and Security Issues - 20%
  2. Selling Cisco Security - 15%
  3. Customer Conversations - 15%
  4. IoT Security - 15%
  5. Cisco Zero Trust - 15%
  6. Cisco Security Solutions Portfolio - 20%

People: A cornerstone for fostering security resilience

Security resilience isn’t something that happens overnight. It’s something that grows with every challenge, pivot and plot change. While organizations can invest in solid technology and efficient processes, one thing is critical in making sure it translates into effective security: people.

What impact do people have on security resilience? Does the number of security employees in an organization affect its ability to foster resilience? Can a lower headcount be supplemented by automation?

In a world where uncertainty is certain, we recently explored how people can contribute to five dimensions of security resilience, helping businesses weather the storm.

Through the lens of our latest Security Outcomes Study – a double-blind survey of over 5,100 IT and security professionals – we looked at how people in SecOps teams can influence organizational resilience.

Strong people = successful security programs  

SecOps programs built on strong people, processes and technology see a 3.5X performance boost over those with weaker resources, according to our study. We know that good people are important to any organization, and they are fundamental to developing capable incident response and threat detection programs.

Why are detection and response capabilities important to look at? Because they are key drivers of security resilience. In the study, we calculated a ratio of SecOps staff to overall employees for all organizations. Then, we compared that ratio to the reported strength of detection and response capabilities.

Effect of security staffing ratio on threat detection and incident response capabilities

What we can clearly see is that organizations with the highest security staffing ratios are over 20% more likely to report better threat detection and incident response than those with the lowest. However, the overall average highlights that organizations not on the extreme ends of the spectrum are more likely to report roughly equal levels of success with SecOps — indicating that headcount alone isn’t a sure indicator of an effective program or resilient organization. It can be inferred that experience and skills also play a pivotal role.

Automation can help fill in the gaps


But what about when an organization is faced with a “people gap,” either in terms of headcount or skills? Does automating certain things help build security resilience? According to our study, automation more than doubles the performance of less experienced people.

Effect of staffing and automation strength on threat detection and incident response capabilities

In the graph above, the lines compare two different types of SecOps programs: one without strong people resources, and one with strong staff. In both scenarios, moving to the right shows the positive impact that increasing automation has on threat detection and incident response.

Among the survey respondents, only about a third of organizations that lack strong security staff and don’t automate processes report sound detection and response.

When one of three security process areas (threat monitoring, event analysis, or incident response) is automated, we see a significant jump in capability among organizations that say their tech staff isn’t up to par. Automating two or three of these processes continues to increase strength in detection and response.

Why does this matter? Because over 78% of organizations that say they don’t have adequate SecOps staffing resources still report that they are able to achieve robust capabilities through high levels of automation.

A holistic approach to security resilience


When it comes to security resilience, however, we have to look at the whole picture. While automation seems to increase detection and response performance, we can’t count people out. After all, over 95% of organizations that have a strong team AND advanced automation report SecOps success. Organizations need to have the right blend of people and automation to lay the foundation for organization-wide security resilience.

As your business continues to look towards building a successful and resilient SecOps program, figuring out how to utilize your strongest staff, and where to best employ automation, will be a step in the right direction.

Source: cisco.com

Wednesday, 22 June 2022

Is It Possible to Pass the Cisco 300-730 SVPN Exam At First Attempt?

The Cisco Security certification is one of the industry's most renowned career certifications. The CCNP Security concentration exam 300-730 SVPN, also known as Implementing Secure Solutions with Virtual Private Networks, is designed for individuals looking to cultivate the crucial skills required for implementing secure remote communications with the help of VPN solutions. The exam counts toward two Cisco certifications, namely the CCNP Security and Cisco Certified Specialist - Network Security VPN Implementation certifications.

Cisco 300-730 SVPN is a 90-minute exam available in English and Japanese. The exam costs $300. If you crack this CCNP Security concentration exam, you will prove your proficiency in working with VPN solutions, qualifying for security job positions such as network engineer, network architect, and network administrator.

Is It Possible to Pass the Cisco 300-730 SVPN Exam At First Attempt?

If you aspire to take the CCNP Security 300-730 SVPN exam, you have to chart out a strategy that will ease your preparation process. Follow the tips outlined below, and they will go a long way toward helping you pass the Cisco SVPN exam. Let's begin.

1. Know the Cisco 300-730 SVPN Exam Objectives

The first step in your Cisco 300-730 exam preparation is to become acquainted with the exam topics. Make sure you have the exam objectives handy, because they serve as the definitive guide. They will also help you create your strategy, because you will know what you are expected to learn. As a result, you will stay on track throughout your preparation.

2. Take an Official Instructor-Led Training Course

An official training course is an excellent way to gain the skills and knowledge for any Cisco exam. The official training course prepares you with the knowledge, skills, and hands-on practice you need to carry out the tasks at the workplace.

3. Watch Online Videos

If you have spare time, explore the internet and find videos related to the Cisco 300-730 SVPN exam. Learning from videos is one of the most enjoyable ways to study, and you won't be disappointed by what you can find, particularly on YouTube: up-to-date videos carefully prepared by seasoned professionals who want to support exam candidates.

4. Participate in Online Communities

It is crucial to immerse yourself in online community discussions, as they help you exchange knowledge and skills with peers. In most circumstances, a Cisco community is the best option for filling your knowledge gaps with help from fellow candidates.

A community exposes you to the variety of skills, concepts, and techniques evaluated in the Cisco SVPN exam. Furthermore, you can share study resources, tips, and other valuable information to boost your exam preparation.

5. Evaluate Your Preparation Level with Cisco 300-730 SVPN Exam

Once you have obtained the essential skills and knowledge, it is time to evaluate yourself. For this purpose, Cisco 300-730 SVPN practice tests from nwexam are a good way to decide whether you have absorbed the information needed to crack the exam. Practice tests familiarize you with the actual exam environment and mirror the real exam's structure and question types.

Reasons Why You Must Pass Cisco 300-730 SVPN Exam And Achieve CCNP Security Certification

There are many reasons the CCNP Security certification is the key to success in the network security field:

Amazing Job opportunities

CCNP Security certification satisfies the standards for many different positions, such as IT executive, computer and information systems manager, network engineer, computer systems and network administrator, and computer systems designer, to list a few.

Acknowledgment of Skills

Getting a CCNP Security certification is a way to demonstrate your advanced knowledge and skills in the field of computer networking. Certification from a reputable organization like Cisco means you will be acknowledged as a highly qualified professional in the field.

Radiant Career Growth

Earning a CCNP Security certification not only helps you discover excellent networking jobs; it also places you at the top of the list when it comes time for internal promotions and career advancement. If you are looking to switch jobs, the Cisco 300-730 certification can help you obtain a high-level job without having to begin at entry level and climb the career ladder.

Boosts Self-Confidence

Passing the Cisco 300-730 SVPN exam can boost your confidence and self-esteem. Rather than hesitating to apply for a job because of limited qualifications or experience, you gain confidence knowing you hold a certification from a leading vendor, Cisco.

Conclusion

Nowadays, having a Cisco certification is synonymous with having great career opportunities. There are many ways to pass the Cisco 300-730 SVPN exam, and the essential step is thorough preparation. Study resources ranging from official training courses to practice tests from the nwexam website will give you a greater chance of passing the exam on the first try.

Sunday, 12 June 2022

Perspectives on the Future of Service Provider Networking: Mass Network Simplification

Traditional service provider networks have become very complex, creating significant overhead across the engineering and operations teams tasked with building, expanding, and maintaining them. This results in higher costs, reduced agility, and increased environmental impact. Built on multiple technology layers, domains, protocols, operational silos, and proprietary components stacked over years or decades, service provider networks must go through mass simplification to keep pace with society’s increasing business and sustainability demands. Simplification is key to allowing service provider networks to continue supporting exponential traffic growth and emerging demands for service agility while reducing the cost of services, power consumption, and footprint requirements.

In some sense, talking about why networks need to be simplified is like talking about the importance of exercising for our health and well-being – both can start small and deliver clear, unquestionable long-term benefits, yet we can always find an excuse not to do them. And like many people that struggle to start an exercise routine and maintain it over the long run, many operators struggle to embrace simplicity as a long-term network design principle that benefits the health and well-being of their network.

Intuitively, a leaner network with fewer moving parts will be simpler, more efficient, consume fewer resources, and allow for smoother operation, thus lowering its total cost of ownership. Similarly, using common designs, protocols, and tools across the end-to-end network improves agility. Such simplifications can be achieved through small, consistent changes from network design to operations. Over time, networks will achieve compounded benefits in cost savings, lower power use, and improved environmental impact. Operations will be more agile too, directly impacting customer experience.

Mass network simplification is about taking a holistic approach to apply modern network design and operational practices, embracing simplification opportunities across every network domain, and automating everything that can be automated. It’s also about making simplicity part of the engineering and operations culture.

There are several potential areas of simplification to aim for, from the end-to-end network architecture all the way down to the network device level. The following examples are grouped by network level:

End-to-end architecture
◉ Remove legacy technologies and converge services onto modern IP networks. Example: moving TDM-based private line and dedicated wavelength services onto IP/MPLS networks using circuit emulation, thereby eliminating the need for dedicated legacy SONET/SDH or OTN switching equipment.
◉ Integrate technologies to remove redundancy and lower interconnect costs. Example: integrating advanced DWDM transponder functions into pluggable optics that go directly into router ports using Digital Coherent Optics (DCO) technology.
◉ Collapse technology layers, remove functional redundancy, and converge services and network intelligence at the IP/MPLS layer. Example: adopting a Routed Optical Networking solution that converges L1, L2, and L3 services and advanced network functions (e.g., traffic engineering and network resiliency) at the IP/MPLS layer while simplifying DWDM network requirements, since routers are connected hop-by-hop and the IP/MPLS network is self-protected.
◉ Use common technologies end to end, avoiding technology and operational silos. Example: an end-to-end unified forwarding plane using Segment Routing over IPv6 (SRv6) and an end-to-end unified control plane using M-BGP, including EVPN, across core, edge, aggregation, and access networks and data center fabrics, whether distributed to the edge or centralized.

Device
◉ Adopt modern network platforms with simpler and more efficient hardware architectures. Example: state-of-the-art Network Processor Units (NPUs) based on a System on a Chip (SoC) multi-purpose architecture, allowing simpler, more scalable, and more efficient routing platforms.

Protocols
◉ Reduce the number of protocols required to run the network. Example: the IETF's Segment Routing and EVPN standard technologies reduce the number of protocols in an IP/MPLS network from six or seven down to three (a 50% reduction) while improving network resiliency and serviceability.

Management and automation
◉ Build management and automation solutions on open software frameworks. Example: the IETF's ACTN framework, the ONF Transport-SDN framework, and OpenConfig gNMI.
◉ Consolidate software interfaces to open APIs and data models. Example: YANG model-driven APIs using NETCONF and/or gNMI, and T-API interfaces.

Let’s look at two examples of how these network simplifications can be introduced in small steps as part of a long-term initiative – one at the IP/MPLS network protocol level and another at the end-to-end network architecture level.

Mass network simplification in practice


IP/MPLS networks provide unmatched multi-service capabilities. They support Layer 1 services through circuit emulation; Layer 2 services, both point-to-point and multipoint (i.e., E-Line, E-LAN, and E-Tree services); Layer 3 VPN services; and various internet services. The technology required to support those services was developed and standardized over many years, and, as a result, traditional IP/MPLS networks require many individual protocols – typically six or seven. Segment routing (SR – IETF RFC 8402 and related) was developed at the Internet Engineering Task Force (IETF) specifically to improve this situation. By embracing a software-defined networking (SDN) framework, segment routing combined with Ethernet VPN (EVPN – IETF RFC 7432) can reduce the number of protocols required in the IP/MPLS network by 50% or more, down to three: segment routing, an interior gateway protocol (IGP), and border gateway protocol (BGP) as a service protocol. Resource reservation protocol (RSVP) and label distribution protocol (LDP) can be eliminated, as can other transport and service signaling protocols.


Segment routing also simplifies network devices because it doesn’t require them to maintain state about traffic engineering tunnels otherwise required by the RSVP-TE protocol. Instead, segment routing embraces an SDN architecture where traffic engineering is supported by network controllers.

Segment routing was created with smooth network migrations in mind, and EVPN implementations have also been enhanced to allow for smooth migrations. To achieve that, both allow co-existence of the old and new protocol stacks. Co-existence means both traditional IP/MPLS protocols and segment routing are enabled on the same network, or traditional networks can be connected to segment routing networks through routers that provide interworking functions so traffic can smoothly cross between them. Segment routing was also created with operational simplicity in mind: it is enabled with simple configurations, since fewer protocols are involved. As a result, network operators have been migrating to segment routing for quite some time, and many have already fully transitioned their networks to this much simpler architecture.

At the end-to-end architecture level, service providers have also had to stack multiple technology layers. This multi-layer architecture typically has at least four key technology components: IP/MPLS for packet services, OTN switching for TDM grooming and private line services, DWDM transponders for mapping grey signals to DWDM channels, and DWDM ROADMs to cross-connect DWDM channels across multiple fibers. Each technology layer has its own management system and runs its own complex protocol stack. Multiply this for each network domain (WAN, metro, access, etc.), add a multi-vendor component, and the result is a very complex architecture that is hard to plan, design, deploy, and operate. It is also very inefficient, as it is hard to optimize all the network resources mobilized for any given service and to troubleshoot network faults.

Technology innovations have made possible the emergence of routed optical networking, a much simpler and more cost-effective end-to-end network architecture. These are the key improvements delivered by routed optical networking:

◉ Full services convergence at the IP/MPLS layer, including private line services through private line emulation (PLE) technology

◉ Elimination of OTN switching – OTN services are supported by PLE technology

◉ Integration of advanced transponder functions into pluggable optics using digital coherent optics (DCO) technology that goes directly into the router ports

◉ Centralization of network intelligence at the IP/MPLS layer for traffic engineering and network resilience removes the dependency on complex transport control planes (e.g., WSON/SSON)

◉ Use of industry-defined open interfaces and data models for management and automation with segment routing to further simplify the end-to-end network


Even such a breakthrough network transformation like routed optical networking can start small. The first step can involve simply replacing transponders with DCO pluggable optics as you adopt 400GE in your IP/MPLS network, while maintaining your existing DWDM network. In parallel you can start your path towards segment routing adoption. Over time, you can embrace more automation and start migrating TDM services to the IP/MPLS layer, until you eventually adopt all the innovations and deploy a full featured routed optical network. As we speak, many service providers have already started these network transitions.

Source: cisco.com

Saturday, 11 June 2022

Cisco 700-651 CASE Exam | Best Collaboration Architecture Sales Essentials Practice Test

Cisco 700-651 CASE Exam Description:

The 700-651 CASE exam tests a candidate's knowledge of the skills needed by an account manager to design and sell Cisco collaboration architecture solutions.

Cisco 700-651 Exam Overview:

Related Article:

Get Ready to Take Cisco Collaboration Architecture Sales Essentials 700-651 CASE Exam

Metrics that Matter


In large, complex organizations, sometimes the only metric that seems to matter is mean time to innocence (MTTI). When a system breaks down, MTTI is the tongue-in-cheek measure of how long it takes to prove that the breakdown was not your fault. Somehow, MTTI never makes it into the slide deck for the quarterly board meeting.

With the explosion of tools available today—observability platforms for gathering system telemetry, CI/CD pipelines with test suite timings and application build times, and real user monitoring to track performance for the end user—organizations are blessed with a wealth of metrics. And cursed with a lot of noise.

Every team has its own set of metrics. While every metric might matter to that team, only a few of those metrics may have significant value to other teams and the organization at large. We’re left with two challenges:

1. Metrics within a team are often siloed. Nobody outside the team has access to them or even knows that they exist.

2. Even if we can break down the silos, it’s unclear which metrics actually matter.

Breaking down silos is a complex topic for another post. In this one, we’ll focus on the easier challenge: highlighting the metrics that matter. What metrics does a technology organization need to ensure that, in the big picture, things are working well?  Are we good to push that change, or could the update make things worse?

Availability Metrics

Humans like big, simple metrics: the Dow Jones, heartbeats per minute, number of shoulder massages you get per week. To get the big picture in IT, we also have simple, easily-understandable metrics.

Uptime

As a percentage of availability, uptime is the simplest metric of all. We would all guess that anything less than 99% is considered poor. But chasing those last few nines can get expensive. Complex systems designed to avoid failure can cause failure in their own right, and the cost of implementing 99.999% availability—or “five nines”—may not be worth it.
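
To see why each additional nine gets expensive, it helps to translate availability percentages into allowed downtime. A quick sketch in Python (the figures are purely illustrative):

```python
# Allowed downtime per year for a given availability percentage.
# A rough illustration of why each extra "nine" gets expensive.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year at the given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_minutes_per_year(pct):.1f} min/year")
```

Going from 99% to 99.9% reclaims thousands of minutes a year; going from 99.99% to 99.999% reclaims fewer than fifty.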

Mean Time Between Failures (MTBF)

MTBF is the average time between failures in a system. The beauty of MTBF is that you can actually watch your boss start to twitch as you approach MTBF: Will the system fail before the MTBF? After? Perhaps it’s less stressful to throw the breakers intentionally, just to enjoy another 87 days!

Mean Time To Recovery (MTTR)

MTTR is the average time to fix a failure and can be thought of as the flip side of MTBF. Both Martin Fowler and Jez Humble have quoted the phrase, “If it hurts, do it more often,” and that principle seems like it could apply to MTTR as well. Rather than avoiding changes—and generally treating your systems with kid gloves to try and keep MTBF high—why not get better at recovery? Work to reduce your MTTR. Paradoxically, you could enjoy more uptime by caring about it less. 
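
The relationship between these two metrics and uptime can be made concrete: in steady state, availability is approximately MTBF / (MTBF + MTTR), so a cheap cut in MTTR can buy as much uptime as a costly stretch in MTBF. A small sketch with made-up numbers:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability as a fraction: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Made-up numbers: cutting MTTR from 4h to 1h beats
# stretching MTBF from 2000h to 2600h.
print(f"{availability(2000, 4):.5f}")  # 0.99800
print(f"{availability(2600, 4):.5f}")  # 0.99846
print(f"{availability(2000, 1):.5f}")  # 0.99950
```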

Development Metrics

For years, an important improvement metric used by developers was Product Owner Glares Per Day. Development in the 21st century has given us new ways to understand developer productivity, and a growing body of research points to the metrics we need to focus on. 

Deployment Frequency

The outstanding work of Nicole Forsgren, Jez Humble, and Gene Kim in Accelerate demonstrates that teams that can deploy frequently experience fewer change failures than teams that deploy infrequently. It would be a brave move to try and game this metric by deploying every hour from your CI/CD pipeline. However, capturing and understanding this metric will help your team investigate its impediments.

Cycle Time

Cycle time is measured from the time a ticket is created to the healthy deployment of the resulting fix in production. If you needed to fix an HTML tag, how long would it take to get that single change deployed? If deploying it requires calling meetings, you know that your organization's cycle time is too high.

Change Failure Rate

Of all your organization’s deployments, how many need to be rolled back or followed up with an emergency bugfix? This is your change failure rate, and it’s an excellent metric to try to improve. Improving your change failure rate helps developers proceed more confidently, which in turn improves deployment frequency.
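
Both deployment frequency and change failure rate fall straight out of a deployment log. A minimal sketch, using an entirely hypothetical log:

```python
from datetime import date

# Hypothetical deployment log: (date, needed_rollback_or_hotfix)
deployments = [
    (date(2022, 6, 1), False),
    (date(2022, 6, 3), True),
    (date(2022, 6, 7), False),
    (date(2022, 6, 10), False),
    (date(2022, 6, 14), True),
]

failures = sum(1 for _, failed in deployments if failed)
change_failure_rate = failures / len(deployments)

span_days = (deployments[-1][0] - deployments[0][0]).days or 1
deploys_per_week = len(deployments) / span_days * 7

print(f"Change failure rate: {change_failure_rate:.0%}")      # 40%
print(f"Deployment frequency: {deploys_per_week:.1f}/week")
```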

Error Rate

How many errors per hour does your code create at runtime? Is that better or worse since the last deployment? This is a great metric to expose to stakeholders: Since many demos only show the UI of an application, it’s helpful to see what is blowing up behind the scenes.
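
Error rate is equally easy to derive from a runtime error log. A toy sketch with fabricated log entries:

```python
from collections import Counter

# Fabricated runtime error log entries: (hour bucket, error type)
errors = [
    ("14:00", "TypeError"), ("14:00", "Timeout"), ("15:00", "Timeout"),
    ("15:00", "Timeout"), ("15:00", "KeyError"), ("16:00", "Timeout"),
]

# Errors per hour bucket; compare buckets before and after a deployment.
per_hour = Counter(hour for hour, _ in errors)
print(per_hour)
print(max(per_hour, key=per_hour.get))  # noisiest hour: 15:00
```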

Platform Team Metrics

Metrics often originate from the platform team because metrics help raise the maturity level of their team and other teams. So, which metrics are most helpful? While uptime and error rate matter here too, monthly active users and latency are also important.

Monthly Active Users

Being able to plan capacity for infrastructure is a gift. Monthly active users is the metric that can make this happen. Developers need to understand the load their code will have at runtime, and the marketing team will be incredibly thankful for those metrics.
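
Computing monthly active users is just counting distinct users per month in whatever event stream you have. A sketch with hypothetical login events:

```python
from collections import defaultdict

# Hypothetical login events: (ISO month, user_id)
events = [
    ("2022-05", "ada"), ("2022-05", "bob"), ("2022-05", "ada"),
    ("2022-06", "ada"), ("2022-06", "cei"), ("2022-06", "bob"),
    ("2022-06", "dan"),
]

# Collect the set of distinct users seen in each month.
active = defaultdict(set)
for month, user in events:
    active[month].add(user)

mau = {month: len(users) for month, users in active.items()}
print(mau)  # {'2022-05': 2, '2022-06': 4}
```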

Latency

Just like ordering coffee at Starbucks, sometimes you need to wait a little while. The more you value your coffee, the longer you might be willing to wait. But your patience has limits.

For application requests, latency can destroy the end-user experience. What’s worse than latency is unpredictable latency: If a request takes 100ms one time but 30s another time, then the impact on systems that create the request will be multiplied.
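
Because averages hide exactly this kind of unpredictability, percentile metrics (p50, p95, p99) are the usual way to surface the tail. A rough nearest-rank sketch with invented samples:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

# Invented request latencies in ms; one pathological outlier.
latencies = [100, 110, 95, 105, 120, 98, 102, 99, 115, 30_000]

print(percentile(latencies, 50))  # median: 102
print(percentile(latencies, 99))  # tail: 30000
```

The median looks healthy; only the tail percentile exposes the 30-second request that ruins the end-user experience.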

UX Metrics

Senior and non-technical leadership tend to focus on what they can see in demos. They can be prone to nitpicking the frontend because that is what’s visible to them and the end users. So, how does a UX team nudge leadership to focus on the achievements of the UX instead of the placement of pixels? 

Conversion Rate

The organization always has a goal for the end user: register an account, log in, place an order, buy some coins. It’s important to track these goals and see how users perform. Test different versions of your application with A/B testing. An improvement in conversion rate can mean the difference between profit and loss.

Time on Task

Even if you’re not making an application for employees, the amount of time spent on a task matters. If your users are being distracted by colleagues, children, or pets, it helps if their interactions with you are as efficient as possible. If your end user can complete an order before they need to help the kids with their homework or get Bob unstuck, that’s one less shopping cart abandoned.

Net Promoter Score (NPS)

NPS comes from asking an incredibly simple question: On a scale of 0 to 10, how likely is it that you would recommend this website (or application or system) to a friend or colleague? Embedding this survey into checkout processes or receipt emails is easy. Given enough response volume, you can work out whether a recent change compromised the experience of using a product or service.

If you can compare NPS scores for different versions of your application, then that’s even more helpful. For example, maybe the navigation that the marketing manager insisted on really is less intuitive than the previous version. NPS comparisons can help identify these impacts on the end user.
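
The arithmetic behind NPS is simple: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A sketch with fabricated survey responses for two versions:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Fabricated survey responses for two versions of an application.
old_version = [9, 10, 8, 7, 9, 6, 10, 9, 8, 9]
new_version = [7, 8, 6, 9, 5, 7, 8, 6, 10, 7]

print(nps(old_version))  # 50.0
print(nps(new_version))  # -10.0
```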

Security Metrics

Security is a discipline that touches everything and everyone—from the developer inadvertently creating an SQL injection flaw because Jenna can’t let the product launch slip, to Bob allowing the physical pen tester into the data center because they smiled and asked him about his day. Fortunately, several security metrics can help an organization get a handle on threats.

Number of Vulnerabilities

Security teams are used to playing whack-a-mole with vulnerabilities. Vulnerabilities are built into new code, discovered in old code, and sometimes inserted deliberately by unscrupulous developers. Tackling the discovery of vulnerabilities is a great way to show management that the security team is on the job squashing threats. This metric can also show, for example, how pushing the devs to hit that summer deadline caused dozens of vulnerabilities to crop up.

Mean Time To Detect (MTTD)

MTTD measures how long an issue had been in production before it was discovered. An organization should always be striving to improve how it handles security incidents. Detecting an incident is the first priority. The more time an adversary has inside your systems, the harder it will be to say that the incident is closed.

Mean Time To Acknowledge (MTTA)

Sometimes, the smallest signal that something is wrong turns out to be the red-alert indicator that a system has been compromised. MTTA measures the average time between the triggering of an alert and the start of work to address that issue. If a junior team member raises concerns but is told to put those on ice until after the big release, then MTTA goes up. As MTTA goes up, potential security incidents have more time to escalate.

Mean Time To Contain (MTTC)

MTTC is the average time, per incident, it takes to detect, acknowledge, and resolve a security incident. Ultimately, this is the end-to-end metric for the overall handling of an incident.
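
All four of these means come straight from incident timestamps. A sketch with two entirely hypothetical incident records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incidents: (occurred, detected, acknowledged, resolved)
incidents = [
    ("2022-06-01 02:00", "2022-06-01 06:00", "2022-06-01 06:30", "2022-06-01 10:00"),
    ("2022-06-10 14:00", "2022-06-10 15:00", "2022-06-10 17:00", "2022-06-11 02:00"),
]

def hours(a, b):
    """Elapsed hours between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 3600

mttd = mean(hours(occ, det) for occ, det, _, _ in incidents)   # occurred -> detected
mtta = mean(hours(det, ack) for _, det, ack, _ in incidents)   # detected -> acknowledged
mttc = mean(hours(occ, res) for occ, _, _, res in incidents)   # occurred -> resolved

print(f"MTTD {mttd}h, MTTA {mtta}h, MTTC {mttc}h")
```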

Signal, Not Noise

Amidst the noise of countless metrics available to teams today, we’ve highlighted specific metrics at different points in the application stack. We’ve looked at availability metrics for the IT team, followed by metrics for the developer, platform, UX, and security teams. Metrics are a fantastic tool for turning chaos into managed systems, but they’re not a free ride.

First, setting up your systems to gather metrics can require a significant amount of work. However, data gathering tools and automation can help free up teams from the task of collecting metrics.

Second, metrics can be gamed, and metrics can be confounded by other metrics. It’s always worth checking out the full story before making business decisions solely based on metrics. Sometimes, the appearance of rigor in data-driven decision-making is just that.

At the end of the day, the goal for your organization is to track down those metrics that truly matter, and then build processes for illuminating and improving them.

Source: cisco.com

Thursday, 9 June 2022

Initiative Stresses Periodic Software Upgrades for Better Reliability, Security, Performance, & Enhanced Features


Smartphones regularly push out OS updates and we dutifully download and install them without a second thought. With most laptops, the process is automatic and happens while we are asleep. However, when it comes to enterprise-grade networking software, keeping routers, switches, wireless equipment, and other gear on the latest recommended software release is often uneven.

Among Cisco customers, this is slowly changing. Launched in April 2021, a software conformance initiative at Cisco is driving greater awareness of the benefits of software upgrades and providing tools that make it easier for customers to periodically upgrade to the latest Cisco-recommended networking software releases.

Why Regular Software Upgrades are Vital

Customers buying routers or switches typically deploy the software and then may not keep close track of the version running on the devices. Often, lagging upgrades are the result of network administrators trying to avoid downtime.

Cisco estimates that more than 80% of Cisco hardware in customer networks is running on older versions of software that leave networks less reliable, less secure, and less efficient. Those three categories of risk were the most cited reasons why enterprises are choosing to migrate, according to a recent McKinsey study on software conformance (Figure 1). Compared to the other reasons cited in the survey, the responses demonstrate that most companies upgrade more to avoid risk than to gain new features and capabilities.

Figure 1. McKinsey Survey on Software and Firmware Upgrades

Cisco advises that customers using enterprise products run recommended software releases to get maximum value from the steady stream of innovations developed by Cisco engineers. Unpatched security bugs and loopholes in outdated software can open attack routes for hackers to exploit, and outdated software is one of the most overlooked vectors for cyber-attacks.

The Cisco Software Conformance Initiative 


Whatever reasons companies use to justify networking software upgrades, Cisco is spearheading an internal effort to help customers recognize the need to upgrade and do it as quickly and painlessly as possible. The Software Conformance Initiative specifically targets customers using Cisco DNA Center, Cisco ISE, Cisco SD-WAN, and wireless products within enterprises.

For each Cisco Enterprise Networking product, we maintain and support the two most recently recommended versions. Whenever a new recommended version becomes available, we encourage customers to upgrade. Cisco recommends that each customer’s network be on either of the two most recent recommended releases to maintain an elevated level of security, use features vital to performance and stability, and maintain compatibility with other vendor technologies in their infrastructure.

Falling behind in software versions, however, is common. To get companies caught up, the Software Conformance Initiative does three things:

◉ Spread awareness of new software features and benefits by making customers aware of key updates that relate to their use cases

◉ Apply rigorous criteria and improved validation to real-world environments before suggesting recommended versions

◉ Build upgrade tools and make them available to customers to make the process of upgrading and pre- and post-checks simpler

Spreading Awareness 


With Cisco software engineers delivering innovations and enhancements on a regular basis, the Software Conformance Initiative makes customers aware of the current releases and how their features relate to their planned and implemented use cases. For example, software release 20.3.4.2 for Cisco SD-WAN is recommended because it provides new features like zone-based firewalls, protection from log4j security vulnerabilities, and service insertion tracker support.

The communication about software upgrade opportunities happens through field notices, Systems Engineer Virtual Training (SEVT), defect notifications, Product Security Incident Response Team (PSIRT) advisories, partner events, and end-of-sale (EOS) and end-of-life (EOL) announcements. In parallel, customers are notified, through all channels inside and outside of the products, if they are downloading any non-recommended versions.

Rigor Behind Recommended Versions


As part of the effort to drive rapid adoption of new recommended software, in addition to a laser-sharp focus on improving quality@source, Cisco has invested heavily in changing the way we qualify enterprise networking software. For example, we integrated comprehensive real-world customer scenarios into Cisco R&D labs, enhanced solution test coverage, and introduced key checkpoints to validate software in several customer production networks before making it publicly available. To make each recommended version solid, we have a closed-loop process that bakes learnings from global deployments into the new recommended version.

In addition to these sweeping changes in qualification, we tightened the rigor and criteria a software version must meet before it is marked as recommended on Cisco.com, ensuring that recommended software meets exacting standards of reliability in complex real-world deployments.

Tools Make Upgrading and Migrating Easier 


The Software Conformance Initiative is providing workflows and tools for Cisco ISE, Cisco DNA Center, Cisco SD-WAN, and Cisco wireless and switching products. There are four diverse ways that Cisco is reaching out to our customer base: through Cisco direct sales, partner-driven customer engagements, high-touch support customers, and self-service customers. For each of these environments, Cisco engineers have developed tools to speed up and simplify software upgrade decisions and migrations, including:

◉ Value proposition

◉ Migration tool

◉ Automating pre- and post-upgrade checklists

◉ Software Upgrade playbook with step-by-step procedures, via Cisco Networking BOT (cnBot)

◉ Migration status dashboards (as shown in Figure 2)

Figure 2. Software Upgrade Migration Dashboard

Cisco enterprise networking customers interested in finding out more about software upgrades available for their products can also get information, workflows, and tools from the cnBot―check out my recent cnBot blog post―and support from Cisco TAC. Query the cnBot via Cisco WebEx Teams.

A year since its inception, is the Cisco Software Conformance Initiative working?

With hundreds of upgrades completed (e.g., 3,012 upgrades to Cisco DNA Center version 2.2.3.4 and 718 upgrades to Cisco ISE version 3.1, just last quarter) the answer is a resounding YES.

“What continuous learning does to the mind, software upgrades do to devices.” — Cisco engineer

Source: cisco.com

Tuesday, 7 June 2022

Implementing Infrastructure as Code – How NDFC Works with Ansible and Terraform

Automation has been the focus of interest in the industry for quite some time now. Out of the top tools available, Ansible and Terraform have been popularly used amongst automation enthusiasts like me. While Ansible and Terraform are different in their implementation, they are equally supported by products from the Cloud Networking Business Unit at Cisco (Cisco ACI, DCNM/NDFC, NDO, NXOS). Here, we will discuss how Terraform and Ansible work with Nexus Dashboard Fabric Controller (NDFC). 

First, I will explain how Ansible and Terraform work, along with their workflows. We will then look at the use cases. Finally, we will discuss implementing Infrastructure as Code (IaC).

Ansible – Playbooks and Modules

For those of you who are new to automation, Ansible has two main parts – the inventory file and playbooks. The inventory file gives information about the devices we are automating, including any sandbox environments set up. The playbook acts as the instruction manual for performing tasks on the devices declared in the inventory file.

Ansible becomes a system of documentation once the tasks are written in a playbook. The playbook leverages REST API modules that describe the schema of the data that can be manipulated using REST API calls. Once written, the playbook can be executed using the ansible-playbook command line.

Ansible Workflow

Terraform – Terraform Init, Plan and Apply


Terraform has one main part – the TF template. The template contains the provider details, the devices to be automated, and the instructions to be executed. Three main points about Terraform:

1. Terraform defines infrastructure as code and manages the full lifecycle: it creates new resources, manages existing ones, and destroys those no longer necessary.

2. Terraform offers an elegant user experience for operators to predictably make changes to infrastructure.

3. Terraform makes it easy to re-use configurations for similar infrastructure designs.

While Ansible uses one command to execute a playbook, Terraform uses three to four commands to execute a template. Terraform Init checks the configuration files and downloads required provider plugins. Terraform Plan allows the user to create an execution plan and check if the execution plan matches the desired intent of the plan. Terraform Apply applies the changes, while Terraform Destroy allows the user to delete the Terraform managed infrastructure.

Once a template is executed for the first time, Terraform creates a file called terraform.tfstate to store the state of the infrastructure after execution. This file is used when making subsequent changes to the infrastructure. Execution is also declarative; in other words, the order in which resources are written doesn’t matter.

Terraform Workflow

Use Cases of Ansible and Terraform for NDFC


Ansible executes tasks in a top-to-bottom approach. In the NDFC GUI, it can be tedious and time-consuming to manage all the required configuration when there are many switches in a fabric, for example, configuring multiple vPCs or dealing with network attachments for each switch. Ansible playbooks use a variable called state to perform activities such as creation, modification, and deletion, which simplifies making these changes. The playbook uses whichever modules the task at hand requires to execute the configuration modifications.

Terraform follows an infrastructure-as-code approach to executing tasks. One main.tf file contains all the tasks, which are executed with the terraform plan and terraform apply commands. terraform plan lets the provider verify the tasks and check for errors, and terraform apply executes the automation. To interact with application-specific APIs, Terraform uses providers. Every Terraform configuration must declare a provider, which is installed and used to execute the tasks; providers power all of Terraform’s resource types. The provider block has a field where we specify whether the resources are provided by DCNM or NDFC.

Ansible Code Example

Terraform Code Example

Below are a few examples of how Ansible and Terraform work with NDFC. Using the ansible-playbook command we can execute our playbook to create a VRF and network.

Ansible Playbook Execution Output

Below is a sample of how a Terraform code execution looks: 

Terraform Code Execution Output

Infrastructure as Code (IaC) Workflow 


Infrastructure as Code – CI/CD Workflow

One popular way to use Ansible and Terraform is to invoke them from a continuous integration (CI) process and then promote the changes through a continuous delivery (CD) system upon a successful application build:

◉ The CI asks Ansible or Terraform to run a script that deploys a staging environment with the application.

◉ When the stage tests pass, CD then proceeds to run a production deployment.

◉ Ansible/Terraform can then check out the history from version control on each machine or pull resources from the CI server.

An important benefit of IaC is the simplification of testing and verification. CI rules out a lot of common issues if we have enough test cases to run after deploying on the staging network. CD then automatically deploys these changes onto production with a simple click of a button.

While Ansible and Terraform have their differences, NDFC supports automation through both tools equally, and customers can choose either one or even both.

Terraform and Ansible complement each other in the sense that they both are great at handling IaC and the CI/CD pipeline. The virtualized infrastructure configuration remains in sync with changes as they occur in the automation scripts. 

There are multiple DevOps software alternatives out there to handle the runner jobs. Gitlab, Jenkins, AWS and GCP to name a few. 

In the example below, we will see how GitLab and Ansible work together to create a CI/CD pipeline. For each change in code that is pushed, CI triggers an automated build-and-verify sequence on the staging environment for the given project, which provides feedback to the project developers. With CD, infrastructure provisioning and production deployment proceed once the verify sequence through CI has been successfully confirmed.

As we have seen above, Ansible works in a similar way to a command-line interpreter: we define a set of commands to run against our hosts in a simple, declarative way. We also have a reset YAML file that we can use to revert any changes we make to the configuration.
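A minimal sketch of this declarative pattern is shown below, assuming the cisco.dcnm Ansible collection; the fabric name, host group, and file names are hypothetical examples. The reset file is the same play with `state: deleted`, which removes everything the main playbook created:

```yaml
# overlay_deploy.yml -- a minimal sketch (hypothetical names throughout)
- name: Merge overlay networks into the staging fabric
  hosts: ndfc
  gather_facts: false
  tasks:
    - name: Ensure the networks exist
      cisco.dcnm.dcnm_network:
        fabric: fabric-stage
        state: merged
        config: "{{ overlay_networks }}"   # defined in group_vars

# reset.yml -- reverts the change by deleting the same networks
- name: Remove overlay networks from the staging fabric
  hosts: ndfc
  gather_facts: false
  tasks:
    - name: Delete the networks
      cisco.dcnm.dcnm_network:
        fabric: fabric-stage
        state: deleted
        config: "{{ overlay_networks }}"
```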

NDFC works along with Ansible and the GitLab Runner to accomplish a CI/CD pipeline.

GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. Our CI/CD job pipeline runs in a Docker container. We install GitLab Runner onto a Linux server and register a runner that uses the Docker executor. We can also restrict access to the runner so that Pull Requests (PRs) for the merge can be raised and approved only by a select group of people.
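The pipeline definition itself lives in a `.gitlab-ci.yml` file at the root of the repository. The sketch below shows one way to wire the staging/production split described here; the stage names, runner tag, and file paths are hypothetical:

```yaml
# .gitlab-ci.yml -- sketch only; names and paths are examples
stages:
  - verify-staging
  - deploy-production

verify-staging:
  stage: verify-staging
  tags: [ndfc]       # routes the job to our Docker-executor runner
  script:
    - ansible-playbook -i hosts.stage.yml overlay_deploy.yml
      --vault-password-file password.txt
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # runs on the PR

deploy-production:
  stage: deploy-production
  tags: [ndfc]
  script:
    - ansible-playbook -i hosts.prod.yml overlay_deploy.yml
      --vault-password-file password.txt
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # runs only after the merge is approved
```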

Step 1: Create a repository for the staging and production environments, along with an Ansible file to keep credentials safe. Here, I have used the ansible-vault command to encrypt the credentials file for NDFC.
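A credentials file along these lines can be encrypted in place with `ansible-vault encrypt`; the file name and values below are placeholders, not the real NDFC credentials:

```yaml
# group_vars/ndfc_credentials.yml (hypothetical name)
# Encrypted in place with:  ansible-vault encrypt group_vars/ndfc_credentials.yml
ndfc_host: 10.0.0.10                  # example controller address
ansible_user: admin
ansible_password: "example-password"  # placeholder; stored only in encrypted form
```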

Step 2: Create an Ansible file for resource creation. In our case, we have one main file each for staging and production, plus a group_vars folder holding all the information about the resources. The main file pulls the details from the group_vars folder when executed.
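A group_vars file for the fabric and switch details might look like the sketch below; the fabric name, switch names, and addresses are hypothetical examples:

```yaml
# group_vars/ndfc/fabric.yml -- hypothetical resource definitions
fabric_name: fabric-stage
switches:
  - name: leaf-101
    ip: 10.0.0.101
    role: leaf
  - name: spine-201
    ip: 10.0.0.201
    role: spine
```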


Step 3: Create a workflow file and check the output.

As above, our hosts.prod.yml and hosts.stage.yml inventory files act as the main files for implementing resource allocation to production and staging, respectively. Our group_vars folder contains all the resource information, including fabric details, switch information, and overlay network details.
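A staging inventory along these lines would point Ansible at the NDFC controller; this is a sketch assuming the cisco.dcnm collection's httpapi connection plugin, and the host name and address are examples:

```yaml
# hosts.stage.yml -- hypothetical NDFC inventory for the staging fabric
all:
  children:
    ndfc:
      hosts:
        ndfc-stage:
          ansible_host: 10.0.0.10
          ansible_connection: ansible.netcommon.httpapi
          ansible_network_os: cisco.dcnm.dcnm
```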

For the above example, we will show how adding a network to the overlay.yml file and committing that change invokes a CI/CD pipeline for the architecture described above.

Step 4 (optional): Create a password file. Create a new file called password.txt containing the Ansible Vault password used to encrypt and decrypt the Ansible Vault file.


Our overlay.yml file currently has two networks, and the staging and production environments have been reset to this state. We will now add our new network, network_db, to the YAML file as below:
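The resulting variable file might look like the sketch below, assuming the parameter names of the cisco.dcnm.dcnm_network module; the VRF, network IDs, VLANs, and subnets are hypothetical:

```yaml
# overlay.yml -- sketch; names and IDs are examples
overlay_networks:
  - net_name: network_web
    vrf_name: vrf-app
    net_id: 30001
    vlan_id: 2301
    gw_ip_subnet: "10.10.1.1/24"
  - net_name: network_app
    vrf_name: vrf-app
    net_id: 30002
    vlan_id: 2302
    gw_ip_subnet: "10.10.2.1/24"
  - net_name: network_db          # the newly added network
    vrf_name: vrf-app
    net_id: 30003
    vlan_id: 2303
    gw_ip_subnet: "10.10.3.1/24"
```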


First, we make this change to staging by raising a PR; once it has been verified, the admin of the repo can approve the PR merge, which applies the changes to production.

Once we make these changes to the Ansible file, we create a branch under the repo and commit the changes to it.

After this branch has been created, we raise a PR. This automatically starts the CI pipeline.


Once the staging verification has passed, the admin or manager of the repo can approve the merge, which kicks off the CD pipeline for the production environment.


If we check the NDFC GUI, we can see that both staging and production now contain the new network, network_db.


Source: cisco.com