Thursday, 6 October 2022

CCNA Practice Test Will Help You With the Real Exam

If you want to propel your IT and networking career by taking the CCNA certification exam, you have come to the right place! This article will impart complete information on all the CCNA syllabus topics, exam details, tips, and how the CCNA practice test can help you get a flying score.

Cisco 200-301 Exam Overview

The Cisco 200-301 or CCNA exam incorporates what an applicant should know to become a skilled networking professional. Cisco 200-301 exam covers the following topics:

  • Network Fundamentals (20%)
  • IP Services (10%)
  • IP Connectivity (25%)
  • Network Access (20%)
  • Security Fundamentals (15%)
  • Automation and Programmability (10%)

Mastering these topics will allow applicants to obtain the well-known CCNA certification. The CCNA 200-301 exam includes 90-110 questions and must be finished in 120 minutes. The journey won’t begin, however, until you pay the $300 exam fee. After that, to help you cover what will be evaluated in this Cisco exam, the vendor provides a training course with a similar name, Implementing and Administering Cisco Solutions (CCNA), that is built around practical, real-world material.

CCNA is an entry-level certification offered by Cisco that validates your ability to excel in a practical networking role. It concentrates on IT technologies and combines networking skills with technical expertise, ensuring that successful applicants are equipped with essential skills across nearly all domains of the digital world.

Top Tips to Crack CCNA 200-301 Exam Like a Pro

1. Know the Cisco 200-301 Exam Details Inside Out

A solid starting point for the CCNA 200-301 exam is knowing the exact CCNA syllabus topics. Indeed, you are never going to pass this exam on the first attempt if you don’t understand what it will include, how much time you will be given to finish the questions, and what task formats you will confront.

2. Build a Realistic Study Plan That You’ll Actually Stick To

Any study plan is better than none, but a rigid one quickly becomes monotonous; a realistic plan is one you will actually follow. For the Cisco 200-301 exam, determine how much time you want to spend on each objective, which resources you will use, and when you want to take the exam. Planning will also help you sidestep stress and keep you motivated.
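One way to make such a plan concrete is to split your available hours in proportion to the official exam weights listed earlier. Here is a minimal sketch; the 120-hour budget is just an example, not a Cisco recommendation:

```python
# Split a study budget across the CCNA 200-301 domains by exam weight.
weights = {
    "Network Fundamentals": 20,
    "Network Access": 20,
    "IP Connectivity": 25,
    "IP Services": 10,
    "Security Fundamentals": 15,
    "Automation and Programmability": 10,
}

total_hours = 120  # pick your own budget

plan = {topic: total_hours * pct / 100 for topic, pct in weights.items()}
for topic, hours in plan.items():
    print(f"{topic}: {hours:.0f} h")  # e.g. IP Connectivity gets 30 h
```

Scale the budget to your schedule, and skew the weights toward your weaker domains as practice-test results come in.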

3. Use the CCNA Practice Test

It’s wise to get a feel for the actual exam environment before the scheduled exam date. This will help you familiarize yourself with the exam structure and boost your confidence. And if you struggle with certain Cisco 200-301 exam questions, you will know exactly what to concentrate on to pass the CCNA exam on the first try. Taking the CCNA practice test is therefore crucial to shaping your preparation and building the stamina to endure an exacting 2-hour exam.

4. Become a Part of an Online Community and Forum

You can become a member of an online community where you can meet other exam-takers and professionals with whom you can exchange knowledge and exam tips and find explanations for your doubts. One such community is the CCNA Certification Community on the Cisco Learning Network. Here you can ask queries, exchange ideas and meet with other members studying for the CCNA 200-301 exam. It also contains links to articles that relate to CCNA prep and exams.

How CCNA Practice Test Can Help You Score Much Better in Your Cisco 200-301 Exam

Following are some of the prominent reasons behind the growing importance of CCNA practice tests:

1. Imparts Clarity of the CCNA 200-301 Exam Structure

Taking CCNA practice tests will help you understand the structure of the Cisco 200-301 exam and improve your odds of passing with your desired score.

2. Analyze Your Weak Areas by Reviewing the Result of the CCNA Practice Test

Evaluating your performance on a CCNA practice exam gives valuable insights into the areas you need to concentrate on. In particular, note how much time you spent on the questions you answered correctly, and look for shorter ways to solve them. This sharpens your analytical skills and frees up time for the questions you find tough.

3. CCNA Practice Test Works As a Revision of the Whole Syllabus

By taking the CCNA practice test, you can revise the whole CCNA syllabus. A practice test assesses both your readiness for the exam and how well you know your study resources. Regular practice reinforces frequently tested facts and techniques, and your brain becomes better at recalling them each time. This helps you prepare for the exam with focus and perseverance.

Also Read: Make Your Resume Competitive With CCNA 200–301 Certification

4. CCNA Practice Test Helps Overcome Exam Anxiety

Finally, any exam can produce a lot of stress, mainly if one isn’t adequately prepared. A CCNA practice test helps you get mentally and psychologically ready for the exam and understand how it will feel to solve it.

You learn how to control your anxiety under pressure and concentrate on answering CCNA 200-301 exam questions without worrying about the result. If you score well in a CCNA practice test, it gives you confidence while acing the actual exam.

Conclusion

It’s hard to imagine a career in IT infrastructure and networking without cracking the Cisco 200-301 exam. To become certified and confirm your expertise, take up appropriate training and CCNA practice tests, and follow the top tips mentioned in this article to pass this exam like a pro. Good Luck!!

How NSO 6.0 Delivers Up To 9x Faster Transaction Throughput


Two years ago we set out on a quest to tune Cisco Network Services Orchestrator (NSO) for massive deployments. The primary challenge was transaction throughput, since no one wants a network that is slow or non-responsive. Before you know it, customers will shout “Make your code run faster” or “My system is hanging”.

Today we are happy to announce a significant performance boost. I almost dare to say that NSO 6.0 is “The Perfected Sword.” The magic is in the NSO 6.0 release and its reimagined Transaction Manager. When we started the project, we knew that our best attribute was both our greatest enemy and our biggest potential; we were challenged to perfect the very thing that made us who we are. Now we are proud to claim that you will get three (3) times faster transaction throughput just by upgrading the software, and up to nine (9) times faster if you engage in optimization. If you are new to NSO and don’t care about the history, you can stop reading now and enjoy the new version!

For those of you who have been with us for a while, or have maybe struggled to scale with NSO, I will add a few layers to the story. If you want to know even more and get hands-on, sign up for our next Automation Developer Days, Nov 29-30 in New York!

Shaping NSO for Increasing Demand


With ever-growing network demand, we knew we had to be radical. Future networks need to push through more transactions per second than ever before. Our attempts to help customers optimize their code inside the lock were not enough. We knew we could increase concurrency and performance if we reduced the time transactions spend protected (a.k.a. the code lock). It would simply let us use the processing power more efficiently.

Things done in the protected phase:
 
◉ FASTMAP create-code, which can be more or less efficient.
◉ Validations, which are model-driven constructs such as must, when, leafref, etc. These can be time-consuming.
◉ Kicker evaluations, which can be more or less efficient.
◉ Device communication, which is normally time-consuming.

A transaction in NSO 5.x and earlier
 
It was tough to admit, but the merits that make NSO so unique can also impact performance at scale. We cannot expect users to write perfect validation expressions just because we know how. We also understood that we could not achieve sufficient gain unless we challenged the NSO heritage and relaxed the transaction integrity, just enough to release the power. Transaction integrity is what makes our transactions fail-safe, but it also prevents some level of parallelism.

Can we run without locks or can we make the lock shorter? We need to manage any code that runs unprotected without adding too much complexity that eats up the cycles on the other side.

The New Concurrency Model


We put a lot of research into the new design and the parts that control concurrency. The Transaction Manager is the central piece of this project. It is a specific function outside the database (CDB) that contains all the functionality necessary for, e.g., FASTMAP.

The Transaction Manager controls the concurrency in NSO.
 
We knew that we could do much more in parallel if we applied “checking” instead of “locking”. We just need to verify that the create conditions are still valid when we apply “commit”. Service invocations, validations, rollback file creation, and more could potentially run outside the lock if we find a way to detect interference. We went from a pessimistic view of the transaction to an optimistic view to maximize concurrency.

A transaction in NSO 6.0

Conflict detection is one way to verify the conditions at commit and the basis for our new programming paradigm. We basically compare the current transaction’s read-set to other completed transactions’ write-sets. If some transaction has changed what the current transaction read, then the current transaction must abort and its services be restarted. In this way, we protect existing services from being overwritten. Pretty straightforward, right? Of course, if you do your part to ensure your code is conflict-free, you will avoid service restarts and NSO can run at full speed.
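The read-set/write-set comparison can be illustrated with a small sketch. This is not NSO code, just a hypothetical model of optimistic conflict detection: each transaction records the paths it read, and at commit those are checked against the paths written by transactions that completed in the meantime.

```python
def has_conflict(read_set, completed_write_sets):
    """Optimistic check: does any completed transaction's write-set
    overlap with what the current transaction read?"""
    return any(read_set & ws for ws in completed_write_sets)

# The current transaction read two config paths while computing its changes.
reads = {"/devices/ce0/interface/0", "/services/vpn/acme"}

# Write-sets of transactions that committed while we were running.
committed = [
    {"/devices/pe1/interface/3"},   # untouched paths: no conflict
    {"/services/vpn/acme"},         # overlaps our read-set: conflict
]

if has_conflict(reads, committed):
    print("conflict: abort and restart the service code")  # this branch runs
else:
    print("commit")
```

In this model, the aborted transaction simply re-runs its service code against the new state, which is why conflict-free code avoids restarts entirely.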

Another, less surprising, example is the Commit Queue option, which proved very useful for moving device communication outside the lock and removing dependencies.

Unexpected Outcomes


The Transaction Manager is probably one of the more well-tested code sections in NSO for a reason. Changing the core architecture can of course be risky. When you start poking around you will have to roll up your sleeves and fix old bugs as you run into them. The upside can be equally motivating as unexpected gains materialize.

◉ Lockless dry-run is one of them. The dry-run transactions will never enter the critical section, not even in LSA. It affects most actions with the dry-run option as well as service check-sync, get-modification, and deep-check-sync.

◉ Improved device locking is another one, which allows us to obsolete the wait-for-device commit parameter. The devices are locked automatically before entering the critical section, which simplifies both code and operations.

◉ Improvements backported to the NSO 5.x branch

    ◉ Improved commit queue error recovery
    ◉ Internal performance improvements in CDB
    ◉ Performance improvements in kicker evaluation

Sometimes it Pays Off to Dare a Little More


Sometimes it is worth trying the more advanced path to reach a certain goal. When you know it works, you can simplify and evaluate. Now we challenge you to upgrade to NSO 6.0 and optimize your software for faster transaction throughput. To learn more, I highly recommend the new Packet Pushers podcast that uncovers the new features in NSO 6.0. As a next step, come to Developer Days in New York in November if you want to know more about the details and how you can gain performance with NSO 6.0. You will dive deeper into this topic in hands-on coding sessions led by our experts. If you can’t come to New York, or want to come prepared, you can always check out the NSO YouTube channel for the latest content. We have two sessions on the new concurrency model from our previous event in Stockholm: one overview session that explains what we have done, and one deep dive that focuses on the conflict detection algorithm.

Source: cisco.com

Tuesday, 4 October 2022

CML 2.4 Now Supports Horizontal Scale With Clustering


When will CML 2 support clustering?

This was the question we heard most when we released Cisco Modeling Labs (CML) 2.0, and it was a great one, at that. So, we listened. CML 2.4 now offers a clustering feature for CML-Enterprise and CML-Higher Education licenses, which supports scaling a CML 2 deployment horizontally.

But what does that mean? And what exactly is clustering? Read on to learn about the benefits of Cisco Modeling Labs’ new clustering feature in CML 2.4, how clustering works, and what we have planned for the future.


CML clustering benefits


When CML is deployed in a cluster, a lab is no longer restricted to the resources of a single computer (the all-in-one controller). Instead, the lab can use resources from multiple servers combined into a single, large bundle of Cisco Modeling Labs infrastructure.

In CML 2.4, CML-Enterprise and CML-Higher Education customers who have migrated to a CML cluster deployment can leverage clustering to run larger labs with more (or larger) nodes. In other words, a CML instance can now support more users with all their labs. And when combining multiple computers and their resources into a single CML instance, users will still have the same seamless experience as before, with the User Interface (UI) remaining the same. There is no need to select what should run where. The CML controller handles it all behind the scenes, transparently!

How clustering works in CML v2.4 (and beyond)


A CML cluster consists of two types of computers:

◉ One controller: The server that hosts the controller code, the UI, the API, and the reference platform images

◉ One or more computes: Servers that run node Virtual Machines (VMs), for instance, the routers, switches, and other nodes that make up a lab. The controller controls these machines (of course), so users will not directly interact with them.

A separate Layer 2 network segment connects the controller and the computes. We chose the separate network approach for security (isolation) and performance reasons. No IP addressing or other services are required on this cluster network; everything operates automatically and transparently through the machines participating in the cluster.

This intracluster network serves many purposes, most notably:

    ◉ serving all reference platform images, node definitions, and other files from the controller via NFS sharing to all computes of a cluster.

    ◉ transporting networking traffic in a simulated network (which spans multiple computes) on the cluster network between the computes or (in case of external connector traffic) to and from the controller.

    ◉ conducting low-level API calls from the controller to the computes to start/stop VMs, for example, and operating the individual compute.

Defining a controller or a compute during CML 2.4 cluster installation


During installation, and when multiple network interface cards (NICs) are present in the server, the initial setup script will ask the user to choose which role this server should take: “controller” or “compute.” Depending on the role, the person deploying the cluster will enter additional parameters.

For a controller, the important parameters are its hostname and the secret key that computes will use to register with the controller. When installing a compute, the same hostname and key parameters serve to establish the cluster relationship with the controller.

Every compute that uses the same cluster network (and knows the controller’s name and secret) will then automatically register with that controller as part of the CML cluster.
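Conceptually, the registration step works like a shared-secret handshake. The sketch below is purely illustrative (the function names and message format are invented, not the CML implementation): a compute proves knowledge of the cluster secret to the named controller before being admitted.

```python
import hashlib
import hmac

CLUSTER_SECRET = b"example-secret"  # entered during setup on both roles

def registration_message(compute_hostname: str, controller_hostname: str) -> dict:
    """Build the (hypothetical) registration message a compute would send."""
    payload = f"{compute_hostname}->{controller_hostname}".encode()
    digest = hmac.new(CLUSTER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"compute": compute_hostname, "proof": digest}

def controller_accepts(msg: dict, controller_hostname: str) -> bool:
    """Controller recomputes the HMAC and admits the compute only on a match."""
    payload = f"{msg['compute']}->{controller_hostname}".encode()
    expected = hmac.new(CLUSTER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["proof"])

msg = registration_message("compute-1", "cml-controller")
print(controller_accepts(msg, "cml-controller"))  # True: same secret, same controller
```

The real mechanism is internal to CML; the point is only that a shared secret plus the controller’s name is enough to pair the machines without any manual per-compute configuration.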

CML 2.4 scalability limits and recommendations


We have tested clustering with a bare metal cluster of nine UCS systems, totaling over 3.5TB of memory and more than 630 vCPUs. On such a system, the largest single lab we ran (and support) is 320 nodes. This is an artificial limitation enforced by the maximum number of node licenses a system can hold. We currently support one CML cluster with up to eight computes.

Plans for future CML releases

While some limitations still exist in this release in terms of features and scalability, remember this is only Phase 1. The functionality is there, and future releases promise even more features, such as:

◉ ability to de-register computes

◉ ability to put computes in maintenance mode

◉ ability to migrate node VMs from one compute to another

◉ central software upgrade and management of computes

Source: cisco.com

Saturday, 1 October 2022

Empowering the four IT personas using Cisco DNA Center with Rings of Power

There are many variations of the “Law of Constant Change”; while they all have their own spin on it, the common thread is that change is constant and that it needs to be harnessed. When looking at changes and disruptions in technology, it comes as no surprise that there are numerous transformations and trends reshaping the IT landscape, with megatrends and change drivers spanning a wide range of business changes and transformation agents.

To keep up with the rapidly changing IT landscape, many IT organizations have been able to ascend and transform into new operational paradigms with the xOps transformation. Conversations around agility, AIOps, NetOps, SecOps, and DevOps are an outcome of a combination of organizational behavior and tooling in the networking and infrastructure realms. Separately, Gartner has also identified four IT personas (NetOps, SecOps, AIOps, and DevOps) which Gartner defined as predominant roles in today’s network operations realm.

In looking at key challenges, organizations are struggling with:

◉ Reducing recovery time objectives, given the reactive nature of traditional network operations practices.
◉ Bridging the growing IT skill gap.
◉ Keeping up with changing business requirements.
◉ Delivery of secure services in the hybrid workplace.
◉ Having to deliver more with less.

Drawing on years of expertise in designing, operating, and supporting networks of all sizes across the globe, Cisco has been instrumental in helping IT organizations move to the next operational level with tools to embrace and enable the xOps personas and embark on the transformation journey. This boils down to providing tools with analytics capabilities from the infrastructure and cultivating the staff skills to use them effectively.

Speaking of how tooling can enable the transition, Cisco DNA Center is at the center of the IT/OT transition into the four IT personas, providing the digital agility to drive network insight automation and security while promoting key capabilities and tools to help in skill cultivation and changed operational models.


Network Operations, or “NetOps,” is the front line of administrators in the IT organization. The term NetOps classifies the common tasks and responsibilities, or “Jobs to be Done,” of these individuals. With Cisco DNA Center at the heart of the network infrastructure, the NetOps persona is enhanced with varying levels of automation that simplify the creation and maintenance of networks, with the agile flexibility to move from manual tasks to AI-assisted to selectively autonomous network management. For example, the SWIM (Software Image Management) and network profiles features not only save time but also provide consistency and eliminate human error in routine tasks. Bringing NetOps automation into DevOps gives IT organizations the agility and scalability to keep up with changing demands and to integrate into the larger IT ecosystem. Gartner has stated that the next generation of NetOps, which it coined “NetOps 2.0,” is the evolution of network operations toward automation.


Network, application, and user security is a key requirement for any enterprise network, and no network can operate safely without security. The security team is responsible for providing a safe digital experience in today’s connect-from-anywhere hybrid work environment, on networks with countless endpoint devices. In addition, IT organizations in different market segments have varying network security and architecture requirements. Cisco DNA Center empowers the SecOps persona by enabling a complete zero-trust workplace solution with AI-driven security to classify endpoints and automated enforcement of security policies. This is achieved with Cisco’s fully integrated platform, which incorporates hardware and software designed to provide contextual security insights and automation. Cisco DNA Center SecOps can help eliminate security vulnerabilities with proactive security scans, automated alerting on advisories from Cisco’s Product Security Incident Response Team (PSIRT), and proactive bug scans powered by the Cisco AI Network Analytics engine to ensure the network is always secure.


The DevOps persona brings integration, automation, and orchestration together. Traditionally, DevOps teams focused on very specialized, proprietary, and home-spun applications. Today, these individuals are tasked with taking those apps and integrating them into a connected universe of corporate solutions. DevOps depends on manufacturer-supplied software tool kits (STKs) and standards-based application programming interfaces (APIs) to share information and intelligence between applications. With Cisco DNA Center, IT organizations can quickly use pre-built integrations to Cisco products and third-party enterprise applications such as ServiceNow, Splunk, PagerDuty, and a growing selection of partner integrations. Cisco DNA Center’s mature APIs enable data extraction and network management, harnessing the power of Cisco DNA Center’s NetOps, AIOps, and SecOps via the API interface.


AIOps refers to the technologies that implement AI/ML (Artificial Intelligence and Machine Learning) and the individuals who leverage them. AI/ML is now implemented in so many of our networking components that it has become imperative for a specialized team of experts to manage and amplify the use of this intelligence. Cisco DNA Center provides a simplified view into the complexities of big data and machine learning so that your AIOps teams can make the most of this rich data. Additionally, Cisco DNA Center provides best-in-class AI-driven visibility, observability, and insights, ensuring the health and experience of users, applications, and infrastructure. AI/ML is packaged within Cisco DNA Center in an easy-to-consume interface that can deliver value in minutes, allowing IT teams to work smarter and elevate the level of service to users and the organization. With Cisco DNA Center AIOps, IT organizations gain visibility and insights otherwise unattainable without AI/ML combined with Cisco’s deep networking knowledge. Simply put, this powerful combination makes the IT team more agile and smarter and helps bridge growing IT skills gaps.

The xOps Rings of power

While the four IT personas were described as distinct roles, in many organizations they are simply different hats that IT staff wear at different times, depending on the business need. It is also essential to keep in mind that each persona enables and provides services to the others, yielding the “Rings of Power.” For example, with AI centricity, Cisco DNA Center empowers, enables, and enhances the NetOps, SecOps, and DevOps personas by providing interactions with all personas in the ring. Similarly, NetOps centricity enables and empowers the DevOps, SecOps, and AIOps personas.

An example of the AIOps ring of power:

AIOps discovers security vulnerabilities and recommends an upgrade.

NetOps performs the SWIM process to upgrade the software.

DevOps connects to ServiceNow for the change management and ticket creation processes.

SecOps reports the new network security posture, eliminating the security vulnerability from the network.

By leveraging Cisco DNA Center to enable and empower the new IT personas model, IT organizations can quickly and easily gain visibility, observability, insights, and out-of-the-box automation, while organizations with more modern operational models can also gain zero trust and programmability from the Cisco network infrastructure. This enables IT organizations to be more agile and transform into the new xOps operational paradigm, progressing on the operational maturity journey, becoming proactive, and leaving the reactive persona behind.

Source: cisco.com

Thursday, 29 September 2022

[New] 200-301 CCNA: Cisco 200-301 Free Exam Questions & Answers

 

Cisco CCNA Exam Description:

This exam tests a candidate's knowledge and skills related to network fundamentals, network access, IP connectivity, IP services, security fundamentals, and automation and programmability. The course, Implementing and Administering Cisco Solutions (CCNA), helps candidates prepare for this exam.

Cisco 200-301 Exam Overview:

Cisco 200-301 Exam Topics:

  • Network Fundamentals- 20%
  • Network Access- 20%
  • IP Connectivity- 25%
  • IP Services- 10%
  • Security Fundamentals- 15%
  • Automation and Programmability- 10%

Related Articles:

Monitoring for Your “Pets.” Observability for Your “Cattle.”

What’s the difference between monitoring and observability


Today, the second most active project in the CNCF is OpenTelemetry, which provides a solution to the observability problem of modern cloud-native applications.

A question often asked is: I already have monitoring for my legacy applications that I can extend to cover any new apps, so why do I need observability? And what’s the difference between monitoring and observability anyway? There is much debate in the industry about this, and if you ask ten people for their take, you will probably get ten different answers. Let us look at some common interpretations of the two.

How legacy monitoring systems worked


Remember those times when we deployed our applications on a bunch of servers? We even knew those servers by name, just like our pets! To assess the health and performance of our applications, we collected events from every application and every network entity. We deployed centralized monitoring systems that collected standard (remember SNMP?) and vendor-proprietary notifications. Correlation engines, which were basically vendor-specific, ran over this vast number of events and identified failure objects with custom rules.

Here’s a very simplistic view of a legacy monitoring system:

Simplistic view of a legacy monitoring system

Trend analysis with custom dashboards came to our aid when we had to troubleshoot a production problem. Traditional monitoring worked off a known set of anticipated problems. Monitoring systems were built around that, reacting to issues as and when they occurred with a prebuilt set of actions. Failure domains were known ahead of time and identified with customized correlation rules. Telemetry data such as logs, metrics, and traces were siloed, and operators correlated the three sets of data manually. Alerting was after the fact (reactive), triggered when a value exceeded a preset minor, major, or critical threshold.
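That threshold-based style of alerting is simple to sketch. The code below is a generic illustration (the metric and threshold values are made up, not from any specific product), showing how a legacy monitor maps a raw sample to a minor/major/critical severity after the fact:

```python
# Hypothetical preset thresholds for one metric (percent CPU utilization).
THRESHOLDS = [("critical", 95), ("major", 85), ("minor", 70)]  # checked high to low

def classify(sample: float) -> str:
    """Return the alert severity for a sample, or 'ok' below all thresholds."""
    for severity, limit in THRESHOLDS:
        if sample >= limit:
            return severity
    return "ok"

for value in (55, 72, 99):
    print(value, classify(value))  # 55 ok / 72 minor / 99 critical
```

Note the limitation this section describes: the alert only fires once the value has already crossed the line, and nothing in this scheme explains why.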

Servers hosting our critical applications were our “pets”


The entire application landscape, including infrastructure, was operationalized with proprietary monitoring systems. It seemed quite adequate. Operators had a deep understanding of the architecture of applications and the systems hosting them. Operating guides laid out alerting and details on resolutions. Everything seemed to function like a well-oiled machine aligned with the goal of those times – to help I&O teams keep the lights on.

And then the applications split and spread their wings, migrating to the clouds!

Enter microservices


We now deal with “cattle”, that is, short-lived containers that come and go; everything seems dispensable, replaceable, and scalable. Given the sheer number of containers, traditional monitoring systems prove totally insufficient to manage this new breed of applications and their unimaginable number of events. The scenario is only made more complex by the lack of standards for cloud monitoring, with each public cloud provider inserting its own little stickiness into the mix.

Microservices make it hard to update monitoring systems


Microservices no longer have long release cycles. With monolithic apps, the various teams used to sync up on architecture changes to the services being updated. With microservices, however, it is hard for I&O teams to keep monitoring systems up to date as the services change. The bottom line is that I&O teams may end up operating apps they don’t fully understand architecturally.

Enter “observability”


Observability promises to address the complexities of tracking cloud native application health and performance.

Observability is for systems that can be pretty much a black box. It helps I&O teams identify the internal state of that black box from the telemetry data collected. It involves finding answers to the unknown unknowns: we cannot predict what is going to happen, but we need the ability to ask questions and get answers so we can best formulate a response to the issue. Observability is about deriving signals from raw telemetry data on an integrated platform for logs, metrics, and traces.


In today’s dynamic, polyglot ecosystem, where services scale individually to meet demand, simple monitoring built around a known set of events and alerts will fail. An observability platform ingests the insightful data generated by instrumenting apps, then transforms and collates trace, metric, and log data and funnels it into data stores that can be queried to gauge system health and performance. The key here is the context attached to the aggregated data, which helps decipher the internal state of the system and its failures.
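The value of attached context can be shown with a tiny sketch. The record layout below is invented for illustration only: the same latency measurement becomes far more useful once tags for service, version, and trace ID ride along with it, because logs, metrics, and traces can then be joined on shared keys instead of correlated by hand.

```python
import time

def emit_metric(name, value, **context):
    """Attach shared context keys to a raw measurement so it can later be
    joined with logs and traces that carry the same trace_id."""
    return {"name": name, "value": value, "ts": time.time(), **context}

m = emit_metric(
    "checkout.latency_ms",
    412,
    service="cart",
    version="1.9.2",
    trace_id="abc123",   # the same id appears on the spans and log lines
)

# A backend can now answer "which version, which request?" directly.
print(m["trace_id"], m["service"], m["version"])
```

This is essentially what instrumentation libraries automate: propagating a trace context through every signal a request generates.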

Extracting valuable signals from correlated data


In conclusion, the nirvana we are striving for is a scenario where we have all the data we need from instrumented apps as a correlated set of metrics, logs, and traces. The right set of tools can then extract valuable signals from this correlated data, revealing not only the service model but also the failing objects, so that health and performance issues can be addressed.

Watch out for future blogs where we will explore OpenTelemetry as a solution to observability and explore MELT (metrics, events, logs, traces) with open source and commercial tools.

Source: cisco.com

Tuesday, 27 September 2022

Cisco MDS 9000 FSPF Link Cost Multiplier: Enhancing Path Selection on High-Speed Storage Networks


The need for optimal path selection


When embarking on a journey from one location to another, we always try to determine the best route to follow, according to some preference criteria. Having a single route is the simplest situation, but it leads to long delays if a problem occurs along the way. The availability of multiple paths to a destination improves reliability. Of course, we still need an optimal path selection tool and clear street signs, or a navigation system with GPS, to avoid loops or getting lost.

In a similar way, data networks are designed with multiple paths to a destination for higher availability, and specific protocols enable optimal path selection and loop avoidance. Ethernet networks have used the Spanning Tree Protocol or more recent standards like TRILL. IP networks rely on routing protocols like BGP, OSPF, and RIP to determine the best end-to-end path. Fibre Channel fabrics have their own standard routing protocol, called Fabric Shortest Path First (FSPF), defined by INCITS T11 FC-SW-8.

FSPF on Cisco MDS switches


The FSPF protocol is enabled by default on all Cisco Fibre Channel switches. Normally you do not need to configure any FSPF parameters. FSPF automatically calculates the best path between any two switches in a fabric and can select an alternative path in the event of a link failure. FSPF regulates traffic routing no matter how complex the fabric might be, including dual-datacenter core-edge designs.


FSPF supports multipath routing and is a link-state protocol based on Shortest Path First. It runs on E ports and TE ports, providing a loop-free topology. Routing happens hop by hop, based only on the destination domain ID. FSPF uses a topology database to track the state of the links on all switches in the fabric and associates a cost with each link. It uses Dijkstra’s algorithm and guarantees fast reconvergence after a topology change. Every VSAN runs its own FSPF instance. By combining VSAN and FSPF technologies, traffic engineering can be achieved on a fabric; one use case is forcing a VSAN’s traffic onto a specific ISL. Also, using PortChannels instead of individual ISLs makes the implementation very efficient, since fewer FSPF calculations are required.

FSPF link cost calculation


The FSPF protocol uses link costs to determine the shortest path between a source switch and a destination switch in a fabric. The protocol tracks the state of links on all switches and associates a cost with each link in its database. FSPF determines a path’s cost by adding the costs of all the ISLs along it, then compares the costs of the candidate paths and chooses the one with the minimum cost. If multiple paths exist with the same minimum cost, FSPF distributes the load among them.
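The selection logic (sum the ISL costs per path, take the minimum, and load-share across ties) can be sketched as follows. The path names and cost values are made up for illustration:

```python
# Hypothetical paths to one destination, each a list of per-ISL costs.
paths = {
    "via_switch_B": [250, 250],   # two 4 Gbps hops -> total 500
    "via_switch_C": [500],        # one 2 Gbps hop  -> total 500
    "via_switch_D": [250, 1000],  # 4 Gbps + 1 Gbps -> total 1250
}

# Path cost is the sum of the costs of its ISLs.
totals = {name: sum(costs) for name, costs in paths.items()}
best = min(totals.values())
# All minimum-cost paths share the load (equal-cost multipath).
chosen = sorted(name for name, total in totals.items() if total == best)
print(best, chosen)
```

Here two paths tie at cost 500, so both would carry traffic, while the 1250-cost path is ignored.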

You can administratively set the cost of an ISL to an integer value from 1 to 30000. However, this is normally unnecessary: FSPF uses a default mechanism, specified in the INCITS T11 FC-SW-8 standard, to assign a cost to every link. Essentially, the link cost is calculated from the speed of the link times an administrative multiplier factor S. By default, S=1, which makes the link cost inversely proportional to the link’s bandwidth. Hence the default cost for a 1 Gbps link is 1000, for 2 Gbps it is 500, for 4 Gbps it is 250, for 32 Gbps it is 31, and so on.
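The default values above can be reproduced with a small sketch, assuming the cost works out to S times 1000 divided by the nominal link speed in Gbps, truncated to an integer (the function name is illustrative, not an actual API):

```python
def fspf_link_cost(speed_gbps: float, s: int = 1) -> int:
    """Approximate default FSPF link cost: inversely proportional
    to the nominal link speed, scaled by the multiplier S."""
    return int(s * 1000 // speed_gbps)

# Default multiplier S=1 reproduces the well-known values:
for speed in (1, 2, 4, 32):
    print(f"{speed} Gbps -> cost {fspf_link_cost(speed)}")
```

Note how truncation already shows up at 32 Gbps: 1000 / 32 is 31.25, stored as 31.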

FSPF link cost calculation challenges


It is easy to see that high-speed links introduce a challenge: the computed link costs become smaller and smaller. This becomes a significant issue when the total link bandwidth exceeds 128 Gbps. For such high-speed links, the default link costs become too similar to one another, leading to inefficiencies.

The situation gets even worse for logical links. FSPF treats a PortChannel as a single logical link between two switches. On the Cisco MDS 9000 series, a PortChannel can have a maximum of 16 member links. With multiple physical links combined into a PortChannel, the aggregate bandwidth scales upward and the logical link cost drops accordingly. Consequently, different paths may appear to have the same cost even though they have different member counts and different bandwidths. Path inefficiencies may occur with PortChannels as small as 9 x 16 Gbps, leading to poor path selection by FSPF. For example, imagine two alternative paths to the same destination, one traversing a 9 x 16 Gbps PortChannel and the other a 10 x 16 Gbps PortChannel. Although the two PortChannels have different aggregate bandwidths, their link costs compute to the same value.
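The collision in that example can be checked numerically, assuming the cost is computed from the aggregate nominal bandwidth with the same inverse-proportional formula and S=1:

```python
def fspf_link_cost(agg_speed_gbps: float, s: int = 1) -> int:
    # Cost is inversely proportional to aggregate bandwidth,
    # scaled by the multiplier S and truncated to an integer.
    return int(s * 1000 // agg_speed_gbps)

# Two PortChannels with different aggregate bandwidths:
cost_9x16 = fspf_link_cost(9 * 16)    # 144 Gbps: 1000 / 144 = 6.94 -> 6
cost_10x16 = fspf_link_cost(10 * 16)  # 160 Gbps: 1000 / 160 = 6.25 -> 6
print(cost_9x16, cost_10x16)  # identical costs: FSPF cannot tell them apart
```

Both paths compute to cost 6, so FSPF would treat a 144 Gbps PortChannel and a 160 Gbps PortChannel as equal-cost alternatives.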

FSPF link cost multiplier feature


To address this challenge, now and for the future, the Cisco MDS NX-OS 9.3(1) release introduced the FSPF link cost multiplier feature. It should be configured when parallel paths above the 128 Gbps threshold exist in a fabric. With it, FSPF can properly distinguish higher-bandwidth links from one another and select the best path.

All switches in a fabric must use the same FSPF link cost multiplier value so that they all use the same basis for path cost calculations. The feature automatically distributes the configured multiplier to all Cisco MDS switches in the fabric running Cisco NX-OS versions that support it. If any switch in the fabric does not support the feature, the configuration fails and is not applied to any switch. After all switches accept the new value, a 20-second delay elapses before it takes effect, ensuring that all switches apply the update simultaneously.

The new FSPF link cost multiplier value is S=20, as opposed to S=1 in the traditional implementation. With this simple change to one parameter, the Cisco implementation keeps using the same standards-based formula as before. With the new value, the FSPF link cost computation stays optimal even for PortChannels with 16 members of up to 128 Gbps each.
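Rerunning the earlier 9 x 16 Gbps vs. 10 x 16 Gbps comparison with S=20 shows why the larger multiplier helps (same assumed inverse-proportional formula as before; only S changes):

```python
def fspf_link_cost(agg_speed_gbps: float, s: int = 20) -> int:
    # Same standards-based formula as before; only the multiplier S changes.
    return int(s * 1000 // agg_speed_gbps)

cost_9x16 = fspf_link_cost(9 * 16)    # 144 Gbps: 20000 / 144 -> 138
cost_10x16 = fspf_link_cost(10 * 16)  # 160 Gbps: 20000 / 160 -> 125
print(cost_9x16, cost_10x16)  # distinct costs, so FSPF prefers the bigger PortChannel
```

With S=20 the two PortChannels no longer collide (138 vs. 125), and FSPF correctly prefers the path with more aggregate bandwidth.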


Source: cisco.com