Thursday, 2 June 2022

SecureX and Secure Firewall: Integration and Automation to Simplify Security

Cisco Secure Firewall stops threats faster, empowers collaboration between teams, and enables consistency across your on-premises, hybrid, and multi-cloud environments. With an included entitlement for Cisco SecureX, our XDR and orchestration platform, you’ll experience efficiency at scale and maximize your productivity. New streamlined Secure Firewall integrations make it easier to use SecureX capabilities to improve threat detection, save time, and provide the rapid, deeper investigations you require. These new features and workflows provide the integration and automation to simplify your security.

Move to the Cloud

The entire suite of Firewall Management Center APIs is now available in the cloud. This means that existing APIs can now be executed from the cloud. Cisco makes this even easier for you by delivering fully operational workflows as well as pre-built drag-n-drop code blocks that you can use to craft your own custom workflows. SecureX is able to proxy API calls from the cloud to the SSE connector embedded in the FMC codebase. This integration between Firewall 7.2 and SecureX provides your Firewall with modern cloud-based automation.
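The API calls a cloud workflow proxies are the same documented FMC REST APIs you could drive directly. As a rough illustration, the sketch below performs the FMC token exchange and lists managed devices; hostname, credentials, and domain UUID are placeholders, and certificate checking is disabled for lab use only.

```python
import base64
import json
import ssl
import urllib.request

def fmc_url(host: str, path: str) -> str:
    """Build a Firewall Management Center configuration API URL."""
    return f"https://{host}/api/fmc_config/v1{path}"

def _lab_ctx() -> ssl.SSLContext:
    # Lab sketch only: skip certificate verification. Verify in production.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def get_token(host: str, user: str, password: str) -> str:
    """POST to the FMC token endpoint; the token is returned in a header."""
    req = urllib.request.Request(
        f"https://{host}/api/fmc_platform/v1/auth/generatetoken", method="POST"
    )
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    with urllib.request.urlopen(req, context=_lab_ctx()) as resp:
        return resp.headers["X-auth-access-token"]

def list_devices(host: str, token: str, domain_uuid: str) -> list:
    """List managed devices, one of the calls a cloud workflow can proxy."""
    req = urllib.request.Request(
        fmc_url(host, f"/domain/{domain_uuid}/devices/devicerecords")
    )
    req.add_header("X-auth-access-token", token)
    with urllib.request.urlopen(req, context=_lab_ctx()) as resp:
        return json.load(resp).get("items", [])
```

With the 7.2 integration, SecureX orchestration issues equivalent calls through its proxy, so no script or on-premises host is needed to run them.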

Expedited Integration

We’ve dramatically reduced the amount of time needed to fully integrate Firewall into SecureX. Even existing Firewall customers who use on-premises Firewall Management Center will be able to upgrade to version 7.2 and start automating and orchestrating in under 15 minutes, a huge time savings! The 7.2 release makes the opportunities for automating your Firewall deployment limitless with our built-in low-code orchestration engine.

Previously Firewall admins had to jump through hoops to link their smart licensing account with SecureX which resulted in a very complicated integration process. With the new one-click integration, simply click “Enable SecureX” in your Firewall Management Center and log into SecureX. That’s it! Your Firewalls will automatically be onboarded to SecureX.


Built In Orchestration


Cisco Secure Firewall users now get immense value from SecureX with the orchestration capability built natively into the Firewall. Previously, Firewall admins had to deploy an on-premises virtual machine in vCenter to take advantage of Firewall APIs in the cloud, which was a major hurdle to overcome. With the 7.2 release, orchestration is built right into your existing Firewall Management Center. No on-premises connector is required; SecureX orchestration communicates directly with Firewall APIs, highlighting the power of Cisco-on-Cisco integrations.

Customizable Workflows


PSIRT Impact monitoring  

The PSIRT impact monitoring workflow helps customers streamline their patch management process to ensure their network is always up to date and not vulnerable to CVEs. This workflow will check for new PSIRTs, determine if device versions are impacted, and suggest a fixed version to upgrade to. By scheduling this workflow to run once a week, customers can be notified via email of any potential impact from a PSIRT.
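The core of such a workflow is a version check against each advisory. The sketch below shows that logic only; the advisory shape is an invented simplification, not the real PSIRT openVuln API schema.

```python
def parse_version(version: str) -> tuple:
    """'7.0.1' -> (7, 0, 1) so releases compare numerically, not as strings."""
    return tuple(int(part) for part in version.split("."))

def suggest_fix(device_version: str, advisory: dict):
    """Return the fixed release to upgrade to, or None if not impacted."""
    if device_version not in advisory["affected_versions"]:
        return None
    if parse_version(device_version) < parse_version(advisory["first_fixed"]):
        return advisory["first_fixed"]
    return None

# Hypothetical advisory record for illustration.
advisory = {
    "advisory_id": "cisco-sa-example",
    "affected_versions": ["7.0.0", "7.0.1"],
    "first_fixed": "7.0.2",
}
print(suggest_fix("7.0.1", advisory))  # -> 7.0.2
print(suggest_fix("7.2.0", advisory))  # -> None
```

The scheduled workflow would run this comparison for every managed device and email the list of suggested upgrades.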

Firewall device health monitoring  

This workflow will run every 15 minutes to pull a health report from FMC and proactively notify customers via email if any devices are unhealthy. This means customers can rest assured that their fleet of devices is operating as expected or be notified of things like high CPU usage, low disk space, or interfaces going down.
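The filtering step of a health workflow can be sketched as follows; the report fields and thresholds here are illustrative assumptions, not the actual FMC health-report schema.

```python
# Assumed alerting thresholds; tune to your environment.
THRESHOLDS = {"cpu_pct": 90, "disk_free_pct": 10}

def unhealthy(devices: list) -> list:
    """Return names of devices breaching CPU/disk limits or with down interfaces."""
    flagged = []
    for dev in devices:
        if (dev["cpu_pct"] >= THRESHOLDS["cpu_pct"]
                or dev["disk_free_pct"] <= THRESHOLDS["disk_free_pct"]
                or dev.get("interfaces_down", 0) > 0):
            flagged.append(dev["name"])
    return flagged

fleet = [
    {"name": "ftd-branch-1", "cpu_pct": 35, "disk_free_pct": 40},
    {"name": "ftd-branch-2", "cpu_pct": 97, "disk_free_pct": 55},
    {"name": "ftd-dc-1", "cpu_pct": 20, "disk_free_pct": 60, "interfaces_down": 1},
]
print(unhealthy(fleet))  # -> ['ftd-branch-2', 'ftd-dc-1']
```

Run on a 15-minute schedule, anything returned by this filter becomes the body of the notification email.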

Expiry notification for time-based objects 

This workflow highlights the power of automation and showcases what is possible by using the orchestration proxy to call FMC APIs. Managing policy is an ongoing effort, but it can be made easier by introducing automation. This workflow can be run once a week to search through Firewall policies and determine whether any rules are going to expire soon. This makes managing policy much easier because customers are notified before rules expire and can make changes accordingly.
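The expiry scan reduces to a date-window check over the rules pulled from the API. In this sketch the rule records are simplified placeholders (FMC models time-based rules differently), but the windowing logic is the same.

```python
from datetime import date, timedelta

def expiring_rules(rules: list, today: date, within_days: int = 7) -> list:
    """Return names of time-based rules whose end date falls within the window."""
    horizon = today + timedelta(days=within_days)
    return [r["name"] for r in rules
            if r.get("end_date") and today <= r["end_date"] <= horizon]

rules = [
    {"name": "allow-contractor", "end_date": date(2022, 6, 6)},
    {"name": "allow-partner", "end_date": date(2022, 9, 1)},
    {"name": "permanent-rule"},  # no end date, never flagged
]
print(expiring_rules(rules, today=date(2022, 6, 2)))  # -> ['allow-contractor']
```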

Response Action: Block URL in access control policy 

This workflow is a one-click response action available from the threat response pivot menu. With the click of a button a URL is added to an object in a block rule of your access control policy. This action can be invoked during an investigation in SecureX or from any browser page using the SecureX browser extension. Reducing time to remediation is a critical aspect of keeping your business secure. This workflow turns a multi-step policy change into a single click by taking advantage of Secure Firewall’s integration with SecureX.
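Under the hood, a workflow like this typically reads the URL object referenced by the block rule, appends the new literal, and writes the object back before deploying. A minimal sketch of the update step follows; the object shape is simplified from the FMC URL-object schema, so treat field names as assumptions.

```python
def add_url_to_object(url_object: dict, url: str) -> dict:
    """Return a copy of the URL object with the new URL literal appended."""
    updated = dict(url_object)
    literals = [dict(lit) for lit in url_object.get("literals", [])]
    if not any(lit["url"] == url for lit in literals):  # avoid duplicates
        literals.append({"type": "Url", "url": url})
    updated["literals"] = literals
    return updated

blocked = {"name": "Blocked-URLs",
           "literals": [{"type": "Url", "url": "http://known-bad.example"}]}
result = add_url_to_object(blocked, "http://new-threat.example")
print([lit["url"] for lit in result["literals"]])
```

The one-click action wraps this read-modify-write plus a policy deployment into a single pivot-menu button.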

Proven Results


A recent Forrester Economic Impact Study of Secure Firewall shows that deploying these types of workflows in SecureX with Secure Firewall increased operational efficiency.

In fact, SecureX in combination with Secure Firewall helped to dramatically reduce the risk of a material breach. It’s clear that the integration of the two meant a significant time savings for already overburdened teams.


We continue to innovate new features and workflows that prioritize the efficacy of your teams and help drive the security resilience of your organization.

Source: cisco.com

Monday, 30 May 2022

[New] 500-701 VID Certification | Get Ready to Crack Cisco Video Infrastructure Design Exam

Cisco Video Infrastructure VID Exam Description:

This exam tests a candidate's knowledge of the skills needed by a systems engineer to understand a Cisco Video Collaboration Solution.

Cisco 500-701 VID Exam Overview:

Why CCNA Practice Test is Important for CCNA 200-301 Exam


If you want to propel your career in IT and networking by passing the Cisco Certified Network Associate (CCNA) exam, you have made a smart decision! The certification gives you complete knowledge of all the relevant concepts and topics. Earn the most sought-after networking certification today by cracking the Cisco CCNA 200-301 exam with the help of a CCNA practice test.

Overview of CCNA 200-301 Exam

Cisco 200-301 is the only exam that applicants should take in order to receive the CCNA certification. The certification covers a broad spectrum of fundamental skills for IT careers, the latest networking developments, software skills, and job functions.

CCNA 200-301 is a 2-hour closed-book exam with around 90-110 questions, and it costs $300 in the US. Cisco has split the syllabus into different sections, each with its own objectives and sub-topics. The CCNA 200-301 exam topics are listed below:

  • Network Fundamentals
  • IP Connectivity
  • Network Access
  • IP Services
  • Security Fundamentals
  • Automation and Programmability

All interested applicants should register for the exam via Pearson VUE, the official exam delivery body.

Tips and Tricks to Pass CCNA 200-301 Exam

Many applicants have passed the Cisco certification exams and shared their experiences. Summarized below are the most frequent and practical suggestions for you to consider:

Make a study plan. When you decide to take the CCNA 200-301 exam, you should carefully organize your study plan. Depending on your exam date, devote at least 2-3 hours per day to CCNA 200-301 exam preparation. Designate a specific time for study sessions and select the topics you will cover during each of them.

1. Study with Updated and Trusted Learning Resources

Cisco’s official training for 200-301 is “Implementing and Administering Cisco Solutions (CCNA) v1.0”. You will find all information on the vendor’s official website. You can take up instructor-led classes (offline or virtual) that incorporate an interactive part taught by a qualified trainer and a self-study course. Also, you can take advantage of e-learning materials if you do not require any guidance.

Must Read: CCNA 200-301 Certification: Reasons Why You Should Get It and How

2. Participate in an Online Cisco Community

This is a superb opportunity to get in touch with former exam-takers and come to know how they have passed the CCNA 200-301 exam successfully. Their guidance is beneficial in organizing your study schedule and deciding whether the CCNA certification is what you require.

3. Attempt CCNA Practice Tests to Complete Your Study Routine

A CCNA practice test will help you evaluate your preparedness and accurately identify your knowledge gaps. The Cisco practice tests provided by NWExam.com mimic the actual exam environment, so you can get a feel for the real CCNA 200-301 exam and become familiar with its format.

Why Should You Take CCNA Practice Test?

While many reasons demonstrate the importance of taking a mock test before the real Cisco exam, it's worth highlighting the best reasons to take the CCNA practice test and achieve a high score on the CCNA 200-301 exam. Let's explore:

1) Improved Time Management

In the CCNA practice test, considerable emphasis is put on time, which is definitely one of the essential factors in Cisco exams. Practice tests help you manage your time competently.

2) It Addresses a Much-Needed Aspect of the Exam, i.e., Revision

The complicated topics you study tend to get hazier by the end of the day, as there is too much to absorb, so revision cannot be avoided. At this point, the CCNA practice test gives applicants an opportunity to carry out revisions.

3) CCNA Practice Tests Kick Your Confidence Into High Gear

In addition to improving your time management and performance, practice boosts your confidence by helping you understand your weak and strong topics. In short, you build a positive attitude.

For CCNA 200-301 Question and Answer PDF Click Here.

4) Result of CCNA Practice Test Helps

Attempting the CCNA practice test is very beneficial, but you also need an honest assessment of where you stand at the end, and the CCNA practice test serves that purpose. It's smart to take a practice test on NWExam.com, which equips you with a practical, detailed analysis of your weak and strong areas along with useful guidelines.

How Many CCNA Practice Tests Should a Cisco 200-301 Exam Taker Solve?

As seasoned professionals suggest, there is no definite number of practice tests applicants should take: exam-takers should solve as many CCNA practice tests as possible, and there is no upper limit. CCNA practice tests help you work on accuracy, increase confidence, and boost speed.

Conclusion

Passing your CCNA 200-301 exam is a matter of tremendous pride. After obtaining the certification, you can go straight after another Cisco accreditation, stay active in the community for more updates, or take a break and enjoy the resulting perks. Take this advice, and you'll be sure to advance your career in networking.

Sunday, 29 May 2022

Enabling Scalable Group Policy with TrustSec Across Networks to Provide More Reliability and Determinism


Cisco TrustSec provides software-defined access control and network segmentation to help organizations enforce business-driven intent and streamline policy management across network domains. It forms the foundation of Cisco Software-Defined Access (SD-Access) by providing a policy enforcement plane based on Security Group Tag (SGT) assignments and dynamic provisioning of the security group access control list (SGACL).

Cisco TrustSec has now been enhanced by Cisco engineers with a broader, cross-domain transport option for network policies. It relies on HTTPS, Representational State Transfer (REST) protocol API, and the JSON file and data interchange format for far more reliable and scalable policy updates and segmentation for more deterministic networks. It is a superior choice over the current use of RADIUS over User Datagram Protocol (UDP), which is notorious for packet drops and retries that degrade performance and service guarantees.

Scaling Policy

Cisco SD-Access, Cisco SD-WAN, and Cisco Application Centric Infrastructure (ACI) have been integrated to provide enterprise customers with a consistent cross-domain business policy experience. This necessitated a more robust, reliable, deterministic, and dependable TrustSec infrastructure to meet the increasing scale of SGTs and SGACL policies―combined with high-performance requirements and seamless policy provisioning and updates followed by assured enforcement.

With increased scale, two things are required of policy systems. 

◉ A more reliable SGACL provisioning mechanism. The use of RADIUS/UDP transport is inefficient for the transport of large volumes of data. It often results in a higher number of round-trip retries due to dropped packets and longer transport times between devices and the Cisco Identity Services Engine (ISE server). The approach is error-prone and verbose.

◉ Determinism for policy updates. TrustSec uses the RADIUS change of authorization (CoA) mechanism to dynamically notify devices of changes to SGACL policy and environmental data (Env-Data). Devices respond with a request to ISE to update the specified change. These are two seemingly disparate but related transaction flows with the common intent to deliver the latest policy data to the devices. In scenarios with many devices or a high volume of updates, there is a higher risk of packet loss and out-of-order delivery, and it is often challenging to correlate the success or failure of such administrative changes.

More Performant, Scalable, and Secure Transport for Policy 

The new transport option for Cisco TrustSec is based on a system of central administration and distributed policy enforcement, with Cisco DNA Center, Cisco Meraki Enterprise Cloud, or Cisco vManage used as a controller dashboard and Cisco ISE serving as the service point for network devices to source SGACL policies and Env-Data (Figure 1).  

Figure 1 shows the Cisco SD-Access deployment architecture depicting a mix of both old and newer software versions and policy transport options. 

Figure 1. Cisco SD-Access Deployment Architecture with Policy Download Options

Cisco introduced JSON-based HTTP download for policies to ensure 100% delivery with no packet drops and no retries necessary. It improves the scale, performance, and reliability of policy workflows. Using TLS is also more secure than RADIUS/UDP transport. 
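To make the contrast with RADIUS attribute encoding concrete, here is a toy illustration of an SGACL policy delivered as JSON and resolved into a policy-matrix cell. The field names are assumptions for illustration; the actual payload schema exchanged between ISE and devices is internal to the products.

```python
import json

# Illustrative (assumed) JSON shape for a downloaded SGACL policy set.
payload = json.loads("""
{
  "sgacls": [
    {"name": "Deny_IP_Log", "src_sgt": 10, "dst_sgt": 20,
     "aces": ["deny ip log"]},
    {"name": "Permit_Web", "src_sgt": 10, "dst_sgt": 30,
     "aces": ["permit tcp dst eq 443", "deny ip"]}
  ]
}
""")

def cell_for(sgacls: list, src_sgt: int, dst_sgt: int) -> str:
    """Find the policy-matrix cell for a (source SGT, destination SGT) pair."""
    for acl in sgacls:
        if acl["src_sgt"] == src_sgt and acl["dst_sgt"] == dst_sgt:
            return acl["name"]
    return "default"

print(cell_for(payload["sgacls"], 10, 30))  # -> Permit_Web
```

A single TLS-protected HTTP response can carry the whole structure intact, which is what eliminates the drop-and-retry behavior of chunked RADIUS attribute transport.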

The introduction of the REST API for TrustSec data download is an additional protocol option on devices used to interface with Cisco ISE. Based on the system configuration, either of the transport mechanisms can be used to download environment data (Env-Data) and SGACL policies from Cisco ISE.  

Change of authorization (CoA) remains an important server-side function for notifying network devices of updates. Cisco ISE continues to use RADIUS CoA, a lightweight message announcing updates to SGACL and Env-Data. In scenarios with a high number of devices or a high volume of updates, ISE may experience high CPU utilization, since a high volume of CoA requests triggers an equal number of CoA responses and follow-up requests from devices eager to update policies. But the transition of SGACL and Env-Data downloads to the REST protocol reduces compute and transport time, which indirectly improves CoA performance.

In addition to improved reliability and deterministic policy updates, the REST transport interface has also paved the way for better platform assurance and operational visibility. 

The new policy enforcement plane available with Cisco TrustSec provides a broader, cross-domain transport option for network policies. It’s both a more reliable SGACL provisioning mechanism for larger volumes of data and a more deterministic solution for policy updates. The result is more scalable enforcement of business-driven intent and policy management across network domains.

Source: cisco.com

Saturday, 28 May 2022

Automated Service Assurance at Microsecond Speed

For communication service providers (CSPs), the network trends of cloudification, open, software-based infrastructure, and multi-vendor environments are a double-edged sword. On the plus side, these trends break the long tradition of vendor lock-in, freeing service providers to mix best-of-breed solutions that provide competitive advantages.

But with that freedom comes new and daunting responsibilities. It’s now up to CSPs to ensure that all those disparate solutions, APIs, and network functions work together flawlessly. And in the case of mobile networks, operators have two steep learning curves to climb simultaneously: Open RAN and 5G standalone core networks.

Multi-vendor interoperability challenges highlight the need for vendors to collaborate on solutions that are pre-integrated so they’re ready for flawless deployment. This would free CSPs from the time and expense of performing extensive integration and testing — tasks that delay service launches. Pre-integrated, best-of-breed solutions would also deliver faster time to revenue for those new services. Closed-loop automation with tightly integrated network and service orchestration and assurance is the ultimate goal for efficient operations in this new environment.

Another major benefit is confidence that those services will have the performance and quality of experience (QoE) that customers expect. But to maximize that benefit, operators will need real-time, KPI-level insights into those network components across network domains, as well as the services and customer applications running over them. These insights are key to understanding the customer experience and differentiating services with competitive enterprise SLAs.

Automated Assurance and Orchestration that can Handle SLAs at Scale and Speed

By tightly integrating automated assurance and orchestration, Accedian Skylight with Cisco Crosswork enables closed-loop automation based on end user experiences at microsecond speeds. In addition to real-time insights and actions, the solution enables CSPs to return later and configure their network to fix problems or enhance QoE.

Speed is critical because customers — businesses and consumers — notice within seconds when their connection suddenly slows down or is lost. This puts enormous pressure on CSPs to find and fix these problems as they’re emerging, before customers start to notice. That’s a tall order because operators need to do that 24/7/365 at scale: thousands of types of applications and services with tens or hundreds of millions of simultaneous connections now, and even more in the future as the Internet of Things (IoT) becomes even more prevalent.


Service providers need to act at a microsecond level and it’s a tall mountain to climb, but Cisco and Accedian are here to help.

Accedian Skylight and the Cisco Crosswork automation platform show what happens in every millisecond and enable service providers to automate intervention, stay in control, and deliver assured customer experiences in real time.


Real-time insights are driven through the APIs of the cloud-native, carrier-scale Skylight architecture, which simultaneously collects and correlates critical network performance data at the individual packet level, sourced from efficient sensors in the network that measure latency and packet loss. When milliseconds matter, Accedian and Cisco automation are mission critical.


Source: cisco.com

Friday, 27 May 2022

Perspectives on the Future of SP Networking: Intent and Outcome Based Transport Service Automation

One lesson we could all learn from cloud operators is that simplicity, ease of use, and “on-demand” are now expected behaviors for any new service offering. Cloud operators built their services with modular principles and well-abstracted service interfaces using common “black box” software programming fundamentals, which allow their capabilities to seamlessly snap together while eliminating unnecessary complexity. For many of us in the communication service provider (CSP) industry, those basic principles still need to be realized in how transport service offerings are requested from the transport orchestration layer.

The network service requestor (including northbound BSS/OSS) initiates an “intent” (or call it an “outcome”) and it expects the network service to be built and monitored to honor that intent within quantifiable service level objectives (SLOs) and promised service level expectations (SLEs). The network service requestor doesn’t want to be involved with the plethora of configuration parameters required to deploy that service at the device layer, relying instead on some other function to complete that information. Embracing such a basic principle would not only reduce the cost of operations but also enable new “as-a-Service” business models which could monetize the network for the operator.

But realizing the vision requires the creation of intent-based modularity for the value-added transport services via well-abstracted and declarative service layer application programming interfaces (APIs).  These service APIs would be exposed by an intelligent transport orchestration controller that acts in a declarative and outcome-based way. Work is being done by Cisco in network slicing and network-as-a-service (NaaS) to define this layer of service abstraction into a simplified – yet extensible – transport services model allowing for powerful network automation.

How we got here


Networking vendors build products (routers, switches, etc.) with an extensive set of rich features that we lovingly call “nerd-knobs”. From our early days building the first multi-protocol router, we’ve always taken great pride in our nerd-knob development. Our pace of innovation hasn’t slowed down as we continue to enable some of the richest networking capabilities, including awesome features around segment routing traffic engineering (SR-TE) that can be used to drive explicit path forwarding through the network (more on that later). Yet historically it’s been left to the operator to mold these features together into a set of valuable network service offerings that they then sell to their end customers. Operators also need to invest in building the automation tools required to support highly scalable mass deployments and include some aspects of on-demand service instantiation. While an atomic-level setting of the nerd knobs allows the operator to provide granular customization for clients or services, this level of service design creates complexity in other areas. It drives very long development timelines, service rigidity, and northbound OSS/BSS layer integration work, especially for multi-domain use cases.

With our work in defining service abstraction for NaaS and network slicing and the proposed slicing standards from the Internet Engineering Task Force (IETF), consumers of transport services can soon begin to think in terms of the service intent or outcome and less about the complexity of setting feature knobs on the machinery required to implement the service at the device level. Transport automation is moving towards intent, outcome, and declarative-based service definitions where the service user defines the what, not the how.

In the discussion that follows, we’ll define the attributes of the next-generation transport orchestrator based on what we’ve learned from user requirements. Figure 1 below illustrates an example of the advantages of the intent-based approach weaving SLOs and SLEs into the discussion. Network slicing, a concept inspired by cellular infrastructure, is introduced as an example of where intent-based networking can add value.

Figure 1. Increased confidence with transport services

What does success look like?


The next-generation transport orchestrator should be closed loop-based and implement these steps:

1. Support an intent-based request to instantiate a new transport service to meet specific SLEs/SLOs

2. Map the service intent into discrete changes, validate proposed changes against available resources and assurance, then implement (including service assurance tooling for monitoring)

3. Operational intelligence and service assurance tools monitor the health of service and report

4. Insights observe and signal out-of-tolerance SLO events

5. Recommended remediations/optimizations determined by AI tooling drawing on global model data and operational insights

6. Recommendations are automatically implemented or passed to a human for approval

7. Return to monitoring mode

Figure 2 shows an example of intent-based provisioning automation. On the left, we see the traditional transport orchestration layer that provides very little service abstraction. The service model is simply an aggregation point for network device provisioning that exposes the many ‘atomic-level’ parameters required to be set by northbound OSS/BSS layer components. The example shows provisioning an L3VPN service with quality of service (QoS) and SR-TE policies, but it’s only possible to proceed atomically. The example requires the higher layers to compose the service, including resource checks, building the service assurance needs, and then performing ongoing change control such as updating and then deleting the service (which may require some order of operations). Service monitoring and telemetry required to do any service level assurance (SLA) is an afterthought and built separately, and it’s not easily integrated into the service itself. The higher layer service orchestration would need to be custom-built to integrate all these components and wouldn’t be very flexible for new services.

Figure 2. Abstracting the service intent

On the right side of Figure 2, we see a next-gen transport service orchestrator which is declarative and intent-based. The user specifies the desired outcome (in YANG via a REST/NETCONF API), which is to connect a set of network endpoints, also called service demarcation points (SDPs) in an any-to-any way and to meet a specific set of SLO requirements around latency and loss. The idea here is to express the service intent in a well-defined YANG-modeled way directly based on the user’s connectivity and SLO/SLE needs. This transport service API is programable, on-demand, and declarative.
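The declarative request described above can be sketched as a simple data structure plus a validation step. This loosely follows the spirit of the IETF network-slice service model, but the attribute names are simplified assumptions, not the exact YANG paths.

```python
# Hedged sketch of an intent-based transport service request.
slice_request = {
    "slice-service": {
        "id": "slice-finance-01",                        # hypothetical name
        "sdps": ["pe1-gig0/0/0", "pe2-gig0/0/1", "pe3-gig0/0/2"],
        "connectivity": "any-to-any",
        "slo": {"max-latency-ms": 10, "max-loss-pct": 0.01},
        "sle": "encrypted-transit",  # operator-defined catalog entry
    }
}

def validate_intent(request: dict) -> bool:
    """A declarative request must carry endpoints and quantifiable SLOs."""
    svc = request.get("slice-service", {})
    return bool(svc.get("sdps")) and bool(svc.get("slo"))

print(validate_intent(slice_request))  # -> True
```

Everything below this level (VPN configuration, QoS, SR-TE policies) is the orchestrator's job to derive, which is precisely the abstraction the right side of Figure 2 depicts.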

Figure 3. IETF slice framework draft definitions

The new transport service differentiator: SLOs and SLEs


So how will operators market and differentiate their new transport service offerings? While posting what SLOs can be requested will certainly be a part of this (requesting quantifiable bandwidth, latency, reliability, and jitter metrics), the big differentiators will be the set of SLE “catalog entries” they provide. SLEs are where “everything else” is defined as part of the service intent. What type of SLEs can we begin to consider? See Table 1 below for some examples. Can you think of some new ones? The good news is that operators can flexibly define their own SLEs and map those to explicit forwarding behaviors in the network to meet a market need.

Table 1. Sample SLE offerings

Capabilities needed in the network


The beauty of intent-based networking is that the approach treats the network as a “black box” that hides detailed configuration from the user. With that said, we still need those “nerd-knobs” at the device layer to realize the services (though abstracted by the transport controller in a programable way). At Cisco, we’ve developed a transport controller called Crosswork Network Controller (CNC) which works together with an IP-based network utilizing BGP-based VPN technology for the overlay connectivity along with device layer QoS and SR-TE for the underlay SLOs/SLEs. We’re looking to continue enhancing CNC to meet the full future vision of networking intent and closed loop.

While BGP VPNs (for both L2 and L3), private-line emulation (for L1), and packet-based QoS are well-known industry technologies, we should expound on the importance of SR-TE. SR-TE will allow for a very surgical network path forwarding capability that’s much more scalable than earlier approaches. All the services shown in Table 1 will require some aspect of explicit path forwarding through the network. Also, to meet specific SLO objectives (such as BW and latency), dictating and managing specific path forwarding behavior will be critical to understanding resource availability against resource commitments. Our innovation in this area includes an extensive set of PCE and SR-TE features such as flexible algorithm, automated steering, and “on-demand-next-hop” (ODN) as shown in Figure 4.

Figure 4. Intent-based SR-TE with Automated Steering and ODN

With granular path control capabilities, the transport controller, which includes an intelligent path computation element (PCE), can dynamically change the path to keep within the desired SLO boundaries depending on network conditions. This is the promise of software-defined networking (SDN), but when using SR-TE at scale in a service provider-class network, it’s like SDN for adults!
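At its core, the PCE's continuous job is to select, among candidate SR-TE paths, the best one that still honors the SLO bound, and to re-select when conditions change. A toy model of that selection (topology and numbers invented):

```python
def pick_path(paths: list, max_latency_ms: float):
    """Return the lowest-cost path within the latency SLO, or None if none fit."""
    feasible = [p for p in paths if p["latency_ms"] <= max_latency_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda p: p["igp_cost"])["name"]

paths = [
    {"name": "direct", "latency_ms": 12, "igp_cost": 10},
    {"name": "via-metro", "latency_ms": 8, "igp_cost": 30},
    {"name": "via-core", "latency_ms": 9, "igp_cost": 20},
]
print(pick_path(paths, max_latency_ms=10))  # -> via-core
```

A real PCE works over measured telemetry and must also account for bandwidth reservations, but the shape of the decision is the same: filter by SLO, then optimize.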

Given the system is intent-based, that should also mean it’s declarative. If the user wanted to switch from SLE No.1 to SLE No.2 (go from a “best effort” latency-based service to a lowest latency-based service), then that should be a simple change in the top-level service model request. The transport controller will then determine the necessary changes required to implement the new service intent and only change what’s needed at the device level (called a minimum-diff operation). This is NOT implemented as a complete deletion of the original service and then followed by a new service instantiation. Instead, it’s a modify-what’s-needed implementation. This approach thus allows for on-demand changes which offer the cloud-like flexibility consumers are looking for, including time-of-day and reactionary-based automation.
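The minimum-diff idea can be illustrated with a trivial sketch: compare the current device-level state against the state rendered from the new intent, and emit only the keys that actually changed. Field names here are invented for illustration.

```python
def minimum_diff(current: dict, desired: dict) -> dict:
    """Return only the settings that must change on the device."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

current = {"policy-color": 100, "metric": "igp", "encryption": False}
desired = {"policy-color": 200, "metric": "latency", "encryption": False}
print(minimum_diff(current, desired))
# -> {'policy-color': 200, 'metric': 'latency'}
```

Because the unchanged settings are never touched, the service stays up while the intent change is applied, unlike a delete-and-recreate approach.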

Even the standards bodies are getting on board


The network slicing concept, initially defined by 3GPP TS 23.501 for 5G services as “a logical network that provides specific network capabilities and network characteristics”, was the first to mandate requesting a service in an intent-based way, based on specific SLOs. This approach has become a generic desire for any network service (not just 5G), and for the transport domain most service providers look to the IETF for standards definitions. The IETF is working on various drafts to give vendors and operators common definitions and service models for intent-based transport services (called IETF Network Slice Services). These drafts include: Framework for IETF Network Slices and IETF Network Slice Service YANG Model.

Figure 5. IETF network slice details

Conclusion


We envision a future where transport network services are requested based on outcomes and intents, in a simplified and on-demand fashion. This doesn’t mean the transport network devices will lose rich functionality – far from it. The “nerd-knobs” will still be there! Rich device features (such as VPN, QoS, and SR-TE) and PCE-level functionality will still be needed to provide the granular control required to meet the desired service objectives and expectations, yet the implementation will now be abstracted into more consumable and user-oriented service structures by the intent-based next-gen transport orchestrator.

This approach is consistent with the industry's requirements for 5G network slicing and with what some are calling NaaS, which application developers increasingly expect. In all cases, the service is requested as an outcome that meets specific objectives for a business purpose. Vendors like Cisco are working to develop the automation and orchestration systems, for both Cisco and third-party devices, needed to realize this vision of enhanced, on-demand, API-driven, operator-delivered transport services.

Source: cisco.com

Thursday, 26 May 2022

How to Contribute to Open Source and Why


Getting involved in the open-source community (especially early in your career) is a smart move for many reasons. When you help others, you almost always get help in return. You can make connections that can last your entire career, helping you down the road in ways you can’t anticipate.

In this article, we’ll cover more about why you should consider contributing to open source, and how to get started.

Why Should I Get Involved in Open Source?

Designing, building, deploying, and maintaining software is, believe it or not, a social activity. Our tech careers place us in a network of bright and empathetic professionals, and being in that network is part of what brings job satisfaction and career opportunities.

Nowhere in tech is this more apparent than in the world of free and open-source software (FOSS). In FOSS, we build in public, so our contributions are highly visible and done together with like-minded developers who enjoy helping others. And by contributing to the supply of well-maintained open-source software, we make the benefits of technology accessible around the world.

Where Should I Contribute?

If you’re looking to get involved, the first question you’re likely asking is: Where should I start? A great starting place is an open-source project that you have used or are interested in.

Most open-source projects store their code in a repository on GitHub or GitLab. This is the place where you can find out what the project’s needs are, who the project maintainers are, and how you can contribute. Because of the collaborative and generous culture of FOSS, maintainers are often receptive to unsolicited offers of help. Often, you can simply reach out to a maintainer and offer to contribute.

For example, are you interested in contributing to Django? They make it very clear: We need your help to make Django as good as it can possibly be.


Finding known issues


Most projects keep a list of known issues, and you can usually find a task that fits your knowledge and experience level. Flask, for example, maintains its list of open issues in its GitHub repository.


Finding tasks for new contributors


Finally, many maintainers take the time to mark specific issues as better suited for new contributors. For example, the Electron project applies a "good first issue" label. On GitHub, you can use the "Labels" selector on the issues page to filter by such labels, surfacing the best issues to start with.


Now you’ve got an issue to work on. How should you get started?

The Contribution Process


The basic process for contributing to open source is fairly uniform across all projects. However, you should still read the contributor guidelines for an individual project to be aware of any special requirements.

In general, the process looks like this:

1. Fork the project repository
2. Solve the issue
3. Submit a pull request
4. Wait for feedback
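The four steps above can be sketched end-to-end with git alone. The snippet below simulates the flow in a throwaway local repository standing in for GitHub; the repository names, branch name, and issue number are all hypothetical, and on GitHub the "Fork" button replaces the local clone of the upstream copy.

```shell
set -e
# Local simulation of the contribution flow (repo names, branch name,
# and issue number are hypothetical stand-ins for a GitHub project).
work=$(mktemp -d)
git init -q "$work/upstream" && cd "$work/upstream"
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "initial commit"
cd "$work" && git clone -q upstream fork      # step 1: your own copy
cd fork
git config user.name demo && git config user.email demo@example.com
git checkout -q -b fix-issue-1234             # step 2: work on a topic branch
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "Fix rendering bug (#1234)"
git push -q origin fix-issue-1234             # step 3: publish the branch for a PR
git log --oneline -1
```

Steps 3 and 4 continue on GitHub itself: the pushed branch is what you open the pull request from, and the review conversation happens on the PR page.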

Let’s examine each of these steps in detail. We’ll use GitHub for our examples; most online repositories will operate similarly.

Fork the Project Repository


When you fork a project repository, you create a copy of the project under your own account. After cloning that copy to your machine, be sure to read any special instructions in the project README so that you can get the project up and running locally.

In GitHub, you can simply use the "Fork" button to start this. You'll find it in the upper-right corner of the repository page.


As you save the forked repository to your account, you’ll be prompted to provide a name for it.
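Once the fork exists, a common convention (check the project's contributor guidelines; this is not a universal rule) is to clone your fork and add the original repository as an "upstream" remote so you can pull in later changes. The local paths below are stand-ins for the real GitHub URLs of the original project and your fork.

```shell
set -e
# Clone your fork and track the original project as "upstream".
# Local bare repos stand in for github.com/org/project and your fork.
work=$(mktemp -d); cd "$work"
git init -q --bare original.git    # stands in for the upstream project
git init -q --bare your-fork.git   # stands in for your fork
git clone -q your-fork.git project
cd project
git remote add upstream "$work/original.git"
git remote -v                      # shows both origin and upstream
```

With this layout, `git fetch upstream` keeps your copy current with the original project while `git push origin` publishes your own work.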


Solve the Issue


With a forked local copy up and running, you’re now ready to tackle the issue at hand. As you solve the issue, it’s important to keep a few things in mind:

◉ Pay attention to any coding style guidelines provided for the project.
◉ Make sure the project will run as expected, and that any provided tests pass.
◉ Comment your code as needed to help future developers.

Now that you’ve got a solution in place, it’s time to present your solution to the project maintainers.

Submit a Pull Request


The maintainers of the project need to review your proposed changes before they (hopefully) merge those changes into the main project repository. You kick off this process by submitting a pull request (PR).

Open a new PR

You can start PR creation in GitHub right from the original repository by clicking on New pull request on the Pull requests page.


Set up the branch comparison

On the Compare changes page, click on compare across forks.


Choose the branch to merge

When creating a pull request, it’s very important to pay close attention to which branch you want to merge.

The branch in the original repository

First, select the desired branch that the code changes will merge into. Typically this will be the main branch in the original repository, but be sure to check the contributor guidelines.


The branch in your forked repo

Next, select the branch from your forked repository where you did the work.


Give your PR a title and description

Next, you’ll need to provide a title and description for your pull request. Don’t be overly wordy. You can explain your approach, but you should let your code and comments speak for themselves. Maintainers are often tight on time. Make your PR easy to read and review.

Some repositories provide template content for the PR description, and they include a checklist of items to ensure all contributors adhere to their process and format. Pay attention to any special instructions you’ve been given.


Create the pull request

After making sure you’ve provided everything the maintainers are asking for, click Create Pull Request.

You’ve done it! You have submitted your first PR for an open-source project!

Wait for Feedback


You’re likely anxious to hear back on your PR. Again, check the contributor guidelines for what to expect here. Often, it will be some time until you hear back, and maintainers may not want you to nudge them.

If there are any points to address in your PR, maintainers will probably have that conversation with you as a thread in the PR. Watch your email for notifications. Try to respond quickly to comments on your PR. Maintainers appreciate this.

If you need to refactor your code, do so, and then commit the changes. You likely will not need to notify the maintainer, but you should check the contributor guidelines to be sure. The platform (in our case, GitHub) will notify the maintainers of the commit, so they’ll know to look at the PR again.
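Mechanically, addressing feedback is just more commits on the same branch: pushing them updates the open PR automatically. The sketch below simulates this in a throwaway local repository; the branch name and commit messages are hypothetical.

```shell
set -e
# New commits pushed to the PR branch appear on the PR automatically.
work=$(mktemp -d)
git init -q --bare "$work/fork.git"            # stands in for your fork
git clone -q "$work/fork.git" "$work/project" && cd "$work/project"
git config user.name demo && git config user.email demo@example.com
git checkout -q -b fix-issue-1234
git commit -q --allow-empty -m "Fix issue (#1234)"
git push -q -u origin fix-issue-1234           # PR is opened from this branch
# ...review feedback arrives; refactor, then commit and push again:
git commit -q --allow-empty -m "Address review feedback"
git push -q origin fix-issue-1234
git rev-list --count origin/fix-issue-1234     # branch now has 2 commits
```

There is no separate "update PR" command: the PR always reflects the latest state of its source branch.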

Source: cisco.com