With the executive order signed by the US government in the wake of recent cybersecurity attacks like SolarWinds, Colonial Pipeline, and the Microsoft Exchange Server breach, which have plagued high-value government entities and private organizations, it is more important than ever to have security ammunition ready that can detect such attacks – ammunition that provides deep forensic detail and visibility into your users and endpoints.
In the SolarWinds breach, a form of supply chain attack, the attacker spent months performing undetected reconnaissance to gain a deep understanding of the inner workings of a trusted IT supplier before using that supplier as the means to infiltrate US government targets, bypassing the ransomware defenses of endpoint anti-malware solutions. The attack went undetected by many security solutions for months. New supply chain attacks are happening regularly, many of them targeting endpoint security components directly. With many more such techniques emerging, it is more important than ever to have a defense-in-depth endpoint strategy with forensics capabilities.
Cisco Endpoint Security Analytics (CESA) helps solve this problem and can be that security ammunition in your security infrastructure, acting as an early threat warning system by providing behavior-based deep user, endpoint, and network visibility all in one place. The three components that form the overall CESA solution are:
1. The AnyConnect Network Visibility Module (NVM) on the endpoint, which generates the IPFIX-based telemetry
2. The CESA Collector, which acts as an NVM telemetry broker, converting IPFIX NVM data into SIEM-consumable syslogs
3. An analytics platform such as Splunk that transforms the endpoint telemetry data into meaningful insights and alerts
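The collector's broker role can be pictured as a small translation step. The sketch below is a minimal illustration, not the actual CESA Collector code: it takes an already-parsed NVM flow record (with hypothetical field names, not the real NVM IPFIX elements) and renders it as an RFC 3164-style syslog line a SIEM could ingest.

```python
from datetime import datetime

def nvm_record_to_syslog(record, hostname="cesa-collector"):
    """Render a parsed NVM flow record (a dict) as a syslog line.

    Field names are illustrative, not the real NVM IPFIX elements.
    """
    pri = 14  # facility=user(1), severity=info(6) -> 1*8 + 6
    ts = datetime.now().strftime("%b %d %H:%M:%S")
    kv = ",".join(f"{k}={v}" for k, v in sorted(record.items()))
    return f"<{pri}>{ts} {hostname} nvm: {kv}"

msg = nvm_record_to_syslog({
    "src_ip": "10.0.0.5",
    "dst_domain": "example.com",
    "process": "svchost.exe",
    "logged_in_user": "alice",
})
print(msg)
```

In a real deployment the collector parses binary IPFIX templates; the point here is only the "one flow record in, one SIEM-consumable line out" shape of the brokering step.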
Figure 1: CESA Architecture
With the latest CESA 3.1.11 release, we have added the following features, which make the solution even more secure and provide new user and endpoint telemetry to help you detect advanced forms of attacks.
SecureX Integration
You can now unleash the full power of SecureX threat response and accelerate time-to-value through the SecureX CESA Relay module (Figure 2). Through the CESA module, you can perform threat investigations using sightings of observables from CESA and use SecureX for remediation and response actions, as shown in Figure 3. For example, suppose Umbrella has categorized a certain domain with a neutral reputation, but through CESA you observe that the process that originated the traffic to this destination domain has never connected before – an indicator of malicious activity. You can now view this relationship in SecureX through the SecureX CESA Relay module, and then take a response action to block the domain immediately with Umbrella and other security controls in your network.
Figure 2: SecureX CESA Relay
Figure 3: Observables extracted through CESA into your SecureX Threat Response dashboard
Secure NVM Transport
With the introduction of DTLS 1.2 support in NVM, all communications between the client and the CESA collector are now encrypted and secured. Prior to this release, the information was sent in plain text over UDP, which was susceptible to man-in-the-middle (MITM) attacks in which an attacker gains visibility into all NVM traffic between the client and the collector. With the secure DTLS connectivity to the collector, the NVM client first verifies the availability of the collector before sending the telemetry data over the encrypted channel, thus preventing network sniffing, spoofing, reconnaissance, and MITM attacks.
Figure 4: Secure NVM Transport
Trace Path of Malicious Software
CESA can now alert you when an application is being executed from an illegitimate or unexpected path by tracing such suspicious or malicious activity all the way down to the process path of the known, unknown, or modified executable. This helps in zero-day analysis of attacks based on suspicious activity, thus simplifying your investigations. With the new Process Path Investigation dashboard, you can now see the path from which a process was executed. In Figure 5 below, you can see that the process “svchost.exe” is being executed from a suspicious path, “d1ecfbd***”.
Figure 5: Deep visibility into process path
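The idea behind this kind of detection can be illustrated with a simple allow-list check. The snippet below is a hypothetical sketch, not CESA's actual analytics (which run as Splunk dashboards): it flags well-known Windows binaries that report a process path outside their expected directories.

```python
import ntpath

# Expected install directories for some commonly abused binaries
# (an illustrative allow-list, not CESA's real rule set).
EXPECTED_DIRS = {
    "svchost.exe": {r"c:\windows\system32", r"c:\windows\syswow64"},
    "powershell.exe": {r"c:\windows\system32\windowspowershell\v1.0"},
}

def suspicious_path(process_path):
    """Return True if a known binary runs from an unexpected directory."""
    directory, name = ntpath.split(process_path.lower())
    expected = EXPECTED_DIRS.get(name)
    return expected is not None and directory not in expected

print(suspicious_path(r"C:\Windows\System32\svchost.exe"))   # legitimate location
print(suspicious_path(r"C:\Users\bob\d1ecfbd\svchost.exe"))  # unexpected location
```

A real detection would also account for renamed binaries and hash mismatches; the allow-list comparison is just the core "known name, wrong path" signal the dashboard surfaces.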
Find Ultra-Stealthy Threats
CESA can now also provide additional visibility into process command-line arguments, helping you detect attack methods such as obfuscation and other malicious evasion techniques. You can now detect unusual command-line arguments passed to exploitable executables (e.g., /bin/sh, powershell.exe, wmic), files given as arguments to other programs, and even entire malicious scripts sent in obfuscated form as command-line arguments. With the new Process Path Investigation dashboard, you can see in Figure 6 that an attacker who has compromised the root user is trying to SSH into 10.126.111.235.
Figure 6: Deep visibility into process path arguments
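A rough feel for this kind of detection can be given with a few heuristics. The sketch below is illustrative only, not CESA's real detection logic: it flags command lines that invoke commonly abused interpreters with encoded or evasion-style arguments.

```python
import re

# Illustrative heuristics, not CESA's actual rules.
INTERPRETERS = ("powershell.exe", "wmic", "/bin/sh", "cmd.exe")
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\s+\S{40,}", re.I),     # long Base64 blob
    re.compile(r"downloadstring|invoke-webrequest", re.I),  # in-memory downloader
    re.compile(r"-nop\b|-w\s+hidden", re.I),                # no-profile / hidden window
]

def flag_command_line(cmdline):
    """Return True if an interpreter is invoked with obfuscated-looking args."""
    lowered = cmdline.lower()
    if not any(i in lowered for i in INTERPRETERS):
        return False
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

print(flag_command_line("powershell.exe -nop -w hidden -enc aQBlAHgAIAAoAE4AZQB3AC0A"))
print(flag_command_line("powershell.exe Get-ChildItem C:\\Logs"))
```

In practice the telemetry feeds a SIEM where such patterns are tuned per environment; the sketch only shows why raw command-line visibility is the prerequisite for catching obfuscated invocations at all.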
Logged-in User Visibility
Prior to this release, CESA reported the console user as the originator of all traffic for all user processes. An attacker could SSH into a compromised endpoint and perform malicious activity, hiding their tracks behind those of the console user of the endpoint. With the new release, CESA reports the logged-in user for remote sessions like RDP and SSH for processes launched through such sessions. As you can see below, the user “Raghul” is initiating a “data hoarding” activity after remotely logging into DESKTOP-ONFHG3.
In our ever-changing world, where the application represents the business itself and the level of digitization it provides is directly related to the perception of the brand, enterprises must differentiate themselves by providing an exceptional user experience – for their customers and employees alike. When the pandemic hit, customers' and employees' expectations were initially tempered by empathy, and disruptions to services were expected – but 18 months on, everyone now expects the same level of service they got pre-pandemic, irrespective of where people are working from. This places higher expectations on infrastructure and teams alike to provide an exceptional digital experience.
It is evident that application services are becoming increasingly distributed, and reimagining applications around customer priorities is a key differentiator going forward. A recent study on global cloud adoption by Frost & Sullivan indicated a 70% jump in multi-cloud adoption in the financial services space, driven by a renewed focus on innovation along with the digitalization and streamlining of businesses. On average, financial firms have placed more than half of their workloads in the cloud (public or private hosted), and that number is expected to grow faster than in other industries over the next five years.
Digital Experience Visibility
In today’s world of applications moving to the edge, applications moving to the cloud, and data everywhere, we need to be able to manage IT irrespective of where we work and of where applications are hosted or consumed from. It’s relatively easy to write code for a new application; the real complexity lies in deploying that code in today’s heterogeneous environments, like that of a bank. The traditional networks we currently use to deploy into data centers predate the cloud, SASE, colocation facilities, IoT, and 5G – and certainly predate COVID and working from home.
In today’s world cloud is the new data center and internet is the new WAN – thereby removing the concept of an enterprise perimeter and making identity the new perimeter. To provide that seamless experience, IT needs to not just monitor application performance, but also enable application resource monitoring and application dependency monitoring – holistically. This should enable the organization to figure out the business impact of an issue – be that a drop in conversion rate or a degradation in a service, and decide almost proactively if not predictively the kind of resources to allocate towards fixing that problem and curbing the business impact.
Observability rather than Visibility
In today’s world, operations are complex, with various teams relying on different tools to troubleshoot and support their respective domains. Visibility across individual silos still leaves the organization miles away, left to collate information and insights via war rooms before being able to identify the root cause of a problem. What is required is the ability to troubleshoot more holistically – via a data-driven operating model.
Thus, it is important to use the network as a Central Nervous System and utilize Full Stack Observability to be able to look at visibility and telemetry from every networking domain, every cloud, the application, the code, and everything in between. Then use AI/ML to consume the various data elements in real time, figure out dynamically how to troubleshoot and get to the root cause of a problem faster and more accurately.
An FSO platform’s end goal is a single pane of glass that can:
◉ Ingest anything: any telemetry, from any third party, from any domain, into a learning engine with a flexible metadata model, so that it knows what kind of data it’s ingesting
◉ Visualize anything: end-to-end, in a unified, connected data format
◉ Query anything: providing cross-domain analytics that connect the dots, and closed-loop analytics for faster, pinpointed root-cause analysis – before it impacts the user experience, which is critical
AI to tackle Experience Degradation
AI within an FSO platform is used not just to identify the dependencies across the various stacks of an application, but also to correlate the data, address issues, and right size the resources as they relate to performance and costs across the full life cycle of the application.
It is all about utilizing the Visibility Insights Architecture across a hybrid environment, enabling a balance of performance and costs through real-time analytics powered by AI. The outcome to solve for is experience degradation, which cannot be addressed individually in each of the domains (application, network, security, infrastructure), but only by intelligently taking a holistic approach, with the ability to drill down as required.
Cisco is ideally positioned to provide this FSO platform, with AppDynamics™ and Secure App at the core, combined with ThousandEyes™ and Intersight™ Workload Optimizer, providing a true end-to-end view for analyzing – and in turn curbing – the business impact of any issue in real time. This enables an enterprise's infrastructure operators and application operators to work closely together, breaking down silos and enabling the closed-loop operating model that is paramount in today’s heterogeneous environments.
Information technology has transformed our lives entirely. Both organizations and individuals are excited about cloud computing, and leading organizations are keen to engage skilled IT professionals to implement the latest technologies and improve their business operations. The 200-901 DEVASC is the exam you need to take to achieve the Cisco Certified DevNet Associate certification, which confirms your skills in automation, cloud computing, and network infrastructure and qualifies you for job profiles such as software developer, DevOps engineer, and automation specialist. Moreover, you will stand out from other cloud computing professionals, as your skills will be confirmed by one of the top vendors of the most sought-after IT certifications in the world – Cisco.
Detailed Information on the Cisco 200-901 DEVASC Exam
Cisco 200-901 is indeed essential for your career, as it can help you acquire advanced skills in software development and automation. If you register to take this exam, you will be examined on the following topics:
Software Development and Design (15%)
Understanding and Using APIs (20%)
Cisco Platforms and Development (15%)
Application Deployment and Security (15%)
Infrastructure and Automation (20%)
Network Fundamentals (15%)
When it comes to the prerequisites for this exam, they are simple. Cisco states that applicants should have a sound knowledge of the topics assessed by the 200-901 exam. Your chances of passing are also higher if you have one year of work experience as a software developer and prior experience with Python programming.
As for the particulars of the exam, applicants have to answer 90-110 questions in two hours. Hence, you need a solid understanding of all the exam topics if you want to have sufficient time to answer all the questions. That is why it's essential to review the 200-901 syllabus topics before you start preparing. This will help you understand what preparation resources you need in order to acquire the right skills to pass your exam.
Cisco 200-901 DEVASC Exam Preparation Options
Understanding the exam objectives and their subtopics is the first step you should take in your preparation journey. After learning these domains and all their topics, the next step is determining which study materials will offer the understanding required for each topic.
Cisco itself provides a training course and other helpful resources for acquiring the relevant skills to tackle the exam questions.
Cisco training is important for aspirants who want to prepare for and pass the test on the first attempt. A certified instructor will give you all the knowledge required to ace the 200-901 exam questions and get a passing score. Apart from the official course, there are other useful Cisco DevNet 200-901 study resources, including e-learning, hands-on labs, and online videos. You can also buy the Cisco Certified DevNet Associate DEVASC 200-901 Official Cert Guide from Amazon or the Cisco Press store.
You can also take advantage of third-party sources and attempt Cisco 200-901 practice tests. Most applicants choose this option as an excellent addition to their preparation, improving their chances of passing Cisco 200-901 with a great score. With a DEVASC 200-901 practice exam, you can gauge what score you would get in the actual exam. If you answer some questions incorrectly, you can review the correct answers, go back to the topic, and work to improve that area.
How Will Your Career Benefit from the Cisco 200-901 DEVASC Exam?
With the massive upswing in demand for information technology professionals in today's world, passing the Cisco 200-901 exam and becoming a Cisco Certified DevNet Associate has its advantages. Because of the prevalence of Cisco, it is easy to see why professionals with Cisco certifications are preferred over those without. Besides standing out from the crowd of non-certified professionals, you also get an opportunity to evaluate and confirm your skill set.
Moreover, after thorough preparation, you will not only understand software design and development techniques, APIs, Cisco platforms, application deployment, security, automation, and networking, but you will also prove that you are a skilled DevNet professional and give organizations a solid reason to employ you. And if you're already working in the network field, you may see a rise in your salary thanks to the 200-901 exam and the associated associate-level Cisco certification. For instance, a network engineer with certified Cisco networking skills can earn almost $75k a year, as reported by Payscale.com.
Conclusion
Professionals who hold a Cisco certification can be relied upon to perform better in their roles and progress in their careers. The same goes for applicants who have passed the Cisco 200-901 DEVASC exam and achieved the Cisco Certified DevNet Associate certification. So, if you are a software developer, DevOps engineer, system integration programmer, network automation engineer, or other relevant IT professional, do not delay sitting for the Cisco DevNet Associate exam: validate your expertise in working with Cisco networks and APIs, and move even closer to your professional goal.
SecureX is Cisco’s free, acronym-defying security platform. (“Is it XDR? Is it SOAR? Does it solve the same problems as a SIEM? As a TIP?” “Yes.”) From the very beginning, one of the pillars of SecureX was the ability to consume and operationalize your local security context alongside global threat intelligence.
And to that end, SecureX includes, by default, a few very respectable threat intelligence providers:
➥ The Cisco Secure Endpoint File Reputation database (formerly AMP FileDB) composed of reputation ratings for billions of file hashes collected from multiple sources including Talos, Cisco Malware Analysis and Secure Endpoint
➥ The AMP Global Intelligence database, aka SecureX Public Intelligence, curated from several internal and open source threat intelligence sources
➥ And, of course, the Talos intelligence database, full of all manner of information discovered by the global Talos research team and their advanced and often custom tooling
Also included is the Private Intelligence repository, which allows you to upload or create your own intelligence for inclusion in SecureX investigations.
But, there is a lot more to the world of threat intelligence than those three sources alone. Every research organization, whether free or paid, open or private, has their own area of focus, their own methods, their own guidelines and policies and practices, and their own view on any given threat. While it’s not true that more automatically equals better, a more complete and holistic view is often more valuable than a narrower view. That is, in fact, one of the primary design considerations for, and motivating reasons for the very existence of, SecureX itself.
And many of our customers are already using additional sources – we knew that on day one, several years ago now, when we incorporated support for VirusTotal into the first version of what would become SecureX threat response.
That was also a driving reason behind the rollout of the remote relay modules last summer, which allow users to tie in arbitrary data sources. This design allows SecureX users to “roll their own” modules, deploy the code in their environments, and thereby leverage whatever they want as a resource in investigations.
Then we wrote and published a number of relays that were for specific well-known threat intelligence sources for users to deploy.
Recently, we have internalized these relays and are hosting them ourselves to simplify the way our customers incorporate them into their own SecureX environments. For Cisco-provided third-party relays, there is no longer a need to download, configure, and stand up a relay service.
This drastically decreases the investment in time and effort required to benefit from a multitude of available tools. Some of these tools are on-premises security controls or detection tools, but many are global threat intelligence providers – and many of those are free to use.
As I was setting up a few of them myself, I realized how easy and fast this was – a click, perhaps a paste of an API key, another click, and it was done. Then I saw how many more there were. And I wondered… how long would it take to get 10 of these added, and how much would it change the nature of an investigation?
For this experiment, I used the following, chosen somewhat arbitrarily and listed purely in alphabetical order:
➥ APIvoid
➥ AbuseIPDB
➥ CyberCrime Tracker
➥ Farsight DNSDB
➥ Google Safe Browsing
➥ Pulsedive
➥ Shodan
➥ ThreatScore
➥ io
➥ VirusTotal
Several additional providers of threat intelligence options are available, and several of those are also free or at very low cost (literally under $5/mo in one case).
So, how fast can 10 completely free threat intel sources be added into SecureX, and how do they enhance the scope of an investigation? You can see the video detailing the results here:
How can manufacturers accelerate digitization? The payoffs are huge. Think predictive maintenance to reduce operational costs. Or, “digital twinning” to simulate changes to assets or processes and create new business opportunities. Using network devices as sensors to improve cybersecurity. With rewards like this at stake, what’s stopping manufacturers from going all-in on the industrial IoT?
The sticking point isn’t connecting assets like robots, cameras, and sensors to industrial switches. That’s now simple, thanks to interoperability standards like PROFINET, EtherNet/IP (from ODVA), and OPC UA. The tricky part is what comes next—network management. Operational technology (OT) teams need to prevent unplanned downtime, optimize network performance, and improve security. But they typically don’t have the network management skills or the tools, and IT’s tools require lots of expertise to set up and use.
I can’t count the times I heard some version of the following from OT teams:
“I’m not a network expert. If I could automate industrial switch configuration, be assured that things are working right, and get concrete suggestions when they’re not, I’d be in heaven.”
It’s high time to grant that wish. IT and OT need a common platform that meets both teams’ requirements.
Cisco DNA Center – common ground for OT and IT
The solution is now available with Cisco DNA Center. Cisco DNA Center is a network controller, proven in the largest IT networks over several years. It translates business intent into polices (aka intent-based networking) to automate network functions and improve performance. It’s made IT’s job much simpler—and it can do the same for OT. Cisco DNA Center gives you the assurance and automation you need to manage the industrial network without deep network expertise. With a few clicks you can configure or update industrial switches, identify the source of problems – whether it’s a network device or connected system, and receive suggested actions for remediation.
Assurance: quickly see the source of problems, for swift remediation
Say a factory-floor scanner is acting erratically. The typical protocol today is to log into each industrial switch to look for the problem; meanwhile, your expensive equipment remains idle for hours. With Cisco DNA Center, you can quickly spot important network problems and see suggested actions. In this case, you might see that the scanner’s port is going up and down more often than normal, a clue that the problem is in the scanner, not the network. Cisco DNA Center might recommend that you check the scanner configuration.
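The "port flapping" clue in that scenario is essentially anomaly detection on interface state changes. Below is a hypothetical sketch of the idea, not Cisco DNA Center's actual algorithm: count link up/down transitions in a sliding time window and flag a port once it exceeds a baseline.

```python
from collections import deque

class FlapDetector:
    """Flag an interface whose link transitions exceed a baseline rate.

    A toy model of the assurance signal described above, not
    Cisco DNA Center's real analytics.
    """
    def __init__(self, window_s=300, max_transitions=5):
        self.window_s = window_s
        self.max_transitions = max_transitions
        self.events = deque()  # timestamps of up/down transitions

    def record_transition(self, ts):
        """Record one transition; return True if the port is now flapping."""
        self.events.append(ts)
        # Drop transitions that fell out of the sliding window.
        while self.events and ts - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_transitions

det = FlapDetector()
alerts = [det.record_transition(t) for t in (0, 30, 60, 90, 120, 150, 180)]
print(alerts)  # flagging starts once transitions pile up in the window
```

The baseline (window and threshold) would in practice be learned per port rather than fixed; the sketch only shows why frequent transitions on one port point at the attached device rather than the network.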
You can also use Cisco DNA Center to spot brewing problems before they affect production. Using AI/ML, for instance, Cisco DNA Center might learn that network congestion is starting to impact industrial automation traffic and suggest bandwidth upgrades or quality-of-service setting enhancements to maintain network performance for critical industrial applications.
Network automation: configure industrial switches faster, consistently, and at scale
Cisco DNA automation also simplifies management. Imagine you’re adding three new manufacturing cells with 50 industrial switches during an overnight downtime window. Manual configuration might take so long you can’t finish on time, delaying production. And just one typo on one industrial switch configuration can cause security vulnerabilities or prevent equipment from connecting to the right VLAN or transmitting the right telemetry information.
With Cisco DNA Center, you create a configuration template with the right operating system version, access controls, and settings. Then you apply the template to all switches with a click. Consistent configuration helps OT keep the network working and gives IT the confidence that network and security policies are consistent.
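The template idea can be sketched in a few lines. This is a generic illustration using Python's string templating, with made-up variable and interface names – not Cisco DNA Center's actual template syntax: one template, many consistently rendered switch configs.

```python
from string import Template

# A simplified switch-config template; variable names are illustrative.
SWITCH_TEMPLATE = Template("""\
hostname $hostname
vlan $vlan
 name factory-cell-$cell
interface $uplink
 switchport access vlan $vlan
""")

def render_configs(switches):
    """Render one consistent config per switch from a single template."""
    return {s["hostname"]: SWITCH_TEMPLATE.substitute(s) for s in switches}

configs = render_configs([
    {"hostname": "cell1-sw1", "vlan": 110, "cell": 1, "uplink": "Gi1/0/1"},
    {"hostname": "cell2-sw1", "vlan": 120, "cell": 2, "uplink": "Gi1/0/1"},
])
print(configs["cell1-sw1"])
```

Because every switch is rendered from the same template, a typo can only exist in one place instead of fifty – which is the consistency argument made above.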
Bring OT and IT together
OT teams need to know when network issues affect operations and fix problems quickly. IT teams have the experience and network understanding to help. Cisco DNA Center brings both teams together for collaborative solutions. Sounds like heaven to me.
In April 2020, the Federal Communications Commission (FCC) allocated 1,200 megahertz of spectrum for unlicensed use in the 6 GHz band – the largest swath of spectrum approved for Wi-Fi since 1989. Opening the 6 GHz band more than doubles the amount of spectrum available for Wi-Fi, allowing for less congested airwaves, broader channels, and higher-speed connections, and enabling a range of innovations across industries. Since the FCC decision to open the 6 GHz band, 70 countries with 3.4 billion people have approved 6 GHz regulations or have them under consideration (source: Wi-Fi Alliance).
Organizations are increasing their use of bandwidth-hungry video, coping with growing numbers of client and IoT devices connecting to their networks, and speeding up their network edge. As a result, wireless networks are becoming oversubscribed, throttling application performance. This frustrates network users by degrading the user experience and reducing productivity.
Throughout this post, I have tried to cover the basics and the operating rules for Wi-Fi 6E in the 6 GHz band.
What is the “E” in Wi-Fi 6E?
The 802.11ax standard (Wi-Fi 6) also operates in the 2.4 GHz and 5 GHz bands. Because of this, Wi-Fi in the 6 GHz band will be identified by the name Wi-Fi 6E. This naming was chosen by the Wi-Fi Alliance to avoid confusion and identify 802.11ax devices that also support 6 GHz. The “6” represents the sixth generation of Wi-Fi and the “E” stands for extended.
Wi-Fi 6E: Increase in the Number of Channels
The 6 GHz band represents 1,200 MHz of spectrum, available from 5.925 GHz to 7.125 GHz. Considering that the 2.4 GHz band had only 11 channels, the new spectrum is a leap: Wi-Fi gains access to 59 20-MHz channels, 29 40-MHz channels, 14 80-MHz channels, and 7 160-MHz channels. On top of the existing 2.4 GHz and 5 GHz channels, this represents not only a lot of channels, but also a lot of wide channels for operating at high speeds.
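Those channel counts follow directly from the IEEE channelization of the 5.925–7.125 GHz band, where 6 GHz channel centers sit at 5950 + 5 × (channel number) MHz. A quick sketch to verify them:

```python
BAND_HIGH_MHZ = 7125  # upper edge of the 6 GHz band

def count_channels(width_mhz, first_ch, step):
    """Count channels of a given width that fit inside the 6 GHz band.

    Channel centers are 5950 + 5 * channel_number MHz; first_ch and
    step follow the IEEE 802.11ax 6 GHz channel numbering.
    """
    count, ch = 0, first_ch
    # Keep adding channels while the upper channel edge stays in-band.
    while 5950 + 5 * ch + width_mhz / 2 <= BAND_HIGH_MHZ:
        count += 1
        ch += step
    return count

for width, first, step in [(20, 1, 4), (40, 3, 8), (80, 7, 16), (160, 15, 32)]:
    print(f"{width} MHz: {count_channels(width, first, step)} channels")
```

Running this reproduces the 59 / 29 / 14 / 7 figures quoted above.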
Advantage of a huge spectrum
Wi-Fi has always had a very limited amount of spectrum: only about 80 MHz in the 2.4 GHz band and 500 MHz in the 5 GHz band, with DFS channels occupying part of the 500 MHz in the 5 GHz band. This left very little contiguous spectrum and made it difficult to find or enable 80 MHz or 160 MHz channel widths – yet maximum Wi-Fi data speeds can only be achieved with these channel widths.
With the 59 20-MHz channels, Wi-Fi 6E will effectively remove congestion issues. At least for the foreseeable future, there will always be at least one 20 MHz channel available without congestion. Thanks to the contiguous spectrum and the 14 80-MHz channels or the 7 160-MHz channels to choose from, a radio will be able to find a channel available, free of congestion. This enables the technology to deliver the highest speeds.
Background on Wi-Fi Standards
Two main groups are responsible for shaping Wi-Fi’s evolution: the Wi-Fi Alliance and the IEEE. IEEE 802.11 defines the technical specifications of the wireless LAN standard. The Wi-Fi Alliance focuses on certifying Wi-Fi devices for compliance and interoperability, as well as marketing Wi-Fi technology.
Over time, the Wi-Fi Alliance gave different generations of Wi-Fi simpler naming conventions. Rather than “802.11b”, it’s just “Wi-Fi 1” – much like how mobile phone companies refer to 3G and 5G as different network speeds, even though the term is largely a marketing tool. This classification is meant to make things easier for consumers: instead of deciphering a whole alphabet soup, users can just look for “Wi-Fi 4” or “Wi-Fi 6” as what they need.
The IEEE 802.11ax standard for high efficiency (or HE) covers MAC and PHY layer operation in the 2.4 GHz, 5 GHz and 6 GHz bands.
IEEE Rules for Wi-Fi 6E
HE (High Efficiency)-only operation in the 6 GHz band
One of the most important decisions made by the IEEE 802.11ax group is that it disallows older generation Wi-Fi devices in the 6 GHz band. This is very important because it means that only high efficiency 802.11ax devices will be able to operate in this band.
Historically, new Wi-Fi standards have always provided backward compatibility with previous standards. This was a boon to customers as well as vendors, since network equipment didn't need to be completely overhauled at each new standard. The flip side is that backward compatibility is a source of congestion on the protocol, since legacy equipment shares the available spectrum with newer devices. In the 6 GHz band, however, only new high efficiency devices will be allowed to operate.
When using the analogy of road transport to describe Wi-Fi, the 2.4 GHz and 5 GHz band can be compared to congested roads where both fast and slow vehicles travel, while the 6 GHz band is the equivalent of a new, large highway that only allows the fastest cars.
Fast Passive Scanning
With 1,200 MHz of spectrum and 59 new 20 MHz channels, a station with a dwell time of 100 ms per channel would require almost 6 seconds to complete a passive scan of the entire band. The standard therefore implements a new, efficient process for clients to discover nearby access points (APs). In Wi-Fi 6E, fast passive scanning focuses on a reduced set of channels called preferred scanning channels (PSCs). For 6 GHz-only operation, a specific subset of channels is identified as PSCs, where the primary channel of a wide-channel BSS should reside, limiting the channels a client needs to scan to discover a 6 GHz-only AP. PSCs are spaced 80 MHz apart, so a client only needs to scan 15 channels.
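The time savings from PSC-based scanning are easy to quantify. Assuming the 100 ms per-channel dwell time from the example above, a quick sketch:

```python
DWELL_MS = 100  # per-channel dwell time, as in the example above

# 6 GHz PSCs are 20 MHz channels spaced 80 MHz apart:
# channel numbers 5, 21, 37, ..., 229 (every 16th channel number).
psc_channels = list(range(5, 230, 16))

full_scan_s = 59 * DWELL_MS / 1000              # all 59 20-MHz channels
psc_scan_s = len(psc_channels) * DWELL_MS / 1000  # PSCs only

print(f"{len(psc_channels)} PSCs: {psc_scan_s} s vs full band: {full_scan_s} s")
```

Scanning only the 15 PSCs cuts the worst-case passive scan from roughly 5.9 seconds to 1.5 seconds.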
Out-of-band discovery
Dual-band or tri-band APs operating in the 6 GHz band as well as in a lower band (2.4 GHz or 5 GHz) will be discoverable by scanning the lower bands. In the lower band, APs will include information about the 6 GHz BSS in a reduced neighbor report in beacons and probe response frames. The client first goes to the lower bands, discovers the AP there, and then moves to the 6 GHz band. This reduces the probe requests sent by stations just trying to find APs, since probing in 6 GHz is not allowed except on a PSC channel.
Wi-Fi 6E Channelization
The 802.11ax standard defines channel allocations for the 6 GHz band, determining the center frequencies for 20 MHz, 40 MHz, 80 MHz, and 160 MHz channels over the entire band. However, regulatory domain specifications take precedence over the IEEE specification, and channels that fall on, or overlap, frequencies not supported in a regulatory domain cannot be used.
AFC and Avoiding Incumbent Users
The FCC defines AP device classes with very different transmit power rules. The goal is to avoid potential interference with existing 6 GHz incumbents. Several classes of APs are defined to adapt to the U-NII bands and the conditions in which they will operate: the standard power (SP) AP, the low power indoor (LPI) AP, and the very low power (VLP) AP. The low power APs, as the name implies, have reduced power levels since they are only used indoors.
Outdoor, standard power APs have a serious potential of interfering with existing 6 GHz users in their geographic area. Fixed satellite services (FSS) used in the broadcast and cable industries might already have a license for the channels in use; therefore, any new unlicensed users (Wi-Fi) must ensure they do not impact the current services. The answer is to coordinate spectrum use to avoid interference: a new wireless device (access point) consults a registered database to confirm its operation will not impact a registered user. For 6 GHz operation, this is called an Automated Frequency Coordination (AFC) provider.
Standard power APs must use an AFC service to protect incumbent 6 GHz operations from RF interference.
With the digital transformation of businesses, security is moving to the cloud. This is driving a need for converged services that reduce complexity, improve speed and agility, enable multicloud networking, and secure the new SD-WAN-enabled architecture. Secure Access Service Edge (SASE) is the convergence of networking and security that is transforming the way organizations deliver these services from the cloud. One of the key functions in SASE is SD-WAN, which connects users securely to applications and data regardless of location. Miercom recently conducted an independent study validating the setup simplicity of Cisco SD-WAN powered by Viptela with Cisco Umbrella integration, which together offer customers a simple, intuitive, and complete Secure Access Service Edge (SASE) solution.
In today’s fast-paced, technology-driven world, customers want a simple, seamless experience and expect a solution that is easy to deploy from day 0 through day N. Cisco’s solution is simple to set up, intuitive, and includes a true zero-touch SD-WAN deployment that is faster to roll out and configure. The competition’s setups, by contrast, are complex, with multiple touchpoints requiring manual intervention, no automated process, no template-based guided workflows, and a more confusing experience to navigate.

Cisco offers a cloud-hosting subscription where customers can self-provision SD-WAN controllers after order submission through a simple workflow on the Cisco SD-WAN self-service portal (SSP), which deploys vManage, vSmart, and vBond in the public cloud with a secondary vBond and vSmart for high availability in the desired region. Customers can sit back and relax while the control-plane and management-plane setup is completed by the cloud infrastructure automation tooling, without any support intervention. Cisco SD-WAN’s integration with Cisco Umbrella via Cisco Smart Account licensing allows template-based configuration workflows and automated secure tunnel deployment between SD-WAN routers and the nearest Cisco Umbrella data center. As validated by Miercom, Cisco proved more efficient in unified management, mostly from a single platform (vManage), making it simple for even lean IT teams to manage via preloaded templates and troubleshooting features.
Conversely, the competition offers complex integration between its SD-WAN and cloud security offerings, involving multiple touchpoints and steps that make the process time consuming and require support intervention at several stages of setup. To start with, the competition requires multiple accounts, creating complexity for customers before they even reach the deployment stage. During deployment, considerable technical expertise is needed to integrate the competitive SASE solution, complicating things further. Because the competition requires support intervention at multiple stages of the day 0 and day 1 experience, the whole deployment process takes days instead of a couple of hours.
Cisco also supports multiple browsers (Google Chrome, Safari, Firefox), giving customers flexibility in how they access the vManage dashboard. The vManage dashboard offers a network topology view with guided troubleshooting workflows, making it easy for customers to remediate issues. The competition has browser dependencies and does not offer the same flexibility, and Miercom found its troubleshooting process to be basic and ineffective.
When we look at Cloud OnRamp for IaaS/SaaS, Cisco provides template-based configuration workflows within Cisco vManage that, once completed, integrate with AWS to automatically deploy virtual instances of Cisco SD-WAN routers within defined AWS data centers. These routers are deployed with redundancy and dynamic routing services. The competition’s equivalent IaaS/SaaS setup is a manual configuration process, with no templates or automated workflows to ease deployment.
Finally, Cisco SD-WAN presents customers with a plethora of deployment options for the control and management planes that the competition fails to offer. Customers can either deploy the SD-WAN controllers on their premises using virtual machine options or choose the Cloud Ops deployment: a completely cloud-hosted option in which every component of the control plane is deployed transparently by Cisco and handed over to the customer for management.
Cisco Viptela SD-WAN with Umbrella offers an easy-to-deploy, flexible, robust, and cost-effective SASE solution, making it a perfect choice for customers.