Thursday, 14 September 2023

How to run Cisco Modeling Labs in the Cloud

You might think one answer to these problems would be to use CML in the cloud. And you’d be right. However, up until recently, the only supported platforms to run CML were either on bare metal servers or on VMware vSphere.

We have heard requests for CML as a Software-as-a-Service (SaaS) offering, and we’re working hard to make this a reality in the future. Our first step in this direction is to provide tooling and automation so you can deploy your own CML instance into Amazon Web Services (AWS)! This tooling is available on GitHub as of today.

Setting expectations


With this first step of automation and tooling comes a few limitations, including:

  • Tooling is currently only supported on AWS. We’re working on making this also available on Azure in a subsequent release.
  • It only supports an all-in-one deployment. Subsequent releases could include deployment of multiple instances to form a CML cluster.
  • This approach needs a bare-metal flavor to support all node types. Bare-metal flavors are more expensive than virtualized instances; however, unlike Azure, AWS does not support virtualization extensions on its non-bare-metal flavors.
  • You need to bring your own AWS instance AND your own CML license. No pay-as-you-go consumption model is available as of today.
  • CML software and reference platform files from the “refplat ISO” need to be made available in a bucket.
  • Automation must run locally on your computer, specifically a Linux machine with Terraform installed.

Due to the nature of CML’s function, running it in the cloud will never be cheap (as in free-tier). CML requires a lot of resources (memory, disk, and CPU), which comes at a cost regardless of whether you run it locally on your laptop, in your data center, or in the cloud. The idea behind the cloud is to simplify operations and provide elasticity, not necessarily to save money.

Meeting software requirements


The software requirements you’ll need to successfully use the tooling include:

  • a Linux machine (should also work on a Mac with the same packages installed via Homebrew)
  • a Bash shell (in case you use the upload tool, which is a Bash script)
  • a Terraform installation
  • the AWS CLI package (awscli with the aws command)
  • the CML software package (.pkg) and the CML reference platform ISO from CCO/cisco.com

An existing CML controller satisfies the first two requirements, and you can use it to install Terraform and the AWS CLI. It also has the reference platform files available to copy to an AWS S3 bucket. You must also download the CML distribution package from the Cisco support website and copy it to the same S3 bucket.

Select the distribution package circled in the following screenshot (the version might be different, but the file name ends in .pkg.zip). You’ll need to unzip it for the upload tool to recognize it.

For more detail, refer to the “Upload script” section of the README.md that is included in the cml-cloud repository.
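
If you would rather script the copy yourself instead of using the bundled Bash upload tool, a minimal boto3 sketch along these lines could work; the bucket name, package file name, and directory layout here are assumptions, so adjust them to match what the README expects.

```python
# Minimal sketch: copy the unzipped CML .pkg and the reference platform files to S3.
# Bucket name, file names, and directory layout are hypothetical placeholders.
import os
import glob
import boto3

BUCKET = "my-cml-bucket"        # hypothetical; the bucket created in the AWS console step
PKG_FILE = "cml2_amd64.pkg"     # hypothetical name; use your actual unzipped .pkg file
REFPLAT_DIR = "refplat"         # hypothetical directory holding the reference platform files

s3 = boto3.client("s3")

# Upload the CML software package.
s3.upload_file(PKG_FILE, BUCKET, os.path.basename(PKG_FILE))

# Upload every reference platform file, reusing the relative path as the S3 key.
for path in glob.glob(os.path.join(REFPLAT_DIR, "**", "*"), recursive=True):
    if os.path.isfile(path):
        s3.upload_file(path, BUCKET, path.replace(os.sep, "/"))
```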

Getting up and running


Once you’ve installed the requirements and copied the files, you’ll find the actual procedure straightforward and meticulously documented in the README.md.

Here are the fundamental steps:

1. Configure the required S3 bucket, user, policies, secrets, and rules via AWS console (once).
2. Upload the binary files (images and software) into the created bucket (once or whenever new software is available).
3. Configure the tooling by editing the config.json file (once).
4. Run terraform plan followed by terraform apply to bring up an instance.
5. Wait 5-10 minutes for the system to become ready; the address of the controller is provided as a result (“output” from Terraform); see the sketch after this list.
6. Use CML in the cloud and profit!
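
For step 5, you could poll the new controller from the same machine until it answers. Here is a rough Python sketch; the Terraform output name and the self-signed-certificate handling are assumptions, so check the repository’s actual output definitions.

```python
# Rough sketch: read the controller address from the Terraform output and poll it
# until the web UI answers. The output name "cml2_controller_address" is a guess;
# check the repository's outputs. Assumes terraform is on PATH and `terraform apply`
# has already been run in this directory.
import ssl
import subprocess
import time
import urllib.request

addr = subprocess.run(
    ["terraform", "output", "-raw", "cml2_controller_address"],
    capture_output=True, text=True, check=True,
).stdout.strip()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # a fresh instance answers with a self-signed certificate

for attempt in range(40):         # roughly 20 minutes at 30-second intervals
    try:
        with urllib.request.urlopen(f"https://{addr}/", context=ctx, timeout=5):
            print(f"CML is reachable at https://{addr}/")
            break
    except OSError:
        print(f"attempt {attempt + 1}: not ready yet, waiting...")
        time.sleep(30)
```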

Once you’re done, tear down the cloud infrastructure by executing terraform destroy.

Note: While no cost is incurred when you are not running CML instances, you’ll still need to pay for storing the files inside the created S3 bucket.

Taking the next steps


While the CML AWS automation tooling is a first step toward CML SaaS, the tooling in its current form might not fit your needs exactly because of the cost of bare-metal instances or the current dependency on AWS. Or you might want a pay-as-you-go service or something else. Let us know!

Just remember subsequent steps are ahead! Stay tuned, and tell us what you think in the meantime. We are extremely interested in how useful (or not) this first iteration of cloud tooling is to you and your organization and, going forward, what your specific requirements are.

Source: cisco.com

Tuesday, 12 September 2023

Cisco Catalyst IE9300 Rugged Series switches: Enterprise-grade industrial-strength

Realizing the full potential of industrial digitization requires extensive connectivity of operations assets wherever they might be – at busy city intersections, inside utility substations, in rail and subway stations, on production lines with extreme temperatures and high vibration, within wind or solar farms, in mines, and in oilfields. In these kinds of harsh environments, organizations need to deploy, secure, and maintain a wide range of connected devices. Full connectivity is the starting point, and it needs a network that is scalable, resilient, and secure, and that incorporates proven IT practices to keep performing up to expectations.

A new class of industrial rackmount switches


In January last year, Cisco launched the first two products in the Cisco Catalyst IE9300 Rugged Series Switches portfolio. These switches are closely related to the widely adopted Cisco Catalyst 9000 family, sharing the same hardware ASICs and the same IOS XE operating system, and they offer the same level of network automation, assurance, and policy enforcement through Cisco Catalyst Center (previously known as Cisco DNA Center). This year, we are extending that portfolio with one of the industry’s most innovative and comprehensive product sets.

Figure 1: Catalyst IE9300 Rugged Series all-fiber models

The new all-fiber and all-copper models of these rackmount, Layer 3 switches deliver the same security, scalability, and automation that customers have come to expect from our Catalyst 9000 enterprise-grade rackmount switches. But the Catalyst IE9300 switches are ruggedized for industrial environments – unlocking new opportunities to bring enterprise-grade networking to industrial networks.

One switch family, unlimited possibilities


Specific features make these multifunction switches especially powerful and versatile. For example, the latest models offer higher Power over Ethernet (PoE) wattage and high PoE budget (up to 720W). That means organizations can connect more – and higher-power, higher-bandwidth – endpoints, including Wi-Fi 6/6E access points, 4K UHD and PTZ cameras, digital signage, and even thin clients and user laptops, to name a few.

These models also provide higher bandwidth – up to 2.5GE downlinks and 10GE uplinks – for high-bandwidth endpoints and to enable data to be backhauled from many access switches in field deployments such as road intersections, railroads, and manufacturing environments. For utilities, the products’ high-density fiber ports and IEC 61850 compliance make them ideal for substation automation. Across industry sectors and use cases, Software-Defined Access makes it easier to interface industrial networks with enterprise networks. The switches also unlock the benefits of Cisco Cyber Vision and Endpoint Analytics to enhance visibility and security throughout industrial networks.

Figure 2: Catalyst IE9300 Rugged Series all-copper models

The IE9300 family is built to withstand extreme temperatures and is hardened for vibration, shock and surge, and electrical noise. These switches offer extended durability thanks to no moving parts and their fanless, convection-cooled design. And they comply with specifications for several industries – from Intelligent Transport Systems (ITS) to utility substation environments.

To put it more conversationally, you can think of the Catalyst IE9300 Rugged Series as the Layer 3 switches that you can use for (almost) everything and (nearly) everywhere!

Use cases for the Catalyst IE9300 Rugged Series Switches


One of the best ways to illustrate the potential of these new products is to describe some of the use cases they make possible:

  • High density fiber access. Fiber ports offer several benefits over copper. Fiber cables are immune to electromagnetic radiation, offer safer transmission in hazardous conditions due to their electric isolation, and can transmit data over much longer distances without experiencing signal degradation or loss of quality. Use cases for fiber include industries such as utilities that are modernizing substations using native fiber devices, and traffic backhaul from field deployments.
  • Clock input and precision timing. GPS and IRIG-B inputs allow network synchronization, ensuring that different devices across the network work with the same time reference, which is crucial for applications requiring coordinated actions. For example, in the energy sector, accurate time synchronization is crucial for monitoring power grid events, fault detection, and grid stability. Further, the Precision Time Protocol (PTP) Power Profile built into the IE9300 ensures 50ns per-hop accuracy, keeping the cumulative error within 1µs over 16 switch hops (16 × 50ns = 800ns).
  • Aggregation and cost-efficiency. A 10G-uplink fiber aggregation switch makes it possible to connect Resilient Ethernet Protocol (REP) and Media Redundancy Protocol (MRP) rings in non-climate-controlled field points of presence. This use case has broad potential in field deployments such as roadways and wind and solar farms. The 10G uplinks help avoid the oversubscription that could occur with Gigabit-only switches.
  • Distribution layer switching. Uplink ports open new opportunities for the IE9300 to be used as a distribution layer switch that you can deploy right in dusty and hot environments, ensuring that critical data flows smoothly between access switches and the core network. Stacking capabilities of the IE9300 ensure scale and redundancy.
  • High-wattage and high-density PoE. Copper models of the IE9300 offer a variety of PoE options and can provide power to connected devices with a total of up to 720W per switch. Note that the IE9300 delivers 720W of PoE power while still maintaining a 1RU form factor, a first in the industry. Moreover, you can configure the switch to deliver up to 90W on a single 2.5GE port, combining high bandwidth with high power on a single port and enabling new use cases (see the budget check sketched after this list).
  • Flexibility and scalability. Although the IE9300 switches have a fixed port count, multiple units can be stacked to increase the number of available ports while still appearing as a single switch, which reduces configuration complexity. Management by Cisco Catalyst Center makes onboarding and reconfigurations easy, increasing flexibility to keep pace with operations.
  • Visibility and security. Granular visibility into connected assets and network traffic is the necessary first step in securing operations. Compute capabilities within the IE9300 allow it to run Cisco Cyber Vision sensors, which provide visibility and risk assessments and help form the basis for network segmentation.
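
The PoE numbers above are easy to sanity-check in code. Here is a toy Python sketch that tests a hypothetical port plan against the 720W switch budget and the 90W per-port ceiling; the device names and power draws are invented for illustration.

```python
# Toy example: check a proposed set of PoE endpoints against the switch budget.
# The 720 W total and 90 W per-port figures come from the article; the device
# list and power draws are invented.
TOTAL_BUDGET_W = 720
MAX_PER_PORT_W = 90

devices = {           # port -> requested power draw in watts (hypothetical)
    "Gi1/0/1": 60,    # Wi-Fi 6E access point
    "Gi1/0/2": 25,    # PTZ camera
    "Gi1/0/3": 90,    # high-power endpoint on a 2.5GE port
    "Gi1/0/4": 15,    # digital signage player
}

over_port = [p for p, w in devices.items() if w > MAX_PER_PORT_W]
total = sum(devices.values())

print(f"Requested total: {total} W of {TOTAL_BUDGET_W} W budget")
if over_port:
    print("Ports exceeding the per-port limit:", ", ".join(over_port))
elif total > TOTAL_BUDGET_W:
    print("Over the switch PoE budget; shed load or add a switch.")
else:
    print("Plan fits within the PoE budget.")
```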

As you look to evolve your industrial network and benefit from Industry 4.0 opportunities, look to the Catalyst IE9300 Rugged Series as your solution to connect everything – everywhere.

Source: cisco.com

Saturday, 9 September 2023

The New Normal is Here with Secure Firewall 4200 Series and Threat Defense 7.4

What Time Is It?


It’s been a minute since my last update on our network security strategy, but we have been busy building some awesome capabilities to enable true new-normal firewalling. As we release Secure Firewall 4200 Series appliances and Threat Defense 7.4 software, let me bring you up to speed on how Cisco Secure has elevated its game to protect your users, networks, and applications like never before.

Secure Firewall’s approach of inference-based traffic classification and cooperation across the broader Cisco portfolio continues to resonate with cybersecurity practitioners. The reality of hybrid work remains a challenge to inserting traditional network security controls between roaming users and multi-cloud applications. The lack of visibility and blocking in a traffic profile that is 95% encrypted is a painful problem that hits more and more organizations; a few lucky ones get in front of it before the damage is done. Both network and cybersecurity operations teams look to consolidate multiple point products, reduce noise, and do more with less; the Cisco Secure Firewall and Workload portfolio masterfully navigates all aspects of network insertion and threat visibility.

Protection Begins with Connectivity


Even the most effective and efficient security solution is useless unless it can be easily inserted into an existing infrastructure. No organization would go through the trouble of redesigning a network just to insert a firewall at a critical traffic intersection. Security devices should natively speak the network’s language, including encapsulation methods and path resiliency. With hybrid work driving much more distributed networks, our Secure Firewall Threat Defense software followed suit by expanding the existing dynamic routing capabilities with application-based and link-quality-based path selection.

Application-based policy routing has been a challenge for the firewall industry for quite some time. While some vendors use their existing application identification mechanisms for this purpose, those require multiple packets in a flow to pass through the device before the classification can be made. Since most edge deployments use some form of NAT, switching an existing stateful connection to a different interface with a different NAT pool is impossible after the first packet. I always get a chuckle when reading those configuration guides that first tell you how to enable application-based routing and then promptly caution you against it due to NAT being used where NAT is usually used.

Our Threat Defense software takes a different approach, allowing common SaaS application traffic to be directed or load-balanced across specific interfaces even when NAT is used. In the spirit of leveraging the power of the broader Cisco Secure portfolio, we ported over a thousand cloud application identifiers from Umbrella, which are tracked by IP addresses and Fully Qualified Domain Name (FQDN) labels so the application-based routing decision can be made on the first packet. Continuous updates and inspection of transit Domain Name System (DNS) traffic ensures that the application identification remains accurate and relevant in any geography.

This application-based routing functionality can be combined with other powerful link selection capabilities to build highly flexible and resilient Software-Defined Wide Area Network (SD-WAN) infrastructures. Secure Firewall now supports routing decisions based on link jitter, round-trip time, packet loss, and even voice quality scores against a particular monitored remote application. It also enables traffic load-balancing with up to 8 equal-cost interfaces and administratively defined link succession order on failure to optimize costs. This allows a branch firewall to prioritize trusted WebEx application traffic directly to the Internet over a set of interfaces with the lowest packet loss. Another low-cost link can be used for social media applications, and internal application traffic is directed to the private data center over an encrypted Virtual Tunnel Interface (VTI) overlay. All these interconnections can be monitored in real-time with the new WAN Dashboard in Firewall Management Center.
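
To make the flavor of link-quality-based selection concrete, here is a small vendor-neutral Python sketch that scores candidate egress links from measured round-trip time, jitter, and loss and picks the best one. It is purely illustrative: the interface names, weights, and thresholds are invented, and this is not how Threat Defense implements the feature internally.

```python
# Illustrative only: score candidate egress links from measured metrics and
# pick the best one. Weights and thresholds are made up for the example.
from dataclasses import dataclass

@dataclass
class LinkMetrics:
    name: str
    rtt_ms: float      # round-trip time
    jitter_ms: float
    loss_pct: float    # packet loss percentage

def score(link: LinkMetrics, weights=(1.0, 2.0, 50.0)) -> float:
    """Lower is better: weighted sum of RTT, jitter, and loss."""
    w_rtt, w_jit, w_loss = weights
    return w_rtt * link.rtt_ms + w_jit * link.jitter_ms + w_loss * link.loss_pct

def pick_link(links: list[LinkMetrics], max_loss_pct: float = 2.0) -> LinkMetrics:
    """Discard links above the loss threshold, then take the lowest score."""
    usable = [l for l in links if l.loss_pct <= max_loss_pct] or links
    return min(usable, key=score)

links = [
    LinkMetrics("isp-a", rtt_ms=28, jitter_ms=3, loss_pct=0.1),
    LinkMetrics("isp-b", rtt_ms=19, jitter_ms=9, loss_pct=1.5),
    LinkMetrics("lte",   rtt_ms=55, jitter_ms=12, loss_pct=0.4),
]
print(pick_link(links).name)   # -> "isp-a" with the example weights
```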

Divide by Zero Trust


The obligatory inclusion of Zero Trust Network Access (ZTNA) into every vendor’s marketing collateral has become a pandemic of its own in the last few years. Some security vendors got so lost in their implementation that they had to add an internal version control system. Once you peel away the colorful wrapping paper, ZTNA is little more than per-application Virtual Private Network (VPN) tunnel with an aspiration for a simpler user experience. With hybrid work driving users and applications all over the place, a secure remote session to an internal payroll portal should be as simple as opening the browser – whether on or off the enterprise network. Often enough, the danger of carelessly implemented simplicity lies in compromising the security.

A few vendors extend ZTNA only to the initial application connection establishment phase. Once a user is multi-factor authenticated and authorized with their endpoint’s posture validated, full unimpeded access to the protected application is granted. This approach often results in embarrassingly successful breaches where valid user credentials are obtained to access a vulnerable application, pop it, and then laterally spread across the rest of the no-longer-secure infrastructure. Sufficiently motivated bad actors can go as far as obtaining a managed endpoint that goes along with those “borrowed” credentials. It’s not entirely uncommon for a disgruntled employee to use their legitimate access privileges for less than noble causes. The simple conclusion here is that the “authorize and forget” approach is mutually exclusive with the very notion of a Zero Trust framework.

Secure Firewall Threat Defense 7.4 software introduces a native clientless ZTNA capability that subjects remote application sessions to the same continuous threat inspection as any other traffic. After all, this is what Zero Trust is all about. A granular Zero Trust Application Access (ZTAA – see what we did there?) policy defines individual or grouped applications and allows each one to use its own Intrusion Prevention System (IPS) and File policies. The inline user authentication and authorization capability interoperates with every web application and Security Assertion Markup Language (SAML) capable Identity Provider (IdP). Once a user is authenticated and authorized upon accessing a public FQDN for the protected internal application, the Threat Defense instance acts as a reverse proxy with full TLS decryption, stateful firewall, IPS, and malware inspection of the flow. On top of the security benefits, it eliminates the need to decrypt the traffic twice as one would when separating all versions of legacy ZTNA and inline inspection functions. This greatly improves the overall flow performance and the resulting user experience.

Let’s Decrypt


Speaking of traffic decryption, it is generally seen as a necessary evil in order to operate any deep packet inspection (DPI) functions at the network layer – from IPS to Data Loss Prevention (DLP) to file analysis. With nearly all network traffic being encrypted, even the most efficient IPS solution will just waste processing cycles by looking at the outer TLS payload. Having acknowledged this simple fact, many organizations still choose to avoid decryption for two main reasons: fear of severe performance impact and potential for inadvertently breaking some critical communication. With some security vendors still not including TLS inspected throughput on their firewall data sheets, it is hard to blame those network operations teams who are cautious around enabling decryption.

Building on the architectural innovation of Secure Firewall 3100 Series appliances, the newly released Secure Firewall 4200 Series firewalls kick the performance game up a notch. Just like their smaller cousins, the 4200 Series appliances employ custom-built inline Field Programmable Gate Array (FPGA) components to accelerate critical stateful inspection and cryptography functions directly within the data plane. This industry-first inline crypto acceleration design eliminates the need for costly packet traversal across the system bus and frees up the main CPU complex for more sophisticated threat inspection tasks. These new appliances keep the compact single Rack Unit (RU) form factor and scale to over 1.5Tbps of threat inspected throughput with clustering. They will also provide up to 34 hardware-level isolated and fully functional FTD instances for critical multi-tenant environments.

Those network security administrators who look for an intuitive way of enabling TLS decryption will enjoy the completely redesigned TLS Decryption Policy configuration flow in Firewall Management Center. It separates the configuration process for inbound (an external user to a private application) and outbound (an internal user to a public application) decryption and guides the administrator through the necessary steps for each type. Advanced users will retain access to the full set of TLS connection controls, including non-compliant protocol version filtering and selective certificate blocklisting.

Not-so-Random Additional Screening


Applying decryption and DPI at scale is all fun and games, especially with hardware appliances that are purpose-built for encrypted traffic handling, but it is not always practical. The majority of SaaS applications use public key pinning or bi-directional certificate authentication to prevent man-in-the-middle decryption even by the most powerful of firewalls. No matter how fast the inline decryption engine may be, there is still a pronounced performance degradation from indiscriminately unwrapping all TLS traffic. With both operational costs and complexity in mind, most security practitioners would prefer to direct these precious processing resources toward flows that present the most risk.

Lucky for those who want to optimize security inspection, our industry-leading Snort 3 threat prevention engine includes the ability to detect applications and potentially malicious flows without having to decrypt any packets. The integral Encrypted Visibility Engine (EVE) is the industry’s first implementation of Machine Learning (ML)-driven flow inference for real-time protection within the data plane itself. We continuously train it with petabytes of real application traffic and tens of thousands of daily malware samples from our Secure Malware Analytics cloud. It produces unique application and malware fingerprints that Threat Defense software uses to classify flows by examining just a few outer fields of the TLS protocol handshake. EVE works especially well for identifying evasive applications such as anonymizer proxies; in many cases, we find it more effective than the traditional pattern-based application identification methods. With Secure Firewall Threat Defense 7.4 software, EVE adds the ability to automatically block connections that classify high on the malware confidence scale. In a future release, we will combine these capabilities to enable selective decryption and DPI of those high-risk flows for truly risk-based threat inspection.

The other trick for making our Snort 3 engine more precise lies in cooperation across the rest of the Cisco Secure portfolio. Very few cybersecurity practitioners out there like to manually sift through tens of thousands of IPS signatures to tailor an effective policy without blowing out the performance envelope. Cisco Recommendations from Talos has traditionally made this task much easier by enabling specific signatures based on actually observed host operating systems and applications in a particular environment. Unfortunately, there’s only so much that a network security device can discover by either passively listening to traffic or even actively poking those endpoints. Secure Workload 3.8 release supercharges this ability by continuously feeding actual vulnerability information for specific protected applications into Firewall Management Center. This allows Cisco Recommendations to create a much more targeted list of IPS signatures in a policy, thus avoiding guesswork, improving efficacy, and eliminating performance bottlenecks. Such an integration is a prime example of what Cisco Secure can achieve by augmenting network level visibility with application insights; this is not something that any other firewall solution can implement with DPI alone.

Light Fantastic Ahead


Secure Firewall 4200 Series appliances and Threat Defense 7.4 software are important milestones in our strategic journey, but the journey by no means stops there. We continue to actively invest in inference-based detection techniques and tighter product cooperation across the entire Cisco Secure portfolio to bring value to our customers by solving their real network security problems more efficiently. As you may have heard from me at the recent Nvidia GTC event, we are actively developing hardware acceleration capabilities to combine inference and DPI approaches in hybrid cloud environments with Data Processing Unit (DPU) technology. We continue to invest in endpoint integration both on the application side with Secure Workload and the user side with Secure Client to leverage flow metadata in policy decisions and deliver a truly hybrid ZTNA experience with Cisco Secure Access. Last but not least, we are redefining the fragmented approach to public cloud security with Cisco Multi-Cloud Defense.

The light of network security continues to shine bright, and we thank you for the opportunity to build the future of Cisco Secure together.

Source: cisco.com

Wednesday, 6 September 2023

Taming AI Frontiers with Cisco Full-Stack Observability Platform

The Generative AI Revolution: A Rapidly Changing Landscape


The public unveiling of ChatGPT has changed the game, introducing a myriad of applications for Generative AI, from content creation to natural language understanding. This advancement has put immense pressure on enterprises to innovate faster than ever, pushing them out of their comfort zones and into uncharted technological waters. The sudden boom in Generative AI technology has not only increased competition but has also fast-tracked the pace of change. As powerful as it is, Generative AI is often provided by specific vendors and frequently requires specialized hardware, creating challenges for both IT departments and application developers.

This is not a unique situation for technology breakthroughs, but the scale and potential for disruption across all areas of business are truly unprecedented. With ChatGPT prompt engineering making proof-of-concept projects easier than ever to demonstrate, demand for building new technologies with Generative AI exploded. Companies are still walking a tightrope, balancing the risk of compromising their intellectual property and confidential data against the urge to move fast and leverage the latest Large Language Models to stay competitive.

Kubernetes Observability


Kubernetes has become a cornerstone in the modern cloud infrastructure, particularly for its capabilities in container orchestration. It offers powerful tools for the automated deployment, scaling, and management of application containers. But with the increasing complexity in containers and services, the need for robust observability and performance monitoring tools becomes paramount. Cisco’s Cloud Native Application Observability Kubernetes and App Service Monitoring tool offers a solution, providing comprehensive visibility into Kubernetes infrastructure.

Many enterprises have already adopted Kubernetes as a major way to run their applications and products, both on-premises and in the cloud. When it comes to deploying Generative AI applications or Large Language Models (LLMs), however, one must ask: Is Kubernetes the go-to platform? While Cloud Native Application Observability provides an efficient way to gather data from all major Kubernetes deployments, there’s a hitch. Large Language Models have “large” in the name for a reason. They are massive, compute- and resource-intensive systems. Generative AI applications often require specialized hardware, GPUs, and large amounts of memory to function, resources that are not always readily available in Kubernetes environments; nor are the models themselves available in every region.

Infrastructure Cloudscape


Generative AI applications frequently push enterprises to explore multiple cloud platforms such as AWS, GCP, and Azure rather than sticking to a single provider. AWS is probably the most popular cloud provider among enterprises, but Microsoft’s partnership with OpenAI, which made GPT-4 available as part of Azure’s cloud services, was groundbreaking. With Generative AI, it is not uncommon for enterprises to go beyond one cloud, often spanning different services in AWS, GCP, Azure, and hosted infrastructure. Meanwhile, GCP and AWS are expanding their toolkits from the standard pre-GPT MLOps world to fully managed Large Language Models, vector databases, and other new concepts, so we will potentially see even more fragmentation in enterprise cloudscapes.

Troubleshooting distributed applications that span clouds and networks can be a dreadful task, consuming engineering time and resources and affecting the business. Cisco Cloud Native Application Observability provides correlated full-stack context across domains and data types. It is powered by Cisco FSO Platform, which provides building blocks to make sense of complex data landscapes with an entity-centric view and the ability to normalize and correlate data with your specific domains.

Beyond Clouds


As Generative AI technologies continue to evolve, the requirements to utilize them efficiently are also becoming increasingly complex. As many enterprises learned, getting a project from a very promising prompt-engineered proof of concept to a production-ready scalable service may be a big stretch. Fine-tuning and running inference tasks on these models at scale often necessitate specialized hardware, which is both hard to come by and expensive. The demand for specialized, GPU-heavy hardware is pushing enterprises to either invest in on-premises solutions or seek API-based Generative AI services. Either way, the deployment models for advanced Generative AI often lie outside the boundaries of traditional, corporate-managed cloud environments.

To address these multifaceted challenges, Cisco FSO Platform emerges as a game-changer, wielding the power of OpenTelemetry (OTel) to cut through the complexity. By providing seamless integrations with OTel APIs, the platform serves as a conduit for data collected not just from cloud native applications but also from any applications instrumented with OTel. Using the OpenTelemetry collector or dedicated SDKs, enterprises can easily forward this intricate data to the platform. What distinguishes the platform is its exceptional capability to not merely accumulate this data but to intelligently correlate it across multiple applications. Whether these applications are scattered across multi-cloud architectures or are concentrated in on-premises setups, Cisco FSO Platform offers a singular, unified lens through which to monitor, manage, and make sense of them all. This ensures that enterprises are not just keeping pace with the Generative AI revolution but are driving it forward with strategic insight and operational excellence.
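
To make the OpenTelemetry path concrete, here is a minimal Python sketch of an application exporting traces over OTLP to a collector that could then forward them onward; the service name, endpoint, and attributes are placeholders rather than anything specific to Cisco FSO Platform.

```python
# Minimal OpenTelemetry tracing setup; the OTLP endpoint and service name are
# placeholders for whatever your collector/platform configuration expects.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "genai-gateway"})  # hypothetical service name
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="https://otel-collector.example.com:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Wrap a call to an LLM backend so latency, errors, and attributes flow to the collector.
with tracer.start_as_current_span("llm.generate") as span:
    span.set_attribute("llm.provider", "azure-openai")  # example attribute
    # response = call_model(prompt)  # your application code here
```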

Bridging the Gaps with Cisco Full-Stack Observability


Cisco FSO Platform serves as a foundational toolkit to meet your enterprise requirements, regardless of the complex terrains you traverse in the ever-evolving landscape of Generative AI. Whether you deploy LLM models on Azure OpenAI Services, operate your Generative AI API and Authorization services on GCP, build SaaS products on AWS, or run inference and fine-tune tasks in your own data center – the platform enables you to cohesively model and observe all your applications and infrastructure and empowers you to navigate the multifaceted realm of Generative AI with confidence and efficiency.

Cisco FSO Platform extends its utility by offering seamless integrations with multiple partner solutions, each contributing unique domain expertise. But it doesn’t stop there—it also empowers your enterprise to go a step further by customizing the platform to cater to your unique requirements and specific domains. Beyond just Kubernetes, multi-clouds, and Application Performance Monitoring, you gain the flexibility to model your specific data landscape, thereby transforming this platform into a valuable asset for navigating the intricacies and particularities of your Generative AI endeavors.

Source: cisco.com

Tuesday, 5 September 2023

From frustration to clarity: Embracing Progressive Disclosure in security design

There are so many areas to consider when protecting against and detecting threats; unfortunately, cognitive overload is one problem that is often overlooked. Remember when search engines had a million news articles, reading suggestions, and market analyses on the home page? Users had to sift through the mountain of information and decide which source was best for them. This is a prime example of cognitive overload, and it is something most SOC analysts know too well. Too many options and complex steps make users feel frustrated and confused: their brains are given too much information to process and get overwhelmed. When Google came on the scene with a single search bar, users flocked to it because it changed the game. It helped organize data and surfaced the most relevant pieces of information. The single search bar on the page made it very easy for users to understand what they had to do. A clean results page made it abundantly clear which links were most important. Finally, very few prominent buttons on the page made it easy to know what the next step was.

The same concepts and problems appear in the security space, frustrating SOC analysts and making their jobs much harder. They deal with too much information, too many choices, and no real way to organize the data to help them make better data-driven decisions. To deliver the best possible user experience, designers leverage a technique called progressive disclosure. It is a pattern used to break information down into bite-sized pieces and feed it to the user as and when needed. A good example of this in everyday life is the average ATM. The first screen shows just a few options like withdraw, deposit, and check account balance. Within seconds, you understand what action you must take to deposit your money. Once you choose an option, it takes you to the next bite-sized step. Easy!

Similarly, the security world is filled with alerts, metrics, targets, etc. It is easy to fall into the cognitive overload trap. Cisco XDR uses progressive disclosure to help reduce that cognitive load, support novice and expert users, and help users to focus on high priority incidents and remediate quickly. Now, let us look at how we achieve that.

1. Risk Score


Incidents are ranked based on a color-coded risk score. Immediately the user’s focus is drawn to the high priority incidents that are marked with a red coded score. Novice users who are not familiar with the scoring method can hover over the score and see a popup with an explanation.

2. View Incident Details


Once an incident is selected, a drawer opens on the side. This provides a high-level overview of the incident. In a single glance the user can see the incident status, assignees, description, breakdown of risk score, and assets. The user can assess if this incident must be prioritized without having to leave the page. For further details, they can click on ‘View Incident Details’ to load a detailed page for the incident.

3. Control Center Tiles


The tiles displayed on the control center give a high-level overview of key metrics to better understand the health of the system without being too granular on the details. A user can create new dashboards or edit existing ones. This also helps the user see patterns and focus on areas that need to be prioritized.

4. Navigation Menu


Often, the overwhelming amount of information and actions that can be taken are spread across numerous screens. It can be easy for analysts to get lost in the maze. With Cisco XDR, we have grouped actions into 7 main categories, which are further broken down into 26 subcategories. We progressively take the user deeper into the product to get them to where they want to go.

5. Investigate Node Map


Mapping out an incident can sometimes look like a map of the Labyrinth. Files, assets, and IP addresses, to name a few, connected with numerous lines can be hard to decipher. Classic cognitive overload problem. XDR has grouped these so only key nodes are displayed in the map. On hover, each key node will expand to show more nodes and the lines connecting them will display more information on the relationship between each node. Clicking on a node will bring up a popup that displays options for further investigation.

Cisco XDR was built by SOC practitioners, for SOC practitioners, and lays out information in a consistent and easy to follow format – first a summary view of the data, then users can drill down to a detailed view of that same data, and finally if necessary (or out of pure interest and curiosity!) users can drill down again to see the raw data view. Using progressive disclosure and this consistent display of information, Cisco XDR helps SOC analysts view the information they need to move forward and take next steps to effectively mitigate threats. No more analysis paralysis, only data-based decisions here!

Friday, 1 September 2023

New Cisco Services Help You Achieve Business Outcomes— Faster

In my role, I have the incredible opportunity to meet trailblazing IT leaders just like you every day. Each has told me that in order to continue to innovate and help their organization thrive, they must align technology investments to business priorities and achieve remarkable, tangible results. But they can’t do it alone! IT leaders have also shared with me the need for strategic advisors with deep technical expertise and understanding of their business to inform their decisions and accelerate technology adoption.

In response to what we hear from customers like you, we are continually evolving our Customer Experience (CX) services portfolio. Today, I am excited to announce that we are launching a brand-new outcomes-driven offering – Cisco Lifecycle Services (LCS). These services shift your focus from IT challenges to business outcomes. LCS lets you start with your desired outcomes, then helps you identify and execute IT initiatives aligned to those outcomes, which allows you to demonstrate measurable results. You also get Cisco experts with advanced tools, automation, and AI/ML insights to accelerate time-to-outcomes.

“Companies require IT services that provide the scalability and adaptability to align to changing business and technology needs. Organizations of all sizes and across multiple industries need the ability to orient technology initiatives to discrete business outcomes with measurable KPIs. I believe that Cisco’s new Lifecycle Services is novel in its delivery mechanism to this end and leans into its depth of knowledge and capabilities.”
– Will Townsend, VP & Principal Analyst, Moor Insights and Strategy

Focus on Business Outcomes


We understand your business, industry, and technologies. Distilled from over 30 years of experience helping thousands of organizations worldwide, Cisco Lifecycle Services empowers YOU to:

1. Drive business outcomes with continuous engagement.

Let’s say your priority is to reduce risk, enhance customer experience, and increase operational excellence – these are your desired business outcomes (and we have 11 in our catalog). With your desired outcomes as the compass, Cisco experts help you identify and develop IT optimization and transformation strategies. We then work with your team and partners to prioritize, implement, and drive the adoption of these strategies so that you achieve tangible business outcomes.

2. De-risk and accelerate time-to-outcomes.

With this service, you make informed decisions. Our experts use AI/ML insights, tools, and automation to translate telemetry data into actions. You also accelerate time-to-outcomes by removing execution roadblocks. You close skills gaps and talent shortages with Cisco’s team of deep technical experts to fast-track planning, designing, implementing, and automating your IT environment.

3. Demonstrate measurable success.

At the beginning of the engagement, together with you, we identify outcome-aligned KPIs (Key Performance Indicators). Next, we use our automated KPI measurement tools and telemetry to create a baseline. Throughout the engagement, Cisco experts track, measure, translate, and report the impact aligned with your desired outcomes.

4. Exercise flexible choices that align with the way you work.

At Cisco, we’re committed to your success. We understand that your organization is unique and has its own ways of working. With this service, you get the flexibility to engage Cisco experts and our partners in the way that works best for you. Depending on your preference, we can:

  • Provide deep and meaningful advice with actionable recommendations.
  • Work with you as part of your team.
  • Do it for you with end-to-end delivery ownership.

And, should your business priorities change during the engagement, we realign the experts and IT initiatives to your new direction.

Simple, consistent, and integrated engagement model


When you choose Cisco and our partners, you expect a simple, consistent, and high-quality experience. Rooted in learnings from our delivery experts and customer feedback, the new engagement model is designed to exceed your expectations.

  1. Baseline: We begin by understanding your business objectives and tailoring KPIs to align with your goals. Then, we establish a baseline using telemetry and other methods.
  2. Analyze: Using telemetry and high-touch discovery, our experts analyze your IT environment and identify strategies to achieve your desired business outcomes.
  3. Recommend: We make recommendations, help you prioritize IT initiatives, and build an execution plan.
  4. Execute: We and our partners work with you to remove roadblocks to ensure the execution of prioritized initiatives – aligned with the way you work.
  5. Measure: To demonstrate progress consistently, we track, measure, translate, and report KPIs at regular intervals using Automated Dashboard and Quarterly Business Reviews (QBR).

When you start with a business outcome, you know multiple IT initiatives will get you there. It gets complex. With Integrated Service Delivery, our experts handle the complexity and coordinate with your teams, partners, and the Cisco team to keep everyone in sync and focused on the ultimate objective. All you experience is simplicity, consistency, and measurable business outcomes.

Previews surpassed initial expectations.

We organized field trials with select customers to validate our new approach. The initial response surpassed our highest expectations. A broad range of organizations representing service providers, manufacturing, healthcare, retail, finance, education, and the public sector signed up for the preview, and the feedback tells us our impact with outcomes exceeds the value of previous services provided. Now we’re ready to bring this tremendous value to you.

Amplify with Cisco Partners

Cisco Lifecycle Services complements the capabilities and scale of our extensive partner ecosystem. If you are already working with one of our partners, Cisco Lifecycle Services allows you and that partner to deepen the strategic relationship and achieve greater alignment on your business priorities and the business outcomes you desire. As Cisco and Cisco Partner experts analyze your environment and make recommendations to transform and optimize your IT environment, our flexible model allows you to engage your preferred partners to deliver a variety of implementation and managed services.

Let’s shift the focus from challenges to business outcomes.

To learn how your IT organization can accelerate their ability to deliver new and better business outcomes, visit Cisco Lifecycle Services here. You can also contact your Cisco account representative or authorized partner directly to set up an introductory meeting.

Source: cisco.com

Thursday, 24 August 2023

How SD-WAN Solves Multicloud Complexity

Cloud is the undisputed center of gravity when supporting distributed workforces. But managing secure connectivity in a growing multicloud environment continues to be more complex, expensive, and time consuming.

Enter the software-defined WAN (SD-WAN), a powerful, abstracted software layer that serves as a centralized control plane to enable organizations to automate, simplify, and optimize their network transport for any application to any cloud.   

Are you ready to steer traffic on demand, based on centralized policy, network insights, and predictive AI, and further enhanced by end-to-end visibility? Do you want to be more proactive instead of reactive in how you manage this traffic and run your network? If so, read on! 

Abstracting the complexity of multicloud 

Enterprises accelerated their transition to cloud and software-as-a-service (SaaS) during the pandemic to support their distributed workforces at home and on the go. This has seen multicloud environments become the norm. Our 2023 Global Networking Trends Report found that 92% of respondents used more than one public cloud in their infrastructure and 69% used over five SaaS applications.  

Connecting to different providers and network layers in multicloud environments has led to a patchwork of infrastructure and management controllers. This results in more complexity and cost for organizations looking to ensure a secure, consistent user experience.  

Networking complexity, from first to last mile 

Let’s look at these networking layers and why IT simplification is crucial in connecting today’s highly mobile workforce to business-critical applications.  

In the first mile, users access services from offices and campuses near data centers or remotely, from uncontrolled facilities using various devices (Figure 1). Workers connect through Multiprotocol Label Switching (MPLS), broadband, Wi-Fi, and cellular. Remote workers use their internet service provider (ISP) to connect them to concentrators at regional peering points of presence (PoPs).

Figure 1. New architecture for the distributed workplace  

The middle mile is the long-haul transport layer that has grown in complexity with the migration to the cloud. It serves as the connective tissue between first and last mile, interconnecting different types of cloud services, cloud applications (e.g., SaaS, IaaS), and data centers. Specialized middle-mile providers like Equinix and Megaport provide cross-connects between business networks, the internet, and cloud providers globally. Adding to the array of choices in the middle mile, public cloud providers like AWS, Google Cloud, and Microsoft Azure offer customers the ability to access their apps with site-to-cloud, site-to-site, region-to-region, cloud-to-cloud, and other connection options with different quality of experience metrics.  

The last mile is the connection between the data center or service provider and the end user’s device and application.    

Managing multicloud complexity with SD-WAN integrations  


Using applications distributed across multiple clouds and SaaS, workers have widely different experiences depending on their location. Unpredictable downtime, latency, and speeds, for example, can threaten business continuity. So establishing reliable, consistent, high-quality experiences is very much on the minds of enterprise IT managers today.

More than half (53%) of respondents to the 2023 Global Networking Trends Report said they are prioritizing integration with cloud providers to improve connectivity to cloud-based apps from distributed locations. Additionally, 49% said they are using SD-WAN integrations across providers and multiple clouds to provide a simpler, consistent, optimized, and secure IT and application experience. 

SD-WAN unifies the entire WAN backbone and brings secure, private, cloud-aware connectivity that is agnostic to all kinds of link types, providers, and geographies (Figure 2).  

Figure 2. SD-WAN integrations with IaaS, SaaS, and middle-mile providers are vital for a better IT and user experience 

With SD-WAN providing connectivity between cloud, SaaS, and middle-mile providers, real-time traffic steering based on centralized policy and end-to-end analytics is possible. Network admins can be proactive instead of reactive, changing traffic parameters on demand, according to application, congestion, location, user, device, and other factors. 

SD-WAN multicloud integrations in action 


Tamimi Markets, a major Saudi Arabian supermarket chain, was having trouble providing a consistent experience to users at markets, warehouses, branch offices, and remote locations. Dependent on three ISPs for end-to-end connectivity in a hub-and-spoke architecture, they moved to a cloud architecture to eliminate the need to backhaul network traffic through the headquarters and in the process quadrupled bandwidth speeds. An integrated SD-WAN enables them to steer their traffic over a variety of link options based on network demand, cost, and quality of experience metrics.  

Asian food manufacturer Universal Robina Corporation shifted to a multicloud architecture to support remote workers after the pandemic. It uses SD-WAN to connect users and apps to its multicloud architecture securely, wherever they are located. The multicloud integrations enable secure connectivity from branches to the Microsoft Azure cloud and to Microsoft 365 for a superior application experience. Informed network routing (INR) enables the exchange of telemetry between Cisco and Microsoft while providing full visibility to Universal Robina’s IT team.

Foundational for a SASE architecture 


Another benefit of SD-WAN is that it is one half of a converged secure access service edge (SASE) architecture. SASE radically simplifies security and networking through unified and centralized management to connect users to applications in complex and highly distributed environments. By combining SD-WAN networking infrastructure and routing traffic through a cloud-centric security service edge (SSE) solution, companies can maintain the same level of security for cloud users as data center users (Figure 3).


Figure 3. SD-WAN is foundational to a SASE architecture 

It’s a multicloud world and SD-WAN―with tight integrations to leading cloud, SaaS, and middle-mile providers―is the connective tissue from first mile to last, managing complexity and driving agility throughout sprawling multicloud environments.

What’s more, SD-WAN multicloud integrations bring together each organization’s many different types of transport connections and policies under one management system for secure, consistent service.

The cost savings from automation and the ability to steer traffic on demand with optimized routing are further compelling reasons why SD-WAN continues to grow in popularity. Once established, these features enable IT departments to build an optimized global network in a simplified, fully automated way, within hours. 

Source: cisco.com