
Saturday, 16 December 2023

Secure Workload and Secure Firewall: The recipe for a robust zero trust cybersecurity strategy

You hear a lot about zero trust microsegmentation these days and rightly so. It has matured into a proven security best-practice to effectively prevent unauthorized lateral movement across network resources. It involves dividing your network into isolated segments, or “microsegments,” where each segment has its own set of security policies and controls. In this way, even if a breach occurs or a potential threat gains access to a resource, the blast radius is contained.

And like many security practices, there are different ways to achieve the objective, and typically much of it depends on the unique customer environment. For microsegmentation, the key is to have a trusted partner that not only provides a robust security solution but gives you the flexibility to adapt to your needs instead of forcing a “one size fits all” approach.

Now, there are broadly two different approaches you can take to achieve your microsegmentation objectives:

◉ A host-based enforcement approach where the policies are enforced on the workload itself. This can be done by installing an agent on the workload or by leveraging APIs in public cloud.
◉ A network-based enforcement approach where the policies are enforced on a network device like an east-west network firewall or a switch.

While a host-based enforcement approach is immensely powerful because it provides access to rich telemetry in terms of processes, packages, and CVEs running on the workloads, it may not always be a pragmatic approach for a myriad of reasons. These reasons can range from application team perceptions, network security team preferences, or simply the need for a different approach to achieve buy-in across the organization.

Long story short, to make microsegmentation practical and achievable, it’s clear that a dynamic duo of host and network-based security is key to a robust and resilient zero trust cybersecurity strategy. Earlier this year, Cisco completed the native integration between Cisco Secure Workload and Cisco Secure Firewall delivering on this principle and providing customers with unmatched flexibility as well as defense in depth. Let’s take a deeper look at what this integration enables our customers to achieve and some of the use cases.

Use case #1: Network visibility via an east-west network firewall


The journey to microsegmentation starts with visibility. This is a perfect opportunity for me to insert the cliché here – “What you can’t see, you can’t protect.” In the context of microsegmentation, flow visibility provides the foundation for building a blueprint of how applications communicate with each other, as well as users and devices – both within and outside the datacenter.

The integration between Secure Workload and Secure Firewall enables the ingestion of NSEL flow records to provide network flow visibility, as shown in Figure 1. You can further enrich this network flow data by bringing in context in the form of labels and tags from external systems like CMDB, IPAM, identity sources, etc. This contextually enriched data set allows you to quickly identify the communication patterns and any indicators of compromise across your application landscape, enabling you to immediately improve your security posture.

Figure 1: Secure Workload ingests NSEL flow records from Secure Firewall
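To make the idea of contextual enrichment concrete, here is a minimal sketch that joins flow records with CMDB context, assuming both are available as CSV exports; the file and column names are illustrative assumptions, not Secure Workload's actual schema.

```python
# Illustrative only: enrich NSEL-style flow records with CMDB context.
# File and column names are assumptions, not Secure Workload's actual schema.
import pandas as pd

flows = pd.read_csv("nsel_flows.csv")   # assumed columns: src_ip, dst_ip, dst_port, bytes
cmdb = pd.read_csv("cmdb_export.csv")   # assumed columns: ip, app_name, environment, owner

# Attach application context to both ends of every flow.
enriched = (
    flows
    .merge(cmdb.add_prefix("src_"), on="src_ip", how="left")
    .merge(cmdb.add_prefix("dst_"), on="dst_ip", how="left")
)

# One question the enriched data can answer: which flows cross environment boundaries?
cross_env = enriched.dropna(subset=["src_environment", "dst_environment"])
cross_env = cross_env[cross_env["src_environment"] != cross_env["dst_environment"]]
print(cross_env[["src_app_name", "dst_app_name", "dst_port"]].drop_duplicates().head())
```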


Use case #2: Microsegmentation using the east-west network firewall


The integration of Secure Firewall and Secure Workload provides two powerful complementary methods to discover, compile, and enforce zero trust microsegmentation policies. The ability to use a host-based method, a network-based method, or a mix of the two gives you the flexibility to deploy in the manner that best suits your business needs and team roles (Figure 2).

And regardless of the approach or mix, the integration enables you to seamlessly leverage the full capabilities of Secure Workload including:

  • Policy discovery and analysis: Automatically discover policies that are tailored to your environment by analyzing flow data ingested from the Secure Firewall protecting east-west workload communications.
  • Policy enforcement: Onboard multiple east-west firewalls to automate and enforce microsegmentation policies on a specific firewall or set of firewalls through Secure Workload.
  • Policy compliance monitoring: The network flow information, when compared against a baseline policy, provides a deep view into how your applications are behaving and complying against policies over time. 

Figure 2: Host-based and network-based approach with Secure Workload


Use case #3: Defense in depth with virtual patching via north-south network firewall


This use case demonstrates how the integration delivers defense in depth and ultimately better security outcomes. In today’s rapidly evolving digital landscape, applications play a vital role in every aspect of our lives. However, with the increased reliance on software, cyber threats have also become more sophisticated and pervasive. Traditional patching methods, although effective, may not always be feasible due to operational constraints and the risk of downtime. When a zero-day vulnerability is discovered, a few different scenarios can play out. Consider two common ones: 1) a newly discovered CVE poses an immediate risk, but the fix or patch is not yet available, and 2) the CVE is not highly critical, so patching it outside the usual patch window is not worth the production or business impact. In both cases, one must accept the interim risk and either wait for the patch to become available or for the scheduled patch window.

Virtual patching, a form of compensating control, is a security practice that allows you to mitigate this risk by applying an interim protection or a “virtual” fix to known vulnerabilities in the software until it has been patched or updated. Virtual patching is typically done by leveraging the Intrusion Prevention System (IPS) of Cisco Secure Firewall. The key capability, fostered by the seamless integration, is Secure Workload’s ability to share CVE information with Secure Firewall, thereby activating the relevant IPS policies for those CVEs. Let’s take a look at how (Figure 3):

  • The Secure Workload agents installed on the application workloads gather telemetry about the software packages and CVEs present on them.
  • Workload-to-CVE mapping data is then published to Secure Firewall Management Center. You can choose the exact set of CVEs you want to publish. For example, you can choose to publish only CVEs that are exploitable over the network and have a CVSS score of 10, which lets you control any potential performance impact on your IPS (a hedged sketch of this filtering step follows this list).
  • Finally, Secure Firewall Management Center runs the ‘firepower recommendations’ tool to fine-tune and enable the exact set of signatures needed to protect against the CVEs found on your workloads. Once the new signature set is crafted, it can be deployed to the north-south perimeter Secure Firewall.
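Here is a small, hypothetical sketch of the CVE selection step described in the second bullet above; the record fields, example data, and the final publish step are placeholders for illustration, not the actual Secure Workload or Management Center API.

```python
# Hypothetical sketch of choosing which workload CVEs to publish for virtual patching.
# Record fields and the publish step are illustrative placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class CveRecord:
    workload: str
    cve_id: str
    cvss_score: float
    attack_vector: str  # e.g. "NETWORK" or "LOCAL"

def select_for_publication(cves: list[CveRecord], min_score: float = 10.0) -> list[CveRecord]:
    """Keep only CVEs that are network-exploitable and maximally severe."""
    return [c for c in cves if c.attack_vector == "NETWORK" and c.cvss_score >= min_score]

inventory = [
    CveRecord("web-01", "CVE-2021-44228", 10.0, "NETWORK"),  # would be published
    CveRecord("db-02", "CVE-2020-12345", 6.5, "LOCAL"),      # filtered out (illustrative entry)
]

for record in select_for_publication(inventory):
    # Stand-in for the actual publish-to-Management-Center step.
    print(f"publish {record.cve_id} for {record.workload}")
```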

Figure 3: Virtual patching with Secure Workload and Secure Firewall


Flexibility and defense in depth is the key to a resilient zero trust microsegmentation strategy


With Secure Workload and Secure Firewall, you can achieve a zero-trust security model by combining a host-based and network-based enforcement approach. In addition, with the virtual patching ability, you get another layer of defense that allows you to maintain the integrity and availability of your applications without sacrificing security. As the cyber threat landscape continues to evolve, harmony between different security solutions is undoubtedly the key to delivering more effective solutions that protect valuable digital assets.

Source: cisco.com

Saturday, 4 November 2023

The myth of the long-tail vulnerability

Modern-day vulnerability management tends to follow a straightforward procedure. From a high level, this can be summed up in the following steps:

  • Identify the vulnerabilities in your environment
  • Prioritize which vulnerabilities to address
  • Remediate the vulnerabilities

When high-profile vulnerabilities are disclosed, they tend to be prioritized due to concerns that your organization will be hammered with exploit attempts. The general impression is that this malicious activity is highest shortly after disclosure, then decreases as workarounds and patches are applied. The idea is that we eventually reach a critical mass, where enough systems are patched that the exploit is no longer worth attempting.

In this scenario, if we were to graph malicious activity and time, we end up with what is often referred to as a long-tail distribution. Most of the activity occurs early on, then drops off over time to form a long tail. This looks something like the following:

[Figure: a hypothetical long-tail distribution of exploit activity over time]
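As a rough stand-in for that graph, here is a toy sketch of the long-tail shape being described, with made-up numbers purely for illustration.

```python
# Toy numbers illustrating the long-tail shape described above: a burst of activity
# right after disclosure that decays into a thin tail. All values are made up.
import numpy as np

months = np.arange(1, 19)
pct_orgs = 30 * np.exp(-0.4 * (months - 1))  # hypothetical % of orgs seeing exploit attempts

for month, pct in zip(months, pct_orgs):
    print(f"month {month:2d}: {pct:5.1f}% {'#' * int(pct)}")
```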

A long tail distribution of exploit attempts sounds reasonable in theory. The window of usefulness for an exploit is widest right after disclosure, then closes over time until bad actors move on to other, more recent vulnerabilities.

But is this how exploitation attempts really play out? Do attackers abandon exploits after a certain stage, moving on to newer and more fruitful vulnerabilities? And if not, how do attackers approach vulnerability exploitation?

Our approach


To answer these questions, we’ll look at Snort data from Cisco Secure Firewall. Many Snort rules protect against the exploitation of vulnerabilities, making this a good data set to examine as we attempt to answer these questions.

We’ll group Snort rules by the CVEs mentioned in the rule documentation, and then look at CVEs that see frequent exploit attempts. Since CVEs are disclosed on different dates, and we’re looking at alerts over time, the specific time frame will vary. In some cases, the disclosure date is earlier than the range our data set covers. While we won’t be able to examine the initial disclosure period for these, we’ll look at a few of these as well for signs of a long tail.

Finally, looking at a count of rule triggers can be misleading—a few organizations can see many alerts for one rule in a short time frame, making the numbers look larger than they are across all orgs. Instead, we’ll look at the percentage of organizations that saw an alert. We’ll then break this out on a month-to-month basis.
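To make that methodology concrete, here is a minimal sketch of the per-month measurement, assuming an export of Snort alerts with organization, CVE, and timestamp columns; the file and column names are assumptions rather than the actual telemetry schema.

```python
# Sketch of the metric used in this analysis: the share of organizations that saw
# at least one alert mapped to a CVE in a given month. Column names are assumptions.
import pandas as pd

alerts = pd.read_csv("snort_alerts.csv", parse_dates=["timestamp"])  # org_id, cve, timestamp
total_orgs = alerts["org_id"].nunique()

alerts["month"] = alerts["timestamp"].dt.to_period("M")
pct_orgs_per_month = (
    alerts.groupby(["cve", "month"])["org_id"]
    .nunique()                 # count each org once, no matter how many alerts it saw
    .div(total_orgs)
    .mul(100)
    .rename("pct_orgs_with_alerts")
    .reset_index()
)

print(pct_orgs_per_month[pct_orgs_per_month["cve"] == "CVE-2021-44228"].head())
```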

Log4J: The 800-pound gorilla


The Log4J vulnerability has dominated our vulnerability metrics since it was disclosed in December 2021. However, looking at the percentage of exploit attempts each month since, there was neither a spike in use right after disclosure, nor a long tail afterwards.

[Figure: percentage of organizations seeing Log4J alerts by month]

That first month, 27 percent of organizations saw alerts for Log4J. Since then, alerts have neither dropped off nor skyrocketed from one month to the next. The percentage of organizations seeing alerts ranged from 25 to 34 percent through June 2023, averaging out at 28 percent per month.

Perhaps Log4J is an exception to the rule. It’s an extremely common software component and a very popular target. A better approach might be to look at a lesser-known vulnerability to see how the curve looks.

Spring4Shell: The Log4J that wasn’t


Spring4Shell was disclosed at the end of March 2022. This was a vulnerability in the Spring Java framework that managed to resurrect an older vulnerability in JDK9, which had initially been discovered and patched in 2010. At the time of Spring4Shell’s disclosure there was speculation that this could be the next Log4J, hence the similarity in naming. Such predictions failed to materialize.

[Figure: percentage of organizations seeing Spring4Shell alerts by month]

We did see a decent amount of Spring4Shell activity immediately after the disclosure, when 23 percent of organizations saw alerts. After this honeymoon period, the percentage did decline. But instead of exhibiting the curve of a long tail, the percentages have remained between 14 and 19 percent a month.

Keen readers will notice the activity in the graph above that occurs prior to disclosure. These alerts are for rules covering the initial, more-than-a-decade-old Java vulnerability, CVE-2010-1622. This is interesting in two ways:

1. The fact that these rules were still triggering monthly on a 13-year-old vulnerability prior to Spring4Shell’s disclosure provides the first signs of a potential long tail.

2. It turns out that Spring4Shell was so similar to the previous vulnerability that the older Snort rules alerted on it.

Unfortunately, the time frame of our alert data isn’t long enough to say what the initial disclosure phase for CVE-2010-1622 looked like. So since we don’t have enough information here to draw a conclusion, what about other older vulnerabilities that we know were in heavy rotation?

ShellShock: A classic


It’s hard to believe, but the ShellShock vulnerability recently turned nine. By software development standards this qualifies it for senior citizen status, making it a perfect candidate to examine. While we don’t have the initial disclosure phase, activity remains high to this day.

[Figure: percentage of organizations seeing ShellShock alerts by month]

Our data set begins approximately seven years after disclosure, but the percentage of organizations seeing alerts ranges from 12 to 23 percent. On average across this time frame, about one in five organizations sees ShellShock alerts in a given month.

A pattern emerges


While we’ve showcased only a handful of examples here, a pattern emerges when looking at other vulnerabilities, both old and new. For example, here is CVE-2022-26134, a vulnerability discovered in Atlassian Confluence in June 2022.

[Figure: percentage of organizations seeing CVE-2022-26134 alerts by month]

Here is ProxyShell, which was initially discovered in August 2021, followed by two more related vulnerabilities in September 2022.

[Figure: percentage of organizations seeing ProxyShell alerts by month]

And here is another older, commonly targeted vulnerability in PHPUnit, originally disclosed in June 2017.

[Figure: percentage of organizations seeing PHPUnit vulnerability alerts by month]

Is the long tail wagging the dog?


What emerges from looking at vulnerability alerts over time is that, while there is sometimes an initial spike in usage, they don’t appear to decline to a negligible level. Instead, vulnerabilities stick around for years after their initial disclosure.

So why do old vulnerabilities remain in use? One reason is that many of these exploitation attempts are automated attacks. Bad actors routinely leverage scripts and applications that allow them to quickly run exploit code against large swaths of IP addresses in the hopes of finding vulnerable machines.

This is further evidenced by looking at the concentration of alerts by organization. In many cases we see sudden spikes in the total number of alerts seen each month. If we break these months down by organization, we regularly see that alerts at one or two organizations are responsible for the spikes.

For example, take a look at the total number of Snort alerts for an arbitrary vulnerability. In this example, December was in line with the months that preceded it. Then in January, the total number of alerts began to grow, peaking in February, before declining back to average levels.

[Figure: total monthly Snort alerts for an arbitrary vulnerability, broken out by organization]

The cause of the sudden spike, highlighted in light blue, is one organization that was hammered by alerts for this vulnerability. The organization saw little to no alerts in December before a wave hit that lasted from January through March. The activity then disappeared completely by April.

This is a common phenomenon seen in overall counts (and why we don’t draw trends from this data alone). This could be the result of automated scans by bad actors. These attackers may have found one such vulnerable system at this organization, then proceeded to hammer it with exploit attempts in the months that followed.

So is the long tail a myth when it comes to vulnerabilities? It certainly appears so—at least when it comes to the types of attacks that target the perimeter of an organization. The public-facing applications that reside here present a large attack surface. Public proof-of-concept exploits are often readily available and are relatively easy to fold into attackers’ existing automated exploitation frameworks. There’s little risk for an attacker involved in automated exploit attempts, leaving little incentive to remove exploits once they’ve been added to an attack toolkit.

What is left to explore is whether long-tail vulnerabilities exist in other attack surfaces. The fact is that there are different classes of vulnerabilities that can be leveraged in different ways. We’ll explore more of these facets in the future.

It only takes one


Finding that one vulnerable, public-facing system at an organization is a needle-in-a-haystack operation for attackers, requiring regular scanning to find it. But all it takes is one new system without the latest patches applied to give the attackers an opportunity to gain a foothold.

The silver lining here is that a firewall with an intrusion prevention system, like Cisco Secure Firewall, is designed specifically to prevent successful attacks.  Beyond IPS prevention of these attacks, the recently introduced Cisco Secure Firewall 4200 appliance and 7.4 OS bring enterprise-class performance and a host of new features including SD-WAN, ZTNA, and the ability to detect apps and threats in encrypted traffic without decryption.

Also, if you’re looking for a solution to assist you with vulnerability management, Cisco Vulnerability Management has you covered. Cisco Vulnerability Management equips you with the contextual insight and threat intelligence needed to intercept the next exploit and respond with precision.

Source: cisco.com

Saturday, 9 September 2023

The New Normal is Here with Secure Firewall 4200 Series and Threat Defense 7.4

What Time Is It?


It’s been a minute since my last update on our network security strategy, but we have been busy building some awesome capabilities to enable true new-normal firewalling. As we release Secure Firewall 4200 Series appliances and Threat Defense 7.4 software, let me bring you up to speed on how Cisco Secure elevates to protect your users, networks, and applications like never before.


Secure Firewall leverages inference-based traffic classification and cooperation across the broader Cisco portfolio, which continues to resonate with cybersecurity practitioners. The reality of hybrid work remains a challenge to the insertion of traditional network security controls between roaming users and multi-cloud applications. The lack of visibility and blocking from a 95% encrypted traffic profile is a painful problem that hits more and more organizations; a few lucky ones get in front of it before the damage is done. Both network and cybersecurity operations teams look to consolidate multiple point products, reduce noise, and do more with less; the Cisco Secure Firewall and Workload portfolio masterfully navigates all aspects of network insertion and threat visibility.

Protection Begins with Connectivity


Even the most effective and efficient security solution is useless unless it can be easily inserted into an existing infrastructure. No organization would go through the trouble of redesigning a network just to insert a firewall at a critical traffic intersection. Security devices should natively speak the network’s language, including encapsulation methods and path resiliency. With hybrid work driving much more distributed networks, our Secure Firewall Threat Defense software has followed suit by expanding its existing dynamic routing capabilities with application-based and link-quality-based path selection.

Application-based policy routing has been a challenge for the firewall industry for quite some time. While some vendors use their existing application identification mechanisms for this purpose, those require multiple packets in a flow to pass through the device before the classification can be made. Since most edge deployments use some form of NAT, switching an existing stateful connection to a different interface with a different NAT pool is impossible after the first packet. I always get a chuckle when reading those configuration guides that first tell you how to enable application-based routing and then promptly caution you against it due to NAT being used where NAT is usually used.

Our Threat Defense software takes a different approach, allowing common SaaS application traffic to be directed or load-balanced across specific interfaces even when NAT is used. In the spirit of leveraging the power of the broader Cisco Secure portfolio, we ported over a thousand cloud application identifiers from Umbrella, which are tracked by IP addresses and Fully Qualified Domain Name (FQDN) labels so the application-based routing decision can be made on the first packet. Continuous updates and inspection of transit Domain Name System (DNS) traffic ensure that the application identification remains accurate and relevant in any geography.

This application-based routing functionality can be combined with other powerful link selection capabilities to build highly flexible and resilient Software-Defined Wide Area Network (SD-WAN) infrastructures. Secure Firewall now supports routing decisions based on link jitter, round-trip time, packet loss, and even voice quality scores against a particular monitored remote application. It also enables traffic load-balancing with up to 8 equal-cost interfaces and administratively defined link succession order on failure to optimize costs. This allows a branch firewall to prioritize trusted WebEx application traffic directly to the Internet over a set of interfaces with the lowest packet loss. Another low-cost link can be used for social media applications, and internal application traffic is directed to the private data center over an encrypted Virtual Tunnel Interface (VTI) overlay. All these interconnections can be monitored in real-time with the new WAN Dashboard in Firewall Management Center.
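The sketch below is a conceptual illustration of this kind of application- and link-quality-based selection, not the Threat Defense implementation; the application names, metrics, and tie-breaking rules are assumptions.

```python
# Conceptual sketch of application- and link-quality-based path selection.
# Not the Threat Defense implementation; metrics and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    jitter_ms: float
    rtt_ms: float
    loss_pct: float

def pick_link(app: str, links: list[Link]) -> Link:
    """Send trusted collaboration traffic over the lowest-loss link; default to lowest RTT."""
    if app == "webex":
        return min(links, key=lambda l: (l.loss_pct, l.jitter_ms))
    return min(links, key=lambda l: l.rtt_ms)

links = [Link("isp-1", 4.0, 22.0, 0.1), Link("isp-2", 9.0, 18.0, 1.2)]
print(pick_link("webex", links).name)     # -> isp-1 (lowest loss)
print(pick_link("internal", links).name)  # -> isp-2 (lowest RTT)
```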

Divide by Zero Trust


The obligatory inclusion of Zero Trust Network Access (ZTNA) into every vendor’s marketing collateral has become a pandemic of its own in the last few years. Some security vendors got so lost in their implementation that they had to add an internal version control system. Once you peel away the colorful wrapping paper, ZTNA is little more than a per-application Virtual Private Network (VPN) tunnel with an aspiration for a simpler user experience. With hybrid work driving users and applications all over the place, a secure remote session to an internal payroll portal should be as simple as opening the browser – whether on or off the enterprise network. Often enough, the danger of carelessly implemented simplicity lies in compromising the security.

A few vendors extend ZTNA only to the initial application connection establishment phase. Once a user is multi-factor authenticated and authorized with their endpoint’s posture validated, full unimpeded access to the protected application is granted. This approach often results in embarrassingly successful breaches where valid user credentials are obtained to access a vulnerable application, pop it, and then spread laterally across the rest of the no-longer-secure infrastructure. Sufficiently motivated bad actors can go as far as obtaining a managed endpoint that goes along with those “borrowed” credentials. It’s not entirely uncommon for a disgruntled employee to use their legitimate access privileges for less than noble causes. The simple conclusion here is that the “authorize and forget” approach is mutually exclusive with the very notion of a Zero Trust framework.

Secure Firewall Threat Defense 7.4 software introduces a native clientless ZTNA capability that subjects remote application sessions to the same continuous threat inspection as any other traffic. After all, this is what Zero Trust is all about. A granular Zero Trust Application Access (ZTAA – see what we did there?) policy defines individual or grouped applications and allows each one to use its own Intrusion Prevention System (IPS) and File policies. The inline user authentication and authorization capability interoperates with every web application and Security Assertion Markup Language (SAML) capable Identity Provider (IdP). Once a user is authenticated and authorized upon accessing a public FQDN for the protected internal application, the Threat Defense instance acts as a reverse proxy with full TLS decryption, stateful firewall, IPS, and malware inspection of the flow. On top of the security benefits, it eliminates the need to decrypt the traffic twice as one would when separating all versions of legacy ZTNA and inline inspection functions. This greatly improves the overall flow performance and the resulting user experience.

Let’s Decrypt


Speaking of traffic decryption, it is generally seen as a necessary evil in order to operate any DPI functions at the network layer – from IPS to Data Loss Prevention (DLP) to file analysis. With nearly all network traffic being encrypted, even the most efficient IPS solution will just waste processing cycles by looking at the outer TLS payload. Having acknowledged this simple fact, many organizations still choose to avoid decryption for two main reasons: fear of severe performance impact and potential for inadvertently breaking some critical communication. With some security vendors still not including TLS inspected throughput on their firewall data sheets, it is hard to blame those network operations teams who are cautious around enabling decryption.

Building on the architectural innovation of Secure Firewall 3100 Series appliances, the newly released Secure Firewall 4200 Series firewalls kick the performance game up a notch. Just like their smaller cousins, the 4200 Series appliances employ custom-built inline Field Programmable Gate Array (FPGA) components to accelerate critical stateful inspection and cryptography functions directly within the data plane. This industry-first inline crypto acceleration design eliminates the need for costly packet traversal across the system bus and frees up the main CPU complex for more sophisticated threat inspection tasks. These new appliances keep the compact single Rack Unit (RU) form factor and scale to over 1.5Tbps of threat-inspected throughput with clustering. They will also provide up to 34 hardware-level isolated and fully functional FTD instances for critical multi-tenant environments.

Those network security administrators who look for an intuitive way of enabling TLS decryption will enjoy the completely redesigned TLS Decryption Policy configuration flow in Firewall Management Center. It separates the configuration process for inbound (an external user to a private application) and outbound (an internal user to a public application) decryption and guides the administrator through the necessary steps for each type. Advanced users will retain access to the full set of TLS connection controls, including non-compliant protocol version filtering and selective certificate blocklisting.

Not-so-Random Additional Screening


Applying decryption and DPI at scale is all fun and games, especially with hardware appliances that are purpose-built for encrypted traffic handling, but it is not always practical. The majority of SaaS applications use public key pinning or bi-directional certificate authentication to prevent man-in-the-middle decryption even by the most powerful of firewalls. No matter how fast the inline decryption engine may be, there is still a pronounced performance degradation from indiscriminately unwrapping all TLS traffic. With both operational costs and complexity in mind, most security practitioners would prefer to direct these precious processing resources toward flows that present the most risk.

Lucky for those who want to optimize security inspection, our industry-leading Snort 3 threat prevention engine includes the ability to detect applications and potentially malicious flows without having to decrypt any packets. The integral Encrypted Visibility Engine (EVE) is the industry’s first implementation of Machine Learning (ML)-driven flow inference for real-time protection within the data plane itself. We continuously train it with petabytes of real application traffic and tens of thousands of daily malware samples from our Secure Malware Analytics cloud. It produces unique application and malware fingerprints that Threat Defense software uses to classify flows by examining just a few outer fields of the TLS protocol handshake. EVE works especially well for identifying evasive applications such as anonymizer proxies; in many cases, we find it more effective than traditional pattern-based application identification methods. With Secure Firewall Threat Defense 7.4 software, EVE adds the ability to automatically block connections that classify high on the malware confidence scale. In a future release, we will combine these capabilities to enable selective decryption and DPI of those high-risk flows for truly risk-based threat inspection.
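As a purely illustrative toy, the sketch below captures the general idea of fingerprinting a few outer TLS handshake fields and blocking on a malware-confidence threshold; the fingerprint format, the score table, and the threshold are invented for this example and are not EVE's actual model or data.

```python
# Toy sketch: classify a flow from outer TLS ClientHello fields, no decryption involved.
# The fingerprint scheme, score table, and threshold are invented for illustration only.
import hashlib

# fingerprint -> (label, malware confidence 0..1); populated below just for the demo
FINGERPRINT_DB: dict[str, tuple[str, float]] = {}

def tls_fingerprint(version: str, cipher_suites: list[int], extensions: list[int]) -> str:
    """Hash a few ClientHello fields that are visible without decryption."""
    raw = f"{version}|{sorted(cipher_suites)}|{sorted(extensions)}"
    return hashlib.sha256(raw.encode()).hexdigest()

def verdict(fp: str, block_threshold: float = 0.8) -> str:
    label, confidence = FINGERPRINT_DB.get(fp, ("unknown", 0.0))
    return f"block ({label})" if confidence >= block_threshold else f"allow ({label})"

fp = tls_fingerprint("TLS1.3", [0x1301, 0x1302], [0, 10, 11])
FINGERPRINT_DB[fp] = ("anonymizer-proxy", 0.91)  # pretend this fingerprint was shipped to us
print(verdict(fp))                                # -> block (anonymizer-proxy)
```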

The other trick for making our Snort 3 engine more precise lies in cooperation across the rest of the Cisco Secure portfolio. Very few cybersecurity practitioners out there like to manually sift through tens of thousands of IPS signatures to tailor an effective policy without blowing out the performance envelope. Cisco Recommendations from Talos has traditionally made this task much easier by enabling specific signatures based on actually observed host operating systems and applications in a particular environment. Unfortunately, there’s only so much that a network security device can discover by either passively listening to traffic or even actively poking those endpoints. Secure Workload 3.8 release supercharges this ability by continuously feeding actual vulnerability information for specific protected applications into Firewall Management Center. This allows Cisco Recommendations to create a much more targeted list of IPS signatures in a policy, thus avoiding guesswork, improving efficacy, and eliminating performance bottlenecks. Such an integration is a prime example of what Cisco Secure can achieve by augmenting network level visibility with application insights; this is not something that any other firewall solution can implement with DPI alone.

Light Fantastic Ahead


Secure Firewall 4200 Series appliances and Threat Defense 7.4 software are important milestones in our strategic journey, but the journey by no means stops there. We continue to actively invest in inference-based detection techniques and tighter product cooperation across the entire Cisco Secure portfolio to bring value to our customers by solving their real network security problems more efficiently. As you may have heard from me at the recent Nvidia GTC event, we are actively developing hardware acceleration capabilities to combine inference and DPI approaches in hybrid cloud environments with Data Processing Unit (DPU) technology. We continue to invest in endpoint integration both on the application side with Secure Workload and the user side with Secure Client to leverage flow metadata in policy decisions and deliver a truly hybrid ZTNA experience with Cisco Secure Access. Last but not least, we are redefining the fragmented approach to public cloud security with Cisco Multi-Cloud Defense.

The light of network security continues to shine bright, and we appreciate you for the opportunity to build the future of Cisco Secure together.

Source: cisco.com

Saturday, 24 December 2022

Cisco Joins the Launch of Amazon Security Lake

The Cisco Secure Technical Alliance supports the open ecosystem and AWS is a valued technology alliance partner, with integrations across the Cisco Secure portfolio, including SecureX, Secure Firewall, Secure Cloud Analytics, Duo, Umbrella, Web Security Appliance, Secure Workload, Secure Endpoint, Identity Services Engine, and more.

Cisco Secure and AWS Security Lake


We are proud to be a launch partner of AWS Security Lake, which allows customers to build a security data lake from integrated cloud and on-premises data sources as well as from their private applications. With support for the Open Cybersecurity Schema Framework (OCSF) standard, Security Lake reduces the complexity and costs for customers to make their security solutions data accessible to address a variety of security use cases such as threat detection, investigation, and incident response. Security Lake helps organizations aggregate, manage, and derive value from log and event data in the cloud and on-premises to give security teams greater visibility across their organizations.

With Security Lake, customers can use the security and analytics solutions of their choice to simply query that data in place or ingest the OCSF-compliant data to address further use cases. Security Lake helps customers optimize security log data retention by partitioning data to improve performance and reduce costs. Now, analysts and engineers can easily build and use a centralized security data lake to improve the protection of workloads, applications, and data.

Cisco Secure Firewall


Cisco Secure Firewall serves as an organization’s centralized source of security information. It uses advanced threat detection to flag and act on malicious ingress, egress, and east-west traffic while its logging capabilities store information on events, threats, and anomalies. By integrating Secure Firewall with AWS Security Lake, through Secure Firewall Management Center, organizations will be able to store firewall logs in a structured and scalable manner.

eNcore Client OCSF Implementation


The eNcore client provides a way to tap into a message-oriented protocol to stream events and host profile information from the Cisco Secure Firewall Management Center. The eNcore client can request event and host profile data from a Management Center, and intrusion event data only from a managed device. The eNcore application initiates the data stream by submitting request messages, which specify the data to be sent, and then controls the message flow from the Management Center or managed device after streaming begins.


These messages are mapped to OCSF Network Activity events using a series of transformations embedded in the eNcore code base, acting as both author and mapper personas in the OCSF schema workflow. Once validated against an internal OCSF schema, the messages are written to two destinations: first, a local JSON-formatted file in a configurable directory path, and second, compressed parquet files partitioned by event hour in the S3 Amazon Security Lake source bucket. The S3 directories containing the formatted logs are crawled hourly, and the results are stored in an AWS Security Lake database. From there you can get a view of the schema definitions extracted by the AWS Glue Crawler and identify field names, data types, and other metadata associated with your network activity events. Event logs can also be queried using Amazon Athena to visualize log data.
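To illustrate the kind of transformation involved, here is a hedged sketch that maps a simplified connection event to an OCSF-style Network Activity record and writes hour-partitioned, gzip-compressed parquet locally; the field layout is deliberately simplified and the paths are placeholders, so consult the OCSF schema and the eNcore code for the authoritative mapping (pandas with a parquet engine such as pyarrow is assumed).

```python
# Illustrative sketch: map a simplified firewall connection event to an OCSF-style
# Network Activity record and write hour-partitioned parquet. The layout is simplified;
# the real mapping lives in the eNcore code base and the OCSF schema.
import os
from datetime import datetime, timezone

import pandas as pd  # requires a parquet engine such as pyarrow

def to_ocsf_network_activity(event: dict) -> dict:
    return {
        "class_uid": 4001,  # Network Activity class in OCSF (verify against the schema version in use)
        "activity_id": 1,
        "time": event["timestamp_ms"],
        "src_endpoint": {"ip": event["src_ip"], "port": event["src_port"]},
        "dst_endpoint": {"ip": event["dst_ip"], "port": event["dst_port"]},
        "traffic": {"bytes": event["bytes"]},
    }

raw = {
    "timestamp_ms": int(datetime.now(timezone.utc).timestamp() * 1000),
    "src_ip": "10.0.0.5", "src_port": 51544,
    "dst_ip": "203.0.113.10", "dst_port": 443, "bytes": 4096,
}

record = to_ocsf_network_activity(raw)
hour = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H")
out_dir = f"./out/eventhour={hour}"  # local stand-in for the Security Lake S3 prefix
os.makedirs(out_dir, exist_ok=True)
pd.json_normalize([record]).to_parquet(
    os.path.join(out_dir, "part-0.parquet"), compression="gzip", index=False
)
```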

Get Started


To utilize the eNcore client with AWS Security Lake, first go to the Cisco public GitHub repository for Firepower eNcore, OCSF branch.


Download and run the CloudFormation script eNcoreCloudFormation.yaml.


The CloudFormation script will prompt for additional fields needed in the creation process; they are as follows (a scripted alternative using these parameters is sketched after the list):

◉ Cidr Block: IP address range for the provisioned client; defaults to the range shown below
◉ Instance Type: The EC2 instance size; defaults to t2.medium
◉ KeyName: A .pem key file that will permit access to the instance
◉ AmazonSecurityLakeBucketForCiscoURI: The S3 location of your Data Lake S3 container
◉ FMC IP: IP address or domain name of the Cisco Secure Firewall Management Center
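As a scripted alternative to the console walkthrough, the stack can also be launched with boto3. In the sketch below the ParameterKey names are guesses based on the prompts listed above, and the IAM capability flag is assumed; check eNcoreCloudFormation.yaml for the exact parameter names before using it.

```python
# Sketch of launching the eNcore stack with boto3 instead of the console.
# The ParameterKey names are guesses based on the prompts above; verify them
# against eNcoreCloudFormation.yaml. The IAM capability is also an assumption.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("eNcoreCloudFormation.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="encore-security-lake",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "CidrBlock", "ParameterValue": "10.10.0.0/24"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.medium"},
        {"ParameterKey": "KeyName", "ParameterValue": "my-keypair"},
        {"ParameterKey": "AmazonSecurityLakeBucketForCiscoURI",
         "ParameterValue": "s3://example-security-lake-bucket/ext/cisco/"},
        {"ParameterKey": "FMCIP", "ParameterValue": "fmc.example.com"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)
```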


After the CloudFormation setup is complete, it can take anywhere from 3 to 5 minutes to provision resources in your environment. The CloudFormation console provides a detailed view of all the resources generated from the script, as shown below.


Once the ec2 instance for the eNcore client is ready, we need to whitelist the client IP address in our Secure Firewall Server and generate a certificate file for secure endpoint communication.

In the Secure Firewall dashboard, navigate to Search->eStreamer to find the allow list of client IP addresses that are permitted to receive data. Click Add and supply the client IP address that was provisioned for your ec2 instance. You will also be asked to supply a password; click Save to create a secure certificate file for your new ec2 instance.


Download the Secure Certificate you just created, and copy it to the /encore directory in your ec2 instance.


Use CloudShell or SSH to connect to your ec2 instance, navigate to the /encore directory, and run the command bash encore.sh test


You will be prompted for the certificate password; once that is entered, you should see a Successful Communication message, as shown below.


Run the command bash encore.sh foreground

This will begin the data relay and ingestion process. We can then navigate to the S3 Amazon Security Lake bucket we configured earlier to see OCSF-compliant logs formatted as gzip-compressed parquet files in a time-based directory structure. Additionally, a local representation of the logs is available under /encore/data/* and can be used to validate log file creation.


Amazon Security Lake then runs a crawler task every hour to parse and consume the log files in the target S3 directory, after which we can view the results with an Athena query.
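For repeatable reporting, the same table can be queried from code rather than the console. The sketch below submits a query through boto3; the database, table, column, and output-location names are placeholders that need to match your Security Lake environment.

```python
# Sketch of querying the crawled table with Athena from code.
# Database, table, column, and output-location names are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT time, src_endpoint.ip AS src, dst_endpoint.ip AS dst
        FROM cisco_network_activity
        ORDER BY time DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "amazon_security_lake_db"},           # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder
)
print("query id:", response["QueryExecutionId"])
```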


Source: cisco.com

Thursday, 10 November 2022

Cisco Secure Firewall on AWS: Build resilience at scale with stateful firewall clustering


Organizations embrace the public cloud for the agility, scalability, and reliability it offers when running applications. But just as organizations need these capabilities to ensure their applications operate where needed and as needed, they also require that their security does the same. Organizations may introduce multiple individual firewalls into their AWS infrastructure to produce this outcome. In theory, this may be a good decision, but in practice it can lead to asymmetric routing issues. Complex SNAT configuration can mitigate asymmetric routing issues, but this isn’t practical for sustaining public cloud operations. To protect their long-term cloud strategies, organizations are ruling out SNAT and calling for a more reliable and scalable way to connect their applications and security for always-on protection.

To solve these challenges, Cisco created stateful firewall clustering with Secure Firewall in AWS.

Cisco Secure Firewall clustering overview


Firewall clustering for Secure Firewall Threat Defense Virtual provides a highly resilient and reliable architecture for securing your AWS cloud environment. This capability lets you group multiple Secure Firewall Threat Defense Virtual appliances together as a single logical device, known as a “cluster.”

A cluster provides all the conveniences of a single device (management and integration into a network) while taking advantage of the increased throughput and redundancy you would expect from deploying multiple devices individually. Cisco uses Cluster Control Link (CCL) for forwarding asymmetric traffic across devices in the cluster. Clusters can go up to 16 members, and we use VxLAN for CCL.

In this case, clustering has the following roles:

Figure 1: Cisco Secure Firewall Clustering Overview

The above diagram shows the traffic flow between the client and the server with the firewall cluster inserted in the network. The following defines the clustering roles and how packets flow at each step.

Clustering roles and responsibilities 


Owner: The Owner is the node in the cluster that initially receives the connection.

◉ The Owner maintains the TCP state and processes the packets. 
◉ A connection has only one Owner. 
◉ If the original Owner fails, the new node receives the packets, and the Director chooses a new Owner from the available nodes in the cluster.

Backup Owner: The node that stores TCP/UDP state information received from the Owner so that the connection can be seamlessly transferred to a new owner in case of failure.

Director: The Director is the node in the cluster that handles owner lookup requests from the Forwarder(s). 

◉ When the Owner receives a new connection, it chooses a Director based on a hash of the source/destination IP address and ports. The Owner then sends a message to the Director to register the new connection. 
◉ If packets arrive at any node other than the Owner, the node queries the Director. The Director then identifies the Owner node so that the Forwarder can redirect packets to the correct destination. 
◉ A connection has only one Director. 
◉ If a Director fails, the Owner chooses a new Director.

Forwarder: The Forwarder is a node in the cluster that redirects packets to the Owner. 

◉ If a Forwarder receives a packet for a connection it does not own, it queries the Director to seek out the Owner.  
◉ Once the Owner is defined, the Forwarder establishes a flow, and redirects any future packets it receives for this connection to the defined Owner.

Fragment Owner: For fragmented packets, cluster nodes that receive a fragment determine a Fragment Owner using a hash of the fragment source IP address, destination IP address, and the packet ID. All fragments are then redirected to the Fragment Owner over the Cluster Control Link. (A toy sketch of this hash-based node selection follows.)
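To make the hash-based selection used for the Director and Fragment Owner roles easier to picture, here is a toy sketch of deterministically choosing a node from a flow tuple; it is not the firewall's actual hash algorithm.

```python
# Toy illustration of deterministic node selection from a flow tuple, in the spirit of
# how a Director is chosen. This is not the firewall's actual hash algorithm.
import hashlib

CLUSTER_NODES = ["node-1", "node-2", "node-3", "node-4"]

def choose_director(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(CLUSTER_NODES)
    return CLUSTER_NODES[index]

# Every node computes the same answer for the same flow, so a Forwarder that receives
# a stray packet knows which Director to query for the Owner.
print(choose_director("198.51.100.7", 40312, "10.0.0.20", 443))
```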

Integration with AWS Gateway Load Balancer (GWLB)


Cisco has added support for AWS Gateway Load Balancer (Figure 2). This feature enables organizations to scale their firewall presence as needed to meet demand.

Figure 2: Cisco Secure Firewall and AWS Gateway Load Balancer integration 

Cisco Secure Firewall clustering in AWS


Building off the previous figure, organizations can take advantage of the AWS Gateway Load Balancer with Secure Firewall’s clustering capability to evenly distribute traffic at the Secure Firewall cluster. This enables organizations to maximize the benefits of clustering capabilities including increased throughput and redundancy. Figure 3 shows how positioning a Secure Firewall cluster behind the AWS Gateway Load Balancer creates a resilient architecture. Let’s take a closer look at what is going on in the diagram.

Figure 3: Cisco Secure Firewall clustering in AWS

Figure 3 shows an Internet user looking to access a workload. Before the user can access the workload, the user’s traffic is routed to Firewall Node 2 for inspection. The traffic flow for this example includes:

User -> IGW -> GWLBe -> GWLB -> Secure Firewall (2) -> GWLB -> GWLBe -> Workload

In the event of failure, the AWS Gateway Load Balancer cuts off existing connections to the failed node, making the above solution non-stateful.

Recently, AWS announced a new feature for their load balancers known as Target Failover for Existing Flows. This feature enables forwarding of existing connections to another target in the event of failure.

Cisco is an early adopter of this feature and has combined Target Failover for Existing Flows with Secure Firewall clustering capabilities to create the industry’s first stateful cluster in AWS.

Figure 4: Cisco Secure Firewall clustering rehashing existing flow to a new node

Figure 4 shows a firewall failure event and how the AWS Gateway Load Balancer uses the Target Failover for Existing Flows feature to switch the traffic flow from Firewall Node 2 to Firewall Node 3. The traffic flow for this example includes:

User -> IGW -> GWLBe -> GWLB -> Secure Firewall (3) -> GWLB -> GWLBe -> Workload

Source: cisco.com

Friday, 14 October 2022

Leveraging the Cloud to Scale your Industrial DMZ


The iDMZ (industrial demilitarized zone) is a critical layer in a comprehensive end-to-end security strategy for an industrial operations environment. The primary function of the iDMZ is the enforcement of a secure boundary between the internal trusted operations environment and external entities that may need to exchange data with services that support the operation.

One of the challenges with an exclusively on-site iDMZ is its limited ability to expand to meet future demand and capabilities. With the growth of Industrial IoT (IIoT), hardware and resources will need to grow to meet the demands of increasing data. This translates to a consistently increasing hardware footprint and the utilities to provide cooling and power, which can be in limited supply on premises. In addition, operators must explore new ways to obtain deeper insights and introduce enhancements to the operation, which may require tighter alignment with partners and/or the ability to securely consume XaaS offers.

Operators also have a safety-first culture, keeping people out of the “line of fire.” Vendors and partners may need to maintain on-site hardware, applications and services, potentially exposing people to risk through their presence on-site. For heavy industry environments, accessibility to site and the equipment residing on it is not necessarily an easily accomplished task. Many industrial sites require site safety training and approved work permits as a prerequisite for physical access.

Finally, a lack of iDMZ consistency across multiple sites, in terms of hardware and feature composition, creates challenges for operations staff. In some instances, product and feature selection is made locally. This impacts the ability to deliver consistent policies and end-user experiences. It also complicates support across the operation for staff responsible for troubleshooting and minimizing time to resolution, and for maintaining different SOPs and training documents.

Operators exploring options to gain operational efficiencies through modern service offerings may benefit from exploring how to extend their iDMZ beyond the “four walls” of the operation.

One deployment alternative for iDMZ is extending the architecture to leverage a hybrid-cloud model. A hybrid cloud iDMZ model can be deployed as a centralized model or repeated regionally, based on geographic presence and/or regulatory or compliance requirements. While migrating the entirety of the iDMZ and its capabilities to the cloud may not be an option, a hybrid cloud iDMZ architecture does offer operational benefits and mitigates previously raised challenges.

First, the hybrid cloud iDMZ can secure the operation, and mitigate risk and exposure. Similar to an on-prem iDMZ, multiple tools and applications should be leveraged to take a holistic approach for enforcing security. This can include:

◉ Services that support a secure and encrypted pipe between an operations site and a regional iDMZ
◉ Segmentation and possible options for multi-tenancy
◉ Visibility to monitor applications and flows traversing the industrial zone

The solution should also include tools for consistently configuring, deploying, enforcing policies, and managing assets.

In addition to providing a holistic security strategy, a hybrid cloud iDMZ offers the benefit of shared resources and assets, as opposed to entirely duplicating unique stand-alone iDMZ deployments per site. The regional based approach offers a more repeatable and consistent architecture, delivering consistent policies, as well as easing the operational overhead and complexity mentioned previously.


A hybrid cloud solution offers more flexibility to expand and contract based on evolving requirements and demand. By leveraging public cloud services as part of the iDMZ architecture, operators have the ability to increase capabilities without physically maintaining hardware and space to house equipment. This approach affords the unique opportunity to foster tighter engagements with partners and ecosystem vendors, while leveraging cloud services to drive innovation, deeper operational insights, and efficiencies. Adding tools like ThousandEyes and AppDynamics, operators can verify adherence to application SLAs/SLOs in accordance with operational requirements.

Finally, a hybrid cloud iDMZ aligns with the concept of the ROC (Regional Operations Center), which is top of mind for some industrial organizations, especially those with a global footprint. A ROC model seeks to leverage more automation and remote operations, thus reducing on-site headcount to mission essential resources, improving on-site safety and driving more operational efficiencies. With a regional based iDMZ deployment, the process of aggregating and presenting the status and data for operations within the region can become more streamlined and a regionally distributed model can facilitate compliance with local industry regulations, if applicable.

For more details on how to build a hybrid cloud iDMZ architecture and its benefits for securing industrial operations, we have just published a short white paper that you should read on the Hybrid Cloud Industrial DMZ. We’ll also be discussing this in a free webinar on September 20, 2022.

Source: cisco.com

Sunday, 11 September 2022

Scale security on the fly in Microsoft Azure Cloud with Cisco Secure Firewall

The release of Microsoft Azure Gateway Load Balancer is great news for customers, empowering them to simply and easily add Cisco Secure Firewall capabilities to their Azure cloud infrastructure. By combining Azure Gateway Load Balancer with Cisco Secure Firewall, organizations can quickly scale their firewall presence across their Azure cloud environment, providing protection for infrastructure and applications exactly where and when they need it.

With applications and resources hyper-distributed across hybrid-multicloud environments, organizations require agile security to protect their environment at each control point. This integration empowers organizations to dynamically insert Cisco’s security controls and threat defense capabilities in their Azure environment, removing the clunkiness of provisioning and deploying firewalls, as well as the need to rearchitect the network. Organizations can now enjoy highly available threat defense on the fly, protecting their infrastructure and applications from known and unknown threats.

Securing cloud infrastructure while reducing complexity


Combining Secure Firewall with Azure Gateway Load Balancer offers a significant reduction in operational complexity when securing cloud infrastructure. Azure Gateway Load Balancer provides bump-in-the-wire functionality ensuring Internet traffic to and from an Azure VM, such as an application server, is inspected by Secure Firewall without requiring any routing changes. It also offers a single entry and exit point at the firewall and allows organizations to maintain visibility of the source IP address. Complementing these features, organizations can take advantage of our new Cloud-delivered Firewall Management Center. It enables organizations to manage their firewall presence 100% through the cloud with the same look and feel as they’ve grown accustomed to with Firewall Management Center. With Cloud-delivered Firewall Management Center, organizations will achieve faster time-to-value with simplified firewall deployment and management.

Benefits of Cisco Secure Firewall with Azure Gateway Load Balancer


◉ Secure Firewall lowers cloud spend with Azure Autoscale support – Quickly and seamlessly scale virtual firewall instances up and down to meet demand.

◉ De-risk projects by removing the need to re-architect – Effortlessly insert Cisco Secure Firewall in existing network architecture without changes, providing win/win outcomes across NetOps, SecOps, DevOps, and application teams.

◉ Firewalling where and when you need it – Easily deploy and remove Secure Firewall and its associated security services, including IPS, application visibility and control, malware defense, and URL filtering as needed in the network path.

◉ Greater visibility for your applications – Simplify enablement of your intended infrastructure by eliminating the need for source and destination NAT. No additional configuration needed.

◉ Health monitoring – Ensure efficient routing with continuous health-checks that monitor your virtual firewall instances via Gateway Load Balancer.

◉ Included Cisco Talos® Threat Intelligence – Protect your organization from new and emerging threats with rapid and actionable threat intelligence updated hourly from one of the world’s largest commercial threat intelligence teams, Cisco Talos.

Use-cases
Inbound


Figure 1: Inbound traffic flow to Cisco Secure Firewall with Azure Gateway Load Balancer

Figure 2: Inbound traffic flow to a stand-alone server

Outbound


Figure 3: Internal server is behind a public load balancer. Flow is the same as outbound flow for an inbound connection.

Figure 4: Outbound flow where the internal server is a stand-alone server.

Azure Gateway Load Balancer support for Cisco Secure Firewall Threat Defense Virtual is available now. To learn more about how Cisco Secure Firewall drives security resilience across your hybrid-multicloud environment, see the additional resources below and reach out to your Cisco sales representative.

Source: cisco.com