
Saturday 20 April 2024

Cisco Hypershield: Reimagining Security

It is no secret that cybersecurity defenders struggle to keep up with the volume and craftiness of current-day cyber-attacks. A significant reason for the struggle is that security infrastructure has yet to evolve to effectively and efficiently stymie modern attacks. The security infrastructure is either too unwieldy and slow or too destructive. When the security infrastructure is slow and unwieldy, the attackers have likely succeeded by the time the defenders react. When security actions are too drastic, they impair the protected IT systems to such an extent that the actions could be mistaken for the attack itself.

So, what does a defender do? The answer to the defender’s problem is a new security infrastructure — a fabric — that can autonomously create defenses and produce measured responses to detected attacks. Cisco has created such a fabric — Cisco Hypershield — that we discuss in the paragraphs below.

Foundational principles


We start with the foundational principles that guided the creation of Cisco Hypershield. These principles provide the primitives that enable defenders to escape the “damned-if-you-do and damned-if-you-don’t” situation we alluded to above.

Hyper-distributed enforcement

IT infrastructure in a modern enterprise spans privately run data centers (private cloud), public cloud, bring-your-own devices (BYOD) and the Internet of Things (IoT). In such a heterogeneous environment, centralized enforcement is inefficient as traffic must be shuttled to and from the enforcement point. The shuttling creates networking and security design challenges. The answer to this conundrum is the distribution of the enforcement point close to the workload.

Cisco Hypershield comes in multiple enforcement form factors to suit the heterogeneity in any IT environment:

1. Tesseract Security Agent: Here, security software runs on the endpoint server and interacts with the processes and the operating system kernel using the extended Berkeley Packet Filter (eBPF). eBPF is a software framework on modern operating systems that enables programs in user space (in this case, the Tesseract Security Agent) to safely carry out enforcement and monitoring actions via the kernel.
2. Virtual/Container Network Enforcement Point: Here, a software network enforcement point runs inside a virtual machine or container. Such enforcement points are instantiated close to the workload and protect fewer assets than the typical centralized firewall.
3. Server DPUs: Cisco Hypershield’s architecture supports server Data Processing Units (DPUs). Thus, in the future, enforcement can be placed on networking hardware close to the workloads by running a hardware-accelerated version of our network enforcement point in these DPUs. The DPUs offload networking and security processing from the server’s main CPU complex in a secure enclave.
4. Smart Switches: Cisco Hypershield’s architecture also supports smart switches. In the future, enforcement will be placed in other Cisco Networking elements, such as top-of-rack smart switches. While not as close to the workload as agents or DPUs, such switches are much closer than a centralized firewall appliance.
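The four form factors above can be thought of as a dispatch decision per workload. A minimal sketch of that decision follows; the names and selection criteria are our own illustrative assumptions, not Hypershield's actual placement logic:

```python
# Illustrative sketch (not Cisco code): choosing the enforcement form factor
# closest to a given workload. All names and criteria are hypothetical.

def choose_enforcement(workload):
    """Pick the enforcement point closest to where the workload runs."""
    if workload.get("agent_supported"):
        return "tesseract-agent"            # eBPF-based, on the endpoint itself
    if workload.get("runtime") in ("vm", "container"):
        return "virtual-enforcement-point"  # software enforcement next to the workload
    if workload.get("dpu_present"):
        return "server-dpu"                 # hardware-accelerated, on the server's DPU
    return "smart-switch"                   # top-of-rack: farther than an agent,
                                            # but closer than a central appliance

assert choose_enforcement({"runtime": "container"}) == "virtual-enforcement-point"
```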

Centralized security policy

The usual retort to distributed security enforcement is the nightmare of managing independent security policies per enforcement point. The cure for this problem is the centralization of security policy, which ensures that policy consistency is systematically enforced (see Figure 1).

Cisco Hypershield follows the path of policy centralization. No matter the form factor or location of the enforcement point, the policy being enforced is organized at a central location by Hypershield’s management console. When a new policy is created or an old one is updated, it is “compiled” and intelligently placed on the appropriate enforcement points. Security administrators always have an overview of the deployed policies, no matter the degree of distribution in the enforcement points. Policies are able to follow workloads as they move, for instance, from on-premises to the native public cloud.
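The "compile centrally, place intelligently" idea can be sketched in a few lines. This is a toy model under our own assumptions (class and method names are hypothetical), showing only the key property: when a workload moves, recompiling places its policy on the new enforcement point.

```python
# Illustrative sketch (not Hypershield's implementation): a central policy
# store "compiled" onto the enforcement points covering each workload,
# so policy follows the workload as it moves.

class PolicyManager:
    def __init__(self):
        self.policies = {}   # policy name -> set of workloads it covers
        self.locations = {}  # workload name -> current enforcement point

    def place_workload(self, workload, enforcement_point):
        self.locations[workload] = enforcement_point

    def set_policy(self, name, workloads):
        self.policies[name] = set(workloads)

    def compile(self):
        """Return the per-enforcement-point rule sets."""
        placed = {}
        for name, workloads in self.policies.items():
            for w in workloads:
                placed.setdefault(self.locations[w], set()).add(name)
        return placed

mgr = PolicyManager()
mgr.place_workload("billing-db", "dpu-rack1")
mgr.set_policy("deny-internet", ["billing-db"])
assert mgr.compile() == {"dpu-rack1": {"deny-internet"}}

# The workload migrates to the public cloud; recompiling moves the policy.
mgr.place_workload("billing-db", "cloud-agent-7")
assert mgr.compile() == {"cloud-agent-7": {"deny-internet"}}
```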

Figure 1: Centralized Management for Distributed Enforcement
 
Hitless enforcement point upgrade

The nature of security controls is such that they tend to get outdated quickly. Sometimes, this happens because a new software update has been released. Other times, new applications and business processes force a change in security policy. Traditionally, neither scenario has been accommodated well by enforcement points — both acts can be disruptive to the IT infrastructure and present a business risk that few security administrators want to undertake. A mechanism that makes software and policy updates normal and non-disruptive is called for!

Cisco Hypershield has precisely such a mechanism, called the dual dataplane. This dataplane supports two data paths: a primary (main) and a secondary (shadow). Traffic is replicated between the primary and the secondary. Software updates are first applied to the secondary dataplane, and when fully vetted, the roles of the primary and secondary dataplanes are switched. Similarly, new security policies can be applied first to the secondary dataplane, and when everything looks good, the secondary becomes the primary.

The dual dataplane concept enables security administrators to upgrade enforcement points without fear of business disruption (see Figure 2).
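The stage-then-swap mechanics of the dual dataplane can be sketched as follows. This is a simplified model under our own naming assumptions; the real dataplane also replicates live traffic to the shadow path for vetting:

```python
# Illustrative sketch of the dual-dataplane idea: updates land on the
# shadow path first; once vetted, the roles swap. Names are our own.

class DualDataplane:
    def __init__(self, version):
        self.primary = version   # serving live traffic
        self.shadow = version    # receiving replicated traffic

    def stage_update(self, new_version):
        """Apply a software or policy update to the shadow path only."""
        self.shadow = new_version

    def promote(self):
        """Swap roles after the shadow path has been fully vetted."""
        self.primary, self.shadow = self.shadow, self.primary

dp = DualDataplane("fw-1.0")
dp.stage_update("fw-1.1")
assert dp.primary == "fw-1.0"   # production untouched during vetting
dp.promote()
assert dp.primary == "fw-1.1"   # hitless cutover
```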

Figure 2: Cisco Hypershield Dual Dataplane 

Complete visibility into workload actions

Complete visibility into a workload’s actions enables the security infrastructure to establish a “fingerprint” for it. Such a fingerprint should include the types of network and file input-output (I/O) that the workload typically performs. When the workload takes an action that falls outside the fingerprint, the security infrastructure should flag it as an anomaly that requires further investigation.

Cisco Hypershield’s Tesseract Security Agent form factor provides complete visibility into a workload’s actions via eBPF, including network packets, file and other system calls and kernel functions. Of course, the agent alerts on anomalous activity when it sees it.
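The fingerprint concept reduces to a learned set of permitted actions. A minimal sketch, with our own hypothetical names and a deliberately simple membership test standing in for the agent's richer analytics:

```python
# Illustrative sketch of a workload "fingerprint": the set of
# (action, target) pairs observed during learning; anything outside
# that set is flagged as an anomaly for further investigation.

class Fingerprint:
    def __init__(self):
        self.known = set()

    def learn(self, action, target):
        self.known.add((action, target))

    def is_anomalous(self, action, target):
        return (action, target) not in self.known

fp = Fingerprint()
fp.learn("connect", "10.0.0.5:5432")    # normal database connection
fp.learn("write", "/var/log/app.log")   # normal file I/O
assert not fp.is_anomalous("write", "/var/log/app.log")
assert fp.is_anomalous("write", "/etc/passwd")   # flag for investigation
```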

Graduated response to risky workload behavior

Security tools amplify the disruptive capacity of cyber-attacks when they take drastic action on a security alert. Examples of such action include quarantining a workload or the entire application from the network and shutting down the workload or application. For workloads of marginal business importance, drastic action may be fine. However, taking such action for mission-critical applications (for example, a supply chain application for a retailer) often defeats the business rationale for security tools. The disruptive action hurts even more when the security alert turns out to be a false alarm.

Cisco Hypershield in general, and its Tesseract Security Agent in particular, can generate a graduated response. For example, Cisco Hypershield can respond to anomalous traffic with an alert rather than a block when instructed. Similarly, the Tesseract Security Agent can react to a workload attempting to write to a new file location with a denial of that write rather than a shutdown of the workload.
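A graduated response amounts to scaling the action to the workload's business importance and the configured mode. A sketch under our own assumptions (the criticality tiers and action names are hypothetical):

```python
# Illustrative sketch of a graduated response: the action taken on an
# alert depends on the workload's criticality and the configured mode.

def respond(anomaly, criticality, mode="enforce"):
    if mode == "alert-only":
        return "alert"                # observe and report, never disrupt
    if criticality == "mission-critical":
        return "deny-single-action"   # block just the risky operation
    return "quarantine"               # drastic action acceptable for marginal workloads

assert respond("new-file-write", "mission-critical") == "deny-single-action"
assert respond("new-file-write", "low") == "quarantine"
assert respond("odd-traffic", "mission-critical", mode="alert-only") == "alert"
```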

Continuous learning from network traffic and workload behavior

Modern-day workloads use services provided by other workloads. These workloads also access many operating system resources such as network and file I/O. Further, applications are composed of multiple workloads. A human security administrator can’t collate all the applications’ activity and establish a baseline. Reestablishing the baseline is even more challenging when new workloads, applications and servers are added to the mix. With this backdrop, manually determining anomalous behavior is impossible. The security infrastructure needs to do this collation and sifting on its own.

Cisco Hypershield has components embedded into each enforcement point that continuously learn the network traffic and workload behavior. The enforcement points periodically aggregate their learning into a centralized repository. Separately, Cisco Hypershield sifts through the centralized repository to establish a baseline for network traffic and workloads’ behavior. Cisco Hypershield also continuously analyzes new data from the enforcement points as the data comes in to determine if recent network traffic and workload behavior is anomalous relative to the baseline.
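The aggregate-then-compare loop can be sketched with a simple statistical baseline. The real analytics are far richer; the z-score here is only a stand-in to make the idea concrete:

```python
# Illustrative sketch of baselining: enforcement points report counters,
# a central repository aggregates them, and new samples are compared
# against the learned baseline (a simple z-score stands in for the
# actual analytics).

from statistics import mean, stdev

class Baseline:
    def __init__(self):
        self.samples = []

    def aggregate(self, reports):
        """Fold in per-enforcement-point counter reports."""
        self.samples.extend(reports)

    def is_anomalous(self, value, threshold=3.0):
        mu, sigma = mean(self.samples), stdev(self.samples)
        return abs(value - mu) > threshold * sigma

b = Baseline()
b.aggregate([100, 102, 98, 101])   # flows/minute reported over time
assert not b.is_anomalous(101)
assert b.is_anomalous(300)         # far outside learned behavior
```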

Autonomous segmentation


Network segmentation has long been a mandated necessity in enterprise networks. Yet, even after decades of investment, many networks remain flat or under-segmented. Cisco Hypershield provides an elegant solution to these problems by combining the primitives mentioned above. The result is a network autonomously segmented under the security administrator’s supervision.

The autonomous segmentation journey proceeds as follows:

  • The security administrator begins with top-level business requirements (such as isolating the production environment from the development environment) to deploy basic guardrail policies.
  • After initial deployment, Cisco Hypershield collects, aggregates, and visualizes network traffic information while running in an “Allow by Default” mode of operation.
  • Once there is sufficient confidence in the functions of the application, we move to “Allow but Alert by Default” and insert the known trusted behaviors of the application as Allow rules above this. The administrator continues to monitor the network traffic information collected by Cisco Hypershield. The monitoring leads to increased familiarity with traffic patterns and the creation of additional common-sense security policies at the administrator’s initiative.
  • Even as the guardrail and common-sense policies are deployed, Cisco Hypershield continues learning the traffic patterns between workloads. As the learning matures, Hypershield makes better (and better) policy recommendations to the administrator.
  • This phased approach allows the administrator to build confidence in the recommendations over time. At the outset, the policies are deployed only to the shadow dataplane. Cisco Hypershield provides performance data on the new policies on the secondary and existing policies on the primary dataplane. If the behavior of the new policies is satisfactory, the administrator moves them in alert-only mode to the primary dataplane. The policies aren’t blocking anything yet, but the administrator can get familiar with the types of flows that would be blocked if they were in blocking mode. Finally, with conviction in the new policies, the administrator turns on blocking mode, progressing towards the enterprise’s segmentation goal.
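The phased rollout in the steps above is essentially a one-way progression per policy. A minimal sketch, with stage names taken from the text and class names that are our own:

```python
# Illustrative sketch of the phased rollout: a recommended policy moves
# from the shadow dataplane, to alert-only on the primary, to blocking.

STAGES = ["shadow", "alert-only", "blocking"]

class SegmentationPolicy:
    def __init__(self, rule):
        self.rule = rule
        self.stage = "shadow"   # new recommendations start on the shadow dataplane

    def promote(self):
        """Advance one stage once the administrator is satisfied."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

p = SegmentationPolicy("deny dev -> prod")
assert p.promote() == "alert-only"   # shadow-dataplane results look good
assert p.promote() == "blocking"     # conviction reached: enforce the segment
assert p.promote() == "blocking"     # already at the final stage
```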

The administrator’s faith in the security fabric — Cisco Hypershield — deepens after a few successful runs through the segmentation process. Now, the administrator can let the fabric do most of the work, from learning to monitoring to recommendations to deployment. Should there be an adverse business impact, the administrator knows that rollback to a previous set of policies can be accomplished easily via the dual dataplane.

Distributed exploit protection


Patching known vulnerabilities remains an intractable problem given the complex web of events — patch availability, patch compatibility, maintenance windows, testing cycles, and the like — that must transpire to remove the vulnerability. At the same time, new vulnerabilities continue to be discovered at a frenzied pace, and attackers continue to shrink the time between the public release of new vulnerability information and the first exploit. The result is that the attacker’s options towards a successful exploit increase with time.

Cisco Hypershield provides a neat solution to the problem of vulnerability patching. In addition to its built-in vulnerability management capabilities, Hypershield will integrate with Cisco’s and third-party commercial vulnerability management tools. When information on a new vulnerability becomes available, the vulnerability management capability and Hypershield coordinate to check for the vulnerability’s presence in the enterprise’s network.

If an application with a vulnerable workload is found, Cisco Hypershield can protect it from exploits. Cisco Hypershield already has visibility into the affected workload’s interaction with the operating system and the network. At the security administrator’s prompt, Hypershield suggests compensating controls. The controls are a combination of network security policies and operating system restrictions and derive from the learned steady-state behavior of the workload preceding the vulnerability disclosure.

The administrator installs both types of controls in alert-only mode. After a period of testing to build confidence in the controls, the operating system controls are moved to blocking mode. The network controls follow the same trajectory as those in autonomous segmentation. They are first installed on the shadow dataplane, then on the primary dataplane in alert-only mode, and finally converted to blocking mode. At that point, the vulnerable workload is protected from exploits.
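Deriving network compensating controls from learned steady-state behavior can be sketched as follows: allow what was observed before the disclosure, and deny everything else, initially in alert-only mode. Rule fields and function names are our own illustrative assumptions:

```python
# Illustrative sketch: build compensating controls for a vulnerable
# workload from its learned steady-state flows, allowing only observed
# behavior and alerting on (later blocking) everything else.

def compensating_controls(observed_flows):
    """observed_flows: set of (peer, port) pairs seen in steady state."""
    rules = [{"action": "allow", "peer": peer, "port": port}
             for peer, port in sorted(observed_flows)]
    rules.append({"action": "deny", "peer": "any", "port": "any",
                  "mode": "alert-only"})  # tightened to blocking after vetting
    return rules

steady_state = {("10.0.0.5", 5432), ("10.0.0.9", 443)}
rules = compensating_controls(steady_state)
assert rules[0]["action"] == "allow"
assert rules[-1] == {"action": "deny", "peer": "any", "port": "any",
                     "mode": "alert-only"}
```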

During the process described above, the application and the workload continue functioning, and there is no downtime. Of course, the vulnerable workload should eventually be patched if possible. The security fabric enabled by Cisco Hypershield just happens to provide administrators with a robust yet precise tool to fend off exploits, giving the security team time to research and fix the root cause.

Conclusion

In both the examples discussed above, we see Cisco Hypershield function as an effective and efficient security fabric. The innovation powering this fabric is underscored by it launching with several patents pending.

In the case of autonomous segmentation, Hypershield turns flat and under-segmented networks into properly segmented ones. As Hypershield learns more about traffic patterns and security administrators become comfortable with its operations, the segments become tighter, posing more significant hurdles for would-be attackers.

In the case of distributed exploit protection, Hypershield automatically finds and recommends compensating controls. It also provides a smooth and low-risk path to deploying these controls. With the compensating controls in place, the attacker’s window of opportunity between the vulnerability’s disclosure and the software patching effort disappears.

Source: cisco.com

Thursday 4 June 2020

Umbrella with SecureX built-in: Coordinated Protection


Cybercriminals have been refining their strategies and tactics for over twenty years and attacks have been getting more sophisticated. A successful cyberattack often involves a multi-step, coordinated effort. Research on successful breaches shows that hackers are very thorough with the information they collect and the comprehensive plans they execute to understand the environment, gain access, infect, move laterally, escalate privileges and steal data.

An attack typically includes at least some of the following steps:

◉ reconnaissance activities to find attractive targets
◉ scanning for weaknesses that present a good entry point
◉ stealing credentials
◉ gaining access and privileges within the environment
◉ accessing and exfiltrating data
◉ hiding past actions and ongoing presence

This whole process is sometimes called the “attack lifecycle” or “kill chain,” and a successful attack requires a coordinated effort throughout the process. The steps above involve many different elements across the IT infrastructure, including email, networks, authentication, endpoints, SaaS instances, multiple databases and applications. The attacker has the ability to plan in advance and use multiple tactics along the way to get to the next step.

Security teams have been busy over the past couple of decades as well.  They have been building a robust security practice consisting of tools and processes to track activities, provide alerts and help with the investigation of incidents.  This environment was built over time and new tools were added as different attack methods were developed. However, at the same time, the number of users, applications, infrastructure types, and devices has increased in quantity and diversity.  Networks have become decentralized as more applications and data have moved to the cloud. In most instances, the security environment now includes over 25 separate tools spanning on-prem and cloud deployments. Under these conditions, it’s difficult to coordinate all of the activities necessary to block threats and quickly identify and stop active attacks.

As a consequence, organizations are struggling to get the visibility they need across their IT environment and to maintain their expected level of effectiveness. They are spending too much time integrating separate products and trying to share data and not enough time quickly responding to business, infrastructure, and attacker changes.  The time has come for a more coordinated security approach that reduces the number of separate security tools and simplifies the process of protecting a modern IT environment.

Cisco Umbrella with SecureX can make your security processes more efficient by blocking more threats early in the attack process and simplifying the investigation and remediation steps. Umbrella handles over 200 billion internet requests per day and uses fine-tuned models to detect and block millions of threats. This “first-layer” of defense is critical because it minimizes the volume of malicious activity that makes its way deeper into your environment.  By doing this, Umbrella reduces the stress on your downstream security tools and your scarce security talent.  Umbrella includes DNS Security, a secure web gateway, cloud-delivered firewall, and cloud access security broker (CASB) functionality. But no one solution is going to stop all threats or provide the quickly adapting environment described above. You need to aggregate data from multiple security resources to get a coordinated view of what’s going on in your environment but can’t sink all your operating expenses into simply establishing and maintaining the integrations themselves.

That’s where Cisco SecureX comes in. Cisco SecureX connects the breadth of Cisco’s integrated security portfolio – including Umbrella– and your other security tools for a consistent experience that unifies visibility, enables automation, and strengthens your security across network, endpoints, cloud, and applications. Let’s explore some of the capabilities of SecureX, the Cisco security platform and discuss what they mean in the context of strengthening breach defense.

◉ Visibility: Our SecureX platform provides visibility with one consolidated view of your entire security environment. The SecureX dashboard can be customized to view operational metrics alongside your threat activity feed and the latest threat intelligence. This allows you to save time that was otherwise spent switching consoles. With the SecureX threat response feature, you can accelerate threat investigation and take corrective action in under two clicks.

◉ Automation: You can increase the efficiency and precision of your existing security workflows via automation to advance your security maturity and stay ahead of an ever-changing threat landscape. SecureX’s pre-built, customizable playbooks enable you to automate workflows for phishing and threat hunting use cases. SecureX automation allows you to build your own workflows, including collaboration and approval workflow elements, to more effectively operate as a team. It enables your teams to share context between SecOps, ITOps, and NetOps to harmonize security policies and drive stronger outcomes.

◉ Integration: With SecureX, you can advance your security maturity by connecting your existing security infrastructure via out-of-the-box interoperability with third party solutions. In addition to the solution-level integrations we’ve already made available; new, broad, platform-level integrations have also been and continue to be developed. In short, you’re getting more functionality out of the box so that you can multiply your use cases and realize stronger outcomes.


Pre-built playbooks focus on common security use cases, and you can easily build your own using an intuitive, drag-and-drop interface. One example of the coordination between Umbrella and SecureX is in the area of phishing protection and investigation. Umbrella provides protection against a wide range of phishing attacks by blocking connections to known bad domains and URLs. SecureX extends this protection with a phishing investigation workflow that allows your users to forward suspicious email messages from their inbox to a dedicated inspection mailbox, which starts an automated investigation and enrichment process. This process pulls in data from multiple solutions, including Umbrella, email security, endpoint protection, threat response and malware analysis tools. Suspicious email messages are scraped for various artifacts and inspected in the Threat Grid sandbox. If malicious artifacts are identified, a coordinated response action, including approvals, is carried out automatically, in alignment with your regular operations process.
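The investigation flow described above can be sketched as a small pipeline. The steps, verdict names, and the known-bad set standing in for Umbrella and Threat Grid verdicts are all our own illustrative assumptions:

```python
# Illustrative sketch (hypothetical names and verdicts) of a phishing
# investigation workflow: scrape artifacts from a forwarded message,
# enrich them against intelligence, and propose a response pending approval.

def scrape_artifacts(message):
    return {"urls": message.get("urls", []),
            "attachments": message.get("attachments", [])}

def enrich(artifacts, intel):
    """intel: a set of known-bad indicators, standing in for real verdicts."""
    return [u for u in artifacts["urls"] if u in intel]

def investigate(message, intel):
    hits = enrich(scrape_artifacts(message), intel)
    if hits:
        return {"verdict": "malicious", "action": "block-domains",
                "needs_approval": True, "indicators": hits}
    return {"verdict": "clean", "action": "none", "needs_approval": False}

intel = {"badsite.example"}
report = investigate({"urls": ["badsite.example", "ok.example"]}, intel)
assert report["verdict"] == "malicious" and report["needs_approval"]
```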

The SecureX platform is included with Cisco security solutions to advance the value of your investment. It connects Cisco’s integrated security portfolio, your other security tools and existing security infrastructure with out-of-the-box interoperability for a consistent experience that unifies visibility, enables automation, and strengthens your security across network, endpoints, cloud, and applications.

Saturday 29 June 2019

Using Amazon Web Services? Cisco Stealthwatch Cloud has all your security needs covered

Like many consumers of public cloud infrastructure services, organizations that run workloads in Amazon Web Services (AWS) face an array of security challenges that span from traditional threat vectors to the exploitation of more abstract workloads and entry points into the infrastructure.

This week at AWS re:Inforce, a new feature for AWS workload visibility was announced – AWS Virtual Private Cloud (VPC) Traffic Mirroring. This feature allows for a full 1:1 packet capture of the traffic flowing within and in/out of a customer’s VPC environment, enabling vendors to provide visibility into all AWS traffic and to perform network and security analytics. Cisco Stealthwatch Cloud is able to fully leverage VPC Traffic Mirroring for transactional network conversation visibility, threat detection and compliance risk alerting.

Stealthwatch Cloud is actually unique in that we have had this level of traffic visibility and security analytics deep within an AWS infrastructure for a number of years now with our ability to ingest AWS VPC Flow Logs. VPC Flow Logs allow for a parallel level of visibility in AWS without having to deploy any sensors or collectors. This method of infrastructure visibility allows for incredibly easy deployment within many AWS VPCs and accounts at scale in a quick-to-operationalize manner with Stealthwatch Cloud’s SaaS visibility and threat detection solution. In fact, you can deploy Stealthwatch Cloud within your AWS environment in as little as 10 minutes!
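To make the flow-log ingestion concrete, here is a minimal parser for the default AWS VPC Flow Log record: fourteen space-separated version-2 fields, as documented by AWS. The parsing code is our own sketch, not Stealthwatch's implementation:

```python
# A minimal parser for the default AWS VPC Flow Log record format
# (14 space-separated version-2 fields).

FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line):
    record = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes"):
        record[key] = int(record[key])
    return record

sample = ("2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log(sample)
assert rec["dstport"] == 22 and rec["action"] == "ACCEPT"   # an accepted SSH flow
assert rec["bytes"] == 4249
```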

Additionally, we are seeing that the majority of customer traffic in, out and within a VPC is encrypted. Stealthwatch Cloud is designed from the ground up to assume that the traffic is encrypted and to model every entity and look for threats leveraging a multitude of data points regardless of payload.

Stealthwatch Cloud takes the AWS visibility and protection capability even deeper by leveraging the AWS API to retrieve a wide array of telemetry from the AWS backend to tell a richer story of what’s actually going on throughout the AWS environment, far beyond just monitoring the network traffic itself. We illuminate API keys, user accounts, CloudTrail audit log events, instance tags, abstract services such as Redshift, RDS, Inspector, ELBs, Lambdas, S3 buckets, Nat Gateways and many other services many of our customers are using beyond just VPCs and EC2 instances.

Here is a screenshot from the customer portal with just a sample of the additional value Stealthwatch Cloud offers AWS customers in addition to our network traffic analytics:


The following screenshot shows how we are able to extend our behavioral anomaly detection and modeling far beyond just EC2 instances and are able to learn “known good” for API keys, user accounts and other entry points into the environment that customers need to be concerned about:


Combine this unique set of rich AWS backend telemetry with the traffic analytics that we can perform with either VPC Flow Logs or VPC Traffic Mirroring, and we are able to ensure that customers are protected regardless of where the threat vector into their AWS deployment may exist – at the VPC ingress/egress, at the AWS web login screen or leveraging API keys.  Cisco is well aware that our customers are using a broad set of services in AWS that stretch from virtual machines to serverless and Kubernetes.  Stealthwatch Cloud is able to provide the visibility, accountability and threat detection across the Kill Chain in any of these environments today.

Friday 26 April 2019

The New Network as a Sensor

Before we get into this, we need to talk about what the network as a sensor was before it was new. Conceptually, instead of having to install a bunch of sensors to generate telemetry, the network itself (routers, switches, wireless devices, etc.) would deliver the necessary and sufficient telemetry to describe the changes occurring on the network to a collector and then Stealthwatch would make sense of it.

The nice thing about the network as a sensor is that the network itself is the most pervasive. So, in terms of an observable domain and the changes within that domain, it is a complete map. This was incredibly powerful. If we go back to when NetFlow was announced, or to a later version like v9 or IPFIX, we had a very rich set of telemetry coming from the routers and switches that described all the network activity: who’s talking to whom, for how long, all the things that we needed to detect insider threats and global threats. The interesting thing about this telemetry model is that threat actors can’t hide in it. They need to generate this telemetry or their traffic is not actually going to traverse the network. It’s a source of telemetry that’s true for both the defender and the adversary.

The Changing Network


Networks are changing. The data centers we built in the 90’s and 2000’s and the enterprise networking we did back then are different from what we’re seeing today. Certainly, there is a continuum here, and you as a customer fall somewhere along it. You may have fully embraced the cloud, held fast to legacy systems, or still have a foot in both to some degree. When we look at this continuum, we see the origins back when compute was very physical – so-called bare metal, when imaging from optical drives was the norm and rack units were a very real unit of measure within your datacenter. We then saw a lot of hypervisors when the age of VMware and KVM came into being. The network topology changed because the guest-to-guest traffic didn’t really touch any optical or copper wire but essentially traversed what was memory inside that host to cross the hypervisor. So, Stealthwatch had to adapt and make sure something was there to observe behavior and generate telemetry.

Moving closer to the present we had things like cloud native services, where people could just take their guest virtual machines from their private hypervisors and run them on the service provider’s networked infrastructure. This was the birth of the public cloud and where the concept of infrastructure as a service (IaaS) began. This is also how a lot of services, including Cisco services and many of the services you use today, are run to this day. Recently, we’ve seen the rise of Docker containers, which in turn gave rise to the orchestration of Kubernetes. Now, a lot of people have systems running in Kubernetes with containers that run at incredible scale that can adapt to the changing workload demand. Finally, we have serverless. When you think of the network as a sensor, you have to think of the network in these contexts and how it can actually generate telemetry. Stealthwatch is always there to make sense of that telemetry and deliver the analytic outcome of discovering insider threats and global threats. Think of Stealthwatch as the general ledger of all of the activity that takes place across your digital business.

Now that we’ve looked at how networks have evolved, we’re going to slice the new network as a sensor into three different stories. In this blog, we’ll look at two of these three transformative trends that are in everyone’s life to some degree. Typically, when we talk about innovation, we’re talking about threat actors and the kinds of threats we face. When threats evolve, defenders are forced to innovate to counter. Here however, I’m talking about transformative changes that are important to your digital business in varying ways. We’re going to take them one by one and explain what they are and how they change what the network is and how it can be a sensor to you.

Cloud Native Architecture


Now we’re seeing the dawn of serverless via things like AWS Lambda. For those that aren’t familiar, think of serverless as something like Uber for code. You don’t want to own a car or learn how to drive but just want to get to your destination. The same concept applies to serverless: you just want your code to run and you want the output. Everything else – the machine, the supporting applications, and everything that runs your code – is owned and operated by the service provider. In this particular situation, things change a lot. In this instance, you don’t own the network or the machines. Serverless computing is a cloud computing execution model in which cloud solution providers dynamically manage the allocation of machine resources (i.e. the servers).

So how do you secure a server when there’s no server?

Stealthwatch Cloud does it by dynamically modeling the server (that does not exist) and holding that state over time as it analyzes changes described by the cloud-native telemetry. We take in a lot of metadata and build a model, in this case of a server, and as everything changes around this model over time, we hold state as if there really was a server. We then perform the same type of analytics, trying to detect potential anomalies that would be of interest to you.


In this image you can see that the modeled device has, in a 24-hour period, changed IP address and even its virtual interfaces whereby IP addresses can be assigned. Stealthwatch Cloud creates a model of a server to solve the serverless problem and treats it like any other endpoint on your digital business that you manage.


This “entity modeling” that Stealthwatch Cloud performs is critical to the analytics of the future because even in this chart, you would think you are just managing that bare metal or virtual server over long periods of time. But believe it or not, these traffic trends represent a server that was never really there! Entity modeling allows us to perform threat analytics within cloud native serverless environments like these. Entity modeling is one of the fundamental technologies in Stealthwatch and you can find out more about it here.
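The essential property of entity modeling is that state is keyed to a stable entity identity rather than to an IP address, so the behavioral record survives address changes. A minimal sketch under our own naming assumptions:

```python
# Illustrative sketch of entity modeling: the model is keyed by a stable
# entity identity, not an IP address, so learned state persists as the
# (possibly non-existent) server changes addresses.

class EntityModel:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.ip_history = []   # addresses the entity has used over time
        self.traffic = []      # one continuous behavioral record

    def observe(self, ip, bytes_sent):
        if not self.ip_history or self.ip_history[-1] != ip:
            self.ip_history.append(ip)   # the "server" moved; the model persists
        self.traffic.append(bytes_sent)

m = EntityModel("billing-service")
m.observe("10.0.1.4", 1200)
m.observe("10.0.9.77", 1150)   # new IP within 24 hours, same entity
assert m.ip_history == ["10.0.1.4", "10.0.9.77"]
assert len(m.traffic) == 2     # behavior tracked across the address change
```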

We’re not looking at blacklists of things like threat-actor IP addresses or fully qualified domain names. There is no list of bad things; instead, Stealthwatch Cloud tells you about events of interest that have not yet made their way onto any list. It catches things you did not know to put on a list, things in potential gray areas that really should be brought to your attention.

Software Defined Networks: Underlay & Overlay Networks


When we look at overlay networks, we’re really talking about software-defined networks and the encapsulation that happens on top of them. The oldest example is probably Multiprotocol Label Switching (MPLS), but today you have techniques like VXLAN and TrustSec. The appeal is that instead of having to renumber your network to represent your segmentation, you use encapsulation to express the desired segmentation policy of the business. The overlay network uses encapsulation to define policy based on labels rather than destination-based routing. When we look at something like SD-WAN, we see the traditional network architectural model changing: you still have the access layer, or edge, of your network, but everything else in the middle is now a programmable mesh, so you can concentrate on your access policy rather than on the complexity of the underlay’s IP addressing scheme.

For businesses that have fully embraced software-defined networking of any type, the underlay is a lie! The underlay is still an observational domain for change, and its telemetry is still valid, but it does not represent what is going on in the overlay network. To see that, you either need access to the overlay’s native telemetry or sensors that can generate telemetry that includes the overlay labeling.

Enterprise networking becomes about as easy to set up as a home network, which is an incredibly exciting prospect. Whether your edge is a regular branch office, a hypervisor in a private cloud, or IaaS in a public cloud, traffic entering the rest of the Internet crosses an overlay network that describes where things should go and provisions the necessary virtual circuits. When we look at how this relates to Stealthwatch, there are a few key things to consider. Stealthwatch gets the underlay information from NetFlow or IPFIX. If it has virtual sensors sitting on hypervisors or the like, it can interpret the overlay labels (or tags), faithfully representing the overlay. Lastly, Stealthwatch can interface with the software-defined networking (SDN) controller itself so it can make sense of the overlay. The job of Stealthwatch is to put together the entire story of who is talking to whom and for how long, taking into account not just the underlay but also the overlay.
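As a small illustration of what “telemetry that includes the overlay labeling” involves, here is a sketch of extracting the overlay segment identifier (the 24-bit VNI) from a VXLAN header, following the frame layout in RFC 7348. The function name is illustrative; this is not a Stealthwatch API.

```python
from typing import Optional

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN port (RFC 7348)

def parse_vxlan_vni(udp_payload: bytes) -> Optional[int]:
    """Extract the VXLAN Network Identifier from an 8-byte VXLAN header.

    Per RFC 7348 the header is: flags (1 byte, 0x08 = VNI valid),
    3 reserved bytes, VNI (3 bytes), 1 reserved byte.
    """
    if len(udp_payload) < 8:
        return None
    if not udp_payload[0] & 0x08:   # the I flag must be set for a valid VNI
        return None
    return int.from_bytes(udp_payload[4:7], "big")

# A sensor that understands the encapsulation can attribute inner traffic to
# the right overlay segment instead of lumping it into the underlay tunnel.
header = bytes([0x08, 0, 0, 0]) + (5001).to_bytes(3, "big") + bytes([0])
```

With the VNI in hand, a flow record can carry both the underlay 5-tuple and the overlay segment label.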

Wednesday 6 March 2019

Cisco Stealthwatch Cloud and Microsoft Azure: reliable cloud infrastructure meets comprehensive cloud security

Isn’t it great when the enterprise technology solutions you use to achieve various business outcomes partner and work seamlessly with each other? Cisco and Microsoft have done just that to provide you with a scalable and high-performance cloud infrastructure along with easy and effective cloud security.

In 10 minutes or less, Cisco Stealthwatch Cloud extends visibility, threat detection, and compliance verification to Microsoft Azure without agents or additional sensor deployments within your cloud environment.

A new way to think about security


Enterprises are continuously adopting the public cloud for many reasons, whether it’s greater scalability, better access to resources, cost savings, increased efficiency, faster time to market, or overall higher performance. While the move to the cloud offers great opportunities, it also means that the old ways of thinking about security aren’t working for most organizations anymore, especially when it comes to visibility in the cloud.


Often this lack of visibility leads to challenges surrounding network traffic analysis, identity and access management, compliance and regulation, and threat investigation. We all know of organizations that have made configuration-related security mistakes and inadvertently exposed their private data, with serious repercussions. Of course, training can be improved, configurations checked, and automated tools used to validate configuration parameters, but these efforts only address the preventative aspects of security practice. Organizations also need to actively watch what is actually happening with their cloud assets and catch the threats that aren’t prevented. Active breach detection starts with improved visibility. Complete visibility gives you a way to protect your cloud infrastructure in real time, so you can be agile and address issues as they arise.

Cloud security: a shared responsibility


While your cloud provider manages security of the cloud, security in the cloud is the responsibility of the customer. You as a customer retain control of what security you choose to implement in the cloud to protect your content, platform, applications, systems and networks, no differently than you would in your company’s private datacenter.

How do you know what is happening to data in the cloud? How do you know you’ve configured your cloud assets to be secure? How do you recognize cloud assets starting to communicate with new, possibly hostile internet sites?  How do you do it in real time and quickly enough to mitigate data loss?

To answer these questions, it’s critical to have an active breach detection solution for your public cloud. And for that solution to be effective, the cloud provider needs to enable the right visibility to tap into valuable cloud network and configuration telemetry. 

Cisco and Microsoft: better together



In the continuous effort to provide customers with industry-leading solutions, Cisco has been working with Microsoft to bring Cisco Stealthwatch Cloud to Azure. Stealthwatch Cloud, a software-as-a-service (SaaS) active breach detection solution based on security analytics, can now deliver comprehensive visibility and effective threat detection in Azure environments in as little as 10 minutes.

Traditionally, organizations have tried to overlay a patchwork of agents across cloud assets to detect bad activity. This approach requires significant cost and effort to deploy, maintain, and manage in dynamic environments such as the cloud, and importantly, its cost frequently doesn’t scale with your cloud environment. Stealthwatch Cloud, by contrast, deploys within your Azure environment with no need for agents and scales up and down according to your actual cloud traffic utilization.

How does it work?


Microsoft provides Azure Network Security Group (NSG) flow logs that contain valuable information on north-south and east-west traffic within an Azure virtual network. Flow logs show outbound and inbound flows on a per-flow basis: the network interface (NIC) the flow applies to, 5-tuple information about the flow (source/destination IP, source/destination port, and protocol), whether the traffic was allowed or denied, and, in version 2, throughput information (bytes and packets, plus the NSG rule applied to the traffic). Organizations use this information to audit activity on their cloud network. Stealthwatch Cloud can natively consume NSG flow logs v2 via APIs, without having to deploy any agents or sensors.
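To show what that telemetry looks like in practice, here is a hedged sketch of parsing one NSG flow log v2 "flowTuple" string. The field order below follows Azure’s documented v2 format as I understand it; verify it against the current Azure Network Watcher documentation before relying on it.

```python
from typing import Optional

def parse_flow_tuple(tuple_str: str) -> dict:
    """Decode a single NSG flow log v2 comma-separated flow tuple."""
    f = tuple_str.split(",")

    def as_int(v: str) -> Optional[int]:
        # Throughput counters may be empty for non-terminal flow states.
        return int(v) if v else None

    return {
        "timestamp": int(f[0]),
        "src_ip": f[1],
        "dst_ip": f[2],
        "src_port": int(f[3]),
        "dst_port": int(f[4]),
        "protocol": {"T": "TCP", "U": "UDP"}.get(f[5], f[5]),
        "direction": {"I": "inbound", "O": "outbound"}.get(f[6], f[6]),
        "decision": {"A": "allowed", "D": "denied"}.get(f[7], f[7]),
        "state": f[8],                 # B=begin, C=continuing, E=end (v2 only)
        "packets_out": as_int(f[9]),   # v2 throughput: source-to-destination
        "bytes_out": as_int(f[10]),
        "packets_in": as_int(f[11]),   # destination-to-source
        "bytes_in": as_int(f[12]),
    }

rec = parse_flow_tuple(
    "1542110377,10.0.0.4,52.239.184.180,59831,443,T,O,A,E,25,4837,33,42580"
)
```

Records like `rec` are exactly the per-flow observations an analytics engine can aggregate per entity.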

Additionally, Microsoft has also introduced Azure virtual network TAP (Terminal Access Point) that allows you to continuously and easily stream your virtual machine network traffic to Stealthwatch Cloud like a traditional, physical network SPAN or TAP. You can add a TAP configuration on a network interface that is attached to a virtual machine deployed in your virtual network. The destination is a virtual network IP address in the same virtual network as the monitored network interface or a peered virtual network. This approach provides access to not just flow logs, but also other network traffic like DNS data.


Stealthwatch Cloud can be powered by both NSG flow logs v2 and vTAP data. Stealthwatch Cloud analyzes this data using entity modeling to identify suspicious and malicious activity. For every active entity on the network, Stealthwatch Cloud builds a behavioral model – a simulation of sorts – to understand what the entity’s role is, how it normally behaves, and what resources it normally communicates with. Then it uses this model to identify changes in behavior consistent with misuse, malware, compromise, or other threats.

For instance, if an Azure resource normally only communicates with internal hosts, but suddenly it begins sending large amounts of data to an unknown external server, it could be a sign of data exfiltration. Stealthwatch Cloud would detect this behavior in real-time and alert your security team.
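The exfiltration example above can be sketched as a simple baseline-and-deviation check: model an entity’s normal outbound byte counts, then flag a large departure from that baseline. The threshold and window here are illustrative assumptions, not Stealthwatch Cloud’s actual analytics.

```python
import statistics

def is_exfil_anomaly(history_bytes: list, new_bytes: int,
                     sigma: float = 4.0) -> bool:
    """Flag an observation that deviates far above the entity's baseline.

    history_bytes: recent per-interval outbound-to-external byte counts.
    sigma: how many standard deviations above the mean counts as anomalous.
    """
    if len(history_bytes) < 2:
        return False                        # not enough baseline yet
    mean = statistics.mean(history_bytes)
    stdev = statistics.pstdev(history_bytes) or 1.0
    return (new_bytes - mean) / stdev > sigma

# Normal daily external traffic for an internal-only Azure resource...
baseline = [1200, 900, 1100, 1000, 1300]
```

A sudden multi-hundred-kilobyte burst to an unknown external server would clear the threshold and raise an alert, while ordinary fluctuation would not.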

Sunday 30 December 2018

A Hybrid Cloud Solution to Improve Service Provider Revenue

Media and telecom service providers serve millions of customers, and it is a challenge to monitor and ensure that customers have a satisfactory experience with their services. Service providers incur high operating costs through customer support and truck rolls. Reactive customer support often causes customer dissatisfaction, resulting in churn and revenue loss. A large volume and variety of data (network, CPE, billing, customer issues, etc.) is maintained across multiple systems but is underutilized for adding business value. Different business units work in silos, and the lack of an integrated customer profile leads to immature marketing efforts, unsatisfactory customer experiences, and lost business opportunities. Common roadblocks to business improvement include:

◈ Lack of consolidated data & accurate insights
◈ Extended cycle time to process data and delay in access to insights
◈ Dependency on legacy systems to process data

Barriers to business improvement

A container-based, hybrid cloud solution


The POC described below is a container-based, hybrid cloud analytics solution that helps service providers understand their customers better. It provides a unified view of end customers, helping providers improve their services and grow their business.

Inputs to gain customer insights

POC scope


Customer churn analysis and prediction

◈ Aggregate data from different sources (billing, customer support, service usage, CPE telemetry, etc.), create an integrated view of customer data, and analyze churn
◈ Implement a simple churn prediction model using a hybrid cloud service

Tools and services used

◈ Cisco Container Platform for CI/CD and management of microservices
◈ GCP Pub/Sub for data aggregation
◈ GCP Datalab for data exploration
◈ GCP Dataflow for stream and batch processing of data
◈ GCP BigQuery for analysis and BigQuery ML for churn prediction

Solutions architecture

Solution diagram

Model training and serving with Google Cloud Platform:

Model Prediction Data Flow

Overview of steps involved to develop the POC


1. Preliminary analysis is performed on data consolidated across all (US) regions, for example, customer sentiment analysis. Once this data is ready, with all the feature labeling and so on, Cisco Container Platform (CCP) and Google Cloud Platform (GCP) are leveraged to gain meaningful insights from it.

2. The service catalog is installed on the master node of the CCP cluster. It provisions and binds service instances using the registered service broker. A custom application leverages these service bindings to enable true hybrid cloud use cases.

3. On the CCP platform, the Pub/Sub application posts the media/telecom customer data to GCP Pub/Sub.

4. Once data is published to GCP Pub/Sub topics by the periodic batch program, the published data objects are consumed by a Cloud Dataflow job.

5. Cloud Dataflow lets the user create and run a job from Google’s predefined Pub/Sub-to-BigQuery dynamic template, which implicitly initializes a pipeline that consumes data from the topics and ingests it into the BigQuery dataset configured when the Dataflow job was created.

6. Once the Dataflow template job starts, it consumes data objects from the input topics and ingests them into the BigQuery table as a pipeline. This table data is then explored using Datalab, and the required data pre-processing steps, such as removing null values, scaling features, and finding correlations among features, are performed (see the Model Prediction Data Flow diagram above). The data is then returned to BigQuery for ML modeling.

7. The ML model built with BigQuery ML is used to predict customer churn for subsequent data received.

8. The processed churn data is retrieved via the service broker into CCP and later consumed by the UI.
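The core of step 1, building an integrated view of customer data from siloed sources, can be sketched as a simple join by customer ID, followed by a naive churn-risk score. All field names and weights here are hypothetical; the POC itself trains its real model with BigQuery ML.

```python
def integrate(*sources):
    """Merge records from billing, support, usage, etc. into one
    per-customer view, keyed by customer_id."""
    customers = {}
    for source in sources:
        for rec in source:
            customers.setdefault(rec["customer_id"], {}).update(rec)
    return customers

def churn_score(c: dict) -> float:
    """Toy churn-risk heuristic over the integrated view (illustrative
    weights, not the POC's trained model)."""
    score = 0.5 * c.get("open_tickets", 0)          # support friction
    score += 0.3 if c.get("late_payments", 0) > 2 else 0.0  # billing trouble
    score -= 0.1 * c.get("services_used", 0)        # engagement lowers risk
    return score

billing = [{"customer_id": "c1", "late_payments": 3}]
support = [{"customer_id": "c1", "open_tickets": 4}]
usage = [{"customer_id": "c1", "services_used": 2}]
view = integrate(billing, support, usage)
```

In the POC, the same integrated view lives in BigQuery, and the scoring step is replaced by a model trained with BigQuery ML.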

Dashboard

1. From the solution dashboard (see the sample screenshot below), service providers can view the forecasted churn by region, service, and reason. Customer-reported issues and the services currently used by customers can also be visualized.

2. The solution dashboard allows service providers to take quick action, for instance improving the wireless or 4K streaming service, thereby preventing customer churn.

Customer Insights Dashboard

Solution Demo Video

Wednesday 19 December 2018

How Stealthwatch Cloud protects against the most critical Kubernetes vulnerability to-date, CVE-2018-1002105

The increasing popularity of cloud computing technologies such as serverless, on-demand compute, and containerized environments has made technologies like Kubernetes part of our daily vernacular when it comes to running our applications and workloads.


Kubernetes solves many of the problems of managing containers at scale. Automation, orchestration, and elasticity are a few of the major draws for organizations to leverage Kubernetes, whether in the cloud, on premises, or hybrid. Kubernetes creates a network abstraction layer around the siloed containers that makes this possible. Think of it as a wide-open highway that lets you route among the many containers that are actually performing your workloads. From web servers to database servers, these containers are the flexible, scalable workhorses of an organization.

With great accessibility comes a drawback, however. Should an attacker gain access to a pod, a node or an internal Kubernetes service, then part or all of that cluster is at risk of compromise. Couple that with the fact that in many instances Docker containers are actually running scaled down Linux operating systems like CoreOS or Alpine Linux. Should one of those containers become exposed to the Internet (and many workloads require access to the Internet), you now have an exposed attack surface that expands along with the exposed workloads themselves.

Last week the most severe Kubernetes vulnerability discovered to date was announced: CVE-2018-1002105. It scored a 9.8 out of a possible 10.0 on the CVSS severity scale, which is unprecedented. In a nutshell, this vulnerability allows an attacker to send an unauthenticated API request to the Kubernetes API service. Despite being unauthenticated, the request leaves the TCP connection to the API backend server open. The attacker can then exploit that connection to run commands granting them complete access to do anything they desire on the cluster. Scary stuff!

This vulnerability underscores the fact that organizations need both the visibility to see such traffic and the analytics to know whether the traffic represents a risk or compromise. Suppose you unknowingly expose a group of Apache web-server pods in your Kubernetes cluster to the Internet to perform their intended web services, and a new vulnerability, like a Struts flaw, is exploited on a pod. The attacker would then have root access on the pod to perform reconnaissance, install the necessary tools, and pivot around the cluster. And if they are aware of the API vulnerability, it’s a walk in the park for them to take full control of your cluster in a matter of minutes.
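For intuition, here is a hedged sketch of the kind of heuristic an analytics layer could apply to API-server connection telemetry for this class of abuse: an upgrade request from an unauthenticated principal whose connection then stays open unusually long. The field names, the threshold, and the heuristic itself are illustrative assumptions, not a Stealthwatch Cloud detection rule.

```python
LONG_LIVED_SECONDS = 300  # illustrative threshold for "suspiciously long"

def suspicious_upgrade(conn: dict) -> bool:
    """Flag API-server connections resembling CVE-2018-1002105 abuse:
    a WebSocket/SPDY upgrade from an anonymous user held open long
    after the initial (failed) request should have ended."""
    return (
        conn.get("requested_upgrade", False)
        and conn.get("user") in (None, "system:anonymous")
        and conn.get("duration_seconds", 0) > LONG_LIVED_SECONDS
    )

conns = [
    {"requested_upgrade": True, "user": "system:anonymous",
     "duration_seconds": 900},   # anonymous, long-lived: suspicious
    {"requested_upgrade": True, "user": "admin",
     "duration_seconds": 900},   # authenticated operator session: fine
]
flagged = [c for c in conns if suspicious_upgrade(c)]
```

The real fix, of course, is patching the API server; detection like this only buys time to notice abuse of unpatched clusters.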

Not a good day for an organization if, and more likely when, this occurs. Data theft, compute theft, and skyrocketing bills, just to name a few, are immediate side effects of a takeover of this magnitude. So how can Stealthwatch Cloud help in this scenario and similar potential exploits?

How Stealthwatch Cloud Protects Kubernetes Environments 


Stealthwatch Cloud deploys into a Kubernetes cluster via an agentless sensor that leverages Kubernetes itself to automatically deploy, expand, and contract across a cluster. No user interaction is required. The solution deploys instantly to every node in a cluster and exposes every pod and the communication with those pods, between internal nodes and clusters as well as externally. This allows for an unprecedented level of visibility into everything a cluster is doing, from pods communicating with the Internet to worker nodes communicating internally with the master node. We then add entity modeling, which compares new behavior with previous behavior, and machine-learning-based anomaly detection, alerting on more than 60 indicators of compromise (IOCs) and suspicious activity throughout the kill chain across a cluster.


Hypothetically speaking, if one of your Kubernetes clusters were compromised, Stealthwatch Cloud would send alerts in real-time on various aforementioned activities. The tool would alert on the initial pod reconnaissance, and on connection activity once the pod was exploited. If the attacker moved towards the API server, Stealthwatch Cloud would alert on internal reconnaissance, suspicious connections to the API server itself, further data staging, data exfiltration and a variety of other alerts that would indicate a change from known good behavior across every component of a Kubernetes cluster…all in an agentless, automated, scalable solution.

Friday 23 March 2018

Serverless Security for Public Cloud Workloads with Stealthwatch Cloud

Each year goes by and we find more ways to own less and less of what it takes to operate our digital infrastructure. Information technology began with businesses having to build data centers, owning everything from the real estate all the way up to the applications. It quickly moved to public clouds, whereby the infrastructure itself became a service managed by the provider and you only needed to manage the virtual servers up through your applications. The latest step in this trend is serverless computing. As you would guess, this is the evolution in which the service provider owns and operates everything up to the application, and you don’t even manage the servers running your code (thus the name “serverless”).