Sunday, 23 September 2018

Updated IOS-XR Programmability Learning Labs and Sandbox Expand Your Options

A few weeks back I shared the blog post "New XR Programmability Learning Labs and Sandbox," introducing the new IOS-XR Learning Labs and a dedicated sandbox environment for IOS-XR programmability. Together, the sandbox and learning labs provide an environment where developers and network engineers can explore the programmability options available on this routing platform.

So, great news for all you IOS-XR programmability fans, we are pleased to bring you even more great XR Programmability learning content. Here is the full list of content, broken down by module and learning labs.

Module One: IOS-XR CLI Automation

Show commands, config-apply, config-replace, and more using on-box bash scripts or remote bash commands

Cisco IOS-XR offers a comprehensive portfolio of APIs at every layer of the network stack, allowing users to leverage automated techniques to provision and manage the lifecycle of a network device. In this module, we start with the basics: the Command Line Interface (CLI) has been the interaction point for expect-style scripters (TCL, expect, pexpect, etc.) for ages. But these techniques rely on send/receive buffers and are therefore prone to errors and inefficient code. This is where the new on-box ZTP libraries come in handy. Use them for automated device bring-up and to automate Day 1 and Day 2 behavior of the device through deterministic APIs and return values in a rich Linux environment on the router.

◈ IOS-XR CLI automation – Bash
◈ IOS-XR CLI automation – Python
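
The on-box ZTP library is what replaces buffer scraping with deterministic return values. Below is a minimal off-box sketch of that pattern; the `xrcmd` name follows the style of the real ZTP helpers, but the injected command runner and the output format are hypothetical stand-ins for the router environment.

```python
# Sketch of deterministic CLI automation in the style of the on-box ZTP
# helpers. The real library runs on the router; here the command runner
# is injected so the control flow can be shown off-box.

def xrcmd(run, command):
    """Run a show command; return a dict with a status and parsed output."""
    output = run(command)
    return {"status": "success" if output is not None else "error",
            "output": output or []}

def bring_up(run, interfaces):
    """Day-1-style bring-up check: verify each interface deterministically."""
    results = {}
    for intf in interfaces:
        result = xrcmd(run, "show interfaces %s brief" % intf)
        # No send/receive buffers to scrape: act on the return value.
        results[intf] = (result["status"] == "success")
    return results

# A stubbed runner standing in for the router's CLI:
def fake_run(command):
    return ["%s: line protocol is up" % command]

print(bring_up(fake_run, ["GigabitEthernet0/0/0/0"]))
```

The point of the pattern is the dict return value: a script branches on `status` instead of pattern-matching a terminal buffer.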

Also covered: setting up a telemetry client/collector with "Pipeline," a flexible, multi-function collection service written in Go.

Module Two: IOS-XR Streaming Telemetry changes network monitoring for the better


SNMP is dead. It’s time to move away from the slow polling techniques employed by SNMP for monitoring, which are unable to meet the cadence or scale requirements of modern networks. Further, automation is often misunderstood to be a one-way street of imperative (or higher-layer declarative) commands that bring a network to an intended state. However, a core aspect of automation is the ability to monitor the real-time state of a system during and after the automation process, creating a feedback loop that makes your automation framework more robust and accurate across varied circumstances. In this module, we learn how Streaming Telemetry capabilities in IOS-XR are set to change network monitoring for the better – allowing tools to subscribe to structured data, contractually bound to the YANG models representing the operational state of the IOS-XR internal database (SYSDB), at a cadence and scale that are orders of magnitude beyond SNMP.

◈ IOS-XR Streaming Telemetry: Monitoring done the right way

◈ Creating your first Python Telemetry Collector

◈ Creating your first C++ Telemetry Collector

◈ Deploying a Telemetry Collector on-box
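
To see what a collector actually consumes, here is a minimal sketch that flattens one self-describing telemetry message into rows a collector could store. The field names (`encoding_path`, `data_json`, `content`) are assumptions modeled loosely on IOS-XR model-driven telemetry JSON output, not an exact schema.

```python
import json

# Flatten one self-describing telemetry message into (path, timestamp,
# field, value) rows. Field names here are illustrative assumptions
# modeled on IOS-XR MDT JSON output, not an exact schema.

def flatten_message(raw):
    msg = json.loads(raw)
    path = msg["encoding_path"]        # the YANG sensor path
    rows = []
    for item in msg.get("data_json", []):
        ts = item["timestamp"]
        for field, value in item["content"].items():
            rows.append((path, ts, field, value))
    return rows

sample = json.dumps({
    "encoding_path": "Cisco-IOS-XR-infra-statsd-oper:infra-statistics",
    "data_json": [{"timestamp": 1537700000000,
                   "content": {"packets-received": 1200,
                               "packets-sent": 980}}],
})
for row in flatten_message(sample):
    print(row)
```

Because the data is contractually bound to a YANG model, the collector never guesses at field meanings the way an SNMP MIB walker or screen scraper must.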

On-box agents and custom protocols can co-exist with standard protocols to influence routing. Facebook’s Open/R protocol, for example, behaves like an IGP but runs as a third-party application on the router.

Module Three: IOS-XR Service-Layer APIs: programmability exposed through the Service-Layer API


Cisco IOS-XR offers a comprehensive portfolio of APIs at every layer of the network stack. For most automation use cases, the manageability layer that provides the CLI, YANG models, and Streaming Telemetry capabilities is adequate. However, over the last few years, we have seen a growing reliance in web-scale and large-scale Service Provider networks on off-box controllers and on-box agents. These abstract away the state machine of a traditional protocol or feature and marry their operation to the requirements of a specific set of applications on the network. Such agents and controllers require highly performant access to the lowest layer of the network stack, called the Service Layer; the model-driven APIs built at this layer are the Service-Layer APIs. They provide the ability to interact with the RIB, the Label Switch Database (LSD), BFD events, and interface events. With more capabilities coming in the future, now is the time to take your automation chops to the next level.

◈ Service-Layer APIs: Bring your own Protocol/Controller

◈ Your first Python service-layer API client

◈ Your first C++ service-layer API client

◈ Deploying a Service-layer API client on-box
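
A real Service-Layer API client speaks gRPC to the router using the published object models; the sketch below shows only the batching pattern such a client uses to program routes efficiently, with the transport replaced by a plain list. All class and field names here are illustrative, not the actual API.

```python
# Sketch of the batching pattern a Service-Layer API client uses to
# program routes into the RIB. The real client sends gRPC messages to
# the router; here the transport is a list so the batching logic itself
# can be shown. Names are illustrative.

class SLRouteBatch:
    """Accumulate IPv4 routes, then 'send' them in one vertical update."""
    def __init__(self, vrf="default"):
        self.vrf = vrf
        self.routes = []

    def add_route(self, prefix, prefix_len, next_hop):
        self.routes.append({"prefix": prefix,
                            "prefix_len": prefix_len,
                            "next_hop": next_hop})

    def send(self, transport):
        # In the real API this would be a single gRPC route operation;
        # packing many prefixes per call is what makes it performant.
        transport.append({"vrf": self.vrf, "routes": list(self.routes)})
        self.routes.clear()

wire = []                      # stand-in for the gRPC channel
batch = SLRouteBatch()
batch.add_route("192.0.2.0", 24, "10.1.1.1")
batch.add_route("198.51.100.0", 24, "10.1.1.2")
batch.send(wire)
print(wire[0]["routes"])
```

Batching is the design choice that matters: a controller replacing an IGP may need to push tens of thousands of prefixes per second, which per-route round trips cannot sustain.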

Friday, 21 September 2018

Automated Policy & Segmentation Violation Alerting with Stealthwatch Cloud

Stealthwatch Cloud is best known for network behavioral anomaly detection and entity modeling, but the level of network visibility it provides far exceeds these two capabilities. The underlying traffic dataset provides an incredibly accurate recording of every network conversation that has transpired throughout your global network. This includes traffic at remote locations and deep into the access layer, coverage far more pervasive than sensor-based solutions can provide.

Stealthwatch Cloud can perform policy and segmentation auditing in an automated, set-it-and-forget-it fashion. This allows security staff to detect policy violations across firewalls, hardened segments, and applications forbidden on user endpoints. I like to call this setting virtual “tripwires” all over your network, unbeknownst to users, by leveraging your entire network infrastructure as a giant security sensor grid. You cannot hide from the network…therefore you cannot hide from Stealthwatch Cloud.

Here is how we set this framework up and put it into action!

1. Navigate to Alerts from the main dashboard under the gear icon:

2. Click Configure Watchlists on the Settings screen:

3. Click Internal Connection Blacklist on the Watchlist Config screen:

4. Here are your options:

5. From here you’ll want to fill out the above form as such:

Name:  Whatever you’d like to call this rule, for example “Prohibited outbound RDP” or “Permitted internal RDP”

Source IP:  Source IP address or CIDR range

Source Block Size:  CIDR notation block size, for example (0, 8, 16, 24, etc.)

Source Ports:  Typically left blank, as the source ports are usually random ephemeral ports, but you have the option if you require a specific source port to trigger the alert.

Destination IP:  Target IP Address or CIDR range

Destination Block Size:  CIDR notation block size, for example (0, 8, 16, 24, etc.)

Destination Ports:  The target port traffic you wish to allow or disallow, for example (21, 3389, etc)

Connections are Allowed checkbox:  Check this if this is the traffic you’re going to permit.  This is used in conjunction with a second rule to specify all other traffic that’s not allowed.

Reason:  Enter a user friendly description of the intent for this rule.

6. Click Add to make the rule active.

7. Here’s an example of a set of rules both permitting and denying traffic on Remote Desktop over TCP 3389:

1. Permit rule:

2. Deny Rule:

8. Resulting Alert set:

9. Now to test this new ruleset, I will attempt two RDP connections within my Lab.  The first will be a lateral connection to another host on the 10.0.0.0/8 subnet and the second to an external IP residing on the public Internet.

10. Here is the resulting observation that triggered:

11. And the resulting Alert:

12. You can also see the observed ALLOWED traffic from my lateral RDP testing. This traffic did not trigger any observation or alert:

This policy violation alerting framework allows you to be fully accountable for all prohibited network traffic that will inevitably transit your network laterally or through an egress point. Firewall rules, hardening standards and compliance policies should be adhered to, but how can you be certain that they are? Human error, lack of expertise and troubleshooting can and will easily lead to a gap in your posture, and Stealthwatch Cloud is the second line of defense to catch any violation the moment the first packet traverses a segment using a prohibited protocol. It’s not a matter of IF your posture will be compromised but WHEN.
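
To make the permit/deny pairing concrete, here is an illustrative model of the two RDP rules from the walkthrough. The rule fields mirror the watchlist form, but the evaluator itself is my own sketch, not how Stealthwatch Cloud implements matching.

```python
import ipaddress

# Model of the Internal Connection Blacklist permit/deny pair from the
# walkthrough: permit RDP (TCP 3389) inside 10.0.0.0/8, alert on RDP to
# anywhere else. First matching rule wins, so the permit comes first.

RULES = [
    {"name": "permitted internal RDP",
     "dst_net": ipaddress.ip_network("10.0.0.0/8"),
     "dst_port": 3389, "allowed": True},
    {"name": "Prohibited outbound RDP",
     "dst_net": ipaddress.ip_network("0.0.0.0/0"),
     "dst_port": 3389, "allowed": False},
]

def evaluate(dst_ip, dst_port):
    """Return (rule_name, alert?) for the first matching rule."""
    addr = ipaddress.ip_address(dst_ip)
    for rule in RULES:
        if dst_port == rule["dst_port"] and addr in rule["dst_net"]:
            return rule["name"], not rule["allowed"]
    return None, False

print(evaluate("10.0.0.25", 3389))    # lateral RDP: matches permit rule
print(evaluate("203.0.113.9", 3389))  # external RDP: triggers the alert
```

This mirrors the lab test above: the lateral 10.0.0.0/8 connection matches the permit rule and generates nothing, while the external connection trips the deny rule and raises an alert.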

Wednesday, 19 September 2018

Secure Multi-Tenancy Part 2: Going Multi-Instance

Requirements Overview


In the previous blog post, we went over the common requirements for partitioning a single physical security appliance into multiple virtual firewalls. We talked about how this logical separation brings a lot of complexity into environments where true data and management plane isolation between different tenants is not required. It was further established that even the full isolation requirements are not truly addressed by the existing virtual firewall solutions. A single tenant can easily consume a disproportionally large amount of shared CPU and memory resources, thus impacting everyone else on the same physical appliance. As such, there was a clear need for a better solution to this problem.

A multi-tenancy solution for Cisco Firepower Threat Defense (FTD) had to overcome these constraints. The goal was to address the management simplification and routing separation requirements through different features. We wanted to concentrate specifically on management and traffic separation in a multi-tenant environment. Our virtual firewall instances would be completely isolated from each other in terms of CPU and memory resources, such that no individual tenant could exceed its allocation and impact someone else on the same Firepower appliance. This approach would extend to management sessions, where each tenant could use a separate Firepower Management Center (FMC) instance and push configuration changes completely independently. Last but not least, we wanted to eliminate the disparity in feature support when running virtual firewalls. If we support something within a single application, the same support should extend to a multi-tenant deployment going forward. These were very ambitious goals, but we managed to come up with an elegant solution.

Sizing Expectations


Before diving deeper into our solution, I want to say a few words about virtual firewall scalability. Traditional stateful firewall platforms support up to several hundred virtual contexts. However, this scale obviously comes with a large degree of resource sharing. If a security appliance is capable of delivering 50Gbps of basic stateful firewalling, dividing it into 200 security contexts yields about 250Mbps of average throughput per tenant. This may be suitable for some environments, but one should also consider packet-per-second (PPS) rates. Assuming a relatively powerful stateful firewall that does around 20 million PPS in the best-case scenario, that comes down to only about 100 thousand PPS for each of the 200 tenants – a level easily exceeded by a single server in a modern data center.

As we start looking at more advanced firewall features, such as Intrusion Prevention Services (IPS), URL filtering, file and malware inspection, cloud sandboxing, and especially encrypted traffic inspection, performance implications become even more pronounced. There is frequently an order of magnitude of difference when comparing a threat-centric security application to a basic stateful firewall running on the same hardware. Getting a little over 20Mbps of threat-protected throughput per tenant is rarely acceptable, especially when migrating from a classic firewall feature set. If a tenant required 250Mbps of protected throughput before transitioning to a threat-centric product, their needs would not change simply because the firewall has to spend more cycles on much deeper inspection after the migration. As such, the expectations for tenant scale will be significantly reduced when migrating from Cisco ASA (Adaptive Security Appliance) and similar classic stateful firewalls to FTD.
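
The sizing arithmetic above is easy to sanity-check. A quick sketch of the per-tenant math (the 10x threat-inspection factor is the rough order-of-magnitude figure from the discussion, not a measured number):

```python
# Per-tenant back-of-the-envelope from the sizing discussion: divide
# aggregate firewall capacity evenly across contexts.

def per_tenant(total, contexts):
    return total / contexts

GBPS, MBPS = 1e9, 1e6

# 50 Gbps of basic stateful firewalling across 200 contexts:
print(per_tenant(50 * GBPS, 200) / MBPS, "Mbps per tenant")

# 20 million PPS across the same 200 contexts:
print(per_tenant(20e6, 200), "PPS per tenant")

# Threat-centric inspection is often ~10x costlier on the same hardware,
# which is where the "a little over 20Mbps" figure comes from:
print(per_tenant(50 * GBPS / 10, 200) / MBPS, "Mbps per tenant")
```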

Firepower Multi-Instance Capability


Firepower 4100 and 9300 appliances were meant to deliver multi-service security capabilities by design. They currently support ASA, FTD, and Radware Virtual DefensePro applications. When we looked at all of the possible multi-tenancy solutions for FTD, I immediately thought of extending the physical platform capabilities to host multiple instances of security applications on a single security module — this is how the multi-instance term was coined. Leveraging a common hypervisor for this did not seem very exciting, so a Docker container was picked as the form factor of choice. This approach leverages a proven application virtualization framework and enables future portability beyond the hardware appliances. Container-based FTD instances on Firepower 4100 and 9300 appliances would become available first, but we envision building a similar ASA package with mix-and-match capabilities in the future.

Given our desire to provide complete data plane and management plane separation, each FTD instance would get a completely independent CPU, memory, and disk reservation. Unequally sized instances can be deployed, and the firewall administrator gets to decide a CPU core allocation for each instance – memory and disk are sized automatically based on this assignment. This is important to ensure that a single FTD instance cannot impact any other instances running on the same module or appliance. Given a finite number of available CPU cores, it obviously puts a constraint on the maximum total number of instances that can reside on a particular Firepower appliance. As we had established earlier, a total tenant count with a threat-centric security application is significantly lower than with a basic stateful firewall on the same hardware platform. As such, the full resource separation requirement is more important to most customers than scaling to hundreds of oversubscribed virtual firewalls.
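
The hard-partitioned resource model described above can be sketched as follows. The core counts and the memory-per-core ratio here are illustrative assumptions, not actual Firepower sizing figures.

```python
# Sketch of the hard-partitioned resource model: each instance gets a
# dedicated CPU core allocation, and the module rejects any allocation
# that would oversubscribe it. Core counts and the memory-per-core
# ratio are illustrative assumptions only.

class Module:
    def __init__(self, total_cores, mem_gb_per_core=4):
        self.total_cores = total_cores
        self.mem_gb_per_core = mem_gb_per_core  # assumed ratio
        self.instances = {}

    def used_cores(self):
        return sum(self.instances.values())

    def add_instance(self, name, cores):
        if self.used_cores() + cores > self.total_cores:
            raise ValueError("would oversubscribe module CPU")
        self.instances[name] = cores
        # memory (and disk) are sized automatically from the core count
        return {"cores": cores, "mem_gb": cores * self.mem_gb_per_core}

mod = Module(total_cores=22)
print(mod.add_instance("ftd-tenant-a", 6))
print(mod.add_instance("ftd-tenant-b", 10))
```

Because allocation is rejected up front rather than contended for at runtime, no tenant can grow beyond its reservation and starve its neighbors.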

Each FTD container behaves like a separate firewall with its own software image. This means that individual instances can be upgraded, downgraded, or rebooted completely independently. One would no longer have to stand up a separate physical appliance to test software upgrades on a single tenant. Furthermore, each FTD instance would have dedicated management CPU cores to ensure no contention between different tenants during configuration deployment, event generation, and monitoring. An administrator can even assign different FTD containers on a single blade to be managed by different FMC appliances. Most importantly, each instance would support the same complete feature set as a full-module FTD application – no more exceptions for multi-tenancy.

In order to support the new multi-instance capability, Firepower 4100 and 9300 platforms would introduce several new network interface assignment models. Physical and Etherchannel interfaces can be shared between two or more instances or assigned exclusively to a single FTD container. Furthermore, one would gain an ability to create VLAN subinterfaces directly on the chassis Supervisor and assign them to instances on the same shared or unique basis. Needless to say, the instances would be able to communicate to each other directly on the shared data interfaces or VLAN subinterfaces – this includes supporting inter-instance multicast connectivity for dynamic routing. A management interface can be shared across multiple FTD containers as well, but inter-instance communication would be blocked in order to support the fully isolated model.

The following figure illustrates a hypothetical deployment where a single Firepower module runs a set of unequally sized ASA and FTD instances with a combination of shared and unique interfaces:

Looking Forward


The Firepower multi-instance capability definitely represents a unique and novel approach to deploying secure multi-tenancy. I am obviously very excited about this idea, and there are many new directions that it opens for us and our customers. As we are finalizing this feature for the public release, feel free to leave a comment about what additional details you would like me to cover in the next post on this topic.

Sunday, 16 September 2018

Secure Multi-Tenancy Part 1: Defining the Problem

Pre-Virtual Virtual Firewalls


Nowadays, everyone likes to talk about network function virtualization. Most security vendors build their firewall products to run on a few popular hypervisors. However, the “virtual firewall” term predates this virtualization craze. Many firewall administrators use this nomenclature to describe an ability to create multiple virtual partitions or contexts within a single physical security appliance. Each of these virtual firewalls has its own configuration, stateful connection table, and management capabilities. However, they may not be as independent or isolated as one would assume – more on this later. Even though Cisco Adaptive Security Appliance (ASA) software supported virtual firewalls with multiple-context mode for quite some time, we deliberately delayed similar functionality in our threat-centric Firepower Threat Defense (FTD) product in order to get it right. As any decent engineer would tell you, getting the right solution starts with fully understanding the problem. Namely, why do our security customers deploy virtual firewalls?

Understanding Use Cases


As it turns out, not all customers deploy multiple security contexts specifically for multi-tenancy. Some look for routing table separation, where each virtual firewall represents a separate Virtual Routing and Forwarding (VRF) domain. This functionality comes in handy especially when trying to protect several internal organizations with overlapping IP spaces. Other firewall administrators leverage multiple-context mode to separate and simplify policy management across different domains. Instead of looking at a single flat policy, they break it up into smaller chunks based on individual network segments. This may also involve management separation, where administering individual security contexts is delegated to other organizations. A common example here is a big college where several departments manage their own networks and configure individual virtual firewalls on a shared physical appliance at the extranet edge. Other customers go even deeper and require complete traffic processing separation between different tenants or network segments. For instance, one typically does not want their production applications to be affected by some traffic from a lab environment. As these requirements add up, it becomes clear how most existing firewall multi-tenancy solutions come apart at the seams.

Reality Check


There are several operational considerations that need to be taken into account when deploying virtual firewalls.  All security contexts on a single appliance run the same software image, so you cannot test upgrades on a limited number of tenants. Similarly, they all live or die together – rebooting just one is not possible. When it comes to features, you need to keep track of which are not supported in the virtual firewall mode. Often enough, these subtle nuances come up when you are already so far down the implementation path that turning back is either expensive or completely impossible. But wait, there is more!

While virtual firewalls can certainly be used for routing or policy domain separation, it comes with a lot of unnecessary complexity. One needs to create firewall contexts, assign different interfaces, configure them all independently, and then keep switching back and forth in order to manage policies and other relevant configuration. If you need a single access policy across all of your contexts, it must be independently programmed into each virtual firewall. Luckily, features like VRF help in avoiding multiple-context mode by enabling routing domain separation only. When it comes to policy simplification, some of my customers found managing multiple virtual firewalls too cumbersome, converged back into a single security context, and leveraged Security Group Tags (SGTs) to significantly reduce the rule set. Unless you indeed require complete separation between tenants, it makes very little sense to deploy virtual firewalls.

When it comes to management separation, multiple-context mode seems like a perfect fit. After all, each tenant gets their own firewall to play with, all without impacting anyone else. Or is that really true? Even though each virtual firewall has its own independent configuration, they all run within a single security application on a shared physical device. In most implementations, it means that the management plane is shared across all of the virtual contexts. If one tenant is pushing a lot of policy changes or constantly polling for connection information, this will inevitably impact every other virtual firewall that runs on the same device. However, the real problem lies within the shared data plane.

Despite the perceived separation, all virtual firewalls ultimately run on shared CPU, memory, and internal backplane resources. Even when assigning different physical interfaces to different security contexts, all of the traffic typically converges at the ingress classification function in the CPU. While one sometimes can configure maximum routing or connection table sizes on per-context basis, it still does not limit the amount of network traffic or CPU resources that each particular tenant consumes. In order to classify packets to a particular virtual firewall, the system must spend CPU cycles on processing them first. If a particular tenant is getting a lot of traffic from the network, it can consume a disproportionally large amount of system CPU resources even if this traffic is later dropped by a rate-limiter. As such, there is never a guarantee that one virtual firewall does not grow too big and impact every other security context on the same box. I have seen many cases where firewall administrators were caught completely unaware by this simple caveat. Not being a problem with any specific vendor, this is just how most virtual firewalls are implemented today.

Thinking Outside the Contexts


After looking at the use cases and analyzing challenges with existing virtual firewall implementations, I knew that our approach to implementing multi-tenancy in FTD must fundamentally change. An ideal solution would provide complete management and traffic processing separation across all tenants, so one virtual firewall truly cannot impact anyone else on the same box. This separation should extend to independent software upgrades and reloads. At the same time, all of the available FTD features should always be supported when implementing virtual firewalls. Not only must it simplify the experience for an end user, but also significantly cut down on both development and testing times.

While these may have seemed like impossible requirements, I had a really cool idea on how we can get there for our customers. This novel approach builds on the multi-service capabilities of our Firepower platforms as well as such developing trends as application containerization.

Thursday, 13 September 2018

What is SD-WAN?

The SD-WAN market is in high gear. The concept is solid and the benefits are real. There are, in fact, very few WAN situations that would not benefit greatly from this technology. However, not all SD-WAN is the same. There are multiple paths you can choose as you endeavor to take your existing, running, trusted network…to a brand new modern one.

What is SD-WAN?


The primary value proposition for SD-WAN centers on the high cost of traditional WAN. As the internet has grown, it has become easier (and cheaper) to get broadband internet circuits just about anywhere. For many users, high-speed bandwidth was no longer a benefit of driving to the office. It has become harder to explain why we had to build the networks that we did, and as traffic patterns have migrated toward the cloud, these designs are showing their age.

More Options. Less Complexity.


MPLS has been the dominant form of enterprise WAN over the past few decades, but it finally has a very viable competitor in SD-WAN. MPLS circuits provide a dedicated network that is completely distinct from any other network. Every remote connection has a specifically sized circuit delivered to it, so you know exactly how much bandwidth you get at each site…it is all very predictable, which is important. If any location needs to access ‘the internet’ then this is commonly done by routing that connection through a central office which has big pipes to the internet and various security mechanisms for filtering it.

Two big issues have come out of this:

1. All internet traffic from branch sites is using those precious, expensive MPLS circuits in two directions. This is secure…but wasteful.

2. Internet use is rising fast, along with its business-critical nature, with multiple SaaS or IaaS resources now used by the entire enterprise.

Enterprise IT has long been able to connect to the Internet directly from any remote office. This is not a new idea. It just came with too much risk.

SD-WAN now offers a credible option for enabling a secure ‘hybrid’ WAN. The ‘hybrid’ refers to how SD-WAN is here to augment, not necessarily replace, those expensive MPLS circuits with less expensive broadband internet.

There will be multiple physical circuit terminations into the same edge point. Does the vendor have hardware routing experience? Some locations may need an MPLS line, plus two different sources of Internet connectivity. If it’s a really critical area, consider adding cellular failover, such as 4G LTE or other wireless that might be available. Make sure you can run active/active on those cabled circuits as well so that you are not paying for something ‘just in case.’

When SD-WAN is done right, it should offer a simplified ability to route enterprise traffic in a secure manner with a consistent quality of experience that is as good or better than what you are doing now.

If you are considering an SD-WAN solution, there are quite a few options in the market. Here is my shortlist for things you should make sure you dig into with any option under consideration:

1. Simplicity – the software-defined part of SD-WAN refers to the control portion of your routers now being handled somewhere else. This is generally a cloud-based service that you access through what is hopefully a simple interface. A couple of quick things to check for here:

◈ Does the controller HAVE to be in the cloud? You may run a network that does not allow for this…make sure you know what you can do.

◈ Is ALL the policy control handled through this same interface? How granular can it get? You should be able to define and manage unique policies for every remote location down to the individual application requirements. Set it and forget it.

2. Security should be more than a passing mention of IPsec encryption.

◈ Check for how security is being handled across three dimensions: encryption, authentication and integrity. Zero-trust models are the goal, but make sure that it’s not just a marketing term.

◈ The ease of bringing new sites onto the network is a common benefit. Ask what security is in place when doing this. Remote connections back to the centralized controller should have an authorization process that precedes any traffic flows.

◈ Security is very personal, unique to every organization. Make sure you like the options available for expanding security controls outside of the ones provided by your SD-WAN vendor.

◈ This move to SD-WAN is being driven by the incredible growth of cloud-based applications we all now depend on. Security controls need to extend to these services as well, striking that balance between ‘secure connection’ and ‘most optimal route.’

◈ SD-WAN brings a lot of flexibility we have not had before. Take fully meshed connections for example. These were once too complex to configure in most situations. Dynamic, policy based routing should be easy for SD-WAN such that performance remains aligned with security. There should be no trade-offs here.

3. Quality of Experience – as opposed to the ease-of-use pointer above, this QoE mention is really about the controls and design in place that benefit the end user.

◈ The internet is still not controllable in the same sense as a private network. However, there are quite a few things that can now be done to minimize this. Hybrid network connectivity, combined with granular controls should allow for policies that can dictate the conditions under which an MPLS path might be chosen. This is a new middle ground option that previously did not exist. The idea is that your SD-WAN implementation should allow you to reduce the size of your MPLS circuits (which reduces operating costs) because you have policies that say that certain applications may work just fine over the internet ‘most of the time.’ What you want is a real time measurement that can choose that MPLS route for a specific conversation at a specific time…because the network is smart enough to pull it off.

◈ Non-core applications are generally the first to move to the cloud model. HR, scheduling, and administrative tools have become SaaS applications, Office 365 and Salesforce for example. User experience will vary with the state of multiple things that constantly change, from the internet gateway on one end all the way through to the hosting location on the other. How is this variation measured and then used to optimize the routing path?
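
The “smart enough to pull it off” part boils down to per-application path selection against measured conditions. A minimal sketch of the idea, where the SLA thresholds and metric names are illustrative, not any vendor’s defaults:

```python
# Sketch of per-application path selection: prefer the cheap internet
# path while it meets the app's SLA, fall back to MPLS when measurements
# degrade. Thresholds and metric fields are illustrative assumptions.

SLA = {
    "voice":     {"max_loss_pct": 1.0, "max_latency_ms": 150},
    "office365": {"max_loss_pct": 3.0, "max_latency_ms": 300},
}

def meets_sla(app, metrics):
    sla = SLA[app]
    return (metrics["loss_pct"] <= sla["max_loss_pct"]
            and metrics["latency_ms"] <= sla["max_latency_ms"])

def choose_path(app, internet_metrics):
    """Pick internet 'most of the time', MPLS when measurements degrade."""
    return "internet" if meets_sla(app, internet_metrics) else "mpls"

print(choose_path("office365", {"loss_pct": 0.5, "latency_ms": 80}))
print(choose_path("voice", {"loss_pct": 2.4, "latency_ms": 180}))
```

This is the middle ground the article describes: because most application traffic rides the internet path most of the time, the MPLS circuit can be sized smaller without sacrificing the experience of the apps that need it.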

Track Record


There is no shortage of SD-WAN vendors right now. This is truly where WAN networking is going; it is not a fad of any sort. But as much as networking changes, it still remains the same. Don’t overlook the importance of a good track record in both networking and security. Most vendors seem to have some experience in one but are then partnering for the other. Partnerships are hard. We do it. But if any one element that is important to you is being handled through a partnership…make sure you are comfortable with how that will work for you if something goes awry. This is your network after all…everything and everyone is impacted.

Don’t run towards SD-WAN ONLY because it offers tremendous cost savings compared to your private lines. There should be no increased risk or settling for sub-standard control options. SD-WAN is a technology your network should aspire to, with better security, better visibility, control and ease of use. It’s all here and it’s fun to show off.

Wednesday, 12 September 2018

The Role of Visibility in SecOps

The bad guys aren’t going away. In fact, they are getting smarter, more creative, and just as determined to wreak havoc for profit as they have ever been. The good news is that security solutions and methodologies are getting better. Next-Generation Firewalls, Malware Protection, and Access Control are not only improving but, in some cases, working in concert. This is good news for any security team, and a lot of these solutions are part of any security stack.

But how do you know when you have been breached?  How long has that attack been roaming through the network?  Who is affected?  These are the questions that visibility helps answer.  Access Control solutions are now commonplace in letting us know the “Who”, “What”, “Where”, and “When”.  Now we need to know the “How” and “Why”.  We need to know these answers not just for the network, but for our Cloud solutions as well.

Visibility into the Conversation


Learning what “normal” behavior looks like is a great place to start. Knowing how hosts behave and interact allows you to react quickly when a host deviates from the norm.  Stealthwatch provides visibility into every conversation, baselines the network, and alerts to changes.  Not all changes are bad and this level of insight will also provide critical information for network planning.


Visibility into the Files


Since malware began, there has been a need to inspect files to ensure that they have not been compromised or become the source of corruption themselves.  A downloaded file that was benign yesterday could morph into something detrimental tomorrow.  Advanced Malware Protection (AMP) performs retrospective analysis: it doesn’t just inspect a file once and move on.  It can look back in time and see exactly when and how a file changed, what it did, and whom it affected.  Additionally, a file discovered to be malicious on one machine can be quarantined, and an update can be sent to all machines that prevents them from ever opening that file, limiting the exposure to the rest of the network.


Visibility into the Threats


Since it’s not a matter of if a breach will occur, but when, the ultimate goal is to limit exposure and remove the threat as quickly as possible.  Threat hunting is costly, time consuming, and necessary to get operations back to normal.  Knowing where to begin, determining the impact, removing the threat, and ultimately protecting against it happening again is a challenge in and of itself.  The longer a threat is in the network, the more damage it will do.  AMP visibility helps find threats quickly, identify those affected, and eliminate the threat faster than ever.  Visibility displays the entire path of the malicious event, including URL, SHA values, file information, and more, effectively reducing the time spent threat hunting.


Visibility into the Internet


We are also constantly being misled and misdirected to sites we shouldn’t visit.  Whether it’s a simple fat-finger or intentional misdirection, anyone can easily end up in a very dark place.  Embedded links within an email that appears legitimate bring our guard down.  The first URL you click on may be OK (think reddit.com), but what happens as you go deeper?  Should your employees be allowed to click on a link that is two hours old?  How can you protect your employees when they are off-net?  These are questions asked every day.  Cisco Umbrella is built into the foundation of the internet, and as a DNS service it can protect endpoints both on and off-net.  Umbrella’s Investigate lets you explore deeper into a URL to get a complete picture: where the site is hosted, who owns it, its reputation, and even threat scores via integrations with AMP’s Threat Grid.  Umbrella’s view of the internet can prevent up to 90% of threats from ever making it to the endpoint, making the rest of the security stack that much more efficient.

Visibility is the Key!


The bad guys only need to be successful at breaching the network once.  The good guys need to be successful EVERY time.  Firewalls, Intrusion Detection, Endpoint protection, and other security solutions are critical in handling 99% of the risks.  That’s a great number.  It’s that 1% that gets through that keeps security people awake at night and is going to cause the most harm.  Having the ability to see not just the north-south traffic, but the east-west, is vital to detecting anomalies early.  When there is an event that requires research, reducing the time it takes to get to the bottom of it and ultimately eliminating the threat quickly keeps business humming optimally.

Saturday, 8 September 2018

Deploying Stealthwatch Cloud in a Google GKE Kubernetes Cluster

Cisco Stealthwatch Cloud has the unique ability to provide an unprecedented level of visibility and security analytics within a Kubernetes cluster. It doesn’t matter where the cluster resides, whether on-premises or in any public cloud environment. Stealthwatch Cloud deploys as a DaemonSet via a YAML file on the cluster master node, ensuring that it runs on every worker node and both expands and contracts as the cluster’s elasticity fluctuates. Configuration is simple, and once it’s done, the sensor deploys with each node via a host-level networking shim that provides full visibility into every packet involving any container, pod, or node.

How’s this done? In this guide I’m going to walk you through deploying Stealthwatch Cloud within Google Kubernetes Engine (GKE).  I’m choosing GKE because it’s incredibly simple to spin up a K8s cluster for lab purposes in a few minutes, which lets us focus our attention on the nuts and bolts of deploying Stealthwatch Cloud, step by step, into an existing K8s cluster.

The first step is to login to the GKE utility within your Google Cloud Platform console:


Create your cluster:


Click Connect to get your option to connect using the built-in console utility. Click the option for “Run in Cloud Shell”:


Click Start Cloud Shell:


You will now be brought into the GKE Cloud Shell where you can now fully interact with your GKE Kubernetes cluster:


You can check the status of the nodes in your 3-node cluster by issuing the following command:

kubectl get nodes


You can also verify that there are currently no deployed pods in the cluster:

kubectl get pods


At this point you’ll want to reference the instructions provided in your Stealthwatch Cloud portal on how to integrate Stealthwatch Cloud with your new cluster. On the Integrations page you’ll find the Kubernetes integration page:


First we’ll create a Kubernetes “Secret” with a Service Key as instructed in the setup steps:

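The portal’s setup steps give you the exact command to run. As a hedged sketch (the secret name `obsrvbl` and key name `service_key` are illustrative assumptions here; substitute the names and key value from your own portal), creating the Secret looks like this:

```shell
# Create a Kubernetes Secret holding the Stealthwatch Cloud service key.
# Replace <your-service-key> with the key from your portal's Integrations page.
kubectl create secret generic obsrvbl \
    --from-literal=service_key=<your-service-key>
```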

Now we’ll create a service account and bind it to the read-only cluster role:

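Again, follow the exact commands shown in your portal; a typical pattern (the account name `obsrvbl-ona` is an illustrative assumption) binds a dedicated service account to the built-in read-only `view` cluster role:

```shell
# Create a service account for the sensor pods to run under
kubectl create serviceaccount obsrvbl-ona

# Grant it read-only access to cluster state via the built-in "view" role
kubectl create clusterrolebinding obsrvbl-ona \
    --clusterrole=view \
    --serviceaccount=default:obsrvbl-ona
```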

Next, create a k8s DaemonSet configuration file.  This describes the service that will run the sensor pod on each node. Save the contents below to obsrvbl-daemonset.yaml:

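The canonical manifest is the one provided in your Stealthwatch Cloud portal. Purely as a rough sketch of its shape (the image name, labels, and environment variable here are assumptions, not the official manifest), a sensor DaemonSet looks something like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: obsrvbl-ona
spec:
  selector:
    matchLabels:
      name: obsrvbl-ona
  template:
    metadata:
      labels:
        name: obsrvbl-ona
    spec:
      serviceAccountName: obsrvbl-ona
      # Host networking is the "shim" that gives the sensor visibility
      # into all node, pod, and container traffic on each worker node
      hostNetwork: true
      containers:
      - name: ona
        image: obsrvbl/ona        # illustrative image name; use your portal's
        env:
        - name: OBSRVBL_SERVICE_KEY
          valueFrom:
            secretKeyRef:
              name: obsrvbl       # the Secret created earlier
              key: service_key
```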

Save the file and then create the sensor pod via:

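Applying the saved file creates the DaemonSet, which in turn schedules one sensor pod per node:

```shell
# Deploy the sensor DaemonSet; the scheduler places one pod on every node
kubectl create -f obsrvbl-daemonset.yaml

# Confirm one sensor pod is running per node (note the NODE column)
kubectl get pods -o wide
```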

You can see that we now have a Stealthwatch Cloud sensor pod deployed on each of the 3 nodes. The DaemonSet ensures the pod is deployed automatically on any new worker nodes as the cluster expands. We can now switch over to the Stealthwatch Cloud portal to see whether the new sensors are available and reporting flow telemetry into the Stealthwatch Cloud engine. Within a few minutes the sensor pods from GKE should start reporting in, and when they do you’ll see them appear as unique sensors in your Sensor List:


At this point Stealthwatch Cloud is providing full visibility into all pods on all nodes, including the K8s master node, and the full capabilities of Stealthwatch Cloud, including Entity Modeling and behavioral anomaly detection, are protecting the GKE cluster.

We can now deploy an application in our cluster to monitor and protect. For simplicity’s sake we’ll deploy a quick NGINX app into a pod in our cluster using the following command:

kubectl create deployment nginx --image=nginx

You can verify the status of the application, along with the Stealthwatch Cloud sensors, with the following kubectl command:

kubectl get pods -o wide


You’ll see above that I actually have 2 NGINX instances running; that’s simply because I edited the YAML for the NGINX deployment so that 2 replicas were running upon deployment. This can easily be adjusted to suit your needs as you scale your K8s cluster.
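If you created the deployment with `kubectl create deployment` as above, you can also change the replica count directly, without editing any YAML:

```shell
# Scale the nginx deployment to 2 replicas; the DaemonSet-based sensors
# automatically see traffic from the new pod as soon as it starts
kubectl scale deployment nginx --replicas=2
```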

After a few minutes you can now query your Stealthwatch Cloud portal for anything with “NGINX” and you should see the following type of results:


You’ll see both existing and non-existent NGINX pods in the search results above. As the cluster expands and contracts and pods deploy, each pod gets a unique IP address to communicate on. The non-existent pods in the Stealthwatch Cloud search results represent pods that previously existed in our cluster and were torn down as replica counts were reduced and increased over time.

At this point you have full visibility into all of the traffic across the NGINX pods, as well as full baselining and anomaly detection capabilities should any of them become compromised or begin behaving suspiciously.