Thursday, 2 May 2019

The Future is Now! Presenting the Cisco Catalyst 9100 Wi-Fi 6 Access Points

When I was a kid, the future meant flying cars and everyone wearing the same silver jumpsuits. It’s been a few years since I was a kid, and while I may not own a flying car, at least I don’t have to wear a jumpsuit, and working at Cisco allows me to check out all of the new innovations that we bring to the world.

With the launch of our newest Catalyst 9100 Access Points, we’re continuing our journey to bring Intent-based Networking to our customers—we’re bringing the future to now. The Catalyst 9100 Access Points are the newest addition to the Catalyst family, and they’re also our first access points that adhere to the Wi-Fi 6 (802.11ax) standard.

A lot of people have been talking about the future of the network. You may have seen Cisco CEO Chuck Robbins present at this year’s MWC in Barcelona or perhaps you tuned in to our Virtual Event announcing our new Wi-Fi 6 innovations a few days ago.

I know what you’re thinking: this is just another access point meeting another standard; this isn’t flying-car news. And you’re right, it won’t bring you a flying car, but these new devices have greater bandwidth, a more dependable connection to the network, and features that will continue to automate your network. These new features are going to allow for a lot of really great uses–and in a lot of ways, that’s better.

How so? How about things such as robots and advanced virtual and augmented reality (VR/AR).

Like some of you, one of my all-time favorite TV shows is The Simpsons. There was a classic episode where Lisa dreamt of the perfect school, and in that dream, her teacher told her to put on her virtual reality helmet and travel back in time to the days of Genghis Khan. Thanks to the increased bandwidth and strong connections of the Catalyst 9100 Access Points, this won’t be a cartoon fantasy anymore. Students will be able to learn by literally immersing themselves in their studies, whether it’s using AR to go back thousands of years and relive a historical battle or delving into a scientific study.

The VR and Wi-Fi 6 partnership isn’t just for pointy-haired, second-grade geniuses either. Surgeons can employ VR to work on patients at a hospital on the other side of the world. This means that geography and time will no longer be the deciding factors in whether patients get the treatment they need.

To make use of this new technology, you’re going to need a reliable, scalable and secure wireless network that can handle the additional number of devices and the data that they’re going to create. That’s where the Cisco Catalyst 9100 Access Points come in. These access points are your first step toward creating the robust network needed to handle the crush of devices and applications connecting to your network.

Here are some things you can expect:

• Enhanced features: The Cisco RF ASIC delivers CleanAir, Wireless Intrusion Prevention System (WIPS), and Dual Filter DFS, in addition to Fast Locate and off-channel RRM, which will be available in future releases. The Cisco Catalyst 9100 access points also support Target Wake Time, a new power-saving mode that allows the client to stay asleep and wake at prescheduled times to exchange data with the access point. The energy savings over 802.11n and 802.11ac are significant, up to three to four times those of the older standards. In addition, this improves power and battery efficiency in end devices like smartphones, tablets and IoT devices.

• Addresses the growing IoT explosion: The Cisco Catalyst 9100 access points provide multi-lingual support and application hosting of IoT protocols such as Wi-Fi, BLE and Zigbee. IoT is more than lights, heating and security cameras. From life-saving medical equipment in hospitals to restocking robots—I told you that there would be robots!—in retail to heavy machinery in manufacturing, all of these devices are considered IoT. Everything is connected and since some of these devices are literally the difference between life and death, they must be always-on. Making sure that this equipment doesn’t have downtime is paramount.

• Customizable with a programmable RF ASIC: The Catalyst 9120 access point has a custom RF ASIC that provides real-time analytics. When combined with Cisco DNA Assurance, this allows you to gain RF intelligence and visibility that can be analyzed and used to run your network more efficiently. The custom RF ASIC also has a dedicated third radio that is automatically enabled during high-density scenarios. This goes along with delivering other features such as RF interference mitigation and rogue detection.

• Reliability: Always-connected, always dependable; a seamless experience. The Catalyst 9100 access points have improved roaming features allowing a better Wi-Fi experience. Add Spectrum Intelligence and Interference and Rogue Detection to the reliability mix, and you can be sure that your network is clear of any issues that would hinder a seamless connection.

• Capacity: Thanks to Wi-Fi 6, latency is reduced even with 100+ devices communicating at the same time. The Catalyst 9100 access points will also provide support in the future for both OFDMA and MU-MIMO to help dole out application resources. OFDMA is ideal for low-bandwidth applications and increases efficiency while reducing latency. For high-bandwidth applications, MU-MIMO increases capacity, resulting in higher speeds per user. Think of MU-MIMO as multiple trucks serving users simultaneously, while OFDMA is one truck serving each user.

The new Catalyst 9100 access points are poised to take your infrastructure to the next level. And with more devices being added to the network every day, this next level is where you’re going to need to be.

The future is much closer than you think. Outfitting your infrastructure is the best way of bringing the future to now.

Wednesday, 1 May 2019

Cisco Trusted Platforms

Service Provider networks serve as critical infrastructure, and the security and trustworthiness of the network infrastructure is essential; see also the Trusted Infrastructure video from Mobile World Congress. Providers of digital infrastructure must be able to verify whether the hardware and software that comprise their infrastructure are genuine, uncompromised, and operating as intended. As shown in Figure 1 below, there are security and trust requirements at every layer of the Network Operating System. I will address each of these layers in this and a subsequent blog.


Figure 1: Network Operating System Layers and Security Requirements

Note that not all the features listed here are available on all Service Provider platforms. Please contact the sales team for details.

Foundations of Trust


The ability to verify that a Cisco device is genuine and running uncompromised code depends on Cisco Secure Boot and the Trust Anchor module (TAm). Cisco uses digitally-signed software images, a Secure Unique Device Identifier (SUDI), and a hardware-anchored secure boot process to prevent inauthentic or compromised code from booting on a Cisco platform.

Hardware Root of Trust


A trusted element in the scope of system software is a piece of code that is known to be authentic.  A trusted element must either be immutable (stored in such a way as to prevent modification) or authenticated through validation mechanisms. Cisco anchors the root of trust, which initiates the boot process, in tamper-resistant hardware.  The hardware-anchored root of trust protects the first code running on a system from compromise and becomes the root of trust for the system.

Trust Anchor module


The Trust Anchor Module (TAm) is a proprietary, tamper-resistant chip that features non-volatile secure storage, Secure Unique Device Identifier (SUDI), and crypto services including random number generation (RNG).  See below for additional information on SUDI.

Image signing


Image signing is a two-step process that creates a unique digital signature for a given block of code. First, a hashing algorithm, similar to a checksum, is used to compute a hash value of the block of code. The hash is then encrypted with a Cisco private key, resulting in a digital signature that is attached to and delivered with the image. Signed images can be checked at runtime to verify that the software has not been modified.
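
To make the two steps concrete, here is a minimal Python sketch of the sign-and-verify flow using the open-source cryptography library; the key, padding choice, and image bytes are illustrative assumptions, not Cisco’s actual signing pipeline:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Hypothetical key and image bytes, for illustration only.
    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    image = b"example firmware image bytes"

    # Steps 1 and 2: sign() computes a SHA-512 hash of the image and
    # encrypts the digest with the private key, yielding the digital
    # signature that is attached to and delivered with the image.
    signature = signing_key.sign(image, padding.PKCS1v15(), hashes.SHA512())

    # At runtime, the matching public key checks that the image is
    # unmodified; verify() raises InvalidSignature if it was tampered with.
    signing_key.public_key().verify(signature, image, padding.PKCS1v15(), hashes.SHA512())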

Chain of Trust


A chain of trust exists when the integrity of each element of code on a system is validated before that piece of code is allowed to run. A chain of trust starts with a root of trust element. The root of trust validates the next element in the chain (usually firmware) before it is allowed to start, and so on. Through the use of image signing and trusted elements, Cisco hardware-anchored secure boot establishes a chain of trust which boots the system securely and validates the integrity of the software.
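
Conceptually, the chain behaves like the following sketch; the stage list and helper functions are hypothetical stand-ins, not Cisco boot code:

    # Conceptual sketch: each element of the chain validates the next
    # signed image before handing control to it.
    def establish_chain_of_trust(stages, verify_signature, execute):
        # 'stages' is an ordered list of (name, image, signature) tuples;
        # the root of trust running this loop is anchored in tamper-resistant
        # hardware and is trusted axiomatically.
        for name, image, signature in stages:
            if not verify_signature(image, signature):
                raise RuntimeError("chain of trust broken at " + name + "; halting boot")
            execute(image)  # only validated code is allowed to run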

Secure Boot


Cisco Secure Boot helps ensure that the code that executes on Cisco hardware platforms is genuine and untampered. A typical UEFI-based boot process starts at the UEFI firmware and works up to the boot loader and the operating system. A tampered UEFI firmware can result in the entire boot process being compromised.

Using a hardware-anchored root of trust, digitally-signed software images, and a unique device identity, Cisco hardware-anchored secure boot establishes a chain of trust which boots the system securely and validates the integrity of the software. The root of trust (a.k.a. the microloader), which is protected by tamper-resistant hardware, first performs a self-check and then verifies the UEFI firmware, thus kicking off the chain of trust leading up to the integrity verification of the entire IOS XR operating system.


Secure Unique Device Identifier (SUDI)


The SUDI is an X.509v3 certificate and an associated key-pair which are protected in hardware in the Trust Anchor module (TAm).  The SUDI certificate contains the product identifier and serial number and is rooted in Cisco Public Key Infrastructure. This identity can be either RSA or ECDSA based. The key pair and the SUDI certificate are inserted into the Trust Anchor module during manufacturing, and the private key can never be exported. The SUDI provides an immutable identity for the router that is used to verify that the device is a genuine Cisco product, and to ensure that the router is well-known to the customer’s inventory system.

The SUDI-based identity can be used to perform authenticated and automated configuration using Zero Touch Provisioning (ZTP). A backend system can issue a challenge to the router to validate its identity, and the router will respond to the challenge using its SUDI-based identity. This allows the backend system to not only verify against its inventory that the right router is in the right location but also to provide encrypted configuration that can only be opened by the specific router, thereby ensuring confidentiality in transit.
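
A highly simplified sketch of that challenge-response flow follows; the helper names are hypothetical, and the real exchange also validates the SUDI certificate chain against Cisco’s PKI:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def backend_challenge() -> bytes:
        return os.urandom(32)  # fresh random nonce sent to the router

    def router_response(sudi_private_key, nonce: bytes) -> bytes:
        # Stand-in for a signing operation performed inside the TAm; the
        # SUDI private key can never be exported from the hardware.
        return sudi_private_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

    def backend_verify(sudi_cert, nonce: bytes, response: bytes) -> None:
        # The SUDI certificate binds the public key to the product ID and
        # serial number; verify() raises InvalidSignature on failure.
        sudi_cert.public_key().verify(response, nonce, padding.PKCS1v15(), hashes.SHA256())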

Secure Storage


Cisco’s Trust Anchor technology provides a mechanism to securely store secrets on the router. The encryption of the storage space is tied to the hardware root of trust, and data cannot be decrypted without the specific hardware that was used to encrypt it. The secrets that can be stored include user passwords, customer credentials for authentication protocols such as RADIUS or TACACS, customer certificates, and any type of keys.
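
Conceptually, it behaves like this sketch, where get_hardware_bound_key() is a hypothetical stand-in for 32-byte key material that only the local Trust Anchor can reproduce:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal_secret(get_hardware_bound_key, secret: bytes) -> bytes:
        # Encrypt with an AEAD cipher under a device-bound key.
        aead = AESGCM(get_hardware_bound_key())
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, secret, None)

    def unseal_secret(get_hardware_bound_key, blob: bytes) -> bytes:
        # Decryption fails unless run on the same hardware that sealed
        # the secret, since no other device can derive the key.
        aead = AESGCM(get_hardware_bound_key())
        return aead.decrypt(blob[:12], blob[12:], None)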

The combination of SUDI-based ZTP and secure storage provides very strong protection of customer configuration and secrets.

Hardware Fingerprint


Tampered hardware, particularly in transit, is a clear vector of attack. This is especially a concern when the hardware is in transit from Cisco to our customers and partners, or when a Service Provider ships a router from a holding center to the deployment center. A malicious agent can intercept the hardware in transit and tamper with it in a non-detectable manner.

Cisco’s Hardware Fingerprinting technology provides the ability to detect tampered hardware using the Trust Anchor. Cisco fingerprints the critical hardware elements of a router, such as CPUs and ASICs, during manufacturing and stores the fingerprint in the tamper-resistant Trust Anchor. Once inserted into the Trust Anchor, this fingerprint is not only immutable, it also cannot be read back out.

When the router boots up, UEFI firmware fingerprints the router’s hardware elements and sends the resulting fingerprint to the Trust Anchor hardware, which compares it against the master fingerprint stored inside. UEFI firmware will only boot the router if the Trust Anchor hardware can successfully verify the observed fingerprint against the master fingerprint.
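
In rough pseudocode terms, the boot-time check works something like the sketch below; how the fingerprint is actually derived is not public, so hashing component identifiers here is purely an illustrative assumption:

    import hashlib

    def observed_fingerprint(hardware_element_ids) -> bytes:
        # Hash stable identifiers of critical elements (e.g. CPUs, ASICs).
        digest = hashlib.sha256()
        for element_id in sorted(hardware_element_ids):
            digest.update(element_id.encode())
        return digest.digest()

    def trust_anchor_verify(master_fingerprint: bytes, observed: bytes) -> bool:
        # The comparison happens inside the Trust Anchor; the master
        # fingerprint is never readable from outside the chip.
        return master_fingerprint == observed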

As threats evolve, Cisco continues to enhance the security and resilience of our solutions.  While no vendor can guarantee security, we are committed to transparency and accountability and to acting as a trusted partner to our customers to address today’s, and tomorrow’s, security challenges.

Tuesday, 30 April 2019

TLS Fingerprinting in the Real World

To protect your data, you must understand the traffic on your network. This task has become even more challenging with widespread use of the Transport Layer Security (TLS) protocol, which inhibits traditional network security monitoring techniques. The good news is that TLS fingerprinting can help you understand your traffic without interfering with any of the security benefits TLS brings to applications, and it complements current solutions like Encrypted Traffic Analytics. To help our customers better understand the benefits of the approach, and to help drive the development and adoption of defensive uses of traffic analysis, the Advanced Security Research team of Cisco’s Security and Trust Organization has published a large set of fingerprints with the support of the Cisco Technology Fund.


Transport Layer Security (TLS) fingerprinting is a technique that associates an application and/or TLS library with parameters extracted from a TLS ClientHello by using a database of curated fingerprints, and it can be used to identify malware and vulnerable applications and for general network visibility. These techniques gained attention in 2009 with mod_sslhaf, in 2012 with SSL fingerprinting for p0f, in 2015 with FingerprinTLS, and most recently with JA3. We have been using this approach at Cisco since 2016. The attention given to TLS fingerprinting has been warranted because it is a proven method that provides network administrators with valuable intelligence to protect their networks. And while more of the TLS handshake goes dark with TLS 1.3, client fingerprinting still provides a reliable way to identify the TLS client. In fact, TLS 1.3 has increased the parameter space of TLS fingerprinting due to the added data features in the ClientHello. While there are currently only five cipher suites defined for TLS 1.3, most TLS clients released in the foreseeable future will be backwards compatible with TLS 1.2 and will therefore offer many “legacy” cipher suites. In addition to the five TLS 1.3-specific cipher suites, there are several new extensions, such as supported versions, that allow us to differentiate between clients that supported earlier draft implementations of TLS 1.3.
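
To make the idea concrete, here is a small Python sketch of JA3, one of the schemes mentioned above; the example input values are made up:

    import hashlib

    def ja3(version, ciphers, extensions, curves, point_formats):
        # JA3-style fingerprint: decimal values joined with '-' inside each
        # field, the five fields joined with ',', then the MD5 of the string
        # (real implementations also strip GREASE values first).
        fields = [str(version),
                  "-".join(str(c) for c in ciphers),
                  "-".join(str(e) for e in extensions),
                  "-".join(str(g) for g in curves),
                  "-".join(str(p) for p in point_formats)]
        return hashlib.md5(",".join(fields).encode()).hexdigest()

    # Made-up example: TLS version 771 (0x0303), two cipher suites, four
    # extensions, two curves, one point format.
    print(ja3(771, [49195, 49199], [0, 10, 11, 13], [29, 23], [0]))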

Why is our approach different?


But here’s the catch: the visibility gained by TLS fingerprinting is only as good as the underlying fingerprint database, and until now, generating this database was a manual process that was slow to update and was not reflective of real-world TLS usage. Building on work we first publicly released in January 2016, we solved this problem by creating a continuous process that fuses network telemetry with endpoint telemetry to build fingerprint databases automatically. This allows us to leverage data from managed endpoints to generate TLS fingerprints that give us visibility into the (much larger) set of unmanaged endpoints and do so in a way that can quickly incorporate information about newly released applications. By automatically fusing process and OS data gathered by Cisco® AnyConnect® Network Visibility Module (NVM) with network data gathered by Joy, our system generates fingerprint databases that are representative of how a diverse set of real-world applications and operating systems use network protocols such as TLS. We also apply this process to data generated from Cisco Threat Grid, an automated malware analysis sandbox, to ensure that our system captures the most recent trends in malware. With ground truth from multiple sources like real-world networks and malware sandboxes, we can more easily differentiate fingerprints that are uniquely associated with malware versus fingerprints that need additional context for a confident conviction.

Our internal database has process and operating system attribution information for more than 4,000 TLS fingerprints (and counting) obtained from real-world networks (NVM) and a malware analysis sandbox (Threat Grid). The database also has observational information such as counts, destinations, and dates observed for more than 12,000 TLS fingerprints from a set of enterprise networks. We have open sourced a subset of this database that, at more than 1,900 fingerprints (and counting), is the largest and most informative fingerprint database released to the open-source community.   This database contains information about benign processes only; we are not able to publish fingerprints for malware at this time.


Given the records generated from the data fusion process, we report all processes and operating systems that use a TLS fingerprint, providing a count of the number of times we observed each process or operating system using the TLS fingerprint in real-world network traffic. This schema gives a more realistic picture of TLS fingerprint usage (in other words, many processes can map to a single fingerprint with some more likely than others).

Another advantage of our database is that it provides as much relevant contextual data per fingerprint as possible. The primary key into our database is a string that describes the TLS parameters that you would observe on the wire, which allows a system generating these keys to provide valuable information even in the case of no database matches. We associate each TLS parameter in the ClientHello with the RFC that first defined that parameter and use this information to report maximum and minimum implementation dates. These dates provide useful context on the age of the cryptographic parameters in the ClientHello and are not dependent on a database match.
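
As a purely hypothetical illustration of that idea (not the literal schema of the published database), a record might look like:

    # Hypothetical record shape, for illustration only; see the published
    # open-source database for the actual schema.
    fingerprint_record = {
        # Primary key: a string describing the ClientHello parameters as
        # seen on the wire (version, cipher suites, extensions).
        "tls_params": "(0303)(c02bc02f009c)((0000)(000a)(000b))",
        "min_implementation_date": "2008-08",  # oldest RFC among the parameters
        "max_implementation_date": "2017-05",  # newest RFC among the parameters
        "process_info": [
            {"process": "firefox.exe", "count": 5310},
            {"process": "outlook.exe", "count": 412},
        ],
    }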

Monday, 29 April 2019

Keys to a Successful Automation Project

We just finished working with the analyst firm Analysys Mason on a white paper exploring the factors behind successful automation projects. They talked to a number of tier 1 operators to capture lessons learned from the rollouts of their respective automation projects.  The white paper is more focused on process than technology and I think it’s a worthwhile read for anyone embarking on any automation project.

Automation projects tend to inspire an equal mix of excitement and fear and folks often come to us for advice on what to do.  Unfortunately, there is no one right answer to this; however, as the white paper establishes, there are a few guiding principles to keep in mind:

◈ Automation is not a “Big Bang” endeavor. Successful companies view their automation initiatives as a series of discrete steps. Like a staircase, each step builds upon the ones before it to increase the scope and capabilities of the whole over time.

◈ Each step should have its own payoff, be it cost savings, increased efficiency or something else. Having a backloaded payoff after several years of effort is seldom a great idea for a few reasons: 1) at some point, your leadership is going to wonder why they are spending money on a project with no apparent payoff, 2) your team will get tired of the churn, also with no apparent payoff, and 3) the needs of the organization will inevitably evolve and change, and your initial plan will likely be dated in under a year. Instead, view your overall automation objective as a series of individual steps and make sure each step has tangible, measurable outcomes. At the completion of each step, use what you learned along the way to re-assess the whole project. By doing this, you gain credibility by showing the project is doing what you said it would do, your leadership is happy that they are seeing a return on their spend, your teams’ lives are getting better in measurable ways and they feel like their value to the company is growing, and you can be sure you are never too far out of sync with the business strategy.

◈ Of the three components of the infamous people, process, technology triangle,  “technology” is probably the most straightforward. A successful project is also going to entail investing in your teams’ skills. It’s critical to remember this is a non-compressible process: while you can install new tools in a matter of hours, your team can only absorb and operationalize new technologies at a given rate and you need to take that into account in your plans. In addition, you need processes that can be automated: spaghetti logic, gappy processes and disparate ways of doing the same task are all going to create friction in your quest to automate.  The good news is that an automation initiative is an excellent excuse to clean these all up as they are currently hurting your business whether you automate or not.  Again, allow time to do this: if you get three engineers in a room, you will have four opinions on how a task should be accomplished.

◈ Recognize that there is no one true path to automation. Look at the image below. Some folks will follow the green line and focus on automating technical domains (e.g. data center networking, firewall policy) and then eventually stitch them all together. Others will focus on automating specific business processes (e.g. auto-scaling video delivery, new employee on-boarding) and plan to eventually cover all the company’s activities. Both approaches offer immediate benefits and both have longer-term implications. Neither is right nor wrong; it’s simply a matter of which approach best matches the needs of the company at the time.

Finally, and perhaps most importantly, it’s OK to try things and it’s OK if every step is not a complete success—it’s how you are going to learn. Internalize this but also set the expectation with your team and your leadership. Learn things and adapt. Expect to change tack–while the green and blue lines are nice and neat and look good on slides, reality more likely looks like the grey line going down the middle!


Saturday, 27 April 2019

How to Find Relief for Your Network Infrastructure in the Age of Apps

If you’re like most IT people, never does a day go by that you’re not working on multiple tasks at once: ensuring on-prem data centers and public cloud networks are running smoothly; monitoring the consistency of network security policies; and making sure all of it meets compliance demands. And that doesn’t even begin to address the enormous pressure applications have begun to put on the underlying network infrastructure. As a result, data centers are no longer a fixed entity, but rather a mesh of intelligent infrastructure that spans multiple clouds and geographies. With new applications constantly being added to an infrastructure, roadblocks are beginning to arise, making the role of IT teams more complicated than ever.

Dynamic Network Alignment with IT and Business Policies


The network industry has recognized its unique set of challenges and is addressing them in the form of an intent-based networking architectural approach that builds on software-defined networking to allow continuous, dynamic network alignment with IT and business policies. This means that application, security, and compliance policies can be defined once then enforced and monitored between any groups of users or things and any application or service – or even between application services themselves – wherever they are located.

Forward-looking companies are now using applications not just as a way to engage with customers but also as a means for employees and the organizations themselves to communicate and work together efficiently. To create a more streamlined infrastructure, Cisco has integrated Application Centric Infrastructure (ACI) with the application layer and the enterprise campus to help large and medium-sized organizations that need to adopt a holistic network infrastructure strategy. Designed to help businesses cope with the unique performance, security, and management challenges of highly distributed applications, data, users, and devices, Cisco ACI also addresses the shortcomings of legacy approaches, which relied on manual processes to secure data and applications and control access; those approaches are no longer adequate or sustainable and therefore need to be modernized.

With the ACI and AppDynamics (AppD) integration, application performance correlates with network health, while the Cisco DNA Center and the Identity Services Engine (ISE) work together to deliver end-to-end identity-based policy and access control between users or devices on campus and applications or data anywhere.

Richer Diagnostic Capabilities for Healthier Networks and Apps


Simplifying the deployment and management of applications requires more than just providing and managing the infrastructure that supports them. Cisco’s AppD provides IT teams with the application-layer visibility and monitoring required in an intent-based architecture to validate that IT and business policies are being met across the network. The Cisco ACI and AppDynamics solution also offers high-quality app performance monitoring, richer diagnostic capability for app and network performance, and faster root-cause analysis of problems, with immediate triage sent to the right people quickly.


That said, failures in applications can happen for a variety of reasons, often leading to what’s commonly known as “the blame game,” with people asking questions like, “Is it a network failure or an application failure? Who is responsible – the network team or the apps team?” Manual methods are slow, cumbersome and oftentimes simply unable to detect failures definitively. The ACI and AppD integration offers deep visibility into the application processes and enables faster root-cause analysis by taking the ambiguity out and pinpointing the problem – saving time and money and, most importantly, getting the application back up and running right away.

Network Segmentation is a Must


Hyper-distributed applications and highly mobile users, increased cyber-security threats, and even more regulatory requirements make network segmentation a must for reducing risk and improving compliance. Cisco ACI and Cisco DNA Center/ISE policy integration allows the marrying of Cisco ACI’s application-based microsegmentation in the data center with Cisco SD-Access user-group-based segmentation across the campus and branch. This integration automates the mapping and enforcement of segmentation policy based on the user’s security profile as they access resources within the data center, enabling security administrators to manage end-to-end, user-to-application segmentation seamlessly. A common and consistent identity-based microsegmentation capability is then provided from the user to the application.


Experience ACI Integrations for Yourself


To practice using Cisco ACI, we’ve put together two-minute walkthroughs to help you experience the impact of the integrations and see first-hand how they can make an IT team’s life easier.


Watch how Cisco Cloud ACI enables policy-driven connectivity between on-premises data centers and AWS and Azure public clouds. The aim is to simplify routing and to ensure consistency of network security policies, ultimately helping to meet compliance demands.


Learn how to correlate application health and network constructs for optimal app performance, deeper monitoring, and faster root cause analysis with Cisco ACI and AppDynamics integration.


See how Cisco ACI and Cisco DNA Center/ISE policy integration allows the marrying of ACI’s application-based micro-segmentation in the data center with Cisco SD-Access and user group-based segmentation across the campus and branch.

Source: Cisco.com

Friday, 26 April 2019

The New Network as a Sensor

Before we get into this, we need to talk about what the network as a sensor was before it was new. Conceptually, instead of having to install a bunch of sensors to generate telemetry, the network itself (routers, switches, wireless devices, etc.) would deliver the necessary and sufficient telemetry to describe the changes occurring on the network to a collector and then Stealthwatch would make sense of it.

The nice thing about the network as a sensor is that the network itself is the most pervasive sensor there is. So, in terms of an observable domain and the changes within that domain, it is a complete map. This was incredibly powerful. If we go back to when NetFlow was announced, or a later version like v9 or IPFIX, we had a very rich set of telemetry coming from the routers and switches that described all the network activity: who’s talking to whom, for how long, all the things that we needed to detect insider threats and global threats. The interesting thing about this telemetry model is that threat actors can’t hide in it. They need to generate this stuff or it’s not actually going to traverse the network. It’s a source of telemetry that’s true for both the defender and the adversary.
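
To give a feel for that telemetry, here is a hypothetical flattened flow record in the NetFlow/IPFIX spirit; the field names and values are illustrative, not an exact NetFlow v9 or IPFIX template:

    # Hypothetical flattened flow record: who talked to whom, over what
    # protocol and ports, for how long, and how much data moved.
    flow_record = {
        "src_ip": "10.1.20.14", "dst_ip": "198.51.100.7",
        "src_port": 52113, "dst_port": 443,
        "protocol": 6,  # TCP
        "bytes": 48213, "packets": 61,
        "flow_start": "2019-04-26T09:14:02Z",
        "flow_end": "2019-04-26T09:16:40Z",
    }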

The Changing Network


Networks are changing. The data centers we built in the 90’s and 2000’s and the enterprise networking we did back then are different from what we’re seeing today. Certainly, there is a continuum here, and you as the customer fall somewhere along it. You may have fully embraced the cloud, held fast to legacy systems, or still have a foot in both to some degree. When we look at this continuum, we see the origins back when compute was very physical (so-called bare metal), imaging from optical drives was the norm, and rack units were a very real unit of measure within your datacenter. We then saw a lot of hypervisors when the age of VMware and KVM came into being. The network topology changed because the guest-to-guest traffic didn’t really touch any optical or copper wire but essentially traversed what was memory inside that host to cross the hypervisor. So, Stealthwatch had to adapt and make sure something was there to observe behavior and generate telemetry.

Moving closer to the present, we had things like cloud-native services, where people could just take their guest virtual machines from their private hypervisors and run them on the service provider’s networked infrastructure. This was the birth of the public cloud and where the concept of infrastructure as a service (IaaS) began. This is also how a lot of services, including Cisco services and many of the services you use today, are run to this day. Recently, we’ve seen the rise of Docker containers, which in turn gave rise to orchestration with Kubernetes. Now, a lot of people have systems running in Kubernetes with containers that run at incredible scale and can adapt to changing workload demand. Finally, we have serverless. When you think of the network as a sensor, you have to think of the network in these contexts and how it can actually generate telemetry. Stealthwatch is always there to make sense of that telemetry and deliver the analytic outcome of discovering insider threats and global threats. Think of Stealthwatch as the general ledger of all of the activity that takes place across your digital business.

Now that we’ve looked at how networks have evolved, we’re going to slice the new network as a sensor into three different stories. In this blog, we’ll look at two of these three transformative trends that are in everyone’s life to some degree. Typically, when we talk about innovation, we’re talking about threat actors and the kinds of threats we face. When threats evolve, defenders are forced to innovate to counter. Here however, I’m talking about transformative changes that are important to your digital business in varying ways. We’re going to take them one by one and explain what they are and how they change what the network is and how it can be a sensor to you.

Cloud Native Architecture


Now we’re seeing the dawn of serverless via things like AWS Lambda. For those that aren’t familiar, think of serverless as something like Uber for code. You don’t want to own a car or learn how to drive; you just want to get to your destination. The same concept applies to serverless. You just want your code to run and you want the output. Everything else (the machine, the supporting applications, and everything that runs your code) is owned and operated by the service provider. In this particular situation, things change a lot: you don’t own the network or the machines. Serverless computing is a cloud computing execution model in which cloud solution providers dynamically manage the allocation of machine resources (i.e. the servers).

So how do you secure a server when there’s no server?

Stealthwatch Cloud does it by dynamically modeling the server (that does not exist) and holding that state over time as it analyzes the changes described by the cloud-native telemetry. We take in a lot of metadata and build a model of, in this case, a server; over time, as everything changes around this model, we hold state as if there really were a server. We perform the same type of analytics, trying to detect potential anomalies that would be of interest to you.


In this image you can see that the modeled device has, in a 24-hour period, changed IP address and even its virtual interfaces whereby IP addresses can be assigned. Stealthwatch Cloud creates a model of a server to solve the serverless problem and treats it like any other endpoint on your digital business that you manage.


This “entity modeling” that Stealthwatch Cloud performs is critical to the analytics of the future because even in this chart, you would think you are just managing that bare metal or virtual server over long periods of time. But believe it or not, these traffic trends represent a server that was never really there! Entity modeling allows us to perform threat analytics within cloud native serverless environments like these. Entity modeling is one of the fundamental technologies in Stealthwatch and you can find out more about it here.
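
As a conceptual sketch only (not Stealthwatch Cloud’s actual implementation), entity modeling amounts to keying state to a stable entity identifier rather than to ephemeral attributes like IP addresses:

    from collections import defaultdict

    class EntityModel:
        # Conceptual: state survives IP and interface churn because it is
        # keyed to the entity, not to its current address.
        def __init__(self):
            self.state = defaultdict(lambda: {"ips": set(), "ports": set()})

        def observe(self, entity_id, ip, port):
            known = self.state[entity_id]
            is_new_behavior = ip not in known["ips"] or port not in known["ports"]
            known["ips"].add(ip)
            known["ports"].add(port)
            return is_new_behavior  # deviations become candidate alerts

    model = EntityModel()
    model.observe("serverless-fn-42", "10.0.3.8", 443)    # baseline
    model.observe("serverless-fn-42", "10.0.7.99", 443)   # new IP, same entity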

We’re not looking at blacklists of things like threat actors’ IP addresses or fully qualified domain names. There’s no list of bad things; rather, we tell you about events of interest that have not yet made their way onto a list. This catches things that you did not know to even put on a list – things in potential gray areas that really should be brought to your attention.

Software Defined Networks: Underlay & Overlay Networks


When we look at overlay networks, we’re really talking about software-defined networks and the encapsulation that happens on top of them. The oldest of these, I think, would be Multiprotocol Label Switching (MPLS), but today you have techniques like VXLAN and TrustSec. The appeal is that instead of having to renumber your network to represent your segmentation, you use encapsulation to express the desired segmentation policy of the business. The overlay network uses encapsulation to define policy that is based not on destination-based routing but on labels. When we look at something like SD-WAN, you can see the traditional network architectural model changing. You still have the access layer or edge for your network, but everything else in the middle is now a programmable mesh whereby you can just concentrate on your access policy and not the complexity of the underlay’s IP addressing scheme.
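
As a concrete example, the segmentation label carried by a VXLAN-encapsulated packet is the 24-bit VNI, which a sensor can read straight out of the VXLAN header per RFC 7348; a minimal sketch:

    def vxlan_vni(vxlan_header: bytes) -> int:
        # RFC 7348: byte 0 carries flags (0x08 = VNI field is valid),
        # bytes 4..6 carry the 24-bit VXLAN Network Identifier.
        if not vxlan_header[0] & 0x08:
            raise ValueError("VNI flag not set")
        return int.from_bytes(vxlan_header[4:7], "big")

    # Flags 0x08, three reserved bytes, VNI 0x0012b5 (4789), one reserved byte.
    print(vxlan_vni(bytes([0x08, 0, 0, 0, 0x00, 0x12, 0xB5, 0])))  # -> 4789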

For businesses that have fully embraced software-defined networking of any type, the underlay is a lie! The underlay is still an observational domain for change, and its telemetry is still valid, but it does not represent what is going on in the overlay network. For that, either there is a way to access the native telemetry of the overlay, or you will need sensors that can generate telemetry that includes the overlay labeling.

Enterprise networking becomes about as easy to set up as a home network, which is an incredibly exciting prospect. Whether your edge is a regular branch office, a hypervisor in a private cloud, IaaS in a public cloud, etc., as traffic leaves the edge for the rest of the Internet, it crosses an overlay network that describes where things should go and provisions the necessary virtual circuits. When we look at how this relates to Stealthwatch, there are a few key things to consider. Stealthwatch is getting the underlay information from NetFlow or IPFIX. If it has virtual sensors that are sitting on hypervisors or things of that nature, it can interpret the overlay labels (or tags), faithfully representing the overlay. Lastly, Stealthwatch is looking to interface with the actual software-defined networking (SDN) controller so it can then make sense of the overlay. The job of Stealthwatch is to put together the entire story of who is talking to whom and for how long by taking into account not just the underlay but also the overlay.

Thursday, 25 April 2019

A Required Cloud Native Security Mindset Shift

There has been a significant shift in the public cloud infrastructure offerings landscape in the last 3-5 years. With that shift, we as Information Security practitioners must also fundamentally shift our tactics and our view of how these new services can be leveraged as a whole new landscape of possible vectors to be exploited in today’s public cloud infrastructure.

As a Pre-Sales Security Engineer, I talk to many customers on a daily basis: some that have gone all-in with their public cloud strategy, many that are hybrid, with what they view as legacy on-premise workloads, and those that are still relatively new to the public cloud and, as such, only have lab and test workloads that they are looking to protect.

Regardless of whether an organization is cloud native from the ground up or somewhere along the path of transitioning to the public cloud, they must all recognize that the infrastructure landscape has changed drastically from what an on-premise datacenter traditionally looked like. That change includes security. Even public cloud capabilities themselves have changed drastically in just a few short years, from servers to serverless and containerized microservices.

In today’s public cloud landscape across all major providers, InfoSec teams must realize that their cloud assets that require protection stretch far beyond those of traditional virtual machines being hosted by their respective cloud provider. The introduction of point-in-time on-demand compute, serverless databases, machine learning services, public-facing storage buckets, and elastic containerized environments like Kubernetes has introduced a plethora of new attack surfaces that, if not secured properly, could potentially all be leveraged as a vector into a customer’s cloud environment.


Combine the sprawl of serverless capabilities that are being offered by public cloud providers with the development culture shift to DevOps and the rise in Shadow IT, and the risk to an organization is very apparent. Their concern should not only be the cloud workloads they know about, but also the ones they are completely blind to.

The traditional mindset of attackers gaining entry into a network via a public-facing application vulnerability or perimeter firewall gap must shift knowing the many services and entry points into an organization’s public cloud network. Now InfoSec teams must focus their attention on non-traditional attack vectors like compromised API access keys, weakly-permissioned file storage buckets, an expanding credentialed access surface area, or one of dozens of public cloud unique services that can easily be exposed to the Internet with no firewall or ACL to protect them from adversaries. A Kubernetes cluster in the public cloud can easily grow from a possibly vulnerable surface area of a few nodes with a few pods each to a massive cluster with hundreds or thousands of internet-facing pods in a matter of minutes.

Organizations must have visibility into the underlying infrastructure if they want to have a chance at protecting this rapidly-expanding public cloud landscape. You can’t rely on agents or manual human oversight to ensure workloads and assets are accounted for and secured. The surface area of virtual machine sprawl, serverless compute applications, and DevOps/Shadow IT dictates that organizations have no choice but to leverage the public cloud’s underlying network infrastructure as a catch-all security sensor grid. If an organization can see everything and eliminate all possible blind spots despite the stated landscape, then it can see, secure, and monitor everything in its cloud environment.


But how? This is where Cisco Stealthwatch Cloud plays an integral and necessary role in providing an organization this essential catch-all visibility layer. The solution leverages agentless API integrations and cloud-native network flow log ingestion to provide a complete record of every transaction that occurs within any public cloud environment or service, be it server, serverless, or containerized. Stealthwatch Cloud generates a deep forensic history of every cloud entity, known or unknown, learns known-good behavior for each, and then alerts on hundreds of indicators of compromise or policy violation that can put an organization at risk of breach.
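
As a simple illustration of what ingesting cloud-native flow logs involves (a sketch, not Stealthwatch Cloud’s actual code), here is a parser for one AWS VPC Flow Log record in the version 2 default format; the sample values are made up:

    # Sketch: parse a single AWS VPC Flow Log line (v2 default format).
    FIELDS = ("version account_id interface_id srcaddr dstaddr srcport "
              "dstport protocol packets bytes start end action log_status").split()

    def parse_flow_log(line: str) -> dict:
        record = dict(zip(FIELDS, line.split()))
        for key in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
            record[key] = int(record[key])  # real logs may contain '-' for NODATA
        return record

    sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 203.0.113.9 "
              "44321 443 6 25 6890 1556200000 1556200060 ACCEPT OK")
    print(parse_flow_log(sample)["dstport"])  # -> 443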