Wednesday 31 October 2018

Layers of Security

Do you remember the movie “Die Hard”? Arguably the best Christmas movie ever. All kidding aside, the movie maps surprisingly well onto security best practices. Before we go into that, let’s recap. The bad guys in the movie were going to steal $640M in bearer bonds. In order to do so, they needed to break through several layers of security:

◈ Infiltrate Nakatomi Plaza and get rid of the guards
◈ Get the vault password from Mr. Takagi
◈ Have your computer guy hack through the vault locks
◈ Have the FBI cut power to the building, which in turn disables the last lock

So, how does this relate to security? Layers. Lots and lots of layers. Utilizing a layered approach to security means that there are several hurdles the bad guys need to overcome in order to get to your “bearer bonds” (your data, user information, etc.). The more challenging it is for someone to gain access to your resources, the less likely they are to spend their time and effort getting them. While that is not always the case, if your security methodology can stop a large percentage of malicious activity early, you can focus on the more sophisticated attempts. Former Cisco CEO John Chambers said, “There are two types of companies: those that have been hacked, and those who don’t know they have been hacked.” If you take this to heart, it will help in laying out the strategy you need to best protect your people, applications, and data.

There is not one way to set up these layers, and they are certainly not linear, as attacks can come from both inside and outside your network. Let’s take a look at some of the layers that could be considered foundational to any security plan.

Starting at the Front Door


From a technology point of view, this makes me think of the firewall. Granted, in many ways this is obvious: limit access to and from the Internet. This is a great place to start, as it is like the lock on the front door of your home. With today’s next-generation firewalls, one can identify applications, perform deeper packet inspection, and ultimately gain more granular control. As infrastructures change, we are now deploying firewall technology within segments of the network and even into the cloud.

Who? What? Where? When?


Nakatomi Plaza had locks on its doors, guards, and security cameras. Whether it’s physical or network security, managing access is critical. When we understand who is accessing the network (employee, contractor, guest, CEO), how they are accessing it (corporate laptop, phone, personal tablet), where they are accessing it from (HQ, branch office, VPN), and even the time of day, decisions can be made to allow or deny access. Taking it a step further, access control today with a solution like Identity Services Engine (ISE) can take all of this into consideration to allow or deny access to specific resources on the network. For example, if a user in the Engineering group is at HQ and trying to update a critical server using a corporate-issued laptop, the engineer may be able to do so. That same engineer, still at HQ but on a personal laptop or tablet, may be denied access. Managing access to resources is one of the most important and challenging areas of security.
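
The contextual decision described above can be sketched in a few lines of Python. This is purely illustrative logic, not ISE’s actual policy engine or syntax; the group, device, and location values are invented for the example.

```python
# Hypothetical sketch of a context-aware access decision, in the spirit of
# ISE policy: user group, device type, and location all factor into the verdict.
# Attribute names and rules here are illustrative only.

def access_decision(user_group, device, location):
    """Return 'permit' or 'deny' for a resource request based on context."""
    corporate_devices = {"corporate laptop", "corporate phone"}
    trusted_locations = {"HQ", "Branch Office"}

    # Engineers on corporate-issued devices at trusted sites may reach the server.
    if (user_group == "Engineering" and device in corporate_devices
            and location in trusted_locations):
        return "permit"
    # Same engineer on a personal device: denied, as in the example above.
    return "deny"

print(access_decision("Engineering", "corporate laptop", "HQ"))  # permit
print(access_decision("Engineering", "personal tablet", "HQ"))   # deny
```

The point is that the verdict depends on the combination of attributes, not any single one of them.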

You’ve Got Mail


Email is still the number one threat vector when it comes to malware and breaches. The criminals are getting smarter, and when they send out a phishing attack, spam, or malicious email, it can look completely legitimate, making it challenging to know what is real and what is not. Email security solutions can pore over all incoming and outgoing mail. These tools can verify the sender and receiver. They can look at the content and attachments. Based on policies and information from resources such as Talos, compromised emails may never even make it to the recipient. As long as email continues to be a primary means of communication, it will continue to be a primary way to be breached.

We’re Not in Kansas (or the office) Anymore


Almost 50% of the workforce is mobile today. People are working from homes, hotels, coffee shops, and planes, and they need to access data from anywhere at any time. Keeping that data secure is not the job of the cloud provider but of the data’s owner. Additionally, the users accessing the data from so many not-so-secure locations are of course always using their VPNs every time, right? Wrong! In a recent survey, over 80% of those polled admitted to not using their VPN when connecting to public networks. So now the bad guys are through another layer of security. We need to protect cloud users, applications, and data. CASB (Cloud Access Security Broker) is a technology that does just that. Cloudlock can detect compromised accounts, monitor and flag anomalies, and provide visibility into those cloud applications, users, and data.

As work becomes a thing we do and less of a place we go, the risk of attack gets higher. I said earlier that the number one threat vector is email. Within those emails, much of the malware is launched by clicking on a link. That means DNS is yet another method that can be used by the bad guys; in fact, around 90% of malware uses DNS at some stage. Umbrella provides not only better Internet access, but secure Internet access. Regardless of where you go, Umbrella can protect you.
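
At its core, DNS-layer protection works by refusing to resolve known-bad names before a connection is ever made. Here is a toy Python sketch of that idea, with made-up domains and a static block list standing in for the live threat intelligence a service like Umbrella actually uses.

```python
# Toy illustration of DNS-layer enforcement: check a name against a block list
# before resolving it. Real services use continuously updated intelligence,
# not a static set; the domains below are invented.

BLOCKED_DOMAINS = {"malware-dropper.example", "phish-login.example"}

def resolve(domain):
    """Refuse to resolve known-bad names; otherwise return a placeholder record."""
    if domain.lower().rstrip(".") in BLOCKED_DOMAINS:
        return None  # query sinkholed: the click in the email goes nowhere
    return "203.0.113.10"  # placeholder answer (TEST-NET address)

print(resolve("malware-dropper.example"))  # None
print(resolve("www.cisco.com"))            # 203.0.113.10
```

Because the block happens at name resolution, the malicious link never turns into a connection, regardless of which network the user is on.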

Having all of these layers to protect your “bearer bonds” doesn’t guarantee that nothing bad will happen. The bad guys have a lot of resources and time to get what they want. This methodology will hopefully make it so difficult for them that they don’t even want to try. People, applications, and data: it’s a lot to protect and a lot to lose. If you do it correctly, though, you get to be the hero at the end.

Wednesday 24 October 2018

Cloud Covered – Are You Insured?

Security is a topic that is top of mind for every CIO out there. Interestingly, according to a study by 451 Research, 64% of enterprises report that information security is not a separate budget line; instead, it is part of the IT infrastructure budget.

In other words, most of us take security for granted until something bad happens. Pete Johnson (“Cloud Unfiltered”) even referred to it as “insurance,” and I believe that is the appropriate term for it.

We all know we need insurance, but what is the right coverage? Well, it really depends on what types of assets you are trying to protect and how your business would be impacted if something happened to them.

If we think about our daily lives, imagine having 20 doors and windows wide open and then locking, or adding video surveillance to, only the one in the backyard (because your neighbor just told you he had been robbed the night before and that the thief broke in through the backyard door). Well, that’s a good start; however, there are still 19 doors and windows wide open and vulnerable for anybody to access, right?

Well, that’s pretty much what happens in IT, and securing only a few “doors” is called “black-listing”. Let me explain: every server has 65,535 TCP ports (and the same number of UDP ports). With the black-listing approach, we close just a few ports based on knowledge of common vulnerabilities. Most of the time, we don’t know which ports our apps actually need, so we follow this approach and block a few ports while permitting the rest.
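
The arithmetic behind that point is stark. A quick sketch (the black-listed ports below are just examples of commonly abused ones):

```python
# Back-of-the-envelope view of black-listing: blocking a handful of known-bad
# ports still leaves the overwhelming majority of the 65,535 TCP ports exposed.

TOTAL_TCP_PORTS = 65535
blacklist = {23, 135, 139, 445, 3389}  # a few commonly abused ports

still_reachable = TOTAL_TCP_PORTS - len(blacklist)
print(still_reachable)                             # 65530 ports still open
print(f"{still_reachable / TOTAL_TCP_PORTS:.1%}")  # effectively all of them
```

Five locked doors out of 65,535: that is the black-listing posture in a nutshell.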

In today’s Multicloud world, constant and more sophisticated threats are a fact and black-listing security is definitely not enough.

All we must do is install a Tetration software sensor on top of operating systems like Windows, Linux, Solaris, and AIX, among others; it does not matter whether they are running bare-metal, virtualized, or container-based, or even on a public cloud or non-Cisco hardware. Once installed, the sensors continuously feed every flow going in and out of that host to the Tetration engine, which shows us the Application Dependency Mappings.

Think of the sensors as continuous-feed cameras, while the Tetration engine acts as that person in the SOC watching 24×7, reporting any process-level or network anomalies and keeping all the past recordings available for you to analyze when needed. Before, we could only rely on “video samples” from specific places at specific times (using things like NetFlow or SPAN sessions).

This provides us with great value, since now we know which specific ports our apps really need and we can close the rest, an approach called “white-listing” or “zero-trust policies”. We can now use that information and apply our zero-trust policies either manually or even automatically, as shown in the video below.
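
Conceptually, deriving the allow list is a reduction over observed flows. The sketch below uses invented flow records; a real deployment would consume sensor telemetry and Tetration’s dependency mappings rather than a hand-written list.

```python
# Sketch of deriving a zero-trust allow list from observed flows, the way
# application dependency mapping informs white-listing. Flow records invented.

observed_flows = [
    {"dst": "app-server", "port": 443,  "proto": "tcp"},
    {"dst": "app-server", "port": 443,  "proto": "tcp"},   # duplicates collapse
    {"dst": "db-server",  "port": 5432, "proto": "tcp"},
]

def build_allow_list(flows):
    """Collapse observed flows into the set of (host, port, proto) to permit."""
    return {(f["dst"], f["port"], f["proto"]) for f in flows}

allow = build_allow_list(observed_flows)
print(sorted(allow))
# [('app-server', 443, 'tcp'), ('db-server', 5432, 'tcp')]
```

Everything not in the resulting set gets denied by default, which is the inversion that turns black-listing into white-listing.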

Tetration supports enforcing those policies at the sensor level, turning the software sensor into an enforcement agent and executing segmentation at the OS level. We could also automate the configuration of those policies on ACI or on your own firewall using tools like Tufin.

Tetration software sensors log every flow at the process level; therefore, they can help us identify anomalies or deviations from the baseline (like privilege escalation, changes in binary files, failed logons, and many more).

There are many other types of coverage we may need for IT and our apps, and a comprehensive solution may be required. This is where Stealthwatch and Stealthwatch Cloud (which effectively report potential attacks), ACI (which can execute and complement our security strategy at the multicloud network level while encrypting VXLAN communications), and an effective next-generation firewall like the Firepower family, among others, can further reduce blind spots and help us react faster to potential threats.

Having multiple homes (in this case, clouds) where our applications may live would normally force us into having multiple insurance policies. With solutions like these, we can have a single, continuous, and consistent one, which should help us get some extra hours of quality sleep at night!

Saturday 20 October 2018

Machine Learning is NOT Rocket Science: Part 2

In Part 1 of this blog, I pointed out that using machine learning algorithms is much easier today with packages such as scikit-learn, TensorFlow, PyTorch, and others. In fact, using machine learning has largely become a data management and software development problem rather than something with the mythical complexity of rocket science.

Yet it has never been more important to take advantage of machine learning. According to the McKinsey report, being one of the first to adopt artificial intelligence has huge implications for future cash flow.

Relative Change in Cash Flow by AI Adoption

If machine learning indeed boils down to data management and using the machine learning packages, what challenges are enterprises facing today in making use of that data? For many Cisco customers, we find that

◈ Data scientists are tasked with mining value out of the data.  As they explore the value of data sources A and B, there may be petabytes of additional data, which represents huge changes in infrastructure requirements.  While data scientists can work with a small data set using a curated version of machine learning software on their laptops, scaling to petabytes clearly requires working closely with IT teams.

◈ There are numerous machine learning software stacks.  Not only are there numerous options; many, like TensorFlow, even have nightly builds with new capabilities.  Hence, the machine learning software ecosystem is relatively immature compared to, say, relational databases.

◈ IT teams are trying to help the data scientists.  Yet constantly changing data sources lead to drastic infrastructure requirement changes.  With an immature software ecosystem, it is very difficult for IT to create a stable environment with the infrastructure needed to scale.

At Cisco, we understand these challenges.  Oftentimes, we find that the IT team and the data scientists may be at odds with one another.  To help our customers, Cisco has developed Cisco Validated Designs (CVDs), in partnership with the machine learning software ecosystem, to create complete solutions based on a unified architecture that can quickly scale, enabling IT teams to better support the data scientists.

Let’s highlight some examples of Cisco Validated Designs supporting machine learning. One of the prerequisites of machine learning is the data itself.  Many Cisco big data customers already have a data lake in Hadoop that requires further analysis to extract more value from the data.  Hence, Cisco has partnered with Cloudera to create a CVD using Cloudera Data Science Workbench, enabling customers to tap into the Hadoop data lake and use the latest machine learning frameworks such as TensorFlow and PyTorch.  Similarly, Hortonworks 3.0 includes the latest YARN scheduler, which can schedule workloads on CPUs and GPUs to support workloads like Apache Spark and TensorFlow as Docker containers.  This solution enables IT teams to scale CPU, GPU, and storage.

Cisco’s proven approach helps simplify deployments, accelerates time to insight, and enables data scientists to curate their own machine learning software stacks. For data scientists who may want to run some machine learning experiments in the cloud, Cisco is actively contributing code to the Kubeflow open source project, ensuring that there are consistent tools for machine learning both on-premises and in the cloud, enabling a hybrid cloud architecture for AI and ML.  In fact, Gartner points out that 57% of machine learning models are developed using on-premises resources.

UCS AI ML Solutions

By expanding our UCS portfolio with the new C480 ML, we continue to diversify for any workload. All UCS Servers are based on a unified architecture and can be managed by Cisco Intersight, making it simple to integrate into existing UCS environments.

UCS Unified Architecture

In short, Cisco has expanded the UCS portfolio to now include a system that is purpose-built for deep learning. We are working with the machine learning software ecosystem to demystify AI/ML with proven, full-stack solutions developed with industry leaders. Our goal is to help IT better support AI projects and their data scientists.  May your machine learning journey be fast and smooth.

Activate Power of Data with UCS

Friday 19 October 2018

Machine Learning is NOT Rocket Science: Part 1

Movies have always created a powerful mystique about artificial intelligence. For example, 2001: A Space Odyssey had the computer HAL 9000, which recognized astronauts, spoke to them, and even locked the door to prevent an astronaut from entering the spacecraft. In the Terminator movies, Skynet was a self-aware computer set on destroying humans. The awesome computer capabilities depicted in these and other movies are very entertaining, to be sure, but they also create a mysticism about computers being omniscient, omnipresent, and even omnipotent. Parts of these fictional computer superpowers are actually reality in our pockets: for many of us, the smartphone is able to recognize our voices and faces, talk to us in different languages, and even lock doors. Despite how deeply artificial intelligence and machine learning are embedded into our lives, the mystical powers of fictional computers still give many the impression that using artificial intelligence or machine learning in business requires the wizardry of Merlin, the intellect of Einstein, and the national effort of a moon landing.

The reality is that machine learning has advanced to the point where it is no longer in the realm of rocket science. To take advantage of machine learning today, one does NOT need to know all the internal details of a machine learning algorithm, such as:

◈ Auto-differentiation
◈ Stochastic gradient descent
◈ Kernel tricks of a support vector machine

One only has to be able to use software packages such as scikit-learn, TensorFlow, PyTorch, and many others. Rather than the mysticism of rocket science, the true technical barrier to entry for machine learning has been lowered to that of a software problem. Does having a data scientist who understands the machine learning algorithmic details help? Absolutely. However, data scientists do not need to know all the details of a machine learning algorithm to mine value out of data. The situation is very similar to that of a C programmer who may not understand the details of assembly language but can still develop sophisticated programs.

Is machine learning different from traditional programming? Yes. Historically, humans had to create software that takes input data and has the computer generate output data. For example, a programmer could try to write code that recognizes photos of cats and dogs by describing all the characteristics of cats and dogs (e.g., noses, ears, tails). Unfortunately, this is an exceptionally daunting problem because of the myriad variations among cats, dogs, and their respective breeds. Instead of writing such detailed instructions to recognize a pet, with supervised machine learning you feed the machine learning algorithm lots of labeled examples, such as photos that are properly identified as cats and dogs. Then the machine learning algorithm can create a program, also known as a model, that can recognize cats and dogs with amazing accuracy. With this ability to recognize patterns in data, machine learning can be used in a variety of tasks, not just academic examples such as pet recognition. The once-complex pattern recognition problem has become as simple as managing the labeled data and using the machine learning algorithms.
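
To make the “learn from labeled examples” idea concrete, here is a miniature supervised-learning example in pure Python: a nearest-centroid classifier “trained” on labeled points. Real work would use a package such as scikit-learn (e.g., `sklearn.neighbors.KNeighborsClassifier`); the toy features (ear length, snout length) and their values are invented purely for illustration.

```python
# Miniature supervised learning: fit() turns labeled examples into a model
# (per-label mean feature vectors), predict() applies that model to new data.

def fit(examples):
    """examples: list of (features, label). Returns per-label centroid vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(model, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(model[label], features)))

labeled = [([3.0, 2.0], "cat"), ([3.5, 2.5], "cat"),
           ([9.0, 8.0], "dog"), ([10.0, 9.0], "dog")]
model = fit(labeled)
print(predict(model, [3.2, 2.2]))   # cat
print(predict(model, [9.5, 8.5]))   # dog
```

Nobody wrote rules describing cats or dogs; the “program” is the model produced from the labeled data, which is exactly the shift the paragraph above describes.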

AI / ML Write Code Based on Examples

Wednesday 17 October 2018

Miercom Tests Endorse Cisco 1000 Series ISRs’ IPsec Encryption Performance

In both traditional and future SD-WAN network architectures, IPsec encryption performance is one of the most important technologies for secure delivery of customer traffic in branch routers. Higher IPsec throughput performance can also translate into improved customer experience and even revenue.

Miercom recently validated several models of Cisco, Huawei, and HPE fixed branch routers, measuring RFC 2544 IPsec encryption throughput. The testing shows that the Cisco 1111 Integrated Services Router (ISR) demonstrated the highest average IPsec throughput, 365 Mbps, compared to the Huawei and HPE fixed branch routers; the Huawei AR1220E showed only 245 Mbps. Each figure is the average of 20 test runs, which makes the comparison statistically meaningful.

Table 1 shows the overall throughput performance comparison chart from the Miercom report.

Table 1. Competitive WAN performance

Let’s look at the result variation among the 20 test runs. See Table 2.

Table 2. WAN performance variation

The Huawei AR1220E fixed router shows the largest throughput variations. In other words, the Huawei fixed router’s throughput is not the same when measured at different times under the same setup conditions and environment. To customers, this could mean very inconsistent throughput due to complex processing of I/O, buffering, table lookups, queuing, and forwarding sessions. For a service provider, this could result in poor customer satisfaction.
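
The run-to-run variation Miercom reports can be summarized as a coefficient of variation (standard deviation relative to the mean). The throughput samples below are made-up numbers for illustration, not Miercom’s actual per-run data.

```python
# Coefficient of variation: a scale-free way to compare throughput consistency.
# Lower is steadier. Sample values are invented for illustration.

import statistics

def coefficient_of_variation(samples):
    return statistics.pstdev(samples) / statistics.mean(samples)

steady  = [365, 363, 366, 364, 365]   # consistent runs cluster near the mean
erratic = [245, 200, 290, 180, 310]   # wide swings => large variation

print(f"{coefficient_of_variation(steady):.3f}")   # small
print(f"{coefficient_of_variation(erratic):.3f}")  # much larger
```

A router with a low coefficient of variation delivers the same experience at 9 a.m. and 9 p.m.; a high one means the averages in the tables hide large swings.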

If we look at the overall test result variations reported by Miercom, the two Cisco fixed ISRs, the 1117 and 1111, have the lowest variations in IPsec throughput results, while the three Huawei fixed routers, the AR1220E, AR169FGW-L, and AR201, show the highest variations. See Table 3. To customers, this means that if you pick Cisco fixed routers as your branch router for WAN services, you will get better and more consistent IPsec throughput performance, while if you pick Huawei fixed routers, the service may be very inconsistent.

Table 3. Competitive WAN performance variability

For the full details, download the comprehensive Miercom report and accompanying test results.

Sunday 14 October 2018

Building the 5G Business Case

2018 has been the year when 5G came out of the standards bodies and into reality, with many trials throughout Asia Pacific (APAC). The learnings from these trials have shown us not only what services could be supported, but also how 5G should be deployed and what investments will likely be required for a commercial launch. At the recent 5G Asia event, the discussion moved from how and when 5G will be deployed to how we are going to pay for it. So what will a 5G investment business case look like?

The way I see it, there are three broad areas of focus for the initial 5G business case, considering both top-line and bottom-line drivers:

1. First, the initial focus area is the economics of meeting the current projected data capacity growth requirements at a lower cost per bit. 3G/4G traffic is still growing at more than 100% year-over-year in some APAC markets and even jumped over 400% in India last year. The amount of capacity that 5G will enable of course depends primarily on the amount of new spectrum that regulators make available, but looking at allocations to date, we expect 5G to open up around 5x more bandwidth than 4G has today. Based on our modeling, 5G cost per bit could be less than half of what it is for 4G, and a quarter of 3G costs. This will drive traffic migration and spectrum re-farming initiatives once 5G is launched.

2. Second, 5G enables more customized services, or so-called slices. Most 4G services today are supported over the same generic “bearer” regardless of the requirements or value of the service. With 5G networks, attributes like bandwidth (BW), latency, resiliency, and security can be customized per application, whether over-the-top (OTT), enterprise, or Internet of Things (IoT). The business case drivers here are both increased revenue share from new and differentiated services, and further improved cost to serve.

3. Third is the monetization of completely new services beyond what 4G can support today. This is where the cool new applications come in: low-latency Augmented Reality (AR)/Virtual Reality (VR), the tactile internet, super-high-definition media with Gbps bandwidth, and massively dense machine-to-machine (M2M) deployments. This is also the area with the most uncertainty for Return on Investment (RoI). Traditionally, the mobile industry hasn’t been great at predicting the next “killer” application, but we do know 5G is a step change in mobile capabilities, and with the right device and application ecosystem, the next killer app will come.
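
The first driver above lends itself to a quick back-of-the-envelope check. The inputs below are hypothetical, anchored only to the relative claims in the text (roughly 5x the bandwidth of 4G, with cost per bit dropping to less than half):

```python
# Toy cost-per-bit model for the capacity-economics driver. All numbers are
# normalized and illustrative, not an actual operator business case.

def cost_per_bit(total_cost, bits_delivered):
    return total_cost / bits_delivered

# Hypothetical: 5G capex/opex grows 2x while deliverable capacity grows 5x.
cost_4g = cost_per_bit(total_cost=1.0, bits_delivered=1.0)
cost_5g = cost_per_bit(total_cost=2.0, bits_delivered=5.0)

print(cost_5g / cost_4g)   # 0.4 -> less than half the 4G cost per bit
```

Even if deploying 5G costs twice as much in absolute terms, a 5x capacity gain still cuts the cost per bit by more than half, which is what drives traffic migration and spectrum re-farming once 5G launches.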

As we move from trial, to deployment, to launch of 5G services, the business model will continue to evolve, but at least for now, we have an initial view of how 5G will benefit service providers. Investment business cases based on lower cost per bit capacity and service differentiation are enough to show positive RoI, and monetizing the next 5G killer app will be the future icing on the cake.

Empowering Defenders: AMP Unity and Cisco Threat Response

Defenders have a lot of work to do and many challenges to overcome. The Cisco 2018 Security Capabilities Benchmark Study, in which we touched more than 3,600 customers across 26 countries, confirmed these assumptions. We have seen that defenders are struggling with the orchestration of a mix of security products, which by itself may obfuscate rather than clarify the security landscape.

Let’s take a moment to imagine a security team and the tasks it performs daily. Reviewing increasing numbers of alerts, attempting to correlate information from various sources to build a complete picture of each potential threat, and triaging and assigning priorities are all complex tasks performed under time pressure. The goal is to quickly come up with an adequate response strategy based on a clear understanding of the threat, its scope of compromise, and the potential damage it could cause. This process is often error-prone and time-consuming when it is manual, and when understanding the alerts becomes a challenge, high-severity threats can slip through the defenses.

We have heard from the majority of customers that an integrated approach is easier to implement and is more cost effective. Listening to and understanding the needs of our customers has always been a priority for us. Therefore, to empower security analysts with effective weapons to defend their organizations, Cisco has built a security architecture that helps streamline security operations. Most recently we have developed two offerings: one a platform and the other a capability: Cisco Threat Response and AMP Unity. Both are exciting developments and while they are different, they serve the same strategic goal.

AMP Unity


AMP Unity is a capability that allows organizations to register their AMP-enabled devices (Cisco NGFW, NGIPS, ESA, CES, and WSA with a Malware/AMP subscription) in the AMP for Endpoints Console. In this way, those devices can be seen and queried (for sample observations) the same way the AMP for Endpoints Console already provides for endpoints. This integration allows file propagation data to be correlated across all of the threat vectors in a single user interface (the Global File Trajectory view).

Global File Trajectory view (showcasing file transfer through an email gateway, down to the endpoint, across the network to another endpoint)

But it doesn’t stop there. AMP Unity also allows you to create common file whitelists and file blacklists (through the same AMP for Endpoints Console) and enforce them across all of the registered AMP-enabled devices in the organization alongside your AMP endpoints (Global Outbreak Control).
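
For teams that want to drive this programmatically, the idea is to push a file hash onto a detection list that all registered devices then enforce. The sketch below only builds the request URL; the endpoint path, list GUID, and hash are placeholders, and you should consult the AMP for Endpoints API documentation for the real resource names and authentication details before using anything like this.

```python
# Conceptual sketch of Global Outbreak Control via an API call: add a SHA-256
# to a detection list so every registered AMP-enabled device blocks it.
# The path and GUID below are illustrative placeholders, not documented values.

def block_file_url(base, list_guid, sha256):
    """Build the (illustrative) URL used to add a SHA-256 to a detection list."""
    return f"{base}/v1/file_lists/{list_guid}/files/{sha256}"

url = block_file_url("https://api.example.amp.cisco.com",
                     "11111111-2222-3333-4444-555555555555",
                     "a" * 64)
print(url)
# An actual call would then be something like:
#   requests.post(url, auth=(client_id, api_key))
```

The payoff is the one described above: a single list update propagates a blocking action across endpoints and gateways alike.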

Global Outbreak Control (adding a file to a Simple Detection list which enforces a blocking action across all AMP-enabled devices and endpoints)

In an incident response scenario, being able to quickly understand the scope of compromise and the way threats propagate across the environment, is essential. Being able to enforce policy across the malware inspection gateways and endpoints consistently helps security teams save time and address threats that matter.

Keep in mind that AMP Unity is a capability. It doesn’t introduce new dashboards or policies – it’s all managed through the AMP for Endpoints Console. That helps you derive more value out of your existing AMP investments.

Cisco Threat Response


Cisco Threat Response is an innovative platform that brings together security-related information from Cisco and third-party sources into a single, intuitive investigation and response console. It does so through a modular design that serves as an integration framework for event logs and threat intelligence. Modules allow for the rapid correlation of data by building relationship graphs that in turn, enable security teams to obtain a clear view of the attack, as well as to quickly make effective response actions.
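
A minimal illustration of such a relationship graph: observables (file hashes, hosts, domains, IPs) become nodes, and sightings reported by different modules become edges. The observables below are invented; the point is the correlation structure that lets an analyst pivot from one observable to everything connected to it.

```python
# Toy relationship graph over observables, in the spirit of the graphs
# Threat Response builds from module telemetry. All observables are made up.

from collections import defaultdict

graph = defaultdict(set)

def relate(a, b):
    """Record an undirected relationship between two observables."""
    graph[a].add(b)
    graph[b].add(a)

# Sightings from different sources, correlated on shared observables.
relate("sha256:abc123", "host:laptop-42")       # endpoint saw the file
relate("sha256:abc123", "domain:bad.example")   # sandbox saw it call out
relate("domain:bad.example", "ip:203.0.113.9")  # intel resolved the domain

# Pivoting from the file hash reveals the directly connected observables.
print(sorted(graph["sha256:abc123"]))
# ['domain:bad.example', 'host:laptop-42']
```

Walking the graph one more hop from the domain reaches the IP, which is exactly the manual correlation work the platform automates.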

Cisco Threat Response Relationship Graph

As of the time of publishing this blog, Cisco Threat Response brings together event logs and threat intelligence from multiple Cisco and third-party modules. It’s likely that by the time you read this blog, the platform will have added additional modules and capabilities.

Cisco Threat Response Modules

The obvious value here is automation and the reduction of incident response lag caused by sifting through multiple user interfaces and attempting to correlate available data manually. That’s precisely what Threat Response does for you. The daily workflow is also streamlined through the integrated case management tool named “Casebook”. That is a tiny UI component that allows you to gather and pivot on observables, assign names to your investigations, take notes, and much more. Casebooks are built on a cloud API and data storage, and can be referenced by any product (with your credentials). Because of this, they can follow you from product to product, eventually across the entire Cisco Security portfolio.

Casebook

Cisco Threat Response is currently available to AMP for Endpoints and Threat Grid customers, who can take advantage of this powerful platform and the possibilities it provides today.

Tying AMP Unity and Cisco Threat Response Together


Considering both of these developments provide added value to security teams through tighter native integrations, how do they relate to each other? Simple – Cisco Threat Response queries correlated event telemetry from AMP for Endpoints and allows you to quickly take containment actions. It does so through the AMP for Endpoints API, via the AMP for Endpoints module enabled in Threat Response. Since AMP for Endpoints Console is a central place to correlate telemetry from AMP-enabled devices, this information can be used to enrich relationship graphs built by Threat Response. On top of that, Global Outbreak Control capabilities introduced by AMP Unity can be used through the Threat Response User Interface.

AMP Unity Events in Threat Response

AMP Unity brings your AMP-enabled device data to Threat Response via the AMP for Endpoints module, and in turn Threat Response allows you to quickly take action at both the endpoint and edge layers of your AMP deployment based on investigation results across all Threat Response data.

As Cisco continues to develop new modules for Threat Response, enabling AMP Unity will be an optional step to correlate event telemetry from AMP-enabled devices. Eventually Threat Response will be able to query these devices (WSA, ESA, CES, NGFW, NGIPS) directly without having to rely on the AMP for Endpoints module (which is especially important for customers who do not have AMP for Endpoints).

Friday 12 October 2018

Meraki Wireless Health APIs Make Network Assurance Easier

As Meraki continues to drive cloud-managed networking into new markets, we continue to evolve our offerings to help customers and partners on this journey. With large enterprises, campuses, and service providers all rapidly growing their Meraki wireless deployments, Meraki continues to innovate for these markets. As part of our strategy to make our rich data sources available to customers, we introduced Meraki Wireless Health and a brand-new product line, Meraki Insight, at Cisco Live Barcelona 2018.

In addition to the rapid adoption of Meraki Wireless Health, Meraki’s API platform has also been growing quickly within our customer and partner community. Since the introduction of APIs less than two years ago, the platform has grown to hundreds of unique APIs serving over 20 million API calls a day.

Access all Meraki Wireless Health features via an API interface


Wireless Health has been an instant hit, with hundreds of thousands of customers now actively using it, and we have maintained our focus on wireless health to build out additional value for our customers. Building on these successes, Meraki is proud to announce that we are now launching full Wireless Health API endpoints. This is one of our largest API launches to date, and all of our customers now have access. These new API endpoints will make it easy for both Meraki’s Partner Solutions team and Cisco’s DevNet team to drive simple open source and partner solutions that can help simplify the management of wireless deployments of all types and for all verticals.

With this launch we are creating three key types of API endpoints:


1. Connection Health – Summarizes the connection health of a network, AP, or client.
2. Connection Failures – Returns a full list of association, authentication, DHCP, and DNS issues.
3. Network Latency – Summarizes the latency of a network, AP, or client.

The great thing about these new API endpoints is that we have designed them to be flexible. You can filter all of them by a specific VLAN or SSID, and you can scope the summary statistics with start and end times.
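To make the shape of the data concrete, here is a small sketch of rolling up a Connection Failures response into counts per connection step; the field names (`failureStep`, `type`) and the sample records are illustrative assumptions rather than a verbatim API payload:

```python
from collections import Counter

# Hypothetical records from the Connection Failures endpoint; field
# names and values are invented for illustration.
failed_connections = [
    {"clientMac": "aa:bb:cc:00:00:01", "failureStep": "assoc", "type": "timeout"},
    {"clientMac": "aa:bb:cc:00:00:02", "failureStep": "auth",  "type": "8021x_failure"},
    {"clientMac": "aa:bb:cc:00:00:03", "failureStep": "dhcp",  "type": "timeout"},
    {"clientMac": "aa:bb:cc:00:00:01", "failureStep": "auth",  "type": "8021x_failure"},
]

def failures_by_step(records):
    """Roll the raw failure list up into counts per connection step,
    e.g. to spot whether clients fail mostly at auth or at DHCP."""
    return Counter(r["failureStep"] for r in records)

print(failures_by_step(failed_connections))
```

A breakdown like this is exactly the kind of summary a monitoring dashboard or alerting script can build on top of the raw endpoint.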


Leverage our DevNet community to build your own customized analysis or visualization

Using the new APIs to create working solutions


During the development of the Wireless Health API endpoints, Meraki worked with DevNet to validate the format of the endpoints and make them as robust as possible. Our teams also worked together to create a real working solution using the new API endpoints, and with the experienced DevNet team involved, we put together a full working demonstration within one week. Now that Meraki APIs are part of Cisco DevNet, we have access to over 500,000 DevNet developers who can create services and solutions based on them.

Wireless infrastructure supports mobile POS devices


Together we created a solution that allows one of our retail customers to correlate how network health, customer foot traffic, and point-of-sale statistics all interact – and to do it in a single dashboard for over 50 separate locations! With mobile point-of-sale (POS) devices becoming a predominant method of in-store payment, the wireless infrastructure has become more important than ever. So we wanted to create a dashboard where our customers could quickly check the overall current health of a retail store, but also (thanks to some open source engineering) see its historical health. We also worked with the customer to create an overall health score for their retail stores, so that they can roll the disparate and complex data set up into a single ranking. This yields an incredibly powerful data point: the retailer can quickly identify poorly performing locations and, within seconds, dive in and investigate the root cause.
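The roll-up itself can be as simple as a weighted average. A minimal sketch, where the metric names, weights, and sample stores are our own placeholder choices rather than the customer's actual scoring formula:

```python
# Illustrative weights; each metric is assumed pre-normalized to 0-100.
WEIGHTS = {"connection_success": 0.5, "latency": 0.3, "signal_quality": 0.2}

def store_health_score(metrics):
    """Combine per-store metrics into a single 0-100 health score."""
    return round(sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS), 1)

# Invented sample data for two locations.
stores = {
    "downtown": {"connection_success": 98, "latency": 90, "signal_quality": 95},
    "airport":  {"connection_success": 71, "latency": 55, "signal_quality": 80},
}

# Rank stores so the worst-performing location surfaces first.
ranked = sorted(stores, key=lambda s: store_health_score(stores[s]))
print(ranked[0])  # airport
```

In the real dashboard these inputs would come from the Wireless Health endpoints and the customer's own foot-traffic and POS feeds.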


Complete visibility into the retail locations data using Kibana

Unleash the power of your data in the Meraki platform


This is just the beginning of Meraki unleashing the power of the data in our platform. In the months ahead, we will continue to release more network health metrics that will further streamline the running of enterprise networks. We are looking forward to all the great things our customers and partners are going to create with these newly introduced API endpoints, and our team will continue to leverage our agile development environment to drive more innovations in the networking space.

Since 2017 the Cisco Meraki team has been working with the Cisco DevNet team. As a result, over the last year a large number of partners have leveraged our API infrastructure to create solutions specific to unique use cases. We are constantly adding new solutions and partners to our ecosystem, and they are easily available for anyone to view.

Wednesday 10 October 2018

Challenge Your Inner Hybrid Creativity with Cisco and Google Cloud

In recent years, Kubernetes has risen up in popularity, especially with the developer community. And why do developers love Kubernetes? Because it offers incredible potential for speed, consistency, and flexibility for managing containers. But containers are not all sunshine and roses for enterprises – with big benefits come some big challenges. Nobody loves deploying, monitoring, and managing container lifecycles, especially across multiple public and private clouds. On top of that, there are many choices when it comes to environments, which can also create a lot of complexity – there are simply too many tools and too little standardization.

Production grade container environments powered by Kubernetes


That’s why earlier this year Cisco launched the Cisco Container Platform, a turnkey solution for production-grade container environments powered by Kubernetes. The Cisco Container Platform automates the repetitive functions and simplifies the complex ones, so everyone can go back to enjoying the magic of containers. It is a key element of Cisco’s overall container strategy and another way Cisco offers customers choice across public clouds.


Figure 1: Cisco Hybrid Cloud for Google Cloud

Hybrid cloud applications are the next big thing for developers


At the beginning of the year Cisco joined forces with Google Cloud on a hybrid cloud offering that, among other things, allows enterprises to deploy Kubernetes-based containers on-premises and securely connect with Google Cloud Platform.

In July at Google Cloud Next ’18, we kicked off the Cisco & Google Cloud Challenge. (You still have until November 1, 2018 to enter the challenge and win prizes.) The idea behind it is to give developers a window into the possibilities for building hybrid cloud applications. Hybrid cloud applications are the next frontier for developers, and there are so many innovation possibilities for hybrid cloud infrastructure. That’s why we named it “Two Clouds, infinite possibilities.”


Figure 2: Timeline for the Cisco & Google Cloud Challenge

An IoT edge use case for inspiration


Consider the following use case: assume we have a factory that generates a huge amount of data from sensors deployed across the physical building. We would like to analyze that data on-premises, but also take advantage of cloud services in Google Cloud Platform for further analysis. This could include running predictive analysis with machine learning (ML) on that data (e.g., which machine part is going to break next). “Edge” here represents a generic class of use cases with these characteristics:

◈ Limited Network Bandwidth – Many manufacturing environments are remote, with limited bandwidth. Collecting data from hundreds of thousands of devices requires processing, buffering, and storage at the edge when bandwidth is limited. For instance, an offshore oil rig collects more than 50,000 data points per second, but less than 1% of this can be used in business decision making due to bandwidth constraints. Instead, analytics and logic can be applied at the edge, and summary decisions rolled up to the cloud.

◈ Data Separation & Partitioning – Often data from a single source needs to go to different and/or multiple locations or cloud services for analytics processing. Parsing the data at the edge to identify its final destination based on the desired analytics outcome allows you to route data more effectively, lower cloud costs and management overhead, and route data based on compliance or data sovereignty needs. For example, sending PCI-, PII-, or GDPR-classified data to one cloud or service while device or telemetry data routes to others. Additionally, pre-processing can occur at the edge to reduce data such as time-series formats into aggregates, reducing complexity in the cloud.

◈ Data Filtering – Most data just isn’t interesting. But you don’t know that until you’ve received it at a cloud service and decided to drop it on the floor. For example, fire alarms send the most boring data 99.999% of the time. Until they send data that is incredibly important! There is often no need to store or forward this data until it is relevant to your business. Additionally, many data scientists now want to run individually trained models at the edge and, if data no longer fits that model or is an exception, to send the entire data set to the cloud for re-training. Filtering with complex models also allows intelligent filtering at the edge that supports edge decision making.

◈ Edge Decision Making & Model Training – Training and storing ML models directly at the edge allows storing ephemeral models that may otherwise not be possible due to compliance or data sovereignty requirements. These models can act on ephemeral data that is not stored or forwarded, but still garner information and outcomes that can then be sent to centralized locations. Alternatively, models can be trained centrally in the cloud and pushed to the edge to perform any of the other listed edge functions. And when data no longer fits that model (such as collecting long tail time-series data) the entire data set can be aggregated to the cloud for retraining, and the model re-deployed to the edge endpoints.
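The filtering and summarization patterns above can be sketched in a few lines of edge-side code; the threshold, field names, and the split between "alerts" and "summary" are illustrative assumptions:

```python
import statistics

def edge_summarize(readings, alert_threshold=90.0):
    """Forward only anomalous readings to the cloud in full; roll the
    uninteresting majority up into a small summary record."""
    alerts = [r for r in readings if r["value"] >= alert_threshold]
    normal = [r["value"] for r in readings if r["value"] < alert_threshold]
    summary = {
        "count": len(normal),
        "mean": statistics.mean(normal) if normal else None,
        "max": max(normal) if normal else None,
    }
    return alerts, summary

# Invented sensor samples: three routine readings and one spike.
readings = [{"sensor": "temp-1", "value": v} for v in (20.5, 21.0, 95.2, 20.8)]
alerts, summary = edge_summarize(readings)
print(len(alerts), summary["count"])  # 1 3
```

Only the one alert record and the compact summary leave the site, which is the whole point when an offshore rig can ship less than 1% of what it collects.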


Figure 3: Hybrid Cloud, Edge Compute Use-case

As a real-life example, here in Cisco DevNet we developed a use case for object recognition using video streams from IP cameras. The video gateway at the edge analyzed the streams in real time and performed object detection, then passed the detected objects to the Cisco Container Platform, which performed object recognition. The recognized objects, and all the associated metadata, were stored at this layer. An application in the public cloud was written to query this data and track the path of an object.

Give the Cisco & Google Cloud Challenge a try


There’s no doubt about the popularity of Kubernetes in the developer community. Cisco Hybrid Cloud Platform for Google Cloud takes away the complexity of managing private clusters and lets developers concentrate on the things they want to innovate on. Start with our DevNet Sandbox for CCP, reserve your instance and test-drive it for yourself.

The Cisco & Google Cloud Challenge is an awesome way to brainstorm and solve some real customer problems and even win some prizes while you are at it. So, consider this blog as me inviting everyone to give the Challenge a try, and wishing you the very best! You have until Nov 1, 2018 to enter the challenge and win prizes.

Saturday 6 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 2

In part 1 of this blog series I talked about how data processing landscapes are getting more complex and heterogeneous, creating roadblocks for customers who want to adopt truly hybrid cloud data applications. At the beginning of this year, Cisco and SAP decided to join forces to bring the SAP Data Hub to the Cisco Container Platform. The goal is to provide a real end-to-end solution that helps customers tackle these challenges and enables them to become successful intelligent enterprises. We are focusing on providing a turnkey, enterprise-scale solution that fosters a seamless interplay of powerful hardware and sophisticated software.


Figure 1 Unified data integration and orchestration for enterprise data landscapes.

SAP brings to the table its novel data orchestration and refinement solution, SAP Data Hub. The solution offers a number of features that allow customers to manage and process data in complex landscapes involving on-premise systems and multiple clouds. SAP Data Hub supports connecting the different systems in a landscape to a central hub, giving a first overview of all systems involved in data processing within a company. Beyond that, the Data Hub is able to scan, profile, and crawl those sources to retrieve the metadata and characteristics of the data stored in them. With this, SAP Data Hub provides a holistic data landscape overview in a central catalog and allows companies to answer the central questions about data positioning and governance.

Furthermore, SAP Data Hub allows the definition of data pipelines that enable data processing and landscape orchestration across all connected systems. Data pipelines consist of operators (small independent computation units) that form a joint computation graph. The functionality an operator provides can range from very simple read operations and transformations (e.g., changing the date format from US to EU), through interacting with a connected system, to invoking a complex machine learning model. Operators invoke their functionality and apply their transformations as data flows through the defined pipeline. This kind of data processing changes the paradigm from static, transactional ETL processes to a more dynamic, flow-based data processing model.
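SAP Data Hub operators are configured graphically, but the flow-based idea can be mirrored in a few lines of plain Python. The two operators below (a US-to-EU date reformatter and a source tagger) are toy stand-ins, not actual Data Hub operators:

```python
from datetime import datetime

def us_to_eu_date(record):
    """Transformation operator: MM/DD/YYYY -> DD.MM.YYYY."""
    d = datetime.strptime(record["date"], "%m/%d/%Y")
    return {**record, "date": d.strftime("%d.%m.%Y")}

def tag_source(record):
    """Enrichment operator: mark where the record came from."""
    return {**record, "source": "erp"}

def run_pipeline(records, operators):
    """Push every record through the operator graph, in order."""
    for op in operators:
        records = [op(r) for r in records]
    return records

out = run_pipeline([{"date": "10/06/2018", "amount": 42}],
                   [us_to_eu_date, tag_source])
print(out[0]["date"])  # 06.10.2018
```

Each operator stays small and independent, so the pipeline can be recomposed without touching the operators themselves, which is the core of the flow-based model described above.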

With all of this functionality, we kept in mind that to succeed in bridging enterprise data and big data, we need to be open to connecting not only SAP enterprise systems but also the common systems used in the big data space (see Figure 2). For this purpose, SAP Data Hub focuses on an open connectivity paradigm, providing a large number of connectors to different kinds of cloud and on-premise data management systems and fostering the integration between enterprise data and big data.

All of this makes SAP Data Hub a powerful enterprise application that allows customers to orchestrate and manage their complex system landscapes. However, a solution like the Data Hub would be nothing without a powerful and flexible platform. Customers are increasingly turning towards containerized applications, with Kubernetes as the orchestrator of choice, to handle the requirements of efficiently processing large volumes of data. For this reason, it was a clear decision to move the SAP Data Hub in this direction as well: it is completely containerized and uses Kubernetes as its platform and foundation.


Figure 2 SAP & Cisco delivering turn-key solutions for complex enterprise data landscapes.

This is where Cisco, with its advanced Cisco Container Platform (CCP) on its hyperconverged hardware solution Cisco HyperFlex, comes into the game. Providing elastically scalable container clusters as a single turnkey solution, covering on-premise and cloud environments with a single infrastructure stack, is key for enterprise customers involved in big data analytics. With the Cisco Container Platform on HyperFlex 3.0, Cisco offers a fully integrated and flexible container-as-a-service offering with lifecycle management for hardware and software components. It provides 100% upstream Kubernetes with integrated networking, system management, and security. In addition, it utilizes modern technologies such as Istio and Cloud Connect VPN to efficiently bridge on-premise and cloud services from different cloud providers. Accordingly, it accelerates cloud-native transformation and application delivery in hybrid cloud enterprise environments, clearly embracing the multi-cloud world and helping to solve its challenges. Furthermore, the CCP monitors the entire hardware and Kubernetes platform, so customers can proactively identify issues and non-beneficial usage patterns and troubleshoot container clusters quickly.

Accordingly, the CCP is the perfect foundation for deploying the SAP Data Hub in complex, multi-cloud and hybrid cloud customer landscapes. We complemented the solution with Scality RING, an enterprise-ready scale-out file and object storage that fulfills the major requirements for production-ready usage, e.g., guaranteed reliability, availability, and durability. This adds a data lake to the on-premise solution, enabling cost-efficient storage for mass data. In addition, we added network traffic load balancing with the advanced Avi Networks load balancers, which provide intelligent automation and monitoring for improved routing decisions. Both additions greatly benefit the CCP and complete it as a full big data management and processing foundation.

With the release, during SAP TechEd Las Vegas, of the SAP Data Hub on the Cisco Container Platform running on HyperFlex 3.0, complemented with Scality RING and Avi Networks load balancers, customers will have the option to receive a turnkey, full-stack solution for tackling the challenges of modern enterprise data landscapes. They can start fast, they remain flexible, and they receive full-stack support from Cisco’s world-class engineering support and SAP’s advanced support services. Accordingly, SAP and Cisco together enable customers to win the race for the best data processing in the digital economy.

Friday 5 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 1

The journey towards the intelligent enterprise


When talking about modern data processing in the digital economy, data is often regarded as the new oil. Enterprise companies are already competing in a race for the best mining, extraction, and processing technologies to gain better insights into their companies, deals, and processes. Winning this race will ultimately lead to a competitive advantage in the market, since companies with a deep understanding of their businesses will be able to make the most profitable decisions and establish the most beneficial optimizations. For this reason, the way companies handle data and analytics is changing from pure transactional, ETL-like processing towards modern technologies such as machine learning, intelligent analytics, and stream processing, both on-premise and in the cloud. We refer to this transition as the move towards the ‘intelligent enterprise’.

However, this transition also means that data processing landscapes are getting more complex and heterogeneous. Data is processed in a growing collection of different systems and distributed over different places; its volume is growing by the day, and customers need to orchestrate and integrate cloud technologies with classical on-premise systems. Among all the challenges that come with the journey towards digitalization and the ‘intelligent enterprise’, the following three emerge as the most pressing in our customer base:


Figure 1 Enterprise data landscapes are growing increasingly complex.

1. The tale about data governance and the lack of data knowledge, security and visibility 


One of the biggest challenges in complex modern enterprise data landscapes is the distribution of data over a growing number of stores and processing systems. This leads to a lack of knowledge about data positioning, data characteristics, and governance. “What data is available in which store?”, “What are the major characteristics of my data sets?”, and “Who changed the data, in what way, and who has permission to access it?” are typical questions that are hard to answer even within a single company. Yet it is key to find a strategy that enables holistic data governance and data management across the entire company.

2. The legend about enterprise readiness of big data technologies 


In the world of modern data processing technologies and big data management, we observe an incredible growth in the tools and technologies a customer can choose from. While at first choice seems to be an advantage, one quickly recognizes that it leads to a zoo of non-integrated systems, each exhibiting different characteristics, life cycles, and environments. It is left to the customer to manage, organize, and orchestrate those systems, leading to a very high effort to arrive at an enterprise-ready data landscape with a well-organized interplay of all components.

3. The story about easily processing enterprise data and big data together 


The adoption of modern big data technologies mainly stems from the fact that augmenting classical enterprise data, such as sales figures and revenue data, with big data, such as sensor streams, social media collections, or mobile device data, allows deeper and more advanced analyses. However, in most cases enterprise data and big data are kept in different silos and exhibit totally different characteristics. Enterprise data typically comes from classical transactional systems such as ERP systems or transactional databases; it is well structured and adheres to a standardized schema. Big data, on the other hand, often arrives in its raw form as data streams or collections stored in data lakes (e.g., Hadoop, S3, GCS). It is often unstructured, lacks clear data types, and might not adhere to a clear schema. Accordingly, creating an end-to-end data pipeline across the enterprise that combines business data with big data takes considerable effort.
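A tiny sketch of what bridging the two worlds looks like in practice: joining fixed-schema ERP asset records with a loosely structured sensor feed whose records may be missing fields. All schemas, field names, and values here are invented for illustration:

```python
from collections import defaultdict

# Structured enterprise records: every row has the same fixed schema.
erp_assets = [
    {"asset_id": "M-100", "plant": "Walldorf", "cost_center": "CC-7"},
    {"asset_id": "M-200", "plant": "San Jose", "cost_center": "CC-3"},
]

# Semi-structured sensor feed: keys can vary from record to record.
sensor_stream = [
    {"asset_id": "M-100", "temp": 71.2},
    {"asset_id": "M-100", "temp": 98.4, "note": "spike"},
    {"asset_id": "M-200", "temp": 65.0},
]

def enrich_with_peak_temp(assets, stream):
    """Aggregate the raw stream per asset, then join the result onto
    the structured ERP records."""
    peaks = defaultdict(lambda: float("-inf"))
    for r in stream:
        if "temp" in r:  # tolerate records missing the field
            peaks[r["asset_id"]] = max(peaks[r["asset_id"]], r["temp"])
    return [{**a, "peak_temp": peaks.get(a["asset_id"])} for a in assets]

for row in enrich_with_peak_temp(erp_assets, sensor_stream):
    print(row["asset_id"], row["plant"], row["peak_temp"])
```

Even this toy join has to handle the schema mismatch explicitly, which hints at the effort involved when the same pattern spans real ERP systems and data lakes.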

SAP and Cisco jointly recognize that our mutual customers need innovative new solutions to help them overcome these hurdles, fully leverage the value of their distributed data, and turn it into actionable insights.