Sunday 14 October 2018

Building the 5G Business Case

2018 has been the year when 5G came out of the standards and into reality, with many trials throughout Asia Pacific (APAC). The learnings from these trials have shown us not only what services could be supported, but how 5G should be deployed and what investments will likely be required for a commercial launch. During the recent 5G Asia event, the discussions moved from how and when 5G will be deployed to how we are going to pay for it. So what will a 5G investment business case look like?


The way I see it, there are three broad areas of focus for the initial 5G business case, considering both top-line and bottom-line drivers:

1. First, the focus is the economics of meeting the current projected data capacity growth requirements at a lower cost per bit. 3G/4G traffic is still growing at more than 100% year-over-year in some APAC markets and even jumped over 400% in India last year. The amount of capacity that 5G will enable depends primarily on the amount of new spectrum that regulators make available, but looking at allocations to date, we expect 5G to open up around 5x more bandwidth than 4G has today. Based on our modeling, 5G cost per bit could be less than half of what it is for 4G, and a quarter of 3G costs. This will drive traffic migration and spectrum re-farming initiatives once 5G is launched.

2. Second, 5G enables more customized services, or so-called slices. Most 4G services today are supported over the same generic “bearer” regardless of the requirements or value of the service. With 5G networks, attributes like bandwidth (BW), latency, resiliency and security can be customized, for example per over-the-top (OTT), enterprise or Internet of Things (IoT) application. The business case drivers here are both increased revenue share from new and differentiated services, and further improved cost to serve.

3. Third is the monetization of completely new services beyond what 4G can support today. This is where the cool new applications come in: low-latency Augmented Reality (AR)/Virtual Reality (VR), the tactile internet, super-high-definition media with Gbps bandwidth, and massively dense machine-to-machine (M2M) deployments. This area carries the most uncertainty for Return on Investment (RoI). Traditionally, the mobile industry hasn’t been great at predicting the next “killer” application, but we do know 5G is a step change in mobile capabilities, and with the right device and application ecosystem, the next killer app will come.
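The cost-per-bit argument in the first point can be sanity-checked with back-of-envelope arithmetic. The Python sketch below uses purely illustrative assumptions (the per-site cost, bandwidth, and spectral-efficiency figures are invented for the example, not Cisco's actual model):

```python
def cost_per_bit(site_cost, bandwidth_mhz, spectral_efficiency):
    """Relative cost per bit: per-site cost divided by deliverable throughput."""
    throughput = bandwidth_mhz * spectral_efficiency  # proportional to Mbps per site
    return site_cost / throughput

# Assumptions: 5G opens ~5x the bandwidth of today's 4G, with a ~1.5x
# spectral-efficiency gain, at a ~1.5x higher per-site cost.
cost_4g = cost_per_bit(site_cost=1.0, bandwidth_mhz=100, spectral_efficiency=1.0)
cost_5g = cost_per_bit(site_cost=1.5, bandwidth_mhz=500, spectral_efficiency=1.5)

print(f"5G cost per bit relative to 4G: {cost_5g / cost_4g:.2f}")
```

Even with a higher per-site cost assumed, the bandwidth and efficiency gains push the relative cost per bit well under half, which is the intuition behind the traffic-migration argument.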


As we move from trial, to deployment, to launch of 5G services, the business model will continue to evolve, but at least for now, we have an initial view of how 5G will benefit service providers. Investment business cases based on lower cost per bit capacity and service differentiation are enough to show positive RoI, and monetizing the next 5G killer app will be the future icing on the cake.

Empowering Defenders: AMP Unity and Cisco Threat Response

Defenders have a lot of work to do and many challenges to overcome. The Cisco 2018 Security Capabilities Benchmark Study, which surveyed more than 3,600 customers across 26 countries, confirmed as much: defenders are struggling with the orchestration of a mix of security products, which by itself may obfuscate rather than clarify the security landscape.

Let’s take a moment to imagine a security team and the tasks it performs daily. Reviewing increasing numbers of alerts, attempting to correlate information from various sources to build a complete picture of each potential threat, triaging and assigning priorities, are all complex tasks performed under time pressure. The goal is to quickly come up with an adequate response strategy based on a clear understanding of the threat, its scope of compromise, and the potential damage it could cause. This process is error-prone and time-consuming when performed manually, and when understanding the alerts becomes a challenge, high-severity threats can slip through the defenses.

We have heard from the majority of customers that an integrated approach is easier to implement and is more cost effective. Listening to and understanding the needs of our customers has always been a priority for us. Therefore, to empower security analysts with effective weapons to defend their organizations, Cisco has built a security architecture that helps streamline security operations. Most recently we have developed two offerings: Cisco Threat Response, a platform, and AMP Unity, a capability. Both are exciting developments, and while they are different, they serve the same strategic goal.

AMP Unity


AMP Unity is a capability that allows organizations to register their AMP-enabled devices (Cisco NGFW, NGIPS, ESA, CES, WSA with a Malware/AMP subscription) in the AMP for Endpoints Console. In this way, those devices can be seen and queried for sample observations in the same way the AMP for Endpoints Console already provides for endpoints. This integration allows correlating file propagation data across all of the threat vectors in a single User Interface (Global File Trajectory view).

Global File Trajectory view (showcasing file transfer through an email gateway, down to the endpoint, across the network to another endpoint)

But it doesn’t stop there. AMP Unity also allows you to create common file whitelists and file blacklists (through the same AMP for Endpoints Console) and enforce them across all of the registered AMP-enabled devices in the organization alongside your AMP endpoints (Global Outbreak Control).
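As a concrete illustration, Global Outbreak Control can also be driven programmatically through the AMP for Endpoints API. The sketch below is a minimal, hedged example: the host, endpoint path, and credentials follow the published AMP for Endpoints API conventions but are assumptions to verify against your own deployment's API documentation.

```python
import base64
import urllib.request

# Hypothetical credentials; generate real ones in the AMP for Endpoints
# Console under Accounts > API Credentials.
API_HOST = "https://api.amp.cisco.com"
CLIENT_ID, API_KEY = "example-client-id", "example-api-key"

def detection_url(file_list_guid, sha256):
    """Build the URL that adds one SHA-256 to a Simple Custom Detection list."""
    return f"{API_HOST}/v1/file_lists/{file_list_guid}/files/{sha256}"

def block_file(file_list_guid, sha256):
    # POSTing the hash to the list enforces a blocking action across every
    # AMP-enabled device (ESA, WSA, NGFW, ...) registered through AMP Unity.
    token = base64.b64encode(f"{CLIENT_ID}:{API_KEY}".encode()).decode()
    req = urllib.request.Request(detection_url(file_list_guid, sha256),
                                 method="POST",
                                 headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Because the list is shared, one POST is enough: the same console that pushes the block to your endpoints pushes it to the registered gateways as well.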

Global Outbreak Control (adding a file to a Simple Detection list which enforces a blocking action across all AMP-enabled devices and endpoints)

In an incident response scenario, being able to quickly understand the scope of compromise and the way threats propagate across the environment, is essential. Being able to enforce policy across the malware inspection gateways and endpoints consistently helps security teams save time and address threats that matter.

Keep in mind that AMP Unity is a capability. It doesn’t introduce new dashboards or policies – it’s all managed through the AMP for Endpoints Console. That helps you derive more value out of your existing AMP investments.

Cisco Threat Response


Cisco Threat Response is an innovative platform that brings together security-related information from Cisco and third-party sources into a single, intuitive investigation and response console. It does so through a modular design that serves as an integration framework for event logs and threat intelligence. Modules allow for the rapid correlation of data by building relationship graphs that, in turn, enable security teams to obtain a clear view of the attack and quickly take effective response actions.

Cisco Threat Response Relationship Graph

As of the time of publishing this blog, Cisco Threat Response brings together event logs and threat intelligence from multiple Cisco and third-party modules. It’s likely that by the time you read this, the platform will have added additional modules and capabilities.

Cisco Threat Response Modules

The obvious value here is automation and the reduction of incident response lag caused by sifting through multiple user interfaces and attempting to correlate available data manually. That is precisely what Threat Response does for you. The daily workflow is also streamlined through the integrated case management tool called “Casebook,” a compact UI component that allows you to gather and pivot on observables, name your investigations, take notes and much more. Casebooks are built on a cloud API and data storage, and can be referenced by any product (with your credentials). Because of this, they can follow you from product to product, eventually across the entire Cisco Security portfolio.

Casebook

Cisco Threat Response is currently available to AMP for Endpoints and Threat Grid customers, who can take advantage of this powerful platform and the possibilities it provides today.

Tying AMP Unity and Cisco Threat Response Together


Considering both of these developments provide added value to security teams through tighter native integrations, how do they relate to each other? Simple – Cisco Threat Response queries correlated event telemetry from AMP for Endpoints and allows you to quickly take containment actions. It does so through the AMP for Endpoints API, via the AMP for Endpoints module enabled in Threat Response. Since AMP for Endpoints Console is a central place to correlate telemetry from AMP-enabled devices, this information can be used to enrich relationship graphs built by Threat Response. On top of that, Global Outbreak Control capabilities introduced by AMP Unity can be used through the Threat Response User Interface.
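To make the integration tangible, the query Threat Response issues through the AMP for Endpoints module is essentially an events lookup keyed on an observable such as a file hash. The sketch below only builds such a query URL; the path and parameter names are assumptions based on the published AMP for Endpoints API, and real calls require the authentication shown in the API docs.

```python
from urllib.parse import urlencode

API_HOST = "https://api.amp.cisco.com"

def events_query(sha256, start_date=None):
    """Build an events query for one SHA-256, as a module might under the hood."""
    params = {"detection_sha256": sha256}
    if start_date:
        # Optional time scoping, e.g. "2018-10-01T00:00:00+00:00"
        params["start_date"] = start_date
    return f"{API_HOST}/v1/events?{urlencode(params)}"

print(events_query("a" * 64))
```

The returned events, already correlated across AMP-enabled devices by the AMP for Endpoints Console, are what enrich the relationship graph in Threat Response.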

AMP Unity Events in Threat Response

AMP Unity brings your AMP-enabled device data to Threat Response via the AMP for Endpoints module, and in turn Threat Response allows you to quickly take action at both the endpoint and edge layers of your AMP deployment based on investigation results across all Threat Response data.

As Cisco continues to develop new modules for Threat Response, enabling AMP Unity will be an optional step to correlate event telemetry from AMP-enabled devices. Eventually Threat Response will be able to query these devices (WSA, ESA, CES, NGFW, NGIPS) directly without having to rely on the AMP for Endpoints module (which is especially important for customers who do not have AMP for Endpoints).

Friday 12 October 2018

Meraki Wireless Health APIs Make Network Assurance Easier

As Meraki continues to drive cloud managed networking into new markets, we continue to evolve our offerings to help customers and partners on this journey. With large enterprises, campuses, and service providers all rapidly growing their Meraki wireless deployments, Meraki keeps evolving to drive innovation in these markets. As part of the strategy to make our rich data sources available to our customers, we introduced Meraki Wireless Health and a brand new product line, Meraki Insight, at Cisco Live Barcelona 2018.

In addition to the rapid adoption of Meraki Wireless Health, Meraki’s API platform has also been experiencing rapid uptake by our customer and partner community. Since the introduction of APIs less than two years ago, the platform has grown to hundreds of unique APIs and over 20 million API calls a day.

Access all Meraki Wireless Health features via an API interface


Wireless Health has been an instant hit, with hundreds of thousands of customers now actively using it, and we have maintained our focus on building out additional value for our customers. To build on these successes, Meraki is proud to announce that we are now launching full Wireless Health API endpoints. This is one of our largest API launches to date, and all of our customers now have access. These new API endpoints will make it easy for both Meraki’s Partner Solutions team and Cisco’s DevNet team to drive simple open source and partner solutions that help simplify the management of wireless deployments of all types and for all verticals.

With this launch we are creating three key types of API endpoints:


1. Connection Health – Summarizes the connection health of a network, AP, or client.
2. Connection Failures – Returns a full list of association, authentication, DHCP, and DNS issues.
3. Network Latency – Summarizes the latency of a network, AP, or client.

The great thing about these new API endpoints is that we have designed them to be flexible. You can filter all of them by a specific VLAN or SSID, and you can scope the summary statistics using start and end times.
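Calling these endpoints looks roughly like the sketch below. The path and parameter names follow the Meraki Dashboard API conventions at launch and should be checked against the current API documentation; the API key and network ID are placeholders.

```python
from urllib.parse import urlencode
import urllib.request

BASE = "https://api.meraki.com/api/v0"
HEADERS = {"X-Cisco-Meraki-API-Key": "your-api-key-here"}  # placeholder key

def connection_stats_url(network_id, t0, t1, ssid=None, vlan=None):
    """Connection health summary for a network, scoped by start/end epoch
    times and optionally filtered by SSID number or VLAN."""
    params = {"t0": t0, "t1": t1}
    if ssid is not None:
        params["ssid"] = ssid
    if vlan is not None:
        params["vlan"] = vlan
    return f"{BASE}/networks/{network_id}/connectionStats?{urlencode(params)}"

def fetch(url):
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example: one day of connection health for a single SSID
url = connection_stats_url("N_1234", t0=1539200000, t1=1539286400, ssid=1)
```

The failed-connections and latency endpoints take the same timespan and filter parameters, so one small helper per endpoint covers the whole launch set.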


Leverage our DevNet community to build your own customized analysis or visualization

Using the new APIs to create working solutions


During the development of the Wireless Health API endpoints, Meraki worked with DevNet to validate the format of the endpoints and make them as robust as possible. Our teams also worked together to create a real working solution using the new endpoints, and put together a full working demonstration within one week. Now that Meraki APIs are part of Cisco DevNet, we have access to over 500,000 DevNet developers who can create services and solutions based on the Meraki APIs.

Wireless infrastructure supports mobile POS devices


Together we created a solution that allows one of our retail customers to correlate how network health, customer foot traffic, and point of sale statistics all interact – and do it in a single dashboard for over 50 separate locations! With mobile point of sale (POS) devices becoming a predominant method of in-store payments, the wireless infrastructure has become more important than ever. So we wanted to create a dashboard where our customers could quickly check the overall current health of the retail store, but also (thanks to some open source engineering) see the historical health. We also worked with the customer to create an overall health score for their retail stores, so that they can roll up the disparate and complex data set into a single scoring rank. This yields an incredibly powerful data point: the retailer can quickly identify poorly performing retail locations and, within seconds, dive in and investigate the root cause.


Complete visibility into the retail locations data using Kibana

Unleash the power of your data in the Meraki platform


This is just the beginning of Meraki unleashing the power of the data in our platform. In the months ahead, we will continue to release more network health metrics that will further streamline the running of enterprise networks. We are looking forward to all the great things our customers and partners are going to create with these newly introduced API endpoints, and our team will continue to leverage our agile development environment to drive more innovations in the networking space.

Since 2017 the Cisco Meraki team has been working with the Cisco DevNet team. As a result, over the last year a large number of partners have leveraged our API infrastructure to create solutions specific to unique use cases. We are constantly adding new solutions and partners to our ecosystem, and they are easily available for anyone to view.

Wednesday 10 October 2018

Challenge Your Inner Hybrid Creativity with Cisco and Google Cloud

In recent years, Kubernetes has risen up in popularity, especially with the developer community. And why do developers love Kubernetes? Because it offers incredible potential for speed, consistency, and flexibility for managing containers. But containers are not all sunshine and roses for enterprises – with big benefits come some big challenges. Nobody loves deploying, monitoring, and managing container lifecycles, especially across multiple public and private clouds. On top of that, there are many choices when it comes to environments, which can also create a lot of complexity – there are simply too many tools and too little standardization.

Production grade container environments powered by Kubernetes


That’s why earlier this year Cisco launched the Cisco Container Platform, a turnkey solution for production-grade container environments powered by Kubernetes. The Cisco Container Platform automates the repetitive functions and simplifies the complex ones, so everyone can go back to enjoying the magic of containers. It is a key element of Cisco’s overall container strategy and another way Cisco gives our customers choice across various public clouds.


Figure 1: Cisco Hybrid Cloud for Google Cloud

Hybrid cloud applications are the next big thing for developers


At the beginning of the year Cisco joined forces with Google Cloud on a hybrid cloud offering that, among other things, allows enterprises to deploy Kubernetes-based containers on-premises and securely connect with Google Cloud Platform.

In July at Google Cloud Next ’18, we kicked off the Cisco & Google Cloud Challenge. (You still have until November 1, 2018 to enter the challenge and win prizes.) The idea behind it is to give developers a window into the possibilities of building hybrid cloud applications. Hybrid cloud applications are the next frontier for developers, with so many innovation possibilities for hybrid cloud infrastructure. That’s why we named it “Two Clouds, infinite possibilities.”


Figure 2: Timeline for the Cisco & Google Cloud Challenge

An IoT edge use case for inspiration


Consider the following use case: assume we have a factory that generates a huge amount of data from sensors deployed across the physical building. We would like to analyze that data on-premises, but also take advantage of cloud services in Google Cloud Platform for further analysis. This could include running predictive analysis with Machine Learning (ML) on that data (e.g., predicting which machine part is going to break next). “Edge” here represents a generic class of use cases with these characteristics:

◈ Limited Network Bandwidth – Many manufacturing environments are remote, with limited bandwidth. Collecting data from hundreds of thousands of devices requires processing, buffering, and storage at the edge when bandwidth is limited. For instance, an offshore oil rig collects more than 50,000 data points per second, but less than 1% of this can be used in business decision making due to bandwidth constraints. Instead, analytics and logic can be applied at the edge, and summary decisions rolled up to the cloud.

◈ Data Separation & Partitioning – Often data from a single source needs to go to different and/or multiple locations or cloud services for analytics processing. Parsing the data at the edge to identify its final destination based on the desired analytics outcome allows you to route data more effectively, lower cloud costs and management overhead, and provide for the ability to route data based on compliance or data sovereignty needs. For example, sending PCI, PII, or GDPR-classified data to one cloud or service, while device or telemetry data routes to others. Additionally, data pre-processing can occur at the edge to transform data, such as aggregating time-series formats, reducing complexity in the cloud.

◈ Data Filtering – Most data just isn’t interesting. But you don’t know that until you’ve received it at a cloud service and decided to drop it on the floor. For example, fire alarms send the most boring data 99.999% of the time. Until they send data that is incredibly important! There is often no need to store or forward this data until it is relevant to your business. Additionally, many data scientists now desire to run individually trained models at the edge, and if data no longer fits that model or is an exception, to send the entire data set to the cloud for re-training. Filtering with complex models also allows intelligent filtering at the edge that supports edge decision making.

◈ Edge Decision Making & Model Training – Training and storing ML models directly at the edge allows storing ephemeral models that may otherwise not be possible due to compliance or data sovereignty requirements. These models can act on ephemeral data that is not stored or forwarded, but still garner information and outcomes that can then be sent to centralized locations. Alternatively, models can be trained centrally in the cloud and pushed to the edge to perform any of the other listed edge functions. And when data no longer fits that model (such as collecting long tail time-series data) the entire data set can be aggregated to the cloud for retraining, and the model re-deployed to the edge endpoints.
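To make the filtering and decision-making patterns above concrete, here is a minimal sketch of an edge filter that forwards only anomalous readings while rolling everything else into a local summary. The threshold band stands in for a trained model, and the values are invented for illustration:

```python
def edge_filter(readings, low=10.0, high=90.0):
    """Yield only readings outside the expected band; everything else is
    summarized locally instead of being sent to the cloud."""
    forwarded, count, total = [], 0, 0.0
    for value in readings:
        count += 1
        total += value
        if value < low or value > high:
            forwarded.append(value)  # exception: send the full reading upstream
    summary = {"count": count, "mean": total / count if count else 0.0}
    return forwarded, summary

# Of thousands of points per second, only the exceptions leave the edge;
# the cloud receives the compact summary plus any out-of-band readings.
forwarded, summary = edge_filter([42.0, 55.1, 97.3, 43.8, 3.2])
```

When a model replaces the fixed band, readings that no longer fit it can trigger shipping the whole window to the cloud for retraining, as described in the last bullet.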


Figure 3: Hybrid Cloud, Edge Compute Use-case

As a real-life example, here in Cisco DevNet we developed a use case for object recognition using video streams from IP cameras. The video gateway at the edge analyzed the video streams in real time, performed object detection, and passed the detected objects to the Cisco Container Platform, which performed object recognition. The recognized objects, and all the associated metadata, were stored at this layer. An application in the public cloud queried this data to track the path of an object.

Give the Cisco & Google Cloud Challenge a try


There’s no doubt about the popularity of Kubernetes in the developer community. Cisco Hybrid Cloud Platform for Google Cloud takes away the complexity of managing private clusters and lets developers concentrate on the things they want to innovate on. Start with our DevNet Sandbox for CCP, reserve your instance and test-drive it for yourself.

The Cisco & Google Cloud Challenge is an awesome way to brainstorm and solve some real customer problems, and even win some prizes while you are at it. So consider this blog my invitation to give the Challenge a try before the November 1, 2018 deadline, and the very best of luck!

Saturday 6 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 2

In part 1 of this blog series I talked about how data processing landscapes are getting more complex and heterogeneous, creating roadblocks for customers who want to adopt truly hybrid cloud data applications. At the beginning of this year, Cisco and SAP decided to join forces to bring the SAP Data Hub to the Cisco Container Platform. The goal is to provide a real end-to-end solution that helps customers tackle the challenges described in part 1 and enables them to become successful intelligent enterprises. We are focused on providing a turnkey, enterprise-scale solution that fosters a seamless interplay of powerful hardware and sophisticated software.


Figure 1 Unified data integration and orchestration for enterprise data landscapes.

SAP brings to the table its novel data orchestration and refinery solution, SAP Data Hub. The solution offers a number of features that allow customers to manage and process data in complex landscapes involving on-premise systems and multiple clouds. SAP Data Hub supports connecting the different systems in a landscape to a central hub, giving a first overview of all systems involved in data processing within a company. Beyond that, the Data Hub can scan, profile and crawl those sources to retrieve the metadata and characteristics of the data they store. With that, SAP Data Hub provides a holistic data landscape overview in a central catalog and allows companies to answer the central questions about data positioning and governance.

Furthermore, SAP Data Hub allows the definition of data pipelines that enable data processing and landscape orchestration across all connected systems. Data pipelines consist of operators (small, independent computation units) that form a joint computation graph. The functionality an operator provides can range from very simple read operations and transformations (e.g., changing the date format from US to EU), through interacting with a connected system, to invoking a complex machine learning model. The operators invoke their functionality and apply their transformations as the data flows through the defined pipeline. This kind of data processing changes the paradigm from static, transactional ETL processes to a more dynamic, flow-based data processing model.
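The operator/graph model can be illustrated in a few lines of Python. This is a conceptual sketch of flow-based processing, not SAP Data Hub's actual operator API; the record fields and the scoring rule are invented.

```python
def read_source(records):
    """Source operator: emit records as they 'arrive'."""
    yield from records

def to_eu_date(stream):
    """Transformation operator: change date format from US (MM/DD/YYYY)
    to EU (DD.MM.YYYY)."""
    for rec in stream:
        mm, dd, yyyy = rec["date"].split("/")
        yield {**rec, "date": f"{dd}.{mm}.{yyyy}"}

def score(stream):
    """Stand-in for a complex machine-learning-model operator."""
    for rec in stream:
        yield {**rec, "risk": "high" if rec["amount"] > 1000 else "low"}

# Wire the operators into a pipeline; data flows through lazily, one
# record at a time, rather than as a static batch ETL job.
pipeline = score(to_eu_date(read_source([
    {"date": "10/14/2018", "amount": 2500},
    {"date": "09/30/2018", "amount": 120},
])))
results = list(pipeline)
```

Because each operator only consumes a stream and yields one, new operators (a connector to an external system, say) slot into the graph without touching the others, which is the appeal of the flow-based paradigm.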

With all of this functionality, we kept in mind that to be successful in bridging enterprise data and big data, we need to be open to connecting not only SAP enterprise systems, but also common systems used within the big data space (see Figure 2). For this purpose, SAP Data Hub focuses on an open connectivity paradigm, providing a large number of connectors to different kinds of cloud and on-premise data management systems and fostering the integration of enterprise data and big data.

All of that makes SAP Data Hub a powerful enterprise application that allows customers to orchestrate and manage their complex system landscapes. However, a solution like the Data Hub would be nothing without a powerful and flexible platform. Customers are increasingly turning towards containerized applications, with Kubernetes as the orchestrator of choice, to handle the requirements of efficiently processing large volumes of data. For this reason, it was a clear decision to move SAP Data Hub in this direction as well. SAP Data Hub is completely containerized and uses Kubernetes as its platform and foundation.


Figure 2 SAP & Cisco delivering turn-key solutions for complex enterprise data landscapes.

This is where Cisco, with its advanced Cisco Container Platform (CCP) on its hyperconverged hardware solution Cisco HyperFlex, comes into play. Providing elastically scalable container clusters as a single turnkey solution covering on-premise and cloud environments with a single infrastructure stack is key for enterprise customers involved in big data analytics. With the Cisco Container Platform on HyperFlex 3.0, Cisco offers a fully integrated and flexible ‘container as a service’ offering with lifecycle management for hardware and software components. It provides 100% upstream Kubernetes with integrated networking, system management and security. In addition, it utilizes modern technologies such as Istio and Cloud Connect VPN to efficiently bridge on-premise and cloud services from different cloud providers. Accordingly, it accelerates cloud-native transformation and application delivery in hybrid cloud enterprise environments, clearly embracing the multi-cloud world and helping to solve multi-cloud challenges. Furthermore, the CCP monitors the entire hardware and Kubernetes platform, allowing customers to proactively identify issues and non-beneficial usage patterns and to troubleshoot container clusters quickly.

Accordingly, the CCP is the perfect foundation for deploying SAP Data Hub in complex, multi-cloud and hybrid cloud customer landscapes. We complemented the solution with Scality RING, an enterprise-ready scale-out file and object storage that fulfills the major characteristics required for production use, e.g. guaranteed reliability, availability and durability. This adds a data lake to the on-premise solution, allowing price-efficient storage for mass data. In addition, we added network traffic load balancing with the advanced AVI Networks load balancers, which provide intelligent automation and monitoring for improved routing decisions. Both additions greatly benefit the CCP and round it out into a full big data management and processing foundation.

With the release of SAP Data Hub on the Cisco Container Platform, running on HyperFlex 3.0 and complemented by Scality RING and AVI Networks load balancers, announced during SAP TechEd Las Vegas, customers have the option to receive a turnkey, full-stack solution to tackle the challenges of modern enterprise data landscapes. They can start fast, remain flexible, and receive full-stack support from Cisco’s world-class engineering support and SAP’s advanced support services. Together, SAP and Cisco enable customers to win the race for the best data processing in the digital economy.

Friday 5 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 1

The journey towards the intelligent enterprise


When talking about modern data processing in the digital economy, data is often regarded as the new oil. Enterprise companies are already competing in a race for the best mining, extraction and processing technologies to gain better insights into their companies, deals and processes. Winning this race will ultimately lead to a competitive advantage in the market, since companies with a deep understanding of their businesses will be able to make the most profitable decisions and establish the most beneficial optimizations. For this reason, the way companies handle data and analytics is changing from pure transactional, ETL-like processing towards adopting modern technologies such as machine learning, intelligent analytics and stream processing, both on-premise and in the cloud. We refer to this transition as the move towards the ‘intelligent enterprise’.

However, this transition also means that data processing landscapes are getting more complex and more heterogeneous. Data is processed in a growing collection of different systems and distributed over different places; its volume is growing by the day, and customers need to orchestrate and integrate cloud technologies with classical on-premise systems. Among all the challenges that come with the journey towards digitalization and the ‘intelligent enterprise’, the following three emerge as the most pressing in our customer base:


Figure 1 Enterprise data landscapes are growing increasingly complex.

1. The tale about data governance and the lack of data knowledge, security and visibility 


One of the biggest challenges in complex modern enterprise data landscapes is the distribution of data over a growing number of stores and processing systems. This leads to missing knowledge about data positioning, data characteristics and governance. “What data is available in which store?”, “What are the major characteristics of my data sets?”, “Who changed the data, in what way, and who has permission to access it?” are typical questions that are hard to answer even within a single company. Yet it is key to find a strategy that enables holistic data governance and data management across the entire company.

2. The legend about enterprise readiness of big data technologies 


In the world of modern data processing technologies and big data management, we observe incredible growth in the tools and technologies a customer can choose from. While at first choice seems to be an advantage, one quickly recognizes that it leads to a zoo of non-integrated systems, each exhibiting different characteristics, life cycles and environments. It is left to the customer to manage, organize and orchestrate those systems, resulting in a very high effort to arrive at an enterprise-ready data landscape with a well-organized interplay of all components.

3. The story about easily processing enterprise data and big data together 


The adoption of modern big data technologies is driven mainly by the fact that augmenting classical enterprise data, such as sales figures and revenue data, with big data, such as sensor streams, social media collections or mobile device data, enables deeper and more advanced analyses. However, in most cases enterprise data and big data are kept in different silos and exhibit totally different characteristics. Enterprise data typically comes from classical transactional systems such as ERP systems or transactional databases; it is well structured and adheres to a standardized schema. Big data, on the other hand, often arrives in its raw form as data streams or data collections stored in data lakes (e.g. Hadoop, S3, GCS). It is often unstructured, lacks clear data types and might not adhere to a clear schema. Accordingly, creating an end-to-end data pipeline across the enterprise that combines business data with big data comes with considerable effort.
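A toy illustration of that effort: even in this tiny example (plain Python, with invented field names), the semi-structured events must be normalized — missing fields defaulted, types enforced — before they can be joined with the well-schematized enterprise records:

```python
import json
from collections import defaultdict

# Structured enterprise records with a clean, known schema.
sales = [
    {"product_id": "P1", "revenue": 1200.0},
    {"product_id": "P2", "revenue": 800.0},
]

# Semi-structured event stream: fields may be missing or mistyped.
raw_events = [
    '{"product_id": "P1", "clicks": "37"}',   # clicks arrives as a string
    '{"product_id": "P1"}',                   # clicks missing entirely
    '{"product_id": "P2", "clicks": 14}',
]

def normalize(line):
    """Impose a schema on one raw event: default and coerce 'clicks'."""
    event = json.loads(line)
    event["clicks"] = int(event.get("clicks", 0))
    return event

# Aggregate the normalized events, then join them onto the sales records.
clicks_by_product = defaultdict(int)
for line in raw_events:
    event = normalize(line)
    clicks_by_product[event["product_id"]] += event["clicks"]

combined = [
    {**row, "clicks": clicks_by_product[row["product_id"]]}
    for row in sales
]
```

In a production landscape these normalization and join steps run across systems rather than inside one script, which is exactly where the considerable effort comes from.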

SAP and Cisco jointly recognize that our mutual customers need innovative new solutions that help them overcome these hurdles so they can fully leverage the value of their distributed data and turn it into actionable insights.

Sunday 30 September 2018

Curated Code Repos Get Your Integration Project Done Faster, Better

Having a hard time getting started on your next big integration with Cisco products? Found the platform and API docs on DevNet but need help turning them into running code? Check out Code Exchange, one more way DevNet makes it easy for developers to be successful with Cisco products and platforms.

Code Exchange is an online, curated set of code repositories that help you develop applications with/on Cisco platforms and APIs. Inside Code Exchange, you will find hundreds of code repositories – code created and maintained by Cisco engineering teams, ecosystem partners, technology and open source communities, and individual developers. Anyone can use this code to jumpstart their app development with Cisco platforms, products, application programming interfaces (APIs), and software development kits (SDKs).


Curated for quality


There is a large and growing amount of sample code and applications, helpful tools and libraries, and open source projects related to Cisco technologies on GitHub. However, finding up-to-date content best suited for your immediate needs can be difficult. Code Exchange helps you tackle this challenge.

To get things started, our team of DevNet Developer Advocates identified candidate repositories using GitHub crawlers and an algorithm that scores repositories based on a number of criteria. We then reviewed top repos to make sure they are in good shape and of general interest to the DevNet community. While we do not actively maintain all of the code, we provide confirmation that the code is a worthwhile investment of your time before accepting it into Code Exchange.
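The actual scoring criteria used by the DevNet team are not public, but the general idea of scoring candidate repositories can be sketched with a toy heuristic like the following (illustrative Python; the weights and inputs are assumptions, not the real algorithm):

```python
def score_repo(stars, days_since_update, has_readme, has_license):
    """Toy repo-scoring heuristic: popularity, freshness, and hygiene.

    The real DevNet criteria are not published; these weights are
    purely illustrative.
    """
    score = 0.0
    score += min(stars, 100) * 0.5           # cap popularity influence
    score += max(0, 90 - days_since_update)  # reward recent activity
    score += 20 if has_readme else 0         # documentation hygiene
    score += 10 if has_license else 0        # licensing hygiene
    return score
```

Any such automated score is only a first filter; as the curation process above describes, humans still review the top-ranked repos before acceptance.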

Simple filters for technology space and programming language may be used independently or in combination with keywords you provide to zero in on the set of repos most relevant to your immediate needs. Want more guidance? Sorting by those most recommended by DevNet Developer Advocates, or by the date a repo was last updated, surfaces the best and brightest projects.

Key Features:

1. Curated view of code repositories related to all Cisco platforms
2. Easy discoverability using filters and search features
3. Link to repository on GitHub for direct access to code and contributors

For example, let’s say you are looking for sample code written in Python for automating Cisco IOS XR platforms using APIs defined by native and standard YANG models. Simply enter those terms in the Code Exchange search field and filters, and back comes a set of highly relevant resources.
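Behind the scenes, this kind of faceted search can be sketched in a few lines. The snippet below is illustrative Python over made-up repo metadata, not the actual Code Exchange implementation:

```python
# Made-up repo metadata, shaped loosely like a Code Exchange listing.
repos = [
    {"name": "ios-xr-yang-samples", "language": "Python",
     "technology": "Networking",
     "description": "YANG model automation for IOS XR"},
    {"name": "webex-bot-starter", "language": "JavaScript",
     "technology": "Collaboration",
     "description": "Starter bot for Webex Teams"},
]

def search(repos, language=None, technology=None, keyword=None):
    """Combine facet filters with a free-text keyword, as the UI does."""
    def matches(repo):
        if language and repo["language"] != language:
            return False
        if technology and repo["technology"] != technology:
            return False
        if keyword and keyword.lower() not in repo["description"].lower():
            return False
        return True
    return [r["name"] for r in repos if matches(r)]
```

Each facet narrows the candidate set independently, which is why combining a language filter with a keyword gets you to a handful of relevant repos so quickly.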


Or perhaps you’re looking for JavaScript code for an integration with Cisco’s collaboration platforms? Let Code Exchange do the heavy lifting.


Community contributions provide even more options


At present, the majority of the code in Code Exchange comes from GitHub organizations managed by employees at Cisco. These include some obvious ones, such as Cisco DevNet, Cisco, Cisco Systems, and Cisco SE, as well as others that are less obvious at first glance, such as Talos, IOS-XR, and Meraki.

That said, we realize, and very much appreciate, that a huge amount of very useful code for working with Cisco technologies exists throughout the community at large. We encourage and welcome contributions to Code Exchange from the entire DevNet community, including code in your personal GitHub account.


Follow these base requirements to prepare your GitHub repositories related to Cisco technologies:

1. Include a LICENSE in the repository
2. Add a clear README
3. Ensure the repository is publicly available
4. Show evidence that the repository is being maintained

Then fill out the form and DevNet Developer Advocates will take a look!
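Before filling out the form, a quick self-check against the file-based requirements is easy to automate. This is a hypothetical helper in plain Python, not a DevNet tool; the public-visibility and maintenance criteria still need a human look:

```python
# Files the Code Exchange submission guidelines ask for at minimum.
REQUIRED_FILES = {"LICENSE", "README.md"}

def missing_requirements(repo_files):
    """Given the top-level file names of a repo, return the required
    files that are absent, in sorted order."""
    return sorted(REQUIRED_FILES - set(repo_files))
```

For example, a repo containing only `README.md` and `app.py` would come back with `LICENSE` flagged as missing.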


How to make sure your repo is accepted


What better way to get your application, your company, your name out in the community working with Cisco products and APIs than to have your repo featured in Code Exchange? In addition to meeting the requirements enforced by the submission form, there are several things you can do to help us realize how great your code is and gladly accept it into Code Exchange.

Your README should provide new users with all the information they need to understand what your repo contains, including a getting-started section with step-by-step instructions for how to install, run, and/or use it, and where to turn for answers or to provide feedback. Your README will display best in Code Exchange if written in Markdown (i.e. README.md). We are in the process of adding support for reStructuredText (i.e. README.rst).

It is also highly recommended to include a CONTRIBUTING.md file that outlines how best to contribute back to the project by reporting issues, fixing bugs, adding new functionality, etc. Is it best to fork the project and send a pull request? Should an issue be opened first? What if I simply want to ask a question? Make it clear and easy for others to not only use your code but also help make it better.

Tips for enhancing the discoverability of your repos


At the time of project submission, you can identify the set of technologies to which your code is related. Identifying all and only those that truly apply is very helpful. Equally important is adding a meaningful description and GitHub topics. The search functionality of Code Exchange relies on these as well as on the first- and second-level headings in your README.
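To see for yourself which headings a Markdown README exposes at those two levels, a few lines of Python will do. This is an illustrative snippet, not the actual Code Exchange indexer:

```python
import re

def extract_headings(markdown_text):
    """Return (level, title) pairs for level-1 and level-2
    ATX headings ('#' and '##') in a Markdown document."""
    pattern = re.compile(r"^(#{1,2})\s+(.+?)\s*$", re.MULTILINE)
    return [(len(hashes), title) for hashes, title in pattern.findall(markdown_text)]

readme = """# YANG Automation Samples
Some intro text.
## Getting Started
### Details
"""
headings = extract_headings(readme)
```

Note that the level-3 heading is ignored; only the top two levels feed search in this sketch, matching what the paragraph above says Code Exchange relies on.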