Saturday, 13 April 2019

ACI Anywhere Now Extending From On-Premises to AWS Cloud

Cisco is pleased to announce the availability of a brand-new solution, Cisco Cloud ACI on AWS. This solution automates management of end-to-end connectivity and enforcement of consistent network security policies for applications running in on-premises data centers and AWS public cloud regions.

Decentralized Data Means Cloud Growth


Enterprises large and small are expanding to the cloud to build applications that engage their customers, and their developers and IT teams must now manage both private and public cloud environments.

IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 11.2%, reaching $82.9 billion in 2022, and accounting for 56.0% of total IT infrastructure spend. Public cloud data centers will account for 66.0% of this amount, growing at an 11.3% CAGR. Spending on private cloud infrastructure will grow at a CAGR of 12.0%*.

Because of this massive decentralization of data, growing cloud acceptance, and the move to hybrid environments, businesses need a network that can empower the data center to go securely anywhere. Innovation should be limited only by imagination, not technology. Cisco’s ACI Anywhere with Cloud ACI is the bridge.

Multicloud Doesn’t Need to Mean Complexity


As the adoption of multicloud strategies grows, the industry is demanding consistent policy, security, and visibility everywhere, with a simplified operating model. IT organizations are challenged to maintain governance, compliance, agility, flexibility, and TCO optimization for legacy, virtualized, and next-generation applications across multiple on-premises sites and clouds.

Highly complex operational models today are the result of diverse and disjointed visibility and troubleshooting capabilities, with no correlation across different cloud service providers. There are multiple panes of glass to configure, manage, monitor, and operate these multicloud instances. And there are inconsistent segmentation capabilities today across hybrid instances that pose security, compliance and governance challenges.

Cisco Cloud ACI Extends ACI Capabilities from On-premises to Public Cloud


Cisco ACI delivers control and visibility based on application network policy. With the next phase, Cisco ACI extends this policy-driven automation from on-premises to public cloud instances.

Cisco Cloud ACI runs natively in public clouds and delivers the following key capabilities:

Automated and secure hybrid connectivity through unified management. Through a single pane of glass (ACI Multi-Site Orchestrator), users can configure inter-site connectivity, define policies, and monitor the health of network infrastructure across hybrid environments. Inter-site connectivity includes (i) an underlay network for IP reachability (IPsec VPN over the Internet, or through AWS Direct Connect*) and (ii) an overlay network between the on-premises and cloud sites that runs BGP EVPN as its control plane and uses VXLAN encapsulation and tunneling as its data plane.


Enable consistent security posture, governance, and compliance through a common policy abstraction. Cisco ACI on AWS uses group-based network and security policy models. Cloud ACI translates ACI policies into cloud-native policy constructs. The logical network constructs of Cisco ACI (tenants, VRFs, endpoint groups (EPGs), contracts, etc.) translate into AWS networking constructs (user accounts, Virtual Private Clouds (VPCs), security groups, security group rules, network access control lists, etc.). This enables consistent network segmentation, access control, and isolation across hybrid deployments.
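As a rough illustration only (this is not an actual Cloud ACI artifact, and the exact translation is performed automatically when policies are pushed from the Multi-Site Orchestrator), the mapping can be pictured construct for construct:

ACI construct              AWS construct
tenant                ->   user account
VRF                   ->   Virtual Private Cloud (VPC)
endpoint group (EPG)  ->   security group
contract              ->   security group rules / network ACL entries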

Enable elasticity for resources across the on-premises data center and public cloud, so application capacity can expand into either environment on demand.

Facilitate workload migration across hybrid environments. Enable secure workload mobility and preserve the application policies, network segmentation, and identity of the workload (IP mobility*).

Enable business continuity and disaster recovery. Allow organizations to maintain or quickly resume mission-critical applications using a back-up and recovery site in the public cloud.

What makes Cisco’s Cloud ACI different and relevant for you


Cloud ACI provides a common policy abstraction and consumes AWS public APIs to deliver policy consistency and segmentation. As such, Cloud ACI is not confined to bare-metal instances in AWS and does not require deployment of agents in cloud workloads to achieve segmentation.

With Cisco ACI, customers can carry all their network and security policies across data centers, colocations, and clouds. Cisco ACI automates cross-domain service chaining of application traffic across physical and virtual L4-L7 devices at scale, and seamlessly integrates bare-metal servers, virtual machines, and containers under a single policy framework.


Cisco ACI also has the industry’s broadest technology-partner ecosystem and integrates with a variety of solutions, ranging from Cisco AppDynamics and CloudCenter to F5, ServiceNow, Splunk, SevOne, and Datadog. Customers can leverage widely adopted tools such as Terraform and Ansible to achieve end-to-end workflow-based automation. AWS customers can tap into rich cross-silo insights through ACI integrations with AWS technologies such as Amazon CloudWatch* and Amazon Simple Notification Service (Amazon SNS)* to fine-tune the network for better throughput, latency, path selection, security, and cost optimization.
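As a hedged illustration of what that kind of automation can look like, the short Ansible sketch below creates an ACI tenant through the APIC REST API using the cisco.aci collection; the APIC hostname, credentials, and tenant name are placeholders rather than values from this announcement.

- name: Push a tenant to Cisco ACI (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the tenant exists on the APIC
      cisco.aci.aci_tenant:
        host: apic.example.com            # placeholder APIC address
        username: admin                   # placeholder credentials
        password: "{{ apic_password }}"
        validate_certs: true
        tenant: hybrid-demo               # placeholder tenant name
        description: Tenant managed from an Ansible workflow
        state: present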

Have ACI Anywhere with Cloud ACI on AWS


With ACI, the industry’s most deployed open SDN platform, Cisco delivers advanced capabilities on AWS and simplifies multicloud deployments with Cisco Cloud ACI. With the Cloud ACI architecture, customers and analysts see the benefit of seamlessly layering in policy consistency, operational simplicity, and the flexibility to leverage services offered by public clouds.

“ESG Research validates that companies are increasingly adopting a hybrid cloud approach to deliver the best service for their customers. In fact, many are adopting a multicloud policy,” says Bob Laliberte, Practice Director and Senior Analyst with the Enterprise Strategy Group. “However, these distributed compute environments create significant management complexity. Cisco ACI Anywhere, and more specifically Cloud ACI on AWS, is helping to consolidate and simplify management across the on-premises data center and the popular AWS cloud environment, something that we expect will be well received by all market segments.”

Thursday, 11 April 2019

Simplifying Container Orchestration with Cisco Hybrid Solution for Kubernetes on AWS

For organizations that are adopting DevOps practices and modern cloud capabilities to accelerate innovation and gain competitive advantage, one of the biggest challenges is maintaining common and consistent environments through an application’s lifecycle, from development through to deployment. Containers solved the application portability problem by packaging all the necessary dependencies into discrete images, and Kubernetes has emerged as the de facto standard for how those containers are orchestrated and deployed.

By adopting containers and Kubernetes, IT and line-of-business users can focus their efforts on developing applications rather than on infrastructure and ‘plumbing’. Because Kubernetes is available everywhere, one can choose the best place to run an application based on business needs. For some applications, the scale and reach of the public cloud, along with the huge number of services available, will be the determining factor. For others, data locality, security, or other concerns dictate an on-premises deployment.

Current solutions can be complex, requiring organizations to work across either isolated or separate environments and forcing teams to “glue” all the parts together themselves, at the expense of time and money. This can result in less choice by forcing organizations to choose between on-premises and public clouds or being limited by “all-or-nothing” stacks.

To help our customers with this challenge, Cisco today announced our collaboration with AWS to create the Cisco Hybrid Solution for Kubernetes on AWS. The new solution combines Cisco, AWS, and open source technologies to reduce complexity for customers turning to Kubernetes, enabling them to deploy applications across on-premises environments and the AWS cloud in a secure, consistent manner. It provides a tested, validated, and simple solution that delivers consistent Kubernetes clusters both on premises and in the cloud, leveraging the best attributes of each. This reduces the burden on different teams with respect to people, processes, and skill sets, accelerating the application deployment cycle and resulting in faster innovation. Customers can extend on-premises capabilities and resources to the AWS cloud as well as utilize services and resources from the AWS cloud on premises.

Solution Overview


The core component of the Cisco Hybrid Solution for Kubernetes on AWS is a unique integration between Cisco Container Platform (CCP) and Amazon Elastic Container Service for Kubernetes (EKS), so that through the single CCP management UI, the customer can provision clusters both on-premises and on EKS in the cloud. CCP uses AWS IAM authentication to create the VPC, instructs EKS to create a new cluster, and then configures the worker nodes in that cluster.


With Cisco Hybrid Solution for Kubernetes on AWS, customers use the CCP UI to launch Kubernetes clusters on AWS in addition to on-premises environments. They simply declare their Kubernetes cluster specification and reference the Cisco-managed operating system images for the worker nodes to deploy clusters in either environment. AWS Identity and Access Management (IAM) is integrated as a common authentication mechanism, so that the cluster administrator is free to apply the same role-based access control (RBAC) policies across both environments. Both environments are integrated with Amazon Elastic Container Registry (ECR), providing a secure, single repository for all the container images. A standard set of open source monitoring and logging tools based on Prometheus and the ElasticSearch/FluentD/Kibana (EFK) stack is deployed to the clusters to provide consistent logging and metrics. Finally, Cisco’s site-to-site VPN solutions, such as the CSR 1000v, are leveraged to provide a range of secure connectivity options between the cloud-hosted and on-premises services.

Cisco offers a single point of contact for support across all the components of the solution (including AWS components – EKS, IAM and ECR) – as opposed to having to seek support for each component separately from different vendors.

Using Cisco Container Platform to Provision Kubernetes Clusters in Amazon EKS and on-premises


To see what this looks like in practice, let’s walk through how the administrator would create an EKS cluster using the Cisco Container Platform (CCP) dashboard.

Provisioning an EKS cluster is as simple as a few button clicks. You first define AWS as your infrastructure provider. This includes a provider name and AWS account credentials.

Note: The AWS account credentials specified here will be the AWS IAM identity that has privileges to manage the EKS cluster.


Next, you specify basic information about your Amazon EKS cluster. This includes the AWS region you want to deploy the EKS cluster in, an optional additional IAM user or role that you want to allow to manage the EKS cluster, a cluster name, and the Kubernetes version for the cluster.

Finally, you configure information about the EKS worker nodes. This includes the instance types, machine image, number of worker nodes, and public SSH keys.


And that’s it! Behind the scenes, CCP uses the Amazon APIs to provision the following resources:

◈ A new VPC (including subnets, security groups, route tables, etc.) in your account in accordance with AWS best practices, with secure private and public subnets as recommended by Cisco for VPN interconnection
◈ A service role for EKS
◈ A node instance profile for the EKS worker nodes
◈ An EKS cluster
◈ An autoscaling group with EKS worker nodes
◈ A configMap on your cluster that allows the worker nodes to join the master

Once the cluster is deployed, you can download a pre-generated Kubernetes cluster config file (~/.kube/config). CCP leverages the open source aws-iam-authenticator kubectl plugin, which uses credentials from your local ~/.aws/credentials file to authenticate an AWS IAM user with the EKS cluster.
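For context, the user entry in such a kubeconfig typically looks like the sketch below (the cluster and user names here are placeholders, and the exact file CCP generates may differ); kubectl runs aws-iam-authenticator, which reads ~/.aws/credentials and returns a short-lived token for the EKS cluster.

users:
- name: ccp-eks-admin                 # placeholder user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "my-eks-cluster"            # placeholder EKS cluster name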

For on-premises Kubernetes clusters deployed and managed by CCP, the solution offers an integrated experience with Amazon Cloud. As part of the integration with AWS, you can now select the “enable AWS IAM” option, which will install the AWS IAM authenticator components in the newly created on-premises Kubernetes cluster. This allows you to use a single set of AWS IAM credentials to access Kubernetes clusters both on-premises as well as in EKS.

With clusters provisioned in cloud and on-premises environments, let’s take a deeper look at each of the AWS integrations in Cisco Hybrid Solution for Kubernetes on AWS.

Common IAM Identity for Authentication with a common RBAC policy for Authorization


CCP leverages the open source AWS IAM authenticator to enable a common AWS IAM user/role to authenticate with clusters in both cloud and on-premises environments. Once the user/role authenticates with the clusters, a configurable common RBAC policy defines the specific permissions that the user/role is authorized to perform within the respective clusters. As a result, you simply switch contexts using the common “kubectl” CLI tool to access either environment.
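In practice (the context names below are placeholders for whatever your kubeconfig contains), that is nothing more than:

kubectl config use-context ccp-onprem-cluster   # on-premises CCP cluster
kubectl get pods --all-namespaces
kubectl config use-context ccp-eks-cluster      # EKS cluster provisioned by CCP
kubectl get pods --all-namespaces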

By default, the AWS credentials specified at the time of Amazon EKS cluster creation are mapped to the Kubernetes ‘cluster-admin’ ClusterRole (via the “system:masters” group ClusterRoleBinding). This IAM identity has administrative control of the EKS cluster. As noted before, you can optionally specify an additional AWS IAM role or IAM user as an Amazon Resource Name (ARN). When you specify this, CCP:

1) Maps an additional associated role in the EKS cluster configMap, as illustrated below:

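The original screenshot is not reproduced here, but a representative aws-auth ConfigMap entry (with placeholder ARNs and user names) looks roughly like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Worker-node instance role created by CCP
    - rolearn: arn:aws:iam::111122223333:role/ccp-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Additional role specified at cluster-creation time
    - rolearn: arn:aws:iam::111122223333:role/ccp-eks-admin-role
      username: ccp-eks-admin
      groups:
        - system:masters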

2) Adds the associated role to the kube config so that the AWS IAM authenticator can use that role to authenticate with the EKS cluster, as shown below:

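Again as a sketch with placeholder names, the kubeconfig user entry gains a -r argument so that aws-iam-authenticator assumes the mapped role before requesting a token:

users:
- name: ccp-eks-admin-role
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "my-eks-cluster"
        - "-r"
        - "arn:aws:iam::111122223333:role/ccp-eks-admin-role"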

For the on-premises cluster, you can enable the AWS IAM integration to authenticate with the cluster using the same IAM identity. You do this by specifying the ARN of an AWS IAM user during the on-premises cluster creation process. CCP similarly maps this user to the Kubernetes ‘cluster-admin’ ClusterRole in the on-premises cluster’s configMap. It also updates the on-premises cluster’s kubeconfig, which in turn enables the AWS IAM authenticator client to authenticate with the on-premises cluster using the same IAM identity.

With IAM configured as described above, it is then possible to apply a common RBAC policy to Kubernetes clusters, whether in EKS or on-premises, to control access to resources.

Common Amazon Elastic Container Registry (ECR)


CCP integrates with ECR, providing a secure, single repository for all the container images.

For Amazon EKS worker nodes, CCP automatically provisions an instance-role that has permissions to read/write from an ECR repository.

Since on-premises nodes have no such role, an additional step is necessary – the credentials must be stored in a Kubernetes secret which is then referenced by the pod manifest (see below). A script such as the following will do that for you (replace the items in [] as appropriate).

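The original script is not reproduced here; a minimal equivalent, assuming a recent AWS CLI that provides ecr get-login-password (older releases use ecr get-authorization-token instead), looks like this:

#!/bin/bash
# Fetch a temporary ECR authorization token and store it as a
# docker-registry secret that pods can reference via imagePullSecrets.
# Replace the bracketed values for your account and region.
ACCOUNT=[aws-account-id]
REGION=[aws-region]
SECRET_NAME=ecr-registry-credentials

TOKEN=$(aws ecr get-login-password --region "${REGION}")

kubectl delete secret "${SECRET_NAME}" --ignore-not-found
kubectl create secret docker-registry "${SECRET_NAME}" \
  --docker-server="https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com" \
  --docker-username=AWS \
  --docker-password="${TOKEN}"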

This script fetches an authorization token from AWS and stores it in a Kubernetes secret which is read during the pod deployment. Note that it is necessary to periodically refresh this token. By default, the token expires after 12 hours.

After running the script above, you can deploy a Kubernetes manifest via kubectl, specifying the relevant details of the ECR repository, as you normally would. The example pod manifest below demonstrates how the ECR repository used by an application is specified in the image property.

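A minimal pod manifest along those lines (the repository URL, tag, and names are placeholders) might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    # Image pulled from the application's ECR repository
    image: 111122223333.dkr.ecr.us-west-2.amazonaws.com/demo-app:1.0
  imagePullSecrets:
  # Secret created by the token script above (needed for on-premises nodes)
  - name: ecr-registry-credentials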

To pull images from an ECR registry, it is necessary to provide credentials, as described in Amazon’s ECR documentation. For a user running docker, it looks like this (ecr:GetAuthorizationToken privileges are required), while Kubernetes will use the credentials stored in a Kubernetes secret as described earlier and referenced via the “imagePullSecrets” field in the pod manifest.

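With a recent AWS CLI, that interactive docker login is roughly the following (the account ID and region are placeholders):

# The IAM identity used here needs ecr:GetAuthorizationToken permissions.
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin \
  111122223333.dkr.ecr.us-west-2.amazonaws.com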

With CCP, you can deploy both your on-premises and Amazon EKS worker nodes with the same Kubernetes version and operating system.

At launch, you can deploy Kubernetes v1.10 with Ubuntu 18.04 worker nodes, using Cisco-provided images. You do not have to worry about Kubernetes and operating system version inconsistencies across siloed environments. Updates and security patches across the on-premises and AWS environment are handled seamlessly and provided via the CCP control plane software.

Common Monitoring and Logging


CCP provides integrated cluster monitoring via a Prometheus and EFK stack (ElasticSearch/FluentD/Kibana) that is deployed within each cluster deployed by CCP. Monitoring each cluster individually complies with best practices that mandate separating production data from development data and keeping information local for GDPR. It also ensures that logs and metrics are not reliant upon a central service that could be unavailable. Cisco Services can help with log forwarding and central metrics collection, as well as with integration with a customer’s own logging and metrics systems as desired.

Value-added Integrations for Connectivity, Security and Monitoring


Cisco’s extended cross-portfolio solutions provide a range of value-added solutions that can be leveraged from the AWS marketplace to complement the Cisco Hybrid Solution for Kubernetes on AWS.

These include:

◈ Application Deployment: Use Cisco CloudCenter to securely deploy both Kubernetes and VM-based workloads across both private and public infrastructure.

◈ Connectivity: Use the Cisco CSR 1000v to establish VPN connectivity between on-premises and cloud environments

◈ Security: Deploy Cisco Stealthwatch to monitor application network traffic for anomalies, leveraging AWS flow logs for cloud-based workloads.

◈ Monitoring: Enable AppDynamics application performance monitoring to see the real-time impact that application performance has on your business results.

Tuesday, 9 April 2019

A guide to maximizing your chances of success with IoT

“Dream big, start small.” This may sound like a clichéd phrase from a motivational poster, but it’s actually a very valuable piece of advice for enterprises to heed when deploying Internet of Things (IoT) initiatives.


By now, we all know that IoT has the power to drive digital transformations across industries by creating new value propositions, business models, services and markets. However, as I speak with frontline business and operations managers from enterprises around the world, I’ve found that many are still unsure how and where to begin their IoT journeys. They have big ideas and aspirations, but often struggle to see their project through. In fact, 60 percent of IoT initiatives don’t move past the proof of concept stage, and just 26 percent of organizations consider their IoT initiative a success.

Whether you’re embarking on your first or fiftieth IoT project, you need to do some careful planning to yield tangible results. Often that means starting with the low-hanging fruit – realizing some quick successes with a fast ROI and then scaling your projects into additional areas of the business for more ambitious results.

Condensed from my interactive book, “Building the Internet of Things – a Project Workbook,” here are the 10 steps I recommend organizations take to maximize the chances of success with their IoT projects. Some of these might seem basic and common sense. However, based on my experience with dozens of IoT implementations across industries, I have discovered that these guidelines are often overlooked.

Identify your IoT project vision: First start small, but never lose sight of your end goals. IoT is a technology tool, and your IoT project is a means to an end. Therefore, you must first clearly define your business-oriented “why.” Why do you want to implement IoT, and what business goals do you plan to achieve? Here, consult cross-functional teams for input and to help secure buy-in from your higher-ups. If you skip this step, you will end up fragmenting your efforts on one-off projects, rather than creating a foundation for true digital transformation across your organization.

Define your use case: What is the specific business problem you want to solve? I recommend starting with one of four “fast paths” to IoT payback that focus on improving existing processes and thus reducing costs: connected operations (linking devices, sensors and meters to a network); remote operations (monitoring, control and asset management); predictive analytics (identifying and understanding where to take action); and preventative maintenance (increasing uptime and productive hours). Further down the road, you can start leveraging IoT to generate new revenue streams, business models and value propositions, as well as map out new go-to-market strategies, market disruptions and more.

Determine your skill requirements: People, not just the technology itself, determine the success of your IoT journey. Therefore, evaluate the readiness of your team and its skill sets to support your IoT initiative. Large IoT projects require people with soft skills, not just technical knowledge, to build trusted relationships and virtual teams across departments and functions; listen and communicate; and secure buy-in and ongoing support and sponsorship from peers, executives, and partners.

Benchmark your organization against your industry peers: This step will help establish metrics you can use to validate your project and determine how far you’ve come upon its completion. I suggest benchmarking your organization in the following areas: IT and OT convergence (not only at a technology level, but also organizational, architectural and business process); innovation environments (your workforce’s capabilities and appetite for innovation); partner ecosystems; customer relationships; and level of IoT experience. Use the results to identify gaps you need to address prior to starting the project.

Assess your technological readiness: Consider whether you’ll be able to connect and access all data and, at least, the major functions of IT and OT groups via an open and interoperable technology stack. Do you need to integrate islands of data? Do you have plans to consolidate networks onto IP? Rest assured that you don’t necessarily need to overhaul your legacy systems from the start, especially if you are starting small. You can begin by connecting existing systems within your organization, then gradually introduce other elements of flexible frameworks.

Assess your cultural readiness: From the C-suite to your workforce and across your partner ecosystem, your organization must be ready and willing to support your project. Here, it’s important to assess how well key functions tend to work together, how well they communicate with each other as well as with key stakeholders (including customers), what changes to the culture your initiative will require, and what changes it will bring.

Develop the value proposition for your business case: As you prepare your organization for IoT’s required cultural change, IT managers will want to know the expected ROI of your project. Do your best to estimate a hard number, considering patterns of payback where IoT delivers the greatest value (see step 4), while taking into account the cost of new technology, human capital, device connections and cultural change.


Identify and connect devices, technologies, and systems: This critical step involves creating your project’s blueprint. Define your technology framework and how it needs to integrate with your existing systems and business processes. Make sure that your framework is not applicable only to your first project, but that it can also scale across your organization down the road and is flexible enough to integrate future technologies.

Address security: Take a rational, risk-based, architectural approach to IoT security. Partner with your Chief Information Security Officer to create a unified, policy-based security architecture that is embedded into every aspect of your technology stack and workflow. Develop a plan for how you’ll handle security incidents before, during, and after an attack. Leverage industry best practices and tools (don’t reinvent the wheel), such as device and traffic segmentation, to safeguard your infrastructure from end to end. In addition, implement processes and checks to ensure the accuracy and validity of your IoT data flows. Identify the data you plan to capture and apply the appropriate business rules or logic needed to process or analyze it for meaningful results.

Measure success: As you put your plan into gear, measure your successes (and even failures) along the way. Refer back to the baseline metrics established during the benchmarking step; identify what worked and where you need to improve. Once you realize results, big or small, look for ways to replicate and scale your initiatives across other areas of your business.

Thursday, 4 April 2019

The Potential of Thought Leadership is Much Better Than You Think

A headline from the 2019 Edelman-LinkedIn B2B Thought Leadership Impact Study caught my attention: Thought leadership has more influence on sales than marketers realize.


As it turns out, when it comes to thought leadership, marketers and those who create thought leadership have different beliefs when compared to decision makers and those who consume thought leadership content.

Consider these examples:

◈ Thought leadership creates access to top-of-the-food-chain decision makers. Forty-seven percent of C-Suite executives said they shared their contact information after consuming thought leadership content. Only 39 percent of marketers believe thought leadership generates leads or provides new contacts to call on

◈ Thought leadership content influenced 45 percent of business decision makers to invite an organization to bid on a project they were not previously considering. Only 17 percent of marketers said they felt thought leadership was effective at generating RFPs

◈ Thought leadership directly influenced 58 percent of decision makers to award business to an organization. Only 26 percent of marketers believe thought leadership is responsible for helping them close business

◈ Sixty-one percent of C-Suite executives said they would pay a premium to work with organizations that have clearly articulated a vision through thought leadership. Only 14 percent of marketers said thought leadership allowed them to charge more than their competitors who produce lower quality thought leadership content or none at all.

Here’s a chart that sums it all up: At every stage, decision makers value thought leadership more than those who produce it.


Source: 2019 Edelman-LinkedIn B2B Thought Leadership Impact Study

Marketers and decision makers are aligned on one thing: There’s not a lot of great thought leadership out there. Only 18 percent of thought leadership content is considered “excellent.”

To get the attention of the C-suite, to generate new RFPs, to become a premier service provider—to reap the benefits of thought leadership, you have to create something far above the run-of-the-mill content branded and touted as ‘thought leadership.’

Thought leadership is bestowed, not claimed. Leadership of thought is in the eye of the content consumer, not the content creator. They decide who is the leader, and they decide who is not.

Anyone can create thought leadership. It takes five fairly straightforward steps:

1. Develop an Idea
2. Create content
3. Merchandise your content
4. Ensure your content is consumed
5. As a result of your content, people’s attitudes and behaviors change

The fifth step is the one we overlook. But without it, what’s the point?

Today, I see a lot of marketers completing steps 1 – 4, but they’re not thinking about the fifth and most important step. Thought leadership without accompanying attitude and behavior change is a big waste of time and money.

If you want to create thought leadership that actually moves the needle—that actually influences change—follow these three commandments:

1. Know the Landscape.


No content exists in a vacuum, and it’s nearly impossible to find a topic that is brand-new and uncovered. For your content to become “thought leadership,” it must be different from and better than everything else that is already out there on the topic. I don’t see enough businesses doing the research and legwork from the beginning (and there are a TON of tools out there to help do this work). If seven great eBooks or webinars already exist about your topic, the bar for your content to become “thought leadership” is high.

Know what you are competing against for attention (and for Google love) and make your content different and better.

2. Prove It.


Today there’s a lot of essay-style “thought leadership,” which really isn’t that different from a guy on a street corner shouting at passersby. You see this approach with LinkedIn articles and Medium posts. When deconstructed, the content is someone venting or throwing out an idea. That can be interesting, but opinions alone aren’t likely to become thought-leading.

If you want to be at the head of the pack, it’s better to use first- or second-party research to develop more fact-based content.

3. Atomize It.


Even within your target audience who share attributes and values, people prefer different modalities of content based on their age, technological aptitude, and job function to name a few. The best thought leadership respects these choices and provides content in a panoply of formats. Don’t just write a white paper and call the job done. At Convince & Convert, we counsel all our clients to atomize their thought leadership into different formats: teasers, videos, infographics, audio content, and more. The list goes on and on.

The rule we follow is that for every single piece of content, you should create at least eight new and different content formats. If your thought leadership content is a white paper, for example, produce eight videos from it and distribute them as an episodic series. Generate additional content formats and use those to appeal to your audience in many ways. Don’t stop at one.

Wednesday, 3 April 2019

How 5G Will Make the Network-as-a-Service (NaaS) Model a Reality

Cellular networks have become an important connectivity asset for businesses, allowing them to support mobile workers and devices that sit outside the enterprise. Despite the importance of connectivity, mobile networks have been limited in their ability to provide unique experiences for different types of users and apps connected to the network. 5G will have huge ramifications for what organizations can do over cellular connections and, as this is the first of four blogs providing an insight into how Cisco sees 5G’s future, it’s a worthwhile starting point to remind ourselves of the likely impacts of this new wave in connecting machines and people.

Clearly, 5G will be much faster than today’s networks but it will also be more reliable, more energy-efficient, capable of delivering high connectivity density and operating with very low latency.

5G’s network slicing capability is a means of providing a differentiated experience for users and devices based on the specific requirements of the environment they operate in. Together with the aforementioned new radio capabilities, slicing will offer the service levels, security, controllability, programmability, and uptime that are needed for challenging and even mission-critical applications today. Network slicing leverages the virtualization of mobile network resources to allow the operator to create many logical networks, each with unique capabilities, over a single physical network.


With 5G, the dynamic provisioning and scaling of network capacity and resources are available for the first time. The vision of managing the network-as-a-service in the same way as an application developer might manage cloud resources on AWS, Azure, or Google Cloud Platform is finally coming.

So, what does this mean in the real world?

5G’s speed and low latency make it fit for the data glut created by bandwidth-hungry applications such as 4K video, AI-embedded devices, and streaming analytics. But, more strategically, 5G opens new market opportunities for Mobile Network Operators (MNOs) to address use cases that have specialized connectivity requirements: factory floors, autonomous vehicles, the Internet of Things, fixed wireless connectivity to remote branches and sites, and beyond.

While the potential for 5G to introduce MNOs to new markets is significant, it should be noted that the investment operators will have to make to deliver 5G networks will be equally significant. Ensuring a viable business case will require operators to find opportunities to charge a premium over basic connectivity for the differentiated experiences that 5G enables.

MNOs therefore need to make clear the advantages of 5G in terms of its ability to enable new capabilities and business outcomes for business customers:


◈ A sensor-driven manufacturing system that links to supply-chain peers and knows when parts need to be serviced or replaced (and might even be able to perform the task automatically).
◈ Real-time collaboration, including video conferencing
◈ An autonomous vehicle that relays status information to its manufacturer from every component – brakes, gears, acceleration, passenger games and entertainment and so on – for predictive maintenance, thanks to those aforementioned network slices.
◈ Connected remote offices and sites that are located in inaccessible or rural areas.
◈ Retail stores and malls that can deliver individualized augmented reality experiences, offers, and information for shoppers.
◈ Security, including live video streaming for protection of people and assets

At Cisco, we see our role as enabling enterprise customers to extend their network boundaries and trusted security profiles to the edge of the 5G network, with role-based controls over who has access to what services. By bringing decades of internetworking expertise and by layering network and security policy controls into 5G services, we will go to market with MNOs to change the network. And change the world.

Tuesday, 2 April 2019

Putting the “Trust” in Trustworthy SD-WAN

Organizations are implementing SD-WAN to bring secure, cost-effective, and efficient connectivity to distributed branches, retail outlets, and an increasingly distributed workforce. Top of mind for IT when expanding remote connectivity is ensuring the security and integrity of remote network appliances that are no longer under lock and key in the data center. Therefore, one of the advantages of adopting a software-defined network architecture is the plug-and-play, zero-touch installation and configuration of remote SD-WAN branch routers and compute platforms. These Cisco-engineered appliances can be shipped directly to a remote site, powered on by a non-technical employee, and remotely configured by an IT expert from anywhere in the world. For budget-constrained IT departments, remote provisioning, configuration, and management of network components, both hardware and software, provides significant time and cost savings.

But there’s also continuous pressure to decrease IT CapEx spending, as reflected in a recent trend to run Virtualized Network Functions (VNF) on white-label or bare-metal hardware. Budget-minded IT purchasers hope to save money by opting for less-expensive, generic versions of x86 hardware to run routing and security VNFs. However, security-minded IT professionals have a different perspective on using off-the-shelf compute hardware to process business-sensitive and personal data—the increase in risk.

Let’s look at an example of white box hardware that is shipped from a third-party manufacturer to a remote office for installation and provisioning. In today’s security environment, IT professionals should be asking:

◈ Where did my networking gear actually originate?
◈ Is the device genuine?
◈ Has it been altered at low levels in the BIOS?
◈ Is malware lurking in the bootstrap code?
◈ Can corrupted software with backdoors be installed without warning?

For scenarios like these, there’s no way to tell whether corruption has occurred unless security-focused processes and technologies are built into the hardware and software across the full lifecycle of the solution. That level of engineering is difficult to accomplish on low-margin, bare-metal hardware. Even when running VNFs in a public cloud, the same bare-metal risk is mitigated only by the guarantees of the colocation or IaaS provider. If the choice comes down to savings from slightly less costly hardware versus an increase in risk, it’s worth remembering that the average cost of stolen data from security breaches is $148 per record, while the cost of the loss of customer trust and theft of intellectual property is incalculable.

The Risky Business of Trusting Generic Hardware


With the daily onslaught of ever-more sophisticated threats, we all recognize that security for networks and applications has to be built into the foundation of every networking device. Network operators must be able to verify whether the hardware and software that comprise their infrastructure are genuine, uncompromised, and operating as intended. No matter how many functions are added to the security stack, the weakest link can cause all the other layers to fail. From hardware, to OS, to VNFs, every layer needs to be secure and work interdependently with the other layers for a complete defensive posture of the attack surface.

Building in Trust from Design through Deployment


Cisco embeds security and resilience throughout the lifecycle of our solutions including design, test, manufacturing, distribution, support, and end of life. We use a secure development lifecycle to make security a primary design consideration—never an afterthought. We design our solutions with trustworthy technologies to enhance security and provide verification of the authenticity and integrity of Cisco hardware and software. And we work with our partner ecosystem to implement a comprehensive Value Chain Security program to mitigate supply chain risks such as counterfeit and taint.

Security and Resilience Anchored in Hardware


The ability to verify that a Cisco device is genuine and running uncompromised code is possible with Cisco Secure Boot and Trust Anchor module (TAm). Cisco uses digitally-signed software images, a Secure Unique Device Identifier (SUDI) to prove hardware origin, and a hardware-anchored secure boot process to prevent inauthentic or compromised code from booting on a Cisco platform.

Secure Boot

Cisco Secure Boot helps ensure that the code that executes on Cisco hardware platforms is genuine and untampered. Using a hardware-anchored root of trust, digitally signed software images, and trusted elements, Cisco hardware-anchored secure boot establishes a chain of trust that boots the system securely and validates the integrity of the software at every step. The root of trust, which is protected by tamper-resistant hardware, first performs a self-check and then validates the next element in the chain before it is allowed to start, and so on.


Trust Anchor Module

The TAm is a proprietary, tamper-resistant chip that features non-volatile secure storage for the Secure Unique Device Identifier (SUDI), as well as secure generation and storage of key pairs with cryptographic services including random number generation (RNG).

Secure Unique Device Identifier (SUDI)

The SUDI is an X.509v3 certificate with an associated key pair that is protected in hardware. The SUDI certificate contains the product identifier and serial number and is rooted in Cisco’s Public Key Infrastructure. This identity can be either RSA- or ECDSA-based. The key pair and the SUDI certificate are inserted into the TAm during manufacturing so that the private key can never be exported. The SUDI provides an immutable identity for the router that is used to verify that the device is a genuine Cisco product.
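Because the SUDI is a standard X.509v3 certificate, its fields can be inspected with ordinary tooling once it has been exported from the device; for example (the filename here is hypothetical):

# Decode an exported SUDI certificate: the subject carries the product
# identifier and serial number, and the issuer chains to Cisco's PKI.
openssl x509 -in sudi-cert.pem -noout -text
openssl x509 -in sudi-cert.pem -noout -subject -issuer -serial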

TAm-embedded SUDI and Secure Boot are particularly important when configuring remote appliances with zero-touch capabilities, providing assurance both that the hardware is Cisco-certified and that the software being loaded is uncompromised. Before a router, switch, or AP can load the BIOS and network operating system, the unit must first prove to the network controllers that it is a verifiable Cisco hardware component by submitting the encrypted SUDI to the orchestrator in Cisco DNA Center or Cisco vManage. Once the hardware’s certificate is validated, the BIOS and network OS load, each verified by additional encrypted certificates to ensure the code is untampered before running. Finally, the IOS-XE and SD-WAN software loads and the router can receive a configuration file to join the orchestration fabric. Every step of this process is protected with encrypted certificates and secure tunnels for end-to-end trusted provisioning.

Cisco Secure Development Lifecycle is a Holistic Approach to Trustworthiness


The Cisco Secure Development Lifecycle (SDL) is a repeatable and measurable process designed to increase Cisco product resiliency and trustworthiness. The combination of tools, processes, and awareness training introduced throughout the development lifecycle enhances security, provides a holistic approach to product resiliency, and establishes a culture of security awareness. Cisco SDL development process includes:

◈ Product security requirements
◈ Management of third-party code
◈ Secure design processes
◈ Secure coding practices and common libraries
◈ Static analysis
◈ Vulnerability testing

In addition, Cisco IT is “Customer Zero” for many of our own products, so that ordering, implementation, and production are robustly tested even before Customer Early Field Trials.

Enforcing Trust in Virtualized Network Functions


Virtual Network Functions for SD-WAN can be trusted as long as the appliance hardware has the proper built-in security features, such as a TAm, to enforce hardware-anchored secure boot. Whether the routing appliance is located in a secure data center, installed with zero-touch ops at a remote site, or running in a cloud colocation facility, Cisco hardware supports VNF routing with end-to-end security and trustworthiness.

When selecting the appropriate hardware to run critical virtualized functions such as routing and security, it’s also important that the entire hardware ecosystem is optimized to achieve the levels of performance required to support SLAs and the expected application Quality of Experience (QoE). When it comes to high-speed gigabit routing and real-time analysis of encrypted traffic, performance is more than processing horsepower. By designing custom ASICs for complex routing functions and including Field Programmable Devices (FPD) to support in-field updates, Cisco hardware is fine-tuned for network workloads, security analytics, and remote orchestration.

Trust and Security Built-in from Design to Deployment


With a hardware-anchored root of trust, embedded SUDI device identity, encryption key management for code signing, plug-and-play zero-touch installation, and custom silicon optimized for IP routing, Cisco provides a secure and trusted platform for enterprises of all sizes.

Monday, 1 April 2019

How to Get the Most Value From Your Container Solutions?

There’s been a fundamental shift in the technology industry over the past 3-4 years, with “applications and software-defined everything” dominating IT philosophy. The market continues to move toward a cloud-native environment where developers and IT leads are looking for agility in application development, faster application lifecycle management, CI/CD, ease of deployment, and increased data center utilization.

Today, engineers and IT operations teams are tasked with churning out applications, new features and functionalities, configuration upgrades, intelligent analytics, and automation quickly and efficiently to stay competitive and relevant, all while reducing cost and risk. Elastic, flexible, agile development is now considered core to innovation and to reducing time-to-market. However, IT is faced with some key challenges, such as siloed tools and processes, delayed application deployment cycles, and increased production bugs and issues, all resulting in slower application time-to-market and increasing costs, risk, and inefficiency.

Docker revolutionized the industry with the introduction of application container technology, with which you can run multiple applications seamlessly on a single server or deploy software across multiple servers to increase portability and scale. While this has helped achieve consistency across multiple, diverse IT environments, abstracted away the underlying OS, and enabled faster and easier application migration from one platform to another, it’s only the beginning. Organizations still need the right strategy and support to accelerate adoption of container solutions.

And it’s no longer a matter of when, but how?

How to speed container adoption?


Containerization is the new norm. Moving applications across heterogeneous environments, from the laptop to the test bed, from testing to production, and from the production cycle to actual release, both quickly and efficiently, is a testament to an efficient and scalable containerization strategy.

So no matter what your broader business goals are, whether you are looking to:

◈ Align your cloud strategies with corporate visions
◈ Identify specific use case requirements for implementing container solutions
◈ Get your applications ready for prime-time
◈ Spin up applications for seasonal capacity surges
◈ Enable operational scaling and design for multicloud/hybrid cloud deployments
◈ Configure application security policies
◈ Align application automation across diverse DevOps teams to streamline operations and troubleshooting;

You need the right cloud and container strategy, tools and expertise to help you bridge the technology and operational gaps, and accelerate the process of modernizing traditional applications. Services can play a critical role in helping you fast-track your transformation journey, while enhancing application portability and ensuring the optimum use of resources.

Determine the best strategy for your business

You need the right strategy alignment and cloud roadmap to maximize the impact of your cloud services across the organization. Coupled with that is the growing importance of determining governance and security policies to reduce IT risk and speed time-to-market. Employing the right expertise, whether in-house or external, can help you not only identify the right use case requirements for implementing container solutions and determine technology and operational gaps, but, more importantly, optimize your investment across people, processes, and technology.

Accelerate deployment across heterogeneous IT environments

Quick and efficient deployment of container solutions across multiple, disparate IT environments is a must to enable operational scaling, configure feature integration, and design for hybrid cloud solutions. This is a crucial step in the implementation process, and you need highly experienced and trained specialists who can ensure frictionless operations through end-to-end network automation. You need a fool-proof solution design, test plan, and clear implementation strategy that can ensure reduced lifecycle risk and interoperability. Engaging the right experts and skill sets will result in faster implementation and increased time-to-value.

Consistent optimization and support for continued success

Maintaining application consistency and optimizing your application environment post-deployment will help you extract the most value from your technology investment. Conducting regular platform performance audits, root-cause analysis, streamlining existing automation capabilities, and running ongoing testing and validation are all akin to keeping the lights on. Having best-of-breed technology and industry expertise, coupled with integrated analytics, automation, tools, and methodologies, enables you to preempt risks, accelerate container adoption, and navigate IT transitions faster. Furthermore, you need centralized support from engineer-level experts who are accountable for issue management and resolution across your entire deployment.

If you’re looking to accelerate applications to market, Cisco can help through our unmatched IT expertise, experienced guidance, and best practices.

We offer a lifecycle of Container Services across Advisory, Implementation, Optimization, and Solution Support Services to help you drive faster adoption of container solutions. We take a vendor-agnostic approach to offer container networking, infrastructure, and lifecycle support to enable distributed containers across the cloud; manage cloud-native apps with support for orchestration, management, security, and provisioning; and ensure integrity of the container pipeline and deployment process.


We also launched a new container management platform, Cisco Container Platform, based on 100% upstream Kubernetes. It offers a turnkey, open, enterprise-grade solution that simplifies the deployment and management of container clusters for production-grade environments by automating repetitive tasks and reducing workload complexity.