Tuesday 2 April 2019

Putting the “Trust” in Trustworthy SD-WAN

Organizations are implementing SD-WAN to bring secure, cost-effective, and efficient connectivity to distributed branches, retail outlets, and an increasingly distributed workforce. Top of mind for IT when expanding remote connectivity is ensuring the security and integrity of remote network appliances that are no longer under lock and key in the data center. At the same time, one of the advantages of adopting a software-defined network architecture is the plug-and-play, zero-touch installation and configuration of remote SD-WAN branch routers and compute platforms. These Cisco-engineered appliances can be shipped directly to a remote site, powered on by a non-technical employee, and remotely configured by an IT expert from anywhere in the world. For budget-constrained IT departments, remote provisioning, configuration, and management of network components, both hardware and software, provide significant time and cost savings.

But there’s also continuous pressure to decrease IT CapEx spending, as reflected in a recent trend to run Virtualized Network Functions (VNF) on white-label or bare-metal hardware. Budget-minded IT purchasers hope to save money by opting for less-expensive, generic versions of x86 hardware to run routing and security VNFs. However, security-minded IT professionals have a different perspective on using off-the-shelf compute hardware to process business-sensitive and personal data: it increases risk.

Let’s look at an example of white box hardware that is shipped from a third-party manufacturer to a remote office for installation and provisioning. In today’s security environment, IT professionals should be asking:

◈ Where did my networking gear actually originate?
◈ Is the device genuine?
◈ Has it been altered at low levels in the BIOS?
◈ Is malware lurking in the bootstrap code?
◈ Can corrupted software with backdoors be installed without warning?

For scenarios like these, there’s no way to tell if corruption has occurred unless security-focused processes and technologies are built into the hardware and software across the full lifecycle of the solution. That level of engineering is difficult to accomplish on low-margin, bare-metal hardware. Even when running VNFs on a public cloud, the same bare-metal risk is mitigated only by the guarantees of the colocation or IaaS provider. If the choice comes down to savings from slightly less costly hardware versus an increase in risk, it’s worth remembering that the average cost of stolen data from security breaches is $148 per record, while the cost of lost customer trust and stolen intellectual property is incalculable.

The Risky Business of Trusting Generic Hardware


With the daily onslaught of ever-more sophisticated threats, we all recognize that security for networks and applications has to be built into the foundation of every networking device. Network operators must be able to verify whether the hardware and software that comprise their infrastructure are genuine, uncompromised, and operating as intended. No matter how many functions are added to the security stack, the weakest link can cause all the other layers to fail. From hardware, to OS, to VNFs, every layer needs to be secure and work interdependently with the other layers for a complete defensive posture of the attack surface.

Building in Trust from Design through Deployment


Cisco embeds security and resilience throughout the lifecycle of our solutions including design, test, manufacturing, distribution, support, and end of life. We use a secure development lifecycle to make security a primary design consideration—never an afterthought. We design our solutions with trustworthy technologies to enhance security and provide verification of the authenticity and integrity of Cisco hardware and software. And we work with our partner ecosystem to implement a comprehensive Value Chain Security program to mitigate supply chain risks such as counterfeit and taint.

Security and Resilience Anchored in Hardware


The ability to verify that a Cisco device is genuine and running uncompromised code is possible with Cisco Secure Boot and Trust Anchor module (TAm). Cisco uses digitally-signed software images, a Secure Unique Device Identifier (SUDI) to prove hardware origin, and a hardware-anchored secure boot process to prevent inauthentic or compromised code from booting on a Cisco platform.

Secure Boot

Cisco Secure Boot helps ensure that the code executing on Cisco hardware platforms is genuine and untampered. Using a hardware-anchored root of trust and digitally-signed software images, Cisco hardware-anchored secure boot establishes a chain of trust that boots the system securely and validates the integrity of the software at every step. The root of trust, which is protected by tamper-resistant hardware, first performs a self-check and then validates the next element in the chain before it is allowed to start, and so on.
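The chain-of-trust idea can be sketched in a few lines. This is a simplified model for illustration only: real Cisco Secure Boot validates digitally-signed images from a tamper-resistant hardware root of trust, whereas here plain SHA-256 digests stand in for signature checks and the stage names are invented.

```python
# Simplified model of a chain of trust: each stage is verified against a
# trusted digest before it is allowed to "execute" and verify the next one.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def boot_chain(stages, trusted_digests):
    """Walk the boot stages in order, refusing to proceed the moment
    any stage's image fails its integrity check."""
    booted = []
    for name, image in stages:
        if digest(image) != trusted_digests[name]:
            raise RuntimeError(f"integrity check failed at stage: {name}")
        booted.append(name)  # stage verified; it may now check the next one
    return booted

# Example: microloader -> bootloader -> OS (hypothetical stage names)
stages = [("microloader", b"ml-code"), ("bootloader", b"bl-code"), ("os", b"os-code")]
anchors = {name: digest(image) for name, image in stages}
print(boot_chain(stages, anchors))  # all three stages verified in order
```

The key property the model captures is that a single tampered stage stops the whole boot, rather than silently running compromised code.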


Trust Anchor Module

The TAm is a proprietary, tamper-resistant chip that features non-volatile secure storage for the Secure Unique Device Identifier (SUDI), as well as secure generation and storage of key pairs with cryptographic services including random number generation (RNG).

Secure Unique Device Identifier (SUDI)

The SUDI is an X.509v3 certificate with an associated key pair that is protected in hardware. The SUDI certificate contains the product identifier and serial number and is rooted in Cisco’s Public Key Infrastructure. This identity can be either RSA- or ECDSA-based. The key pair and the SUDI certificate are inserted into the TAm during manufacturing, and the private key can never be exported. The SUDI provides an immutable identity for the router that is used to verify that the device is a genuine Cisco product.

TAm-embedded SUDI and Secure Boot are particularly important for configuring remote appliances with Zero Touch capabilities, providing assurance both that the hardware is Cisco-certified and that the software being loaded is uncompromised. Before a router, switch, or AP can load the BIOS and network operating system, the unit must first prove to the network controllers that it is a verifiable Cisco hardware component by submitting the encrypted SUDI to the orchestrator in Cisco DNA Center or Cisco vManage. Once the hardware’s certificate is validated, the BIOS and network OS load, each verified by additional encrypted certificates to ensure the code is untampered before running. Finally, the IOS-XE and SD-WAN software loads and the router can receive a configuration file to join the orchestration fabric. Every step of this process is protected with encrypted certificates and secure tunnels for end-to-end trusted provisioning.
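The "prove yourself before joining the fabric" step is essentially a challenge-response identity check. The sketch below models it with a stdlib HMAC as a loudly simplified stand-in for the real SUDI mechanism, which uses an RSA/ECDSA key pair and certificate chain rather than a shared secret; the function names are invented for illustration.

```python
# Simplified model of zero-touch identity verification: the controller
# challenges the device with a nonce and checks the response against the
# identity registered at manufacturing. (HMAC over a shared secret stands
# in here for the real TAm-protected SUDI key pair.)
import hashlib
import hmac
import secrets

def device_response(device_key: bytes, nonce: bytes) -> str:
    """The device proves possession of its (TAm-protected) key."""
    return hmac.new(device_key, nonce, hashlib.sha256).hexdigest()

def controller_verify(registered_key: bytes, nonce: bytes, response: str) -> bool:
    """The orchestrator recomputes the expected response and compares it
    in constant time before admitting the device to the fabric."""
    expected = hmac.new(registered_key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)    # installed at manufacturing
nonce = secrets.token_bytes(16)  # controller's fresh challenge
print(controller_verify(key, nonce, device_response(key, nonce)))  # True
```

A fresh nonce per attempt is what prevents a recorded response from being replayed by an impostor device.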

Cisco Secure Development Lifecycle is a Holistic Approach to Trustworthiness


The Cisco Secure Development Lifecycle (SDL) is a repeatable and measurable process designed to increase Cisco product resiliency and trustworthiness. The combination of tools, processes, and awareness training introduced throughout the development lifecycle enhances security, provides a holistic approach to product resiliency, and establishes a culture of security awareness. The Cisco SDL process includes:

◈ Product security requirements
◈ Management of third-party code
◈ Secure design processes
◈ Secure coding practices and common libraries
◈ Static analysis
◈ Vulnerability testing

In addition, Cisco IT is “Customer Zero” for many of our own products, so that ordering, implementation, and production are robustly tested even before Customer Early Field Trials.

Enforcing Trust in Virtualized Network Functions


Virtual Network Functions for SD-WAN can be trusted as long as the appliance hardware has the proper built-in security features, such as a TAm, to enforce hardware-anchored secure boot. Whether the routing appliance is located in a secure data center, installed with zero-touch ops at a remote site, or running in a cloud colocation facility, Cisco hardware supports VNF routing with end-to-end security and trustworthiness.

When selecting the appropriate hardware to run critical virtualized functions such as routing and security, it’s also important that the entire hardware ecosystem is optimized to achieve the levels of performance required to support SLAs and the expected application Quality of Experience (QoE). When it comes to high-speed gigabit routing and real-time analysis of encrypted traffic, performance is more than processing horsepower. By designing custom ASICs for complex routing functions and including Field Programmable Devices (FPD) to support in-field updates, Cisco hardware is fine-tuned for network workloads, security analytics, and remote orchestration.

Trust and Security Built-in from Design to Deployment


With a hardware-anchored root of trust, embedded SUDI device identity, encryption key management for code signing, plug-and-play zero-touch installation, and custom silicon optimized for IP routing, Cisco provides a secure and trusted platform for enterprises of all sizes.

Monday 1 April 2019

How to Get the Most Value From Your Container Solutions?

There’s been a fundamental shift in the technology industry over the past 3-4 years with “applications and software-defined everything” dominating IT philosophy. The market continues to move towards a cloud native environment where developers and IT leads are looking for agility in application development, faster application lifecycle management, CI/CD, ease of deployment, and increased data center utilization.

Today, engineers and IT operations teams are tasked with churning out applications, new features and functionalities, configuration upgrades, intelligent analytics and automation quickly and efficiently to stay competitive and relevant, all while reducing cost and risk. Elastic, flexible, and agile development is now considered core to innovation and to reducing time-to-market. However, IT is faced with some key challenges, such as siloed tools and processes, delayed application deployment cycles, and increased production bugs and issues, all resulting in slower application time-to-market, increasing costs, risk, and inefficiency.

Docker revolutionized the industry with the introduction of application container technology, with which you can run multiple applications seamlessly on a single server or deploy software across multiple servers to increase portability and scale. While this has helped achieve consistency across multiple, diverse IT environments, abstracted away the underlying OS, and enabled faster and easier application migration from one platform to another, it’s only the beginning. Organizations still need the right strategy and support to accelerate adoption of container solutions.

And it’s no longer a matter of when, but how.

How to speed container adoption?


Containerization is the new norm. Moving applications across heterogeneous environments, from the laptop to the test bed, from testing to production, and from the production cycle to actual release, both quickly and efficiently, is a testament to an efficient and scalable containerized strategy.

So no matter what your broader business goals are, whether you are looking to:

◈ Align your cloud strategies with corporate visions
◈ Identify specific use case requirements for implementing container solutions
◈ Get your applications ready for prime-time
◈ Spin up applications for seasonal capacity surges
◈ Enable operational scaling and design for multicloud/ hybrid cloud deployments
◈ Configure application security policies
◈ Align application automation across diverse DevOps teams to streamline operations and troubleshooting;

you need the right cloud and container strategy, tools, and expertise to help you bridge the technology and operational gaps and accelerate the process of modernizing traditional applications. Services can play a critical role in helping you fast-track your transformation journey, while enhancing application portability and ensuring the optimum use of resources.

Determine the best strategy for your business

You need the right strategy alignment and cloud roadmap to maximize the impact of your cloud services across the organization. Coupled with that is the growing importance of determining governance and security policies to reduce IT risk and speed time-to-market. Employing the right expertise, whether in-house or external, can help you not only identify the right use case requirements for implementing container solutions and determine technology and operational gaps but, more importantly, optimize your investment across people, processes, and technology.

Accelerate deployment across heterogeneous IT environments

Quick and efficient deployment of container solutions across multiple, disparate IT environments is a must to enable operational scaling, configure feature integration, and design for hybrid cloud solutions. This is a crucial step in the implementation process, and you need highly experienced and trained specialists who can ensure frictionless operations through end-to-end network automation. You need a fool-proof solution design, test plan, and clear implementation strategy that can ensure reduced lifecycle risk and interoperability. Engaging the right experts and skill sets will result in faster implementation and increased time-to-value.

Consistent optimization and support for continued success

Maintaining application consistency and optimizing your application environment post-deployment will help you extract the most value from your technology investment. Conducting regular platform performance audits, root-cause analysis, streamlining existing automation capabilities, and running ongoing testing and validation are all part of keeping the lights on. Having best-of-breed technology and industry expertise, coupled with integrated analytics, automation, tools, and methodologies, enables you to preempt risks, accelerate container adoption, and navigate IT transitions faster. Furthermore, you need centralized support from engineer-level experts who are accountable for issue management and resolution across your entire deployment.

If you’re looking to accelerate applications to market, Cisco can help through our unmatched IT expertise, experienced guidance, and best practices.

We offer a lifecycle of Container Services across Advisory, Implementation, Optimization and Solution Support Services to help you drive faster adoption of container solutions. We take a vendor-agnostic approach to offer container networking, infrastructure and lifecycle support to enable distributed containers across the cloud; manage cloud-native apps with support for orchestration, management, security and provisioning, and ensure integrity of the container pipeline and deployment process.


We also launched a new container management platform called Cisco Container Platform, based on 100% upstream Kubernetes, that offers a turnkey, open and enterprise-grade solution that simplifies the deployment and management of container clusters for production-grade environments by automating repetitive tasks and reducing workload complexity.

Sunday 31 March 2019

Cisco CloudCenter Suite: Your Multicloud Management Champion

In a few days, over one hundred million viewers will experience the phenomenon known as the Super Bowl. As fans of championship sporting events like the Super Bowl or, for my friends outside of the US, the World Cup, we expect a gratifying experience during these events and are solely focused on our favorite team’s accomplishments. At this point, we don’t care how many practice sessions and training hours were involved, or how long it took the team to reach this level. Our only expectation is to witness a superb performance culminating in our favorite players lifting the trophy; little do we dwell on the inherent complexity of what it took to get there.

In reality, championship teams prepare for this “trophy lifting” experience for years by developing and executing a framework of specific components: talent (management, players, coaching, supporting staff), teamwork (working together), discipline (execute to the plan), and a little luck.

Now in cloud, the expectations of cloud consumers are similar to those of sports fans. They adopt cloud platforms to exploit their numerous benefits: accelerate innovation, increase scale, or reduce operational expenses. Increasingly they are adopting multiple clouds simultaneously to leverage the unique advantages that each of them has to offer. But the specific use case, whether it’s to manage hybrid cloud workloads or distributed multicloud applications, is just a means to an end for them.

But for organizations, it’s all about taking the championship team’s point of view. Because to truly realize the benefits of a multicloud approach, they need a cloud management platform (i.e. a framework) that works across many clouds, both public and private. One that provides the best finished product, while abstracting the inherent complexities.

The newly announced Cisco CloudCenter Suite does just that, via a single solution that works across multiple clouds, doing what many other tools do separately or only for specific clouds.


Cisco CloudCenter Suite is an integrated set of software modules that accelerates innovation by providing a framework for organizations to design, deploy, and optimize infrastructure and applications across clouds to achieve their cost and compliance objectives. The suite simplifies multicloud management by providing workflow automation, application lifecycle management, cost optimization, governance and policy management across clouds.

Cisco CloudCenter Suite is now a modular, self-managed, Kubernetes-based solution that gives you all the benefits of a microservices application without actually having to manage one. It consists of:

Three modules that work together to simplify multicloud management, plus shared administration and installer components:

◈ Workload Manager – Multicloud management of infrastructure and applications that helps customers design, deploy, and optimize their on-premises and public cloud environments. Workload manager enables governance policies, aligned with the organization’s objectives, that provide centralized visibility and control to help customers improve their multicloud maturity.

◈ Cost Optimizer – Cost reporting and remediation that analyzes customers’ consumption patterns on-premises and in public clouds and provides visibility into total cloud spend (compute, storage, network, and cloud services). It also identifies cost-optimization strategies to help customers right-size their cloud workload instances by minimizing overprovisioning.

◈ Action Orchestrator – Simplified orchestration and workflow automation that provides seamless integration within the suite and externally through a broad set of adaptors and standardized interfaces. This simplifies business processes, reduces human error, and eliminates repetitive tasks associated with technical integrations and business processes.

◈ Suite Admin – Central administration point for all CloudCenter Suite modules. It provides common services such as managing cloud accounts, multi-tenancy, licensing, monitoring and logging, role-based access control, user authentication, and single sign-on integration.

◈ Suite Installer – A self-deployed, self-managed installer that takes care of the installation process for the Kubernetes-based CloudCenter Suite on any environment (VM, OpenStack, on-premises and in public clouds).


CloudCenter Suite delivers a ubiquitous experience across your multicloud environments, whether on-premises or in the cloud, so that you can focus on developing and deploying applications with speed and scale. At design time, architects can compose the dependencies of their multi-tier applications into an application profile. Designers can leverage numerous out-of-the-box integrations across many Cisco products and other ecosystem solutions to build on the strength of Cisco’s ever-increasing investments in cloud technologies. Consumers can then deploy the profile, devoid of multicloud complexities, using a pre-established governance framework consisting of application and infrastructure policies. Applications are delivered consistently and reliably across private and public clouds in a manner that eases the transition to operations teams. Both consumers and operators can optimize infrastructure and applications anywhere through a recommendation engine that exposes the most economical consumption opportunities.

CloudCenter Suite’s flexible consumption models enable customers to choose the buying option that best suits their organizations’ use case requirements and price points, with three subscription-based license tiers available as self-hosted or SaaS. Small and mid-size enterprises can now take advantage of the same premier multicloud management capabilities enjoyed by large enterprises.


How does CloudCenter Suite deliver “quick wins” for cloud consumers and IT operators? It helps teams:

◈ Focus on accelerating innovation and reducing time to market by delivering applications wherever the cloud strategy dictates.
◈ Capitalize on the unique benefits of each provider by easing the management of multiple clouds.
◈ Reduce total cloud costs without compromising application performance by monitoring private and public cloud usage.
◈ Automate complex business processes to reduce digital waste and save precious time and resources.

Championship teams operate best when the unique skills of each team member seamlessly come together to accomplish a common goal. CloudCenter Suite unifies your multicloud experience in the same way—enabling you to secure the best value from the ‘skills’ each cloud provider has to offer.

Saturday 30 March 2019

DevOps with CloudCenter Suite and Kubernetes in a Multicloud Environment – Part 2

This post is the second part of our series on DevOps and focuses on a CI/CD demo based on the Cisco Multicloud Portfolio. You can find part one here. For our demo environment, we are using resources from three Kubernetes clusters, on-premises and in AWS.

Our lab


We have built a simple microservice-based application as shown by the picture below.



The source code of the five components is stored in a GitHub repository, where new versions of the application are committed (uploaded) by developers. At each commit, the Jenkins orchestrator gets the source code and compiles it, building the container images ready to deploy the application.
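The commit-to-build hand-off typically arrives as a webhook that starts a parameterized Jenkins job. As a rough sketch, assuming a hypothetical Jenkins host, job name, and credentials (the `/job/<name>/buildWithParameters` endpoint itself is standard Jenkins), the request could be prepared like this:

```python
# Sketch: preparing the POST that a commit webhook handler would send to
# Jenkins to start a parameterized build. Host, job, and credentials are
# hypothetical placeholders.
import base64
import urllib.parse
import urllib.request

JENKINS = "https://jenkins.example.com"  # hypothetical host

def build_trigger_request(job, params, user, api_token):
    """Build the request for Jenkins' buildWithParameters endpoint,
    authenticated with HTTP basic auth (user + API token)."""
    url = f"{JENKINS}/job/{urllib.parse.quote(job)}/buildWithParameters"
    req = urllib.request.Request(
        url, data=urllib.parse.urlencode(params).encode(), method="POST")
    cred = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req.add_header("Authorization", "Basic " + cred)
    return req

req = build_trigger_request("TheWall", {"GIT_COMMIT": "abc123"}, "ci", "token")
print(req.full_url)
# urllib.request.urlopen(req)  # would actually fire the build on a live Jenkins
```

In practice the GitHub plugin for Jenkins handles this wiring for you; the sketch just makes the mechanism explicit.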

The images are saved in a shared container registry (Harbor, see next picture) where Cisco CloudCenter (or Cisco CloudCenter Suite, as per the official new title) will be able to retrieve them when asked by Jenkins to deploy the application. Based on input parameters provided by Jenkins, Cisco CloudCenter will target the deployment to the most appropriate environment for the current phase of the project.

In our demo lab, the environments are “integration test”, “performance test” and “production”.

They correspond to three different Kubernetes clusters that have been created on-premises (integration and performance test) and in AWS (production).

Each environment has a different set of policies that will be inherited by every application deployed there: policies for security, networking, autoscaling, etc.

The 3 Kubernetes clusters mentioned above have been quickly deployed by the Cisco Container Platform (CCP) without having to manually create them on each side.

The value in using CCP here is simple: in a few minutes we created and deployed three production-ready clusters, fully integrated with networking, storage, security, monitoring, and logging, without even touching the K8s installer or the underlying infrastructure.

The two clusters named “integration test” and “performance test” were created automatically inside VMs in a local VMware environment, while the cluster named “production” was created in AWS. CCP uses the API exposed by AWS’s managed Kubernetes service (EKS) to do everything automatically, including the integration with AWS’s Identity and Access Management (IAM) for authentication, authorization, and access control.

The automated deployments will repeat, in the three environments, in a sequence that tests each version before moving it to the next deployment environment, ensuring the quality of the release. In the real world you might want to run more complex testing activities (such as code quality inspection, security, resiliency, etc.) than the two tests in this example (functional and performance).
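The promote-on-success sequence can be sketched as a simple loop. The environment names match the demo; `deploy` and `run_tests` are stand-in callables for what Jenkins, CloudCenter, and the test tool actually do.

```python
# A minimal sketch of the promote-on-success sequence: deploy the build to
# each environment in order and stop at the first failed test, so a bad
# build never reaches production.
def run_pipeline(version, environments, deploy, run_tests):
    for env in environments:
        deploy(version, env)
        if not run_tests(version, env):
            return f"aborted in {env}"  # notify developers, restart the cycle
    return "released"

envs = ["integration test", "performance test", "production"]
result = run_pipeline("v42", envs,
                      deploy=lambda v, e: None,      # stand-in deployment
                      run_tests=lambda v, e: True)   # stand-in test gate
print(result)  # released
```

The essential design point is that every environment acts as a gate: promotion is the default only while every test keeps passing.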


Demo flow


◈ The next picture is a sequence diagram showing all the actions that we have automated; we used a color code to represent the phases that are commonly referred to as Continuous Integration (the green part) and Continuous Deployment (the orange part).
CCC stands for Cisco CloudCenter, while K8s dev, test, and production represent the three Kubernetes clusters mentioned above.

◈ The entire process is completely automated and brings a new version of the application to the production deployment without any human intervention. This complete automation is often referred to as Continuous Deployment and – although very useful and adopted by big players like Facebook (their pipeline is more complex than our simplified demo) – is not very common among the customers I generally meet.

Those that have adopted DevOps still prefer to have some human checks between the activities, so that they feel they have better control over the process and its quality.

When they have more experience, they will probably be confident enough to delegate every check to the automation tools.


Implementation


The automation is based on Jenkins, an open source orchestrator that benefits from the availability of hundreds of plugins; it can automate almost every component in your IT ecosystem, including Cisco CloudCenter of course.

In the Jenkins dashboard you can build different projects, like in the picture below. A project is a sequence of steps, using plugins to drive activities in the systems you want to automate (e.g. pull the source code from the repository, compile it, build container images, trigger a cloud deployment through Cisco CloudCenter, etc.).


Projects can call other projects, to make your orchestration modular and reusable. In the picture above, the project ‘TheWall’ (that is the name of our demo application) calls the other 5 projects in a sequence, checking that the outcome is positive before calling the next one.

◈ With this we are able to automate the deployments on those 3 Kubernetes clusters and run the functional test and the performance test of the application using an external tool (here we are using another open source product called Apache JMeter).

◈ The functional test (which happens on the integration test cluster) is a sequence of user transactions, executed by the test tool using a pool of user identities and a pool of input data such as simulated clicks and text inputs, where assertions about the expected result are validated automatically. If the page generated by the application differs from the expected result, an error is logged, and the test can be considered failed. So, the functional test ensures that the application behaves as expected from a functional standpoint (and you can avoid a manual test for user acceptance).
The performance test (which happens on the performance test cluster), executed by the same tool, stresses the application and the infrastructure from a performance standpoint. A large number of concurrent users are simulated by the tool, invoking a sequence of user transactions with random wait times, reproducing a situation similar to the workload in a production environment. Response times are tracked, as are any errors, allowing the tool to declare whether the test is successful or not.
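To make the functional-test idea concrete, here is an illustrative version of the assertion logic described above, reduced to its essence: run each scripted transaction and check the page for the expected content. In the demo this is JMeter's job; the callables and transaction data below are invented for the sketch.

```python
# Illustrative functional check: execute each user transaction and verify
# the page contains the expected result, logging any failed assertion.
def functional_test(transactions, execute):
    """`execute(name, inputs)` performs one user transaction and returns
    the page body; each transaction carries the substring the page is
    expected to contain."""
    errors = []
    for name, inputs, expected in transactions:
        page = execute(name, inputs)
        if expected not in page:
            errors.append(name)  # failed assertion -> mark the test failed
    return errors  # an empty list means the functional test passed

# A fake application stands in for the deployed service under test.
fake_app = lambda name, inputs: f"<h1>Welcome {inputs['user']}</h1>"
txns = [("login", {"user": "alice"}, "Welcome alice")]
print(functional_test(txns, fake_app))  # [] -> test passed
```

A real tool adds user pools, input data pools, and reporting on top, but the pass/fail decision reduces to exactly this kind of assertion.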

Based on the outcome produced by Jmeter, Jenkins will continue with the Continuous Deployment pipeline or abort it, notifying the developers that something went wrong, requiring a correction. In the latter scenario, the CI/CD cycle will start from the beginning: new modified source code modified committed, application built and deployed to the first environment, test executed, application promoted to next environment and tested… until the pipeline is completely executed without any warning or error and the application is released automatically in production.

The next picture shows the execution of the Jenkins pipeline for three different builds of the application. The most recent execution failed because the modification of the source code introduced an error that blocked the build. The other two executions succeeded, as demonstrated by the green color of every step in the pipeline.


Jenkins logs all the activities, so that you can check what happened during the automated process.

The next picture shows the output of the sub-project named ‘TheWall_Deploy_Test’, that is the 7th stage in the pipeline in previous picture.

To ensure that governance policies are applied during deployment (such as access control, reporting, cost control, etc.), we have inserted CloudCenter into the process. Jenkins uses the API exposed by Cisco CloudCenter to deploy the application ‘TheWall’ to the test environment.
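Conceptually, what Jenkins passes at this stage is a small bundle of parameters that lets CloudCenter pick the target environment and apply its policies. The sketch below is purely illustrative: the field names are hypothetical and do not reflect CloudCenter's documented API.

```python
# Hypothetical sketch of the deployment parameters Jenkins hands to
# CloudCenter (field names are illustrative, not the real API schema).
import json

def deployment_payload(app, environment, image_tag):
    """Bundle the app, target environment, and image so CloudCenter can
    deploy to the right cluster under that environment's policies."""
    return json.dumps({
        "application": app,
        "environment": environment,               # e.g. "performance test"
        "artifact": {"registry": "harbor", "tag": image_tag},
    })

payload = deployment_payload("TheWall", "performance test", "build-42")
print(payload)
```

The point is the separation of concerns: Jenkins only names the what and the where, while CloudCenter owns how the deployment is carried out and governed.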

Note that the performance test environment needs to be robust enough to sustain the workload of the performance test, while the functional test can be executed in a smaller cluster with less computing power.


You don’t have to code the API calls, because Cisco CloudCenter ships a plugin for Jenkins that integrates into its user interface graphically. But if you prefer, Jenkins can run scripts and commands from the CLI for you.

Thursday 28 March 2019

Enter The Cloud Maturity Era (Until The Next One)

Just before the end of the year we announced and made available our latest hybrid solution, a product of collaborating with AWS: the Cisco Hybrid Solution for Kubernetes on AWS (yes, there are definitely more words in that title than I can count on one hand).

And at the same AWS re:Invent where we first showcased our solution, AWS announced more than 60 new features and services for their platform: new compute instances, storage and archiving, databases, data lakes, blockchain, ML/AI, serverless, networking, and control services.

Amongst the many topics at that conference, cost management was especially hot, validating what we have been hearing from customers a lot lately. So at our equivalent annual European conference, Cisco Live in Barcelona, we announced our Cisco CloudCenter Suite (including enhanced cost management features) as well as ACI Anywhere, enabling customers to extend their on-premises data center networks directly to public clouds.

And what happened in those few short months is just the tip of the iceberg. This amount of innovation coming from the industry is evidence of the industry maturing.

Customers are now evaluating and planning to adopt more advanced solutions between on-premises and public clouds than just IaaS and SaaS, and the industry is responding to this demand with a new generation of offerings.

Scaling up innovation


Is this really news, though? What has been the evolution of cloud computing, and what does that tell us about the paradigm, the market and its future overall?

Let’s take a step back. In Gartner’s 2012 Hype Cycle, more mature cloud offerings and concepts such as PaaS or “cloud-optimized application design” were given 2-5 years to mainstream adoption, as opposed to IaaS or SaaS.

And looking at where we are today, they were about right, weren’t they? Score one for Gartner.


Indeed, we are slowly moving towards the maturity of the “platform” era, where cloud computing is about to become more interesting, honouring its roots in service-oriented architecture and moving further away from being just a technical answer to infrastructure use cases toward closer alignment with business initiatives.

The result? To better align with business use cases, cloud computing will necessarily become more industry-specific and modular, consumable directly by line-of-business users or developers in the form of reusable building blocks.

And we’re talking not just about services from the leading public cloud providers but also from the myriads of different SaaS vendors that will come up with new offerings to support better or new use cases.

Unlocking innovation comes after internal change


But new technologies and solutions don’t mean anything without an internal readiness to adopt them. Cloud computing is driving more and deeper organizational change by decomposing technology silos, processes and teams.

It is therefore no surprise that in that same 2012 Hype Cycle, the term “DevOps” was just making its premiere appearance, with a 5-10 year projection to maturity (notice that hybrid cloud and hybrid IT fall in the same bucket).

This amazing universe of innovation on top of a new landscape of technology is forcing change and requires organizations to adapt to adopt.

Change is not always easy to implement. It involves people, technology, process…in other words, the “big picture.” It involves being able to navigate between managing the on-premises existing investments in infrastructure and applications and deciding what portion to modernize and what to replace with new, all while increasing the adoption of public cloud services.

It also involves defining new governance models that drive a new culture in the way development and infrastructure teams collaborate, especially when new offerings further decouple the infrastructure layer from the application.

And of course, we can’t forget the critical requirement of managing risk during the process.

The cloud era is producing a huge amount of opportunity and innovation. And how do organizations respond? By building strategies based on where they are in their own technology journey. And that can take time.

 A new kind of hybrid solution


And that brings us back to the present. Our collaboration with AWS was based exactly on making that connection between our customers’ existing environments and the innovation of the AWS platform. In other words, it takes into account the need to combine “the existing and the new” as part of a multicloud strategy, and it is aimed at customers who want to maintain control while extending their investments with interoperable components.

The Cisco Hybrid Solution for Kubernetes on AWS is the first hybrid solution in the industry to integrate directly with AWS’s managed Kubernetes offering (EKS) – essentially a hybrid Container-as-a-Service offering.


This means users responsible for deploying Kubernetes clusters and handing them off to developer teams don’t have to manually deploy and configure Kubernetes on top of AWS’s IaaS layer. They can use the solution to both deploy on-premises and trigger deployment on AWS EKS, creating a consistent environment for developers to run applications.
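To give a flavor of the AWS-side trigger, the managed cluster is created through the EKS API. The sketch below mirrors the parameter shape of boto3’s `eks.create_cluster` call; the role ARN and subnet IDs are placeholders, and in the actual solution this step is driven by the hybrid tooling rather than hand-written scripts.

```python
def eks_cluster_request(name, k8s_version, role_arn, subnet_ids):
    """Build the request body for EKS CreateCluster (the same kwargs
    boto3's eks.create_cluster accepts). The on-premises cluster is
    deployed separately by the Cisco side of the solution."""
    return {
        "name": name,
        "version": k8s_version,
        "roleArn": role_arn,
        "resourcesVpcConfig": {"subnetIds": subnet_ids},
    }

# Placeholder values for illustration only.
req = eks_cluster_request(
    "hybrid-demo",
    "1.13",
    "arn:aws:iam::123456789012:role/eks-service-role",  # hypothetical role
    ["subnet-aaaa1111", "subnet-bbbb2222"],
)
print(req["name"])
```

The point is less the call itself than the consistency: the same cluster definition concepts apply whether the target is EKS or the on-premises environment.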

Practically speaking, it means less time spent on operations and a common authentication method across the two locations.

Furthermore, the part that makes the solution truly extensible, going beyond containers, is the optional software that supports the full lifecycle of existing, non-containerized applications and hardware on-premises or in other clouds: CSR 1000v for connecting, CloudCenter Suite for deploying, Stealthwatch Cloud for securing, and AppDynamics for monitoring.

The result? Customers can now make containers and Kubernetes a core engine of their strategy and innovation and increase adoption of public cloud services, without creating more silos that don’t integrate with their existing investments and assets.

Just like many areas of cloud offerings, Kubernetes-based solutions are maturing and driving change for organizations. Successful organizations in multicloud will not necessarily be the early adopters, but the ones that adopt the latest and greatest in the way that best fits their strategy.

Wednesday 27 March 2019

Balancing the risks and rewards of connected manufacturing

The most expensive cyber security event ever started with an accounting software package from Ukraine. In its wake, 25% of the world’s shipping was shut down, and major automobile and pharmaceutical companies came to a halt. And now a major lawsuit between an insurance provider and its customer has surfaced, with the phrase “act of war” as a major point of contention.


What factory manager saw that coming?

Chances are nobody did, and that’s why Cisco, Schneider Electric, and Aveva are working together to mitigate the risks of digital manufacturing so their connected industrial customers can seize IIoT’s many rewards.

Designing IT/OT networks with cybersecurity in mind


“What previously was protected by proprietary OT protocols and hard-wired connectivity across the factory floor is now open game to hackers trying to do their dirty work through targeted IoT endpoints — whether a smartphone, field engineer’s tablet, connected variable speed drive, or any IoT-enabled asset.”

So what to do? Where to start?

Let’s start with an attitude adjustment. While most ICS environments have an implicit trust model, we need to surround them with a resilient architecture built on a zero-trust approach. In short: allow only the absolutely necessary access to equipment and applications. It is a significant change and will require significant buy-in from all involved.

How to get there?

Segmentation – contain outbreaks and control access


Segmentation gives you the opportunity to stop those outbreaks while controlling access, whether it be a whole department or an individual switch port connected to a robot.

Start high – where attacks first enter the factory – at the industrial DMZ. It is shocking how many modern Fortune 500 factories lack a properly managed firewall separating them from the enterprise network. Much of the impact of WannaCry and NotPetya could have been prevented by a properly configured firewall. The world’s most widely deployed next-generation firewall, Cisco Firepower, can help.

Next, work your way down through the Purdue model, from Level 3 down to individual machines, increasing granular control (micro-segmentation) along the way. You will need to understand the production lines, their relationships, and their componentry. To do that, you need visibility.

Visibility builds better segmentation


Visibility into your factory and processes is requisite for your segmentation decisions. You MUST find the process communication trails and work with the automation engineers to determine what is critical to ongoing operations. Cisco Stealthwatch can trace the full range of manufacturing communication patterns, from the factory floor, across the IDMZ to corporate ERP systems, to your favorite robot vendor’s cloud-based analytics platform.

With an understanding of system communications, you can then build out a network architecture with modern network equipment. Look to resilient design concepts with multiple possible paths. Build for the future with Software-Defined Access (SDA). Cisco drives these policies through the Identity Services Engine (ISE), which consumes device or user identities directly or through pxGrid integrations with other Cisco products like Industrial Network Director (IND) or third-party tools like Nozomi and others.

Visibility for the big picture


Visibility also drives understanding of process challenges including security threats.

Your DMZ NGFW should be able to determine whether telemetry feeds are queried or simply pushed to analysis tools in the cloud or back at the research lab. Coupling your historian’s connection history at the plant with what is seen at the enterprise, and beyond to the cloud-based analysis site, can cross numerous organizational and network boundaries through the stitching capabilities of Stealthwatch and Stealthwatch Cloud.

Visibility includes understanding the endpoints in the factory. Are your engineering workstations or historians running without endpoint protections, making them potentially vulnerable to malware? ISE can tell you if endpoint protections are in place, and of course you can remediate that threat by deploying AMP (Advanced Malware Protection). And the plant floor itself? With the knowledge that your metal press has a vulnerable HMI (as determined by IND and ISE) and that the next maintenance outage is seven months away (as determined by the production calendar), you can quickly apply a Talos-produced Snort rule to protect that machine via an ISA 3000 industrial NGFW at its gateway edge.
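For illustration, a compensating rule of that kind might look like the following. Every specific here (the IP address, port, URI pattern and SID) is a hypothetical placeholder; real protection would come from Talos-authored signatures matched to the actual vulnerability.

```
# Drop exploit attempts aimed at the press HMI until the patch window opens.
# All specifics below (IP, port, URI, sid) are illustrative placeholders.
drop tcp any any -> 10.1.20.15 80 (msg:"Press HMI exploit attempt blocked"; \
    flow:to_server,established; content:"/cgi-bin/vulnerable"; http_uri; \
    classtype:attempted-admin; sid:1000001; rev:1;)
```

Deployed inline on the ISA 3000 in front of the machine, a rule like this buys time until the next maintenance outage allows a proper patch.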

Close the loop with a security control loop


Just as a control loop takes inputs and adjusts the process, so, too, should your security see the state of your process system’s security and actuate the proper controls. Proper security controls are dynamic and adaptable. A microsegmentation capable architecture and network is the base from which you enable visibility into new equipment and behaviors. Visibility provides the knowledge (with help from the operations team) to drive the policies which the network and security controls will enforce. And this process is as connected as your modern factory because we can stitch together the factory activity with the enterprise – crossing former boundaries to create stronger and more secure bonds.

Now’s the time to secure your factory floor


All this represents a dramatic shift for manufacturers, OT professionals, and even IT departments.

At Cisco, we’re proud to stand at the forefront of the effort, alongside our partners at Schneider Electric and Aveva, to secure digital manufacturing and prevent negative outcomes.

Tuesday 26 March 2019

Rakuten Cloud Platform is a Blueprint for the Future

Things that seem obvious today were not always that way. At some point, someone with a bit of courage and a flash of insight makes a bold move—like sticking a digital camera on the back of a phone. The rest of the world responds with a collective “of course!” and the world is changed, never to look back.

We had one of those moments a couple of weeks ago at Mobile World Congress in Barcelona, when Rakuten announced their Rakuten Cloud Platform, or RCP. Mickey Mikitani, Chairman, President and CEO of Rakuten, introduced RCP the following way:

Rakuten has a founding vision of empowering people to realize their dreams and a history of disrupting the status quo to take the lead, in industries from e-commerce to fintech and digital content. We are very excited to launch a mobile network in Japan that is set to become the first choice of consumers and change global standards in telecommunications.

If you want to better understand how Rakuten is building RCP, I have some deep dive technical links at the end of this blog. For now, I wanted to explore why Rakuten decided to invest the time, effort and resources in building RCP.

For a while now, there has been growing tension between apps and services and the infrastructure they depend upon. This tension has increased as the center of gravity for app and service deployment has moved into the cloud. This, in turn, has given rise to cloud native architectures, which further exacerbate the stresses on infrastructure that was not originally designed for this brave new world.

At the customer end of things, we are now engaging with customers in more ways and in more places. Not only do we have an explosion of phones and tablets, we are about to see an even larger explosion of connected cars, drones, cameras, refrigerators, and—my favorite—cows. Customers expect consistent and predictable services regardless of whether they are at home, at play or on the move. Almost every network operator is making the investments to keep up with this sea change.

But interestingly enough, app and service owners are also looking to take greater control of their own destiny. We saw the first movement in this direction with the large web players getting involved with projects like the Telecom Infra Project (TIP) and CORD. Their objective was to help service providers upgrade the infrastructure on which those web players depended to meet their growth goals. Netflix has, for years, worked with ISPs to help improve the streaming experience of their subscribers. Rakuten has simply taken the logical next step. They are a cloud-first, mobile-first business, and now they are building out bespoke infrastructure that is precisely calibrated to their needs. Moreover, as their business grows and evolves, Rakuten can be assured that their infrastructure will keep up with minimal lag.


While not everyone wants to be or can be Rakuten, it is worthwhile understanding what they did and why they did it, as that insight will be valuable to anyone contemplating an architecture refresh. Tareq Amin, CTO of Rakuten, built RCP around three guiding principles:

◈ Zero Touch, End-to-End Automation and Assurance
◈ Software Defined Programmable Infrastructure
◈ Distributed and Common Carrier Grade Telco Cloud

Looking at the first two principles, we can tell this is an architecture meant to be run by machines (hello SkyNet!). When we look at the scale of Rakuten’s vision and their goals for service agility and customer experience, it’s really the only feasible approach. For velocity, agility and cost reasons, humans simply cannot be inline to the day-to-day operations of RCP. To make this a reality, two things need to happen. First, every element of RCP needs to be programmable. For most of you reading this, deployment of programmable infrastructure (and the ability to take advantage of it) is opportunistic and incremental. Any progress is good news; however, there is a significant difference between 99% programmable and 100% programmable. Anything less than 100% means that at some point, someone is still sitting at a keyboard, introducing friction into your workflows and acting as a constraint on your business. Cisco’s contribution to Rakuten’s programmable infrastructure goal was our NFVI solution and our IOS-XE, IOS-XR and ACI-based transport platforms. They all provide rich, capable, programmatic interfaces that met all of Rakuten’s design requirements—no keyboards required.
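To make “programmatic interfaces” concrete, here is a minimal sketch of querying an IOS-XE device over RESTCONF (RFC 8040), one of the interfaces these platforms expose. The hostname and credentials are placeholders, and in a deployment like RCP this call would live inside an orchestrator, not a standalone script.

```python
import base64
import json
import urllib.request

def restconf_url(host, path):
    """Build a RESTCONF data-resource URL; /restconf/data is the
    conventional API root on IOS-XE."""
    return f"https://{host}/restconf/data/{path}"

def restconf_get(host, path, user, password):
    """GET a YANG-modeled resource as JSON over RESTCONF."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(restconf_url(host, path), headers={
        "Accept": "application/yang-data+json",
        "Authorization": f"Basic {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example call (not run here): list interfaces via the standard IETF model.
# restconf_get("router.example.net", "ietf-interfaces:interfaces",
#              "admin", "secret")
print(restconf_url("router.example.net", "ietf-interfaces:interfaces"))
```

Because the same YANG-modeled interface exists for configuration writes as well as reads, an automation layer can drive the device end to end, which is precisely the 100%-programmable property discussed above.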

In concert with programmability is automation. Much like programmability, partially automating a service chain is helpful, but having 100% coverage of your end-to-end service chain really unlocks new possibilities around how you build and deliver services. For example, operationally you lower the costs of operation and reduce the time to stand up and tear down service chains. That opens the door to more dynamic capacity management, auto-scaling and assurance management. That increases your efficiency and utilization, which further lowers opex and frees budget dollars for further investment, and a virtuous cycle is spawned. From a customer experience perspective, real benefit comes from minimizing the lag between the creation of services and the ability of the infrastructure to support them. This frees service owners to iterate offers more quickly and experiment more easily, and it makes customization and personalization more feasible.


Rakuten’s RCP automation framework is two-tiered to provide flexibility and horizontal scalability. The bottom tier comprises four domains: central data center, WAN, edge data center and far edge data center. The domain-level automation is built from a combination of Cisco Network Services Orchestrator (NSO), the NFVO Function Pack for NSO, Cisco Elastic Services Controller (ESC) as a virtual network function (VNF) manager, and, on an interim basis, other partner VNF managers—Rakuten’s mid-term goal is to consolidate on ESC. NSO then uses a feature called Layered Services Architecture (LSA) to tie those four domains together with a cross-domain instance of NSO. Together, this framework provides RCP with fast, dependable, scalable, sophisticated end-to-end service orchestration. Rakuten then takes advantage of the rich northbound software interfaces NSO offers to tie the automation framework to their OSS and BSS systems.

The final principle, distributed and common carrier grade telco cloud, is a reflection of the changing nature of traffic. It no longer makes sense to try and serve subscribers from some far-away central data center. Providers can also no longer make assumptions as to where their customers are located. Instead, RCP needs to be able to serve customers wherever they are, whichever device they are on, whatever service they are consuming. For both customers and service owners, Rakuten needs to be able to pervasively deliver consistent capabilities and predictable customer experience. Let’s take a closer look at how they do that and where we contribute to the effort.

A “telco cloud” is essentially a private cloud optimized for hosting virtualized network functions (VNFs). It is built from NFV Infrastructure (NFVI) that hosts the VNFs and a management and orchestration layer (MANO—discussed earlier). Cisco Virtualized Infrastructure Manager (CVIM) is an open, modular containerized NFVI software solution that forms the building blocks of RCP. The RCP deployment embeds Red Hat Enterprise Linux and Red Hat OpenStack Platform. Beyond support for Cisco and 3rd-party VNFs, CVIM provides key features like security hardening, automated zero-touch provisioning and full lifecycle management of VNFs. Underpinning it all, Cisco ACI and Cisco Nexus 9000 series switches link network, compute and storage resources.

RCP’s CVIM building blocks are flexible and fungible, so a collection of CVIMs can be adapted to support any service or application today or in the future. This gives Rakuten great cost efficiencies with RCP, but it also gives service owners great freedom to build new services and get them deployed quickly without worrying about what the infrastructure can or cannot do. At the same time, these basic NFVI building blocks can be deployed anywhere along the service chain that makes sense, since managing a CVIM instance in the central data center is no different than managing one in a far edge data center. Along those same lines, VNFs, content and resources can be placed and even moved around on the fly to optimize operations and customer experience—distributing them to wherever makes the most sense.

Mickey Mikitani stated, “[w]ith automation and virtualization, Rakuten is redefining how mobile networks are designed and how services can be consumed.” RCP seems ready to do exactly that. Not only will their investment in RCP help Rakuten and its customers, it will serve as a lab for their peers to learn from and for the industry to evolve.