Wednesday, 30 May 2018

Intent-Based Networking in the Cisco Data Center


We’ve continued to expand our solutions to deliver Intent-Based Networking to our customers. Our years of designing, building, and operating networks tell us that you just can’t add automation to existing processes. The scale, complexity, and new security threat vectors have grown to a point where we need to rethink in some fundamental ways how networks work and, beyond that, how networks and applications interact. Let’s dive into what Cisco means by Intent-Based Networking, and how it can help you run your data center more efficiently and more intelligently for your business.

Networking is shifting from a box-by-box, configure-monitor-troubleshoot model to a model where the network globally understands the intent, or requirements, that need to be satisfied, automatically realizes them, and continuously verifies that the requirements are met. This process has three key functions: translation, activation, and assurance.

Understanding the “intent” cycle in the data center



Translation: The Translation function begins with the capture of intent. Intent is a high-level, global expression of what behavior is expected from the infrastructure to support business rules and applications. Intent can be captured from multiple locations; for instance, users may provide requirements directly, or built-in profiling tools capable of analyzing behavior in the network and workloads may generate them automatically. Once intent is captured, it must be translated to policy and validated against what the infrastructure can implement.

Activation: The Activation function installs these policies and configurations throughout the infrastructure in an automated manner. This covers not just the network elements (physical and virtual switches, routers, and so on) but also software-based agents installed directly in the workload. Additionally, as data center networks become multicloud, this must work across multiple data centers, colocation environments, and even public clouds.

Assurance: The last function, Assurance, is an important part of what makes Intent-Based Networking unique. It’s a new function we’ve never been able to offer in the network before. Assurance is the continuous verification, derived from network context and telemetry, that the network is operating in alignment with the programmed intent. It offers a continuous ground truth about not just what’s happening but also what’s possible in your network. It helps you confidently make changes with advance knowledge of how they will impact your infrastructure.

What Intent-Based Networking means for the data center


Now let’s think about Intent-Based Networking and its translation, activation, and assurance functions in the context of some of our data center products: Cisco ACI, Nexus 9000, Network Assurance Engine (NAE), and Tetration.

Cisco ACI offers a policy-based SDN fabric capable of providing the translation and activation functions for the network. The Application Policy Infrastructure Controller (APIC) exposes a policy model abstraction that can be used to capture higher-level requirements and automatically convert them into low-level, concrete configuration. This configuration is automatically and transparently communicated to the network infrastructure, including Nexus 9000 switches, as part of the activation process.
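
As an illustration of that translation-to-activation flow, here is a minimal sketch that uses the documented APIC REST API with the Python requests library to authenticate and push a simple tenant object. The controller address, credentials, and tenant name are hypothetical placeholders; APIC takes care of rendering the resulting policy onto the fabric.

```python
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "example-password"}}}

session = requests.Session()
# Authenticate; APIC returns a token cookie that the session reuses on later calls.
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH, verify=False).raise_for_status()

# Push a high-level policy object (a tenant) into the policy universe.
tenant = {"fvTenant": {"attributes": {"name": "BorgPress", "status": "created,modified"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()
print(resp.json())
```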

Cisco Network Assurance Engine fulfills the assurance function in the network. NAE was designed to integrate with both the network devices as well as a network controller such as the APIC. NAE reads policy and configuration state from APIC as well as configuration, dynamic and hardware state on each device. Using this information to build a mathematical model of the network, NAE is able to proactively and continuously verify that the network is behaving in accordance with the operator intent and policy captured in the APIC. By codifying knowledge of thousands of built-in failure scenarios that run continuously against the model, NAE can identify problems in the network before they lead to outages and provide a path to remediation. It is precisely this closed-loop behavior that characterizes an Intent-Based Networking design.
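
NAE’s model-based checks are proprietary, but the core idea of continuously comparing intent against rendered state can be sketched in a few lines. The data structures below are purely illustrative and are not NAE’s actual model or API.

```python
# Illustrative only: compare intended whitelist policies against the
# rules actually rendered in device state and flag anything missing.
intended = {("web", "app", 8080), ("app", "db", 3306)}   # what the operator asked for
rendered = {("web", "app", 8080)}                        # e.g. collected from devices

violations = intended - rendered
for src, dst, port in violations:
    print(f"Intent not realized: {src} -> {dst} on port {port}")
```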

Cisco Tetration contributes to multiple functions in an Intent-Based Network at the application and workload level. Its application dependency mapping capabilities play a critical role in profiling applications and ultimately capturing intent. Its cloud workload security and segmentation capabilities provide a means of delivering (or activating) a highly automated, zero-trust security environment. This includes advanced capabilities such as detecting software vulnerabilities and identifying deviations in process behavior, in addition to building whitelist segmentation policies based on real-time telemetry. And Tetration’s network performance, insight, and forensic capabilities provide visibility and assurance of what is occurring in your environment. It can be described as a time machine or “DVR” due to its ability to play back past network behavior and model future trends.

Friday, 25 May 2018

7 Cisco Strategies for Overcoming Common Cloud Adoption Challenges

The recently released Cisco Global Cloud Index study predicts that by 2021, 94 percent of all workloads and compute instances will be processed in the cloud. Public cloud is expected to grow faster than private cloud and by 2021 the majority share of workloads and compute instances will live in the cloud. Many organizations are expected to adopt a hybrid approach to cloud as they transition some workloads and compute instances from internally managed private clouds to externally managed public clouds.


While cloud represents an incredible opportunity for organizations, the cloud services provider (CSP) market continues to be very competitive. CSPs are increasingly focused on specialization and on differentiating themselves through their core services portfolio as well as their vertical-specific offerings.

CIOs and CTOs are therefore faced with determining the right mix of cloud services and integrating the selected services into their existing IT portfolio. Multicloud adoption is a journey, and one that can be met with numerous challenges.

Below are seven common cloud adoption challenges we have observed, along with strategies to overcome each.


◈ Adopt a common architectural framework that provides a common language between business and IT
◈ Think in terms of the city analogy – establish a governance model that will drive appropriate consideration of multiple perspectives
◈ Align investment decision making so that architectural impact is considered


◈ Plan for changes in your operating model
◈ Consider changes based on the Cisco Operating Model Transformation Map
◈ Execute changes across five key streams
     ◈ Image of Success
     ◈ Change Leadership
     ◈ Metrics
     ◈ Roles & Responsibilities
     ◈ Costing


◈ Shift from traditional waterfall funding methods to more agile funding processes
◈ Understand the TCO for existing and future services
◈ Develop an understanding of potential cloud providers’ cost structure
◈ Understand what hardware internal services are currently running on and where that equipment is in its lifecycle
◈ Develop a single pane of glass view that showcases current cloud consumption


Your cloud strategy must deliver the right operational and financial outcomes:

◈ Understand and align business and IT priorities
◈ Develop appropriate prioritization / sequencing
◈ Build the value case for your proposed approach
◈ Create an implementation plan that delivers incremental value rapidly
◈ Validate value achievement


◈ Maintain an architectural perspective
◈ Align technology to business needs
◈ Technical agility creates business agility
◈ Optimize tactical technical decisions into a strategic technical architecture
◈ Over-engineering vs. no engineering: choose carefully
◈ Fail fast to win quickly, and be ready to adjust
◈ Include a continuous improvement model through a project-based feedback loop


◈ Make sure you are aligned to your “why” and can assess options based on value
◈ Invest the time to create a migration strategy that contemplates options and tradeoffs rather than just lifting and shifting
◈ Invest some effort to understand or validate your current environment
◈ Understand the elements of a services approach and consider what you can adopt


◈ Ensure your change management plan includes a description of the new value delivery model
◈ Paint a picture of the future state that is broadly understood throughout the organization
◈ Define and share new roles and responsibilities
◈ Anticipate the impact of automation on previous processes and plan for the migration of resources to higher value efforts
◈ Publicize the successful shifting of people to new (and more valuable) roles

Organizations may encounter the need for one, some or all of these strategies based on their adoption roadmap.


Cisco Cloud Advisory Services can help organizations navigate through these challenges and establish an actionable multicloud strategy.


Wednesday, 23 May 2018

Multicloud Workload Protection – Cisco Tetration Welcomes Container Workloads

The modern data center has evolved in a brief period of time into the complex environments seen today, with extremely fast, high-density switching pushing large volumes of traffic, and multiple layers of virtualization and overlays. The result: a highly abstract network that can be difficult to monitor, secure, and troubleshoot. At the same time, networking, security, operations, and applications teams are being asked to increase their operational efficiency and secure an ever-expanding attack surface inside the data center. Cisco Tetration™ is a modern approach to solving these challenges without compromising agility.

It’s been almost two years since Cisco publicly announced Cisco Tetration™. And, after eight releases of code, there are many new innovations, deployment options, and capabilities to be excited about.

Container use is one of the fastest growing technology trends inside data centers. With the recently released Cisco Tetration code (version 2.3.x), containers join an already comprehensive list of streaming telemetry sources for data collection. Cisco Tetration now supports visibility and enforcement for container workloads, and that is the focus of this blog.

Protecting data center workloads 


Most cybersecurity experts agree that data centers are especially susceptible to lateral attacks from bad actors who attempt to take advantage of security gaps or lack of controls for east-west traffic flows.  Segmentation, whitelisting, zero-trust, micro-segmentation, and application segmentation are all terms used to describe a security model that, by default, has a “deny all,” catch-all policy – an effective defense against lateral attacks.


However, segmentation is the final act, so to speak.  The opening act? Discovery of policies and inventory through empirical data (traffic flows on the network and host/workload contextual data) to accurately build, validate, and enforce intent.

To better appreciate the importance of segmentation, Tim Garner, a technical marketing engineer from the Cisco Tetration Business Unit, has put together an excellent blog that explains how to achieve good data center hygiene.

Important takeaway #1:  To reduce the overall attack surface inside the data center, the blast radius of any compromised endpoint must be limited by eliminating any unnecessary lateral communication. The discovery and implementation of extremely fine-grained security policies is an effective but not easily achieved approach.

Important takeaway #2:  A holistic approach to hybrid cloud workload security must be agnostic to infrastructure and inclusive of current and future-facing workloads.

Containers are one of the fastest growing technology trends inside the data center. To learn more about how Cisco Tetration can provide lateral security for hybrid cloud workloads, inclusive of containers, read on!

On to container support within Cisco Tetration...

The objective?  To demonstrate visibility and enforcement inclusive of current and future workloads – that is, workloads that are both virtual and containerized. To simulate a real-world application, the following deployment of a WordPress application called BorgPress is leveraged.


A typical, but often difficult to keep up with, approach to tracking the evolution of an application’s lifespan is by using a logical application flow diagram. The diagram below documents the logical flow between the application tiers of BorgPress.  Network or security engineers responsible for implementing the security rules that allow required network communications through a firewall or security engine rely on such diagrams.


A quickly growing trend among developers is the adoption of Kubernetes, an open-source platform (originally from Google) for managing containerized applications and services. Bare metal servers still play a significant role 15 years after virtualization technology arrived. It’s expected that, as container adoption occurs, applications will be deployed as hybrids: a combination of bare metal, virtual, and containerized workloads. Therefore, BorgPress is deployed as a hybrid.

The WordPress web tier of BorgPress is deployed as containers inside a Kubernetes cluster. The proxy and database tiers are deployed as virtual machines.

The Kubernetes environment is made up of one master node and two worker nodes.


Discovery of application policies is a more manageable task for containerized applications than for traditional workload types (bare metal or virtual machines), because container orchestrators leverage declarative object configuration files to deploy applications. These files contain embedded information about which ports are to be used. For example, BorgPress uses a YAML file (specifically, a replica set object, shown in the figure below) to describe the number of wordpress containers to deploy and the port (port 80) on which to expose each container.
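
Here is a minimal sketch of such a declarative object, built with the official Kubernetes Python client rather than raw YAML. The image tag and labels are assumptions, but the shape mirrors the replica set described above: three wordpress replicas, each exposing port 80.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the target cluster

replica_set = client.V1ReplicaSet(
    api_version="apps/v1",
    kind="ReplicaSet",
    metadata=client.V1ObjectMeta(name="bp-wordpress"),
    spec=client.V1ReplicaSetSpec(
        replicas=3,  # three wordpress containers, as in the BorgPress example
        selector=client.V1LabelSelector(match_labels={"app": "wordpress"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "wordpress"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="wordpress",
                    image="wordpress:4.9",  # hypothetical image tag
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_replica_set(namespace="default", body=replica_set)
```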


To allow external users access to the BorgPress application, Kubernetes uses an external service object of type NodePort to expose a dynamic port within the default range of 30000-32767. Traffic received by the Kubernetes worker nodes destined for port 30000 (the port the service is defined to listen on for incoming BorgPress requests) is load-balanced to one of the three BorgPress endpoints.
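
A companion sketch for that NodePort service, again using the Kubernetes Python client; the object names are hypothetical, but it pins the node port to 30000 as in the example above.

```python
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="bp-web"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "wordpress"},
        ports=[client.V1ServicePort(
            port=80,          # cluster-internal service port
            target_port=80,   # container port on each wordpress pod
            node_port=30000,  # exposed on every worker node (default range 30000-32767)
        )],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```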


Orchestrator integration

In a container ecosystem, workloads are mutable and often short-lived. IP addresses come and go; the same IP that is assigned to workload A might, in the blink of an eye, be reassigned to workload B. As such, the policies in a container environment must be flexible and capable of being applied dynamically. A declarative policy that is abstract hides the underlying complexity. Lower-level constructs, such as IP addresses, are given context, for example through the use of labels, tags, or annotations. This allows humans to describe a simplified policy and systems to translate that policy.

Cisco Tetration supports an automated method of adding meaningful context through user annotations.  These annotations can be manually uploaded or dynamically learned in real time from external orchestration systems.  The following orchestrators are supported by Cisco Tetration (others can also be integrated through an open RESTful API):

◈ VMware vCenter
◈ Amazon Web Services

In addition, Kubernetes and OpenShift are now also supported as external orchestrators. When an external orchestrator is added (through Cisco Tetration’s user interface) for a Kubernetes or OpenShift cluster, Cisco Tetration connects to the cluster’s API server and ingests metadata, which is automatically converted to annotations prefixed with an “orchestrator_” tag.
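
The metadata Tetration ingests from the API server is the same pod detail you can read yourself with the Kubernetes Python client. The short sketch below simply lists that raw material (names, IPs, labels), which Tetration then converts into “orchestrator_”-prefixed annotations; the namespace is an assumption.

```python
from kubernetes import client, config

config.load_kube_config()

# Read the same pod metadata a Kubernetes external orchestrator integration sees.
for pod in client.CoreV1Api().list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.pod_ip, pod.metadata.labels)
```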


In the example below, filters are created and used within the BorgPress application workspace to build abstract security rules that, when enforced, implement a zero-trust policy.


Data collection and flow search

To support container workloads, the same Cisco Tetration agent used on the host OS to collect flow and process information is now container-aware and capable of doing the same for containers. Flows are stored inside a data lake that can be queried using out-of-the-box filters or directly from annotations learned from the Kubernetes cluster.
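
Those flows can also be queried programmatically. Below is a minimal sketch using Cisco’s tetpyclient against the OpenAPI flow search endpoint; the cluster URL, credentials file, and the exact payload shape (time window plus a simple filter) are assumptions you should confirm against your cluster’s API documentation.

```python
import json
from tetpyclient import RestClient

# Placeholder endpoint and credentials file.
restclient = RestClient("https://tetration.example.com",
                        credentials_file="api_credentials.json",
                        verify=False)

# Hypothetical query: flows to the BorgPress database port over a one-day window.
payload = {
    "t0": "2018-05-01T00:00:00-0000",
    "t1": "2018-05-02T00:00:00-0000",
    "limit": 100,
    "filter": {"type": "eq", "field": "dst_port", "value": 3306},
}
resp = restclient.post("/flowsearch", json_body=json.dumps(payload))
print(resp.json())
```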


Policy definition and enforcement

Application workspaces are objects for defining, analyzing, and enforcing policies for a particular application.  BorgPress contains a total of 6 virtual machines, 3 containers, and 15 IP addresses.

Scopes are used to determine the set of endpoints that are pulled into the application workspace and thus are affected by the created policies that are later enforced.

In the example below, a scope, BorgPress, is created that identifies any endpoint that matches the four defined queries.  The queries for the BorgPress scope are based on custom annotations that have been both manually uploaded and learned dynamically.


Once a scope is created, the application workspace is built and associated with the scope. In the example below, a BorgPress application workspace is created and tied to the BorgPress scope.


Policies are defined using prebuilt filters inside the application workspace to build segmentation rules. In the example below, five default policies have been built that define the set of rules needed for BorgPress to function, based on the logical application diagram discussed earlier. The orange boxes with a red border are filters that describe the BorgPress wordpress tier, which abstracts or contains the container endpoints. The highlighted yellow box shows a single rule that allows any BorgPress database server (there are three virtual machine endpoints in this tier) to provide a service on port 3306 to the consumer, which is a BorgPress database HAProxy server.


To validate these policies, live policy analysis is used to cross-examine every packet of a flow against the five policies or intents and then classify each as permitted, rejected, escaped, or misdropped by the network.  This is performed in near-real time and for all endpoints of the BorgPress application.


It’s important to point out that up to this point there is no actual enforcement of policies.  Traffic classification is just a record of what occurred on the network as it relates to the intentions of the policy you would like to enforce.  This allows you to be certain that the rules you ultimately enforce will work as intended.  Through a single click of a button, Cisco Tetration can provide holistic enforcement for BorgPress across both virtual and containerized workloads.


Not every rule needs to be implemented on every endpoint. Once “Enforce Policies” is enabled, each endpoint, through a secure channel to the agent, receives only its required set of rules. The agent leverages the native firewall on the host OS (iptables or Windows Firewall) to translate and implement policies.
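
For a sense of what that translation looks like, here is a conceptual sketch of the kind of host rule that would be rendered for the database policy described earlier (allow the HAProxy consumer to reach MySQL on port 3306). The IP address is a placeholder, and this is an illustration of the idea, not the agent’s actual implementation.

```python
import subprocess

# Allow the (hypothetical) database HAProxy to reach MySQL on this endpoint.
subprocess.run(
    ["iptables", "-A", "INPUT", "-p", "tcp",
     "-s", "10.1.20.15",          # placeholder HAProxy address
     "--dport", "3306", "-j", "ACCEPT"],
    check=True,
)

# Everything not explicitly whitelisted is dropped (zero-trust default).
subprocess.run(["iptables", "-P", "INPUT", "DROP"], check=True)
```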

The set of rules can be viewed from within the Cisco Tetration user interface or directly from the endpoint.  In the example below, the rules received and enforced for the BorgPress database endpoint db-mysql01, a virtual machine, are shown.  The rules match exactly the policy built inside the application workspace and are translated into the correct IPs on the endpoint using iptables.


Now that we’ve seen the rules enforced in a virtual machine for BorgPress, let’s look at how enforcement is done on containers.  Enforcement for containers happens at the container namespace level. Since BorgPress is a Kubernetes deployment, enforcement happens at the pod level.  BorgPress has three wordpress pods running in the default namespace.


Just as with virtual machines, we can view the enforcement rules either in the Cisco Tetration user interface or on the endpoint itself. In the example below, the user interface is showing the host profile of one of the Kubernetes worker nodes, k8s-node02. With container support, a new tab next to the Enforcement tab (“Container Enforcement”) shows the list of rules enforced on each pod.


At this point all endpoints, both virtual and container, have the necessary enforcement rules, and BorgPress is almost deployed with a zero-trust security model. Earlier I discussed the use of a type of Kubernetes service object called a NodePort. Its purpose is to expose the BorgPress wordpress service to external (outside the cluster) users. As the logical application flow diagram illustrates, the Web-HAProxy receives incoming client requests and load-balances them to the NodePort that every Kubernetes worker node listens on. Since the NodePort is a dynamically generated high-end port number, it can change over time, and this presents a problem. To make sure the Web-HAProxy always has the correct rule to allow outgoing traffic to the NodePort, Cisco Tetration learns about the NodePort through the external orchestrator. When policy is pushed to the Web-HAProxy, Cisco Tetration also pushes the correct rule to allow traffic to the NodePort. As you may have noticed in the application workspace image earlier, there is no policy definition or rule for NodePort 30000 to allow communication from Web-HAProxy to BP-Web-Tier. However, looking at the iptables of Web-HAProxy (see figure below), you can see that Cisco Tetration correctly added a rule to allow outgoing traffic to port 30000.


The Importance of an Information Security Strategy in Mergers and Acquisitions


Organizations have many options when it comes to growing. Many grow by hiring additional staff when it comes time to expand. Others grow through mergers and acquisitions with related companies, or companies that represent an entryway into a desired new vertical or territory. Organizations that engage in M&A should include an information security strategy as part of the process.

Headlines in 2018 include several data breaches where the acquired company led to an incident for the acquirer. A large travel site reported a data breach of information on 880,000 payment cards in March of 2018. The attack was believed to have compromised systems months earlier. The investigation determined that the incident was potentially linked to legacy IT systems from an acquired company. Failure to update or integrate these systems left the parent company potentially vulnerable.

A Baltimore-based apparel manufacturer reported a data breach affecting customers who leverage the company’s sports tracking app. 150 million customer records associated with the app were compromised. The app creator was acquired by the parent company in 2015. Compromised data includes usernames, passwords and email addresses.

Companies with an acquisition strategy need to include information security in the M&A process. Many security tools can be leveraged to provide visibility into an organization’s network, users and information. These visibility tools should be used to determine the accessibility of information to both appropriate personnel and unauthorized parties. Understanding the vulnerabilities, network segmentation, access to assets and information, and asset lifecycle management are important negotiation metrics.

The acquiring company should be able to run visibility or vulnerability assessments of the target company as part of the negotiation. Vulnerability scanners help gather risk data. NetFlow and network traffic metadata tools provide visibility into the scope and nature of an organization’s traffic. This can help an organization identify and inventory assets. Visibility into web traffic, DNS queries, and applications in use all contribute to a view of an organization.

Vulnerable software report from AMP for Endpoints

These tools can help to establish where the target company stands in terms of risk mitigation and security posture. They can tell the acquiring company how many man-hours will be needed to bring the target company to the appropriate level of risk. An intelligent organization’s leadership understands that security is essential to all parts of the network. Proactive planning for growth and development must also be part of that security strategy.

Incident Response teams often use security tools to provide visibility into an organization following a data breach. These same tools can provide visibility into a target company’s information systems and networks. Use of these tools in advance of an acquisition can provide insight into the projects, security awareness training and even culture change necessary to understand the role of security in modern IT. Implementation of non-disclosure agreements can protect both the acquiring company and the target from leaks due to any gaps in the organization’s security posture.

Legacy systems have led to organizations appearing in the headlines. The brand damage, class action lawsuit payouts, data breach notifications, and payment for services such as identity theft protection are all avoidable. Introducing and executing on a strong information security strategy as part of the M&A process is one way for organizations to minimize risk exposure and to understand the challenges and steps to achieving their desired security posture.

Leaders in organizations are accountable for the risk and exposure of users, information, and networks. Visibility into these facets of an organization is key to ongoing security and to informed expansion, including mergers and acquisitions. The call to action for these organizational leaders focuses on that visibility. Research visibility, traffic profiling, application discovery, and vulnerability tools. Speak with the organization’s trusted advisors, both internal and external, about the tools available and their recommendations. Regularly speak with the organization’s business leaders about emerging markets and potential mergers. Create and maintain an open dialogue about the potential risks and exposures that come with M&A. Many business leaders understand the importance of security in day-to-day operations. Including potential future business expansion in that conversation will help to craft a strategic information security policy.

Sunday, 20 May 2018

How Cloud Native and Container Platforms Change the Way We Think about Networking

Networking has been a foundational component of our economy since the Internet days. In the early days, defining protocols and standards for how to connect, route, and interoperate local, metro, and wide area networks was critical to business strategy. The computer networking experts, with their TCP/IP, computer-centric view of the stack, went head to head with the telecommunications giants and their more traditional telephony-driven model of switching, FCAPS (Fault, Configuration, Accounting, Performance, and Security management), and the OSI model. As is often the case in these debates, both sides had good architecture and design principles, and in the end, while TCP/IP won the war, very important concepts were adopted into the network model to account for Quality of Service (QoS), traffic engineering, segmentation of network traffic for control and data planes, and hierarchy of the network. Simply stated, a flat network with no segmentation or hierarchy, from the network stack on the computer through the Internet, would cause fault, configuration, performance, and security issues. This was largely understood by all in the industry and IT.


TCP/IP Model Versus OSI Model


In the past several years, as the need for agility and for driving time to market down has grown, there has been a major mind shift that I have noticed in the largest of companies. Networking, with all its complexity in configuration, routing, and segmentation, was causing major delays in delivering the true speed and agility required by the business. There have been several attempts to address this using technologies like VPN, NAT, and predefined network blocks per deployment region. These solutions are static in nature, and given the speed at which technologies mature and innovation happens, they proved very limiting. In the industry, we have looked at automating and orchestrating the network parameters with an API and called this Software Defined Networking. SDN is definitely a step in the right direction; however, it is geared toward network administrators and not business owners or developers. Wikipedia has a good explanation of SDN and this simple diagram:


SDN Overview


While SDN is a great step forward, offering programmable APIs to the network, it is far too complex for the business owner or developer to program against (it is not in a developer’s language and not written in a developer’s model), and most importantly, it does not solve the issues of complexity and ease of configuration in a rapidly changing and software-centric world.

As cloud computing use increased in the public cloud, the abstraction of the underlying network became an important driver in adoption. Defining or understanding the network is no longer seen as important. As the industry moves to containers, the desire to simplify and flatten the network is rapidly becoming the new standard for cloud native and container networking, orchestration, and microservices architecture.

While this may appear to be the right direction, taking the path of least resistance, the network matters more today than it ever has before. Why, you ask? Let’s look at the traditional application architecture.


N-Tiered Application Model


In this model, you have the following built-in networking parameters:

◈ Presentation – On an isolated network with NAT/PAT, a firewall, and logical separation of traffic onto a VLAN that carries just web traffic. The control (routing) traffic is on a separate network from the web (data) traffic
◈ Logic – On a separate network isolated behind a firewall, on a different VLAN than the web traffic. The control (routing) traffic is on a separate network from the application logic (data) traffic
◈ Database – On a separate, isolated network behind a firewall, on a different VLAN than the web and application logic traffic. The control (routing) traffic is on a separate network from the database (data) traffic

Now let’s compare this to the cloud native architecture:


Cloud Native Application Model


In this new architecture, all traffic is on a common network with no isolation, no segmentation of control and data traffic, and it leverages the Linux kernel networking stack (why that is a very bad idea if you care about performance and scale is a topic for another blog). With the move to APIs and REST interfaces, there is an added layer of very chatty API traffic running over that very same network. All the complexity of the application architecture is handled by the network, which makes the network more critical today than ever before.

Now, I’m not saying it’s all about the network. As with all things in life, balance and focusing on what matters most is the best path to take (although it’s the path less traveled). What I propose is that the intelligence can be built into the network. I like the analogy that everyone uses for cloud native of pets versus cattle. My network analogy is that cattle need isolation, direction, and fences! How can we as an industry move to a more agile cloud native architecture and still corral the cattle? The answer comes from looking at this from two separate but equally important perspectives.

From a top-down (application developer) perspective, the network requirements need to be represented as business intent, with constraints that the business understands like latency, priority, security, and performance. If we enable a simple definition that is focused on the application’s business objectives, it will be easy for the business to define what they care about.

From a bottom-up (network administrator) perspective, the network administrators understand how to address business objectives and can easily and programmatically define network and network security policies to meet the requirements. This will require extending SDN capabilities to understand application policy, and new network specifications to be created to support cloud native architectures, but these patterns are well known in the networking world today. The next step will be to use data generated by the application, services, and components to enable analytics that address performance, security, reliability, and latency issues in real or near-real time.
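
To make the two perspectives concrete, here is a purely hypothetical sketch of how a developer-facing intent declaration might look, and how a network-facing translation could begin to map it onto policy parameters. None of this corresponds to a shipping API; every name and field is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class AppIntent:
    """Business-level intent a developer can reason about (hypothetical)."""
    name: str
    max_latency_ms: int
    priority: str              # e.g. "gold", "silver"
    allowed_consumers: list    # which tiers may talk to this service
    service_port: int

def translate(intent: AppIntent) -> dict:
    """Naive translation into network-facing policy parameters."""
    return {
        "qos_class": {"gold": "EF", "silver": "AF31"}.get(intent.priority, "BE"),
        "whitelist": [(consumer, intent.name, intent.service_port)
                      for consumer in intent.allowed_consumers],
        "latency_budget_ms": intent.max_latency_ms,
    }

print(translate(AppIntent("checkout", 20, "gold", ["web"], 8443)))
```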

Friday, 18 May 2018

What is the difference between Cisco ASA 5505 and 5510 series firewalls?

There are many differences between the ASA 5505 and the 5510. The 5505 is suitable for small offices and home networks, while the 5510 is more suitable for larger networks.


For those interested in Cisco study materials (practice exams, syllabus details, sample questions, etc.), I recommend www.nwexam.com/cisco

Wednesday, 16 May 2018

Cisco ACI and NetBrain: Delivering Application-Centric Network Operations

Introduction


We launched the Cisco ACI and NetBrain joint solution, which extends NetBrain’s core capabilities to Cisco ACI. This blog is meant to raise awareness of how this solution and its key features help customers transition to an application-centric data center and further optimize Day-2 data center network operations.


NetBrain is renowned for its network automation and troubleshooting capabilities and has regularly featured in Gartner’s Market Guide for Network Automation. NetBrain also boasts a strong base of more than 2,000 enterprise customers to complement its numerous awards and innovation recognition.

Cisco ACI is a market-leading, SDN-based networking technology that keeps applications as the focal point of data center infrastructure and enables the creation of an agile, open, and secure architecture.

Challenge


Transitioning to an application-centric data center and getting used to the new network operation model is a gradual process. To ensure a smooth transition, it is important to have tools to manage this heterogeneous network environment, where modern, SDN-based, open networking technologies are deployed alongside legacy networks. In such a scenario, customers struggle to get deep visibility and to effectively monitor and troubleshoot security and change management issues without impacting SLAs.

Solution



The NetBrain solution for Cisco ACI provides a single consistent view containing both network-centric and application-centric contexts of data centers, aiding enterprises to seamlessly transition to an application-centric, intent-based network enabled by Cisco ACI. The integration creates a scalable, versatile automation platform to provide network visualizations and automation for “Day 2” operation workflows, giving network operations teams deeper network visibility and enhanced workflow management for operational tasks.

NetBrain utilizes the ACI open REST API framework to collect network data, which feeds into its modeling engine. The resulting data model is used to dynamically create visualizations and serves as the foundation for automation and troubleshooting.
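
The collection side of that kind of integration can be approximated with a short sketch against the documented APIC REST API. The controller address and credentials are placeholders, and a real integration would ingest far more object classes than the fabric node inventory queried here.

```python
import requests

APIC = "https://apic.example.com"   # placeholder controller address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "example-password"}}}

session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH, verify=False).raise_for_status()

# Pull fabric node inventory (class topSystem) as one input to a network model.
nodes = session.get(f"{APIC}/api/node/class/topSystem.json", verify=False).json()
for item in nodes.get("imdata", []):
    attrs = item["topSystem"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["address"])
```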

Key Use Cases and Benefits


◈ Enhanced visibility across heterogeneous infrastructures

The solution provides numerous forms of visualization that allow users to visualize the ACI network alongside legacy networks and trace application paths end to end, among other capabilities, thereby providing a deep understanding of different design aspects in a heterogeneous environment.

◈ Real-time insights

With the solution, the user can superimpose different data sets from ACI as well as from other management systems in a single consistent view, gaining powerful change management, correlation, and troubleshooting capabilities.

◈ Cross-organization collaboration and Knowledge management

Using the integration, users can codify best practices and solutions to known problems in the form of Runbook automation routines and share them across the organization. This not only fosters better cross-organization collaboration but also helps enterprises move toward standardizing their troubleshooting workflows.

◈ Reduced resolution time

Leveraging executable Runbooks, the solution can monitor incidents and trigger a “Level-0” troubleshooting diagnosis as the first course of action. This capability can be further integrated with any ticketing and monitoring solution for expedited incident management.