Tuesday 26 January 2021

Introduction to Terraform with Cisco ACI, Part 1


Many customers are starting to look at third party tools such as Terraform and Ansible to deploy and manage their infrastructure and applications. In this five-part series we’ll introduce Terraform and understand how it’s used with Cisco products such as ACI. Over the coming weeks this blog series will cover:

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

4. Terraform Remote State and Team Collaboration

5. Terraform Providers – How are they built?

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci 

​​​​​​Infrastructure as Code

Before diving straight in, let’s quickly explore the category of tool in which Terraform resides, Infrastructure as Code (IaC). Rather than directly configuring devices through CLI or GUI, IaC is a way of describing the desired state with text files. These text files are read by the IaC tool (e.g. Terraform) which implements the required configuration.

Imagine a sysadmin needs to configure their vCenter/ESXi cluster, including data centers, clusters, networks, and VMs. One option would be to click through the GUI to configure each of the required settings. Not only does this take time, but it may also introduce configuration drift as individual settings are changed over the lifetime of the platform.

Recording the desired configuration settings in a file and using an IaC tool eliminates the need to click through a GUI, thus reducing the time to deployment.

Additionally, the tool can monitor the infrastructure (e.g. vCenter) and ensure the desired configuration in the file matches the infrastructure.
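To make this concrete, here is a minimal sketch of how part of that vCenter configuration could be described using the HashiCorp vSphere provider. The server address, credentials, and object names are illustrative placeholders, not a complete working configuration.

provider "vsphere" {
  # Illustrative placeholders - in practice, supply credentials via variables
  user                 = "administrator@vsphere.local"
  password             = "changeme"
  vsphere_server       = "vcenter.example.com"
  allow_unverified_ssl = true
}

# Desired state: a data center object named "dc-01" exists
resource "vsphere_datacenter" "dc" {
  name = "dc-01"
}

# Desired state: a VM folder inside that data center
resource "vsphere_folder" "vm_folder" {
  path          = "terraform-managed-vms"
  type          = "vm"
  datacenter_id = "${vsphere_datacenter.dc.moid}"
}

Running the tool against a file like this repeatedly simply confirms the objects still exist as described, which is what keeps the environment from drifting.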

Here are a few additional benefits provided by Infrastructure as Code:

  • Reduced time to deployment
    • See above
    • Additionally, infrastructure can quickly be re-deployed and configured if a major error occurs.
  • Eliminate configuration drift
    • See above
  • Increase team collaboration
    • Since all the configuration is represented in a text file, colleagues can quickly read and understand how the infrastructure has been configured
  • Accountability and change visibility
    • Text files describing the configuration can be stored in version control software such as Git, which also provides the ability to view the config differences between two versions.
  • Manage more than a single product
    • Most, if not all, IaC tools work across multiple products and domains, providing the above-mentioned benefits from a single place.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

There are a couple of components of Terraform which we will now walk through.

Configuration Files


When you run commands, Terraform will search the current directory for one or more configuration files. These files can be written either in JSON (using the extension .tf.json) or in the HashiCorp Configuration Language (HCL), using the extension .tf.
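For illustration, the same ACI tenant resource that appears in HCL later in this post could be written in the JSON variant as follows (a minimal, equivalent sketch):

{
  "resource": {
    "aci_tenant": {
      "terraform_tenant": {
        "name": "tenant_for_terraform",
        "description": "This tenant is created by the Terraform ACI provider"
      }
    }
  }
}

The examples throughout this series use the more readable HCL syntax.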

The following link provides detailed information regarding Terraform configuration files.


As an example, here is a basic configuration file to configure an ACI Tenant, Bridge Domain, and Subnet.

provider "aci" {
  # cisco-aci user name
  username = "${var.username}"
  # cisco-aci password
  password = "${var.password}"
  # cisco-aci url
  url      =  "${var.apic_url}"
  insecure = true
}

resource "aci_tenant" "terraform_tenant" {
  name        = "tenant_for_terraform"   
  description = "This tenant is created by the Terraform ACI provider"
}

resource "aci_bridge_domain" "bd_for_subnet" {
  tenant_dn   = "${aci_tenant.terraform_tenant.id}"
  name        = "bd_for_subnet"
  description = "This bridge domain is created by the Terraform ACI provider"
}

resource "aci_subnet" "demosubnet" {
  bridge_domain_dn                    = "${aci_bridge_domain.bd_for_subnet.id}"
  ip                                  = "10.1.1.1/24"
  scope                               = "private"
  description                         = "This subject is created by Terraform"

When Terraform runs (commands below), the ACI fabric will be examined to confirm if the three resources (Tenant, BD, subnet) and their properties match what is written in the configuration file.

If everything matches no changes will be made.

When there is a difference between the config file and ACI fabric, for example the subnet does not already exist in ACI, Terraform will configure a new subnet within the BD. Since the Tenant and BD already exist in ACI, no changes will be made to these objects.

Cross-checking the configuration file against the resources (e.g. the ACI fabric) reduces the amount of configuration drift, since Terraform will create/update/delete the infrastructure to match what's written in the config file.

Resources and Providers



A provider is responsible for understanding API interactions and exposing resources. Providers are generally IaaS (e.g. Alibaba Cloud, AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Cloud, DNSimple, Cloudflare); however, the ones we'll be looking at are Cisco ACI and Intersight.

Resources exist within a provider.

A Terraform resource describes one or more infrastructure objects, for example an ACI Tenant, EPG, Contract, or BD.

A resource block in a .tf config file declares a resource of a given type (e.g. "aci_tenant") with a given local name (e.g. "my_terraform_tenant"). The local name can then be referenced elsewhere in the configuration file.

The properties of the resource are specified within the curly braces of the resource block.

Here is an ACI Tenant resource as an example.

resource "aci_tenant" "my_terraform_tenant" {
  name        = "tenant_for_terraform"   
  description = "This tenant is created by the Terraform ACI provider"
}

To create a bridge domain within this ACI tenant, we can use the aci_bridge_domain resource and provide the required properties.

resource "aci_bridge_domain" "my_terraform_bd" {
  tenant_dn   = "${aci_tenant.my_terraform_tenant.id}"
  name        = "bd_for_subnet"
  description = "This bridge domain is created by the Terraform ACI provider"
}

Since a BD exists within a tenant in ACI, we need to link both resources together.

In this case the BD resource can reference a property of the Tenant resource using the format "${resource_type.resource_local_name.property}".

This makes it very easy to connect resources within Terraform configuration files.

Variables and Properties


As we've just learnt, resources can be linked together using the "${}" interpolation syntax. When you need to receive input from the user, you can use input variables, as described in the following link.
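For example, the var.username, var.password, and var.apic_url references used in the provider block above would typically be declared in a variables.tf file like this (a minimal sketch; the descriptions are illustrative):

variable "username" {
  description = "APIC login username"
}

variable "password" {
  description = "APIC login password"
}

variable "apic_url" {
  description = "URL of the APIC controller, e.g. https://apic.example.com"
}

Values can then be supplied through a terraform.tfvars file, -var command line flags, or TF_VAR_ environment variables, keeping credentials out of the configuration files themselves.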


For many resources, computed values such as an ID are also available. These are not hard-coded in the configuration file but are provided by the infrastructure.

They can be accessed in the same way as previously demonstrated. Note that in the following example the ID property is not hard-coded in the aci_tenant resource, yet it is referenced in the aci_bridge_domain resource. This ID was computed behind the scenes when the tenant was created and made available to any other resource that needs it.

resource "aci_tenant" "my_terraform_tenant" {
  name        = "tenant_for_terraform"   
  description = "This tenant is created by the Terraform ACI provider"
}

resource "aci_bridge_domain" "my_terraform_bd" {
  tenant_dn   = "${aci_tenant.my_terraform_tenant.id}"
  name        = "bd_for_subnet"
}

State Files


In order for Terraform to know what changes need to be made to your infrastructure, it must keep track of the environment. By default, this information is stored in a local file named "terraform.tfstate".

NOTE: It's possible to move the state file to a central location, and this will be discussed in a later post.

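Here is an abbreviated, illustrative sketch of the kind of record terraform.tfstate keeps for the tenant defined earlier. The fields are simplified, and the exact layout, version numbers, and provider address vary between Terraform and provider releases.

{
  "version": 4,
  "terraform_version": "0.13.5",
  "resources": [
    {
      "mode": "managed",
      "type": "aci_tenant",
      "name": "terraform_tenant",
      "provider": "provider[\"registry.terraform.io/ciscodevnet/aci\"]",
      "instances": [
        {
          "attributes": {
            "id": "uni/tn-tenant_for_terraform",
            "name": "tenant_for_terraform",
            "description": "This tenant is created by the Terraform ACI provider"
          }
        }
      ]
    }
  ]
}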

As you can see from examining this file, Terraform keeps a record of how your infrastructure should be configured. When you run the plan or apply command, your desired config (.tf files) will be cross-checked against the current state (.tfstate file) and the difference calculated.

For example, if a subnet exists within the config.tf file but not within terraform.tfstate, Terraform will configure a new subnet in ACI and update terraform.tfstate.

The opposite is also true. If the subnet exists in terraform.tfstate but not within the config.tf file, Terraform assumes this configuration is not required and will delete the subnet from ACI.

This is a very important point and can result in undesired behaviour if your terraform.tfstate file were to change unexpectedly for some reason.

Here's a great real-world example.


Commands


There are many Terraform commands available however the key ones you should know about are as follows:

terraform init


Initializes a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

During init, Terraform searches the configuration for both direct and indirect references to providers and attempts to load the required plugins.

This is important when using the Cisco infrastructure providers (ACI and Intersight).

NOTE: For providers distributed by HashiCorp, init will automatically download and install plugins if necessary. Plugins can also be manually installed in the user plugins directory, located at ~/.terraform.d/plugins on most operating systems and %APPDATA%\terraform.d\plugins on Windows.


terraform plan


Used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.

This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.



terraform apply


The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.


terraform destroy


The terraform destroy command destroys infrastructure managed by Terraform. It will ask for confirmation before destroying anything.


Source: cisco.com

Monday 25 January 2021

Co-Packaged Optics and an Open Ecosystem


Some technology transitions are easy to spot, and their adoption is inevitable. The only questions are when the transition happens and how quickly it will be adopted.

Co-packaged optics (CPO), or in-package optics (IPO) depending on your terminology, is one of those technologies. Bringing optics and switch silicon together in the same package creates a synergy between once disjoint and independent technologies thereby saving significant power.


The industry acknowledges this important evolution is coming but hasn’t been able to predict its arrival. Experts previously claimed that an ASIC’s electrical SerDes I/O would be unable to pass 10 Gbit/sec. When we pushed past this barrier, predictions were that 56 Gbit/sec SerDes would be impossible and therefore the entire 12.8 Tbps switch silicon generation would be based on CPO. Today we are in the 112 Gbit/sec SerDes generation with 25.6 Tbps silicon and we have yet to see the arrival of this CPO technology in any meaningful way, so what has changed and why are we talking about it more seriously now?

Before we get into when this critical transition will happen, I think it’s important to analyze why I say the technology transition is inevitable. The answer lies in two simple assumptions:

1. Analyzing historical growth trends provides a good indicator of future requirements.

2. We are at an inflection point in the industry where power has now become the ultimate limiting factor.

Analyzing Historical Trends – Switch Silicon


Analyzing the historical trends of switch silicon highlights two long-running trends.

1. Approximately once every two years, the bandwidth of the switch silicon doubles, tracking well with Moore’s Law, which states that the number of transistors in a piece of silicon doubles every two years.

2. To support the increase in total switch silicon bandwidth, both the speed and the number of SerDes increased. The SerDes speed increased from 10 Gbit/sec to 112 Gbit/sec, and the number of SerDes around the chip increased from 64 lanes to a projected 512 lanes in the 51.2 Tbps generation.

Unfortunately, Moore’s Law governs the number of transistors, which tracks more closely to digital logic than to the SerDes, whose designs include analog portions.


If we further analyze the data, we find that to achieve the 80x switch silicon bandwidth increase from 640 Gbps to 51.2 Tbps, the total power of the switch silicon increased by 9.5x. Said another way, although power efficiency improved with each new advanced CMOS node, total power still increases generation after generation.

Breaking this down further, the silicon core power increased by 7.4x, while the per-SerDes power increased by 2.84x. Coupled with the increasing number of SerDes, the total SerDes power in the switch silicon increased by 22.7x, causing the ratio of power spent on SerDes to increase dramatically over time.
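As a quick sanity check, that 22.7x figure follows directly from combining the per-SerDes power increase with the growth in lane count (64 to 512 lanes):

total SerDes power growth = 2.84 (per-SerDes power) x 512/64 (lane count) = 2.84 x 8 ≈ 22.7x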


From this historical context, we can extrapolate that a 51.2 Tbps switch device will arrive in 2022 and a 102.4 Tbps device will arrive in 2024, and that the power associated with the SerDes interconnect will continue to increase as a percentage of the total switch power and consume more of the system budget, ultimately dominating the total power consumption of the switch.


Analyzing Historical Trends – Copper to Optical


The next piece of historical context has to do with how devices are connected. When global communication infrastructure was first deployed it used copper cables. Today in the Service Provider and Web-Scale networks most links outside of the rack are optical while wiring within the rack is copper. As speeds increase the longest copper links need to move to optical. Eventually, all the links leaving a silicon package will be optical rather than electrical.


Power – The Ultimate Limiting Factor


In my post How Cisco Silicon One Can Help You Save Millions, I go into some of the reasons why power is so impactful for our customers and the environment. Taking a step back and thinking about the broader picture, I believe power is the ultimate limiting factor because:

◉ Power limits what systems we can build, creating a technology imperative
◉ Power limits what our customers can deploy, creating a business imperative

And most importantly,

◉ Power limits what our planet can sustain, creating a moral imperative

These three imperatives create a perfect environment for us to drive innovation.


Because SerDes power is such a large portion of total system power today, and because it is falling more slowly than system bandwidth is increasing, it is an area that we must address with architectural innovations.

These trends and limits are why solving how to implement CPO today is so important.

Minimizing Interconnect Power


From “Through the Looking Glass – The 2018 Edition: Trends in Solid-State Circuits from the 65th ISSCC” we can see the strong relationship between power efficiency and the insertion loss of the channel the SerDes is designed to drive.


As the distance, or more precisely the channel insertion loss, decreases the SerDes can be simplified saving significant power. This means the closer two devices are to each other the lower power it takes to send a signal between them. Taking this concept to the extreme, bringing the optical engine directly into the switch silicon package creates the shortest possible electrical traces thereby saving significant power.

This is the advantage of co-packaged optics.


Why 51.2T?


At this point, we have done enough analysis to show that a 51.2 Tbps based Ethernet switch can be built supporting 64 x QSFP-DD800 pluggable modules so we aren’t forced to build CPO to ship the product. However, our power analysis shows that a CPO-based switch design is significantly more power-efficient than a traditional 51.2 Tbps design with pluggable optics.

It is also clear that the 102.4 Tbps generation based on 224 Gbps SerDes will be a power-hungry and challenging system design, while the 204.8 Tbps generation will further challenge our traditional design techniques.

Architecting, designing, deploying, and operationalizing systems with CPO is an incredibly difficult task and therefore it is critical as an industry that we start before it’s too late. Therefore, I believe that the 51.2 Tbps switch silicon generation is the correct time to introduce CPO.

Cisco is in a unique position in the industry where we have industry-leading silicon, pluggable optics, on-board optics, silicon photonics, and system design in-house and we are working hard in conjunction with our customers to bring these technologies together to enable this important transition.

Creating an Ecosystem


Despite having our own extensive in-house experience and capabilities, Cisco believes that any such disruptive technology can only succeed when the right ecosystem is in place. The industry has a long history of standardization efforts such as the OIF, IEEE, and the MSAs which have defined the standards for pluggable optical modules. These standardization efforts have resulted in interoperable products being available from a wide variety of suppliers that customers can be confident will work together, providing customers with choice, the security of supply, and shorter time to market. These collaborative efforts are the foundational bedrock that our industry needs in order to progress at the technological and commercial pace that is required.

As a precursor to a broader standards effort, today I am pleased to announce a collaboration between Cisco and Inphi to cooperate on the definition of a CPO-based switch/optics solution, driving the industry forward and ensuring interoperability between silicon and optical engines from multiple different companies.

This collaboration will help our customers to enjoy a diverse and open ecosystem and interoperable best-of-breed technologies from a variety of suppliers.

Source: cisco.com

Sunday 24 January 2021

Dynamic Service Chaining in a Data Center with Nexus Infrastructure


In an application-centric data center, the network needs to have maximum agility to manage workloads and incorporate services such as firewalls, load balancers, proxies and optimizers. These network services enhance compliance, security, and optimization in virtualized data centers and cloud networks. Data center ops teams need an elegant method to insert service nodes and have the ability to automatically redirect traffic using predefined rules as operations change.

Enterprises running their data centers on the Nexus 9000 and NX-OS platform can now seamlessly integrate service nodes into their data center and edge deployments using the new Cisco Enhanced Policy Based Redirect (ePBR) to easily define and manage rules that control how traffic is redirected to individual services.

Challenges with Service Insertion and Service Chaining

The biggest challenge when it comes to introducing service nodes in a data center is onboarding them into the fabric, and subsequently creating the traffic redirection rules. Today, there are two ways of implementing traffic redirection rules – by influencing the traffic path using routing metrics, or by selective traffic redirection using policy-based routing.

The challenge with using routing to influence the forwarding path is that all traffic traverses the same path. This often ends up making the service node a bottleneck. The only practical way to achieve scale is by vertically scaling the node, which is expensive and limited by the extent to which the node can be expanded.

Policy Based Routing (PBR) rules are also complex to maintain since separate rules are needed for forward and reverse traffic directions in order to maintain symmetry for stateful service nodes. In addition, when there are multiple service nodes in a chain, maintaining PBR rules to redirect traffic across them increases complexity even more.

Introducing Enhanced Policy Based Redirect

NX-OS version 9.3(5) provides Enhanced Policy Based Redirect. The goal of ePBR is to solve some of the challenges with existing redirection rules. In a nutshell, ePBR:

◉ Simplifies onboarding service nodes into the network

◉ Creates selective traffic redirection rules across a single node or a chain of service nodes

◉ Auto-generates reverse redirection rules to maintain symmetry across a service node chain

◉ Provides the ability to redirect and load-balance

◉ Supports pre-defined and customizable probes to monitor the health of service nodes

◉ Supports the ability to either drop traffic, bypass a node, or fallback to routing lookup when a node in a chain fails

ePBR supports all of these capabilities across a fabric running VXLAN with BGP EVPN, as well as a classic core, aggregation, access data center deployment, at line rate switching, with no penalty to throughput or performance. Let’s look at three ePBR use cases.

Use Case 1: ePBR for Selective Traffic Redirection

Various applications may require redirection across different sets of service nodes. With ePBR, redirection rules can match application traffic using source/destination IP and L4 ports and redirect it across different service nodes or service chains. In the diagram below, client traffic for Application 1 traverses the firewall and IPS, whereas Application 2 traverses the proxy before reaching the server. This flexibility enables customers to onboard multiple applications onto their network and comply with security requirements.

Use Case 1: ePBR for Selective Traffic Redirection

Use Case 2: Selective Traffic Redirection Across Active/Standby Service Node Chain


In this use case, traffic from clients is redirected to a firewall and load-balancer service chain, before being sent to the server. Using probes, ePBR intelligently tracks which node in each cluster is active and automatically redirects the traffic to a new active node if the original active node fails. In this example, the service chain is inserted in a fabric running VXLAN. As a result, traffic from clients is always redirected to the active firewall and then the active load-balancer.

Use Case 2: Selective Traffic Redirection Across Active/Standby Service Node Chain

Use Case 3: Load-Balancing Across Service Nodes


With exponential growth in traffic, ePBR can intelligently load-balance across service nodes in a cluster, providing the ability to horizontally scale the network. ePBR ensures symmetry is maintained for a given flow by making sure that traffic in both forward and reverse directions is redirected to the same service node in the cluster. The example below shows how traffic inside a mobile packet core is load-balanced across a cluster of TCP optimizers.

Use Case 3: Load-Balancing Across Service Nodes

Improving Operational Efficiency with Innovations in Cisco ASICs and NX-OS

Cisco continues to provide value to our customers by fully leveraging capabilities designed into Cisco ASICs and innovations in NX-OS software. ePBR enables the rapid onboarding of a variety of services into data center networks and simplifies how traffic-chaining rules are set up, thus reducing time spent provisioning services and improving overall operational efficiency.

Saturday 23 January 2021

Cisco’s Role in the Monumental Vaccination Effort


Big challenges require big solutions. But when it comes to technology for coronavirus vaccine access and administration, many of those big solutions already exist.

As you read this, COVID-19 vaccines are being rolled out in different capacities around the world. The troubling news is, getting the vaccines to the public is continuing to present an evolving array of challenges. Limited availability, complex transportation and storage, and phasing are all creating confusion. The good news is that technology is helping to overcome those challenges by building bridges between the government agencies in charge of the vaccination effort, the retail pharmacies and healthcare organizations administering the vaccines, and the communities who need them.

During the past nine months, Cisco has been powering an inclusive recovery through efficient vaccine administration; helping essential organizations stand up the technology and communications needed by medical and healthcare facilities, retail pharmacies, essential government services, and other frontline efforts. And, today, Cisco continues to do its part as a trusted technology partner. We’re helping enable vaccine administration by improving three key functions—communications and access, field operations and administration, and security and application performance.

Communications and access

By providing communications and access solutions—such as Cisco Webex and Webex Contact Center—we’re enabling better patient access and outreach, better care provider and administrative collaboration, and more virtual engagements. We’re also providing a more comprehensive way for government agencies, healthcare facilities, and retail sites to efficiently scale their efforts to address increased volume and equitable access to critical information and services.

Field operations and administration

With field operations and administration solutions—like networking, WiFi analytics, video, collaboration, and cloud-delivered location services and security—we’re helping organizations respond to dynamic community needs, set up field hospitals and mobile clinics, provide equitable access, improve citizen experiences, and simplify equipment monitoring.

Security and application performance

Finally, our innovative security and application performance tools—among them, application monitoring and management, IoT sensors, cameras, and cloud-enabled security—are ensuring the safety, security, privacy, performance, and compliance necessary for organizations to successfully administer vaccines and operate efficiently around the clock.

As you can imagine, vaccine administration systems will likely remain under immense pressure until the millions of people who need vaccinations get them. So, it is vital for government, healthcare, and retail organizations to keep these mission-critical services running as smoothly as technologically possible. That, as it turns out, is our strong suit.


Keep in mind, performing in this capacity is nothing new for Cisco. All of the solutions and use cases mentioned above are customer-validated and proven.

As it always has, Cisco provides its customers with solutions that help people and communities access technology, information, advice, and anything else they might require. We were here for our customers before the pandemic. We’re here for them today as we navigate our way through COVID-19 together. And as any trusted partner should, we will be here for our customers tomorrow to take on whatever comes next. That’s why so many leaders around the world, across all levels of government, healthcare, and retail, have trusted and relied on us to stand by them through their ongoing digital transformation efforts.

Thursday 21 January 2021

300-715 Free Exam Questions & Latest Cisco CCNP Security Study Guide


Cisco SISE Exam Description:

The Implementing and Configuring Cisco Identity Services Engine v1.0 (SISE 300-715) exam is a 90-minute exam associated with the CCNP Security and Cisco Certified Specialist - Security Identity Management Implementation certifications. This exam tests a candidate's knowledge of Cisco Identity Services Engine, including architecture and deployment, policy enforcement, Web Auth and guest services, profiler, BYOD, endpoint compliance, and network access device administration. The course, Implementing and Configuring Cisco Identity Services Engine, helps candidates prepare for this exam.

Cisco 300-715 Exam Overview:


Wednesday 20 January 2021

Cisco 350-401 ENCOR Exam Journey: Tips That Make Your Prep Easier

Organizations are on the constant lookout for specialists who can competently execute core technologies, which is why there is a surge in demand for candidates who can demonstrate they have the right skills. If you want to benefit from these opportunities, you need to follow the proper approach, which comprises training, taking exams, and achieving certifications. Cisco has a magnificent plan for you, providing several certifications at different levels: entry, associate, professional, expert, and architect. To earn any of them, you must pass the corresponding exam, and this article is dedicated to the Cisco 350-401 ENCOR exam. We'd like you to know the essential things about this exam and how you can prepare for it competently.

Desktops in the Data Center: Establishing ground rules for VDI

Since the earliest days of computing, we’ve endeavored to provide users with efficient, secure access to the critical applications which power the business.

From those early mainframe applications being accessed from hard-wired dumb terminals to the modern cloud-based application architectures of today, accessible to any user, from anywhere, on any device, we’ve witnessed the changing technology landscape deliver monumental gains in user productivity and flexibility.  With today’s workforce being increasingly remote, the delivery of secure, remote access to corporate IT resources and applications is more important than ever.

Although the remote access VPN has been dutifully providing secure, remote access for many years now, the advantages of centrally administering and securing the user desktop through Virtual Desktop Infrastructure (VDI) are driving rapid growth in adoption.  With options including hosting of the virtual desktop directly in the data center as VDI or in the public cloud as Desktop-as-a-Service (DaaS), organizations can quickly scale the environment to meet business demand in a rapidly changing world.

Allowing users to access a managed desktop instance from any personal laptop or mobile device, with direct access to their applications provides cost efficiencies and great flexibility with lower bandwidth consumption…. and it’s more secure, right?  Well, not so fast!

Considering the Risks

Although addressing some of the key challenges in enabling a remote workforce, VDI introduces a whole new set of considerations for IT security.  After all, we’ve spent years keeping users OUT of the data center…. and now with VDI, the user desktop itself now resides on a virtual machine, hosted directly inside the data center or cloud, right inside the perimeter security which is there to protect the organization’s most critical assets. The data!

This raises some important questions around how we can secure these environments and address some of these new risks.

◉ Who is connecting remotely to the virtual desktop?

◉ Which applications are being accessed from the virtual desktops?

◉ Can virtual desktops communicate with each other?

◉ What else can the virtual desktop gain access to outside of traditional apps?

◉ Can the virtual desktop in any way open a reverse tunnel or proxy out to the Internet?

◉ What is the security posture of the remote user device?

◉ If the remote device is infected by virus or malware, is there any possible way that might infect the virtual desktop?

◉ If the virtual desktop itself is infected by a virus or malware, could an attacker access or infect other desktops, application servers, databases, etc.? Are you sure?

With VDI solutions today ranging from traditional on-premises solutions from Citrix and VMware to cloud offered services with Windows Virtual Desktop from Azure and Amazon Workspaces from AWS, there are differing approaches to the delivery of a common foundation for secure authentication, transport and endpoint control.  What is lacking however, is the ability to address some of the key fundamentals for a Zero Trust approach to user and application security across the multiple environments and vendors that make up most IT landscapes today.

How can Cisco Secure Workload (Tetration) help?

Cisco Secure Workload (Tetration) provides zero trust segmentation for VDI endpoints AND applications.  Founded on a least-privilege access model, this allows the administrator to centrally define and enforce a dynamic segmentation policy to each and every desktop instance and application workload.  Requiring no infrastructure changes and supporting any data center or cloud environment, this allows for a more flexible, scalable approach to address critical security concerns, today!

Establishing Control for Virtual Desktops

With Secure Workload, administrators can enforce a dynamic allow-list policy which allows users to access a defined set of applications and resources, while restricting any other connectivity.  Virtual desktops are typically connected to a shared virtual network, leaving a wide-open attack surface for lateral movement or malware propagation so this policy provides an immediate benefit in restriction of desktop to desktop communication.

This flexible policy allows rules to be defined based on context, whether identifying a specific desktop group/pool, application workloads or vulnerable machines, providing simplicity in administration and the flexibility to adapt to a changing environment without further modification.

◉ Do your VDI instances really need to communicate with one another?

With a single policy rule, Secure Workload can enforce a desktop isolation policy to restrict communication between desktop instances without impacting critical services and application access.  This simple step will immediately block malware propagation and restrict visibility and lateral movement between desktops.

Figure 1: Deny policy for virtual desktop isolation

Figure 2: Lateral communication between desktops blocked (inbound and outbound)

◉ Want to permit only a specific user group access to your highly sensitive HR application?

Secure Workload will identify the desktop instances and application workloads by context, continuously refreshing the allow-list policy rules to permit this communication as users log in and out of their virtual desktops and as the application workloads evolve.

Figure 3: Context based application access control

◉ Need full visibility of which applications are being accessed, how and when?

Tetration not only enforces the allow-list policy to protect your assets, but also records flow data from every communication, ensuring continuous near-real-time compliance monitoring of traffic to identify malicious or anomalous behaviors.

◉ Need to meet segmentation requirements for regulatory compliance?

Natural language policy definition based on dynamic labels and annotations ensures traffic complies with regulatory policy constraints from one well-defined policy intent.

◉ Require the ability to automatically quarantine vulnerable virtual desktops or application workloads to protect against exploit?

Tetration natively detects vulnerable software packages to apply automated policy controls which only apply until remediation.

All offered from SaaS, this can be achieved without any change to existing infrastructure, with distributed enforcement at scale from virtual desktops to application workloads for end-to-end protection.