Thursday, 30 November 2023

Making Your First Terraform File Doesn’t Have to Be Scary

For the past several years, I’ve tried to give at least one Terraform-centric session at Cisco Live. That’s because these sessions are fun and make for awesome demos. What’s a technical talk without a demo? But I also see huge crowds every time I talk about Terraform. While I wasn’t an economics major, I do know that when demand is this large, we need a larger supply!

That’s why I decided to step back and focus on the basics of Terraform and its operation. The configuration applied won’t be anything complex, but it should explain some basic structures and requirements for Terraform to do its thing against a single piece of infrastructure, Cisco ACI. Don’t worry if you’re not an ACI expert; deep ACI knowledge isn’t required for what we’ll be configuring.

The HCL File: What Terraform will configure


A basic Terraform configuration file is written in Hashicorp Configuration Language (HCL). This domain-specific language (DSL) is similar in structure to JSON, but it adds components for things like control structures, large configuration blocks, and intuitive variable assignments (rather than simple key-value pairs).

At the top of every Terraform HCL file, we must declare the providers we’ll need to gather from the Terraform registry. A provider supplies the linkage between the Terraform binary and the endpoint to be configured: it defines what can be configured and what the API endpoints and data payloads should look like. In our example, we only need the ACI provider, which is declared like this:

terraform {
  required_providers {
    aci = {
      source = "CiscoDevNet/aci"
    }
  }
}
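
Optionally, you can also pin each provider to a version constraint so that future runs don’t silently pick up a newer release. The constraint below is only an example; use whatever range you’ve tested against:

terraform {
  required_providers {
    aci = {
      source  = "CiscoDevNet/aci"
      version = "~> 2.9" # example constraint, adjust to your tested release
    }
  }
}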

Once the required providers are declared, we have to tell Terraform how to connect to the ACI fabric, which we do through a provider-specific configuration block:

provider "aci" {

username = "admin"

password = "C1sco12345"

url      = "https://10.10.20.14"

insecure = true

}

Notice that the name we gave the ACI provider (aci) in the terraform configuration block matches the label on the provider configuration block. We’re telling Terraform that the provider we named aci should use the following configuration to connect to the controller. Also, note that the username, password, url, and insecure configuration options are nested within curly braces { }. This indicates to Terraform that all of this configuration belongs together, regardless of whitespace, indentation, or the use of tabs vs. spaces.

Now that we have a connection method to the ACI controller, we can define the configuration we want to apply to our datacenter fabric. We do this using a resource configuration block. Within Terraform, we call something a resource when we want to change its configuration; it’s a data source when we only want to read configuration that already exists. Our resource block contains two arguments: the name of the tenant we’ll be creating and a description for that tenant.

resource "aci_tenant" "demo_tenant" {

name        = "TheU_Tenant"

description = "Demo tenant for the U"

}
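
For comparison, if the tenant already existed and we only wanted to read it rather than manage it, we would use a data source instead of a resource. A minimal sketch, assuming the provider exposes an aci_tenant data source, looks like this:

data "aci_tenant" "existing_tenant" {
  name = "TheU_Tenant" # look up an existing tenant by name instead of creating it
}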

Once we write that configuration to a file, we can save it and begin the process to apply this configuration to our fabric using Terraform.

The Terraform workflow: How Terraform applies configuration


Terraform’s workflow to apply configuration is straightforward and stepwise. Once we’ve written the configuration, we can run terraform init, which gathers the providers declared in the HCL file from the Terraform registry, installs them into the project folder, and verifies that they are signed with the PGP key HashiCorp has on file (to ensure end-to-end security). The output will look similar to this:

[I] theu-terraform » terraform init


Initializing the backend...


Initializing provider plugins...

- Finding latest version of ciscodevnet/aci...

- Installing ciscodevnet/aci v2.9.0...

- Installed ciscodevnet/aci v2.9.0 (signed by a HashiCorp partner, key ID 433649E2C56309DE)


Partner and community providers are signed by their developers.

If you'd like to know more about provider signing, you can read about it here:

https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider

selections it made above. Include this file in your version control repository

so that Terraform can guarantee to make the same selections by default when

you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

Once the provider has been gathered, we can invoke terraform plan to see what changes will occur in the infrastructure prior to applying the config. I’m using the reservable ACI sandbox from Cisco DevNet for the backend infrastructure, but you can use the Always-On sandbox or any other ACI simulator or hardware instance. Just be sure to change the target username, password, and url in the HCL configuration file.
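
If you’d rather not hard-code credentials at all, one option is to move them into Terraform input variables and supply the values at run time (for example, via a terraform.tfvars file or TF_VAR_ environment variables). The variable names below are my own, chosen for illustration; nothing in the provider requires them:

variable "apic_username" {
  type    = string
  default = "admin"
}

variable "apic_password" {
  type      = string
  sensitive = true # keep the password out of plan/apply output
}

variable "apic_url" {
  type = string
}

provider "aci" {
  username = var.apic_username
  password = var.apic_password
  url      = var.apic_url
  insecure = true
}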

Performing the plan action will output the changes that need to be made to the infrastructure, based on what Terraform currently knows about the infrastructure (which in this case is nothing, as Terraform has not applied any configuration yet). For our configuration, the following output will appear:

[I] theu-terraform » terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

 + create

Terraform will perform the following actions:


  # aci_tenant.demo_tenant will be created

  + resource "aci_tenant" "demo_tenant" {

      + annotation                    = "orchestrator:terraform"

      + description                   = "Demo tenant for the U"

      + id                            = (known after apply)

      + name                          = "TheU_Tenant"

      + name_alias                    = (known after apply)

      + relation_fv_rs_tenant_mon_pol = (known after apply)

    }


Plan: 1 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if

you run "terraform apply" now.

We can see that the items with a plus symbol (+) next to them will be created, and they align with what we had in the configuration originally. Great! Now we can apply this configuration with the terraform apply command. After invoking it, we’ll be prompted to confirm the change, and we’ll respond with "yes."

[I] theu-terraform » terraform apply                                                      

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the

following symbols:

  + create


Terraform will perform the following actions:


  # aci_tenant.demo_tenant will be created

  + resource "aci_tenant" "demo_tenant" {

      + annotation                    = "orchestrator:terraform"

      + description                   = "Demo tenant for the U"

      + id                            = (known after apply)

      + name                          = "TheU_Tenant"

      + name_alias                    = (known after apply)

      + relation_fv_rs_tenant_mon_pol = (known after apply)

    }


Plan: 1 to add, 0 to change, 0 to destroy.


Do you want to perform these actions?

  Terraform will perform the actions described above.

  Only 'yes' will be accepted to approve.


  Enter a value: yes


aci_tenant.demo_tenant: Creating...

aci_tenant.demo_tenant: Creation complete after 3s [id=uni/tn-TheU_Tenant]


Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The configuration has now been applied to the fabric! If you’d like to verify, log in to the fabric and click on the Tenants tab. You should see the newly created tenant.

Finally, if you’d like to delete the tenant the same way you created it, you don’t have to create any complex rollback configuration. Simply invoke terraform destroy from the command line. Terraform will verify that the state stored locally in your project aligns with what exists on the fabric and then indicate what will be removed. After a quick confirmation, you’ll see that the tenant is removed, and you can verify in the Tenants tab of the fabric.

[I] theu-terraform » terraform destroy                                                    

aci_tenant.demo_tenant: Refreshing state... [id=uni/tn-TheU_Tenant]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the

following symbols:

  - destroy


Terraform will perform the following actions:


  # aci_tenant.demo_tenant will be destroyed

  - resource "aci_tenant" "demo_tenant" {

      - annotation  = "orchestrator:terraform" -> null

      - description = "Demo tenant for the U" -> null

      - id          = "uni/tn-TheU_Tenant" -> null

      - name        = "TheU_Tenant" -> null

    }



Plan: 0 to add, 0 to change, 1 to destroy.


Do you really want to destroy all resources?

  Terraform will destroy all your managed infrastructure, as shown above.

  There is no undo. Only 'yes' will be accepted to confirm.


  Enter a value: yes


aci_tenant.demo_tenant: Destroying... [id=uni/tn-TheU_Tenant]

aci_tenant.demo_tenant: Destruction complete after 1s


Destroy complete! Resources: 1 destroyed.

Complete Infrastructure as Code lifecycle management with a single tool is pretty amazing, huh?

A bonus tip


Another tip regarding Terraform and HCL relates to the HCL section above, where I described the use of curly braces to avoid the need for correct whitespace or uniform tab width in the configuration file. This is generally a good thing, as we can focus on what we want to deploy rather than the minutiae of the config. However, it sometimes helps to format the configuration in a way that’s aligned and easier to read, even if it doesn’t affect the outcome of what is deployed.

In these instances, you can invoke terraform fmt within your project folder, and it will automatically format all Terraform HCL files into aligned and readable text. You can try this yourself by adding a tab or multiple spaces before an argument, or around the = sign, in some of the HCL. Save the file, run the formatter, and then reopen the file to see the changes. Pretty neat, huh?
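
For example, a provider block mangled with inconsistent spacing like this is still perfectly valid HCL:

provider "aci" {
     username = "admin"
  password="C1sco12345"
        url   =  "https://10.10.20.14"
  insecure = true
}

Running terraform fmt should rewrite it with two-space indentation and aligned equals signs:

provider "aci" {
  username = "admin"
  password = "C1sco12345"
  url      = "https://10.10.20.14"
  insecure = true
}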

Source: cisco.com

Saturday, 25 November 2023

10 Useful Tips to Ace Cisco 300-415 ENSDWI Exam

Embarking on the journey to become a Cisco Certified Network Professional (CCNP) is a formidable task, and the 300-415 ENSDWI (Implementing Cisco SD-WAN Solutions) exam is a pivotal gateway. In this comprehensive guide, we’ll navigate the intricacies of the 300-415 ENSDWI exam, offering valuable insights, expert tips for preparation, and a glimpse into the rewarding career opportunities that await those who successfully conquer this certification.

Decoding 300-415 ENSDWI Exam

The 300-415 ENSDWI exam delves into Implementing Cisco SD-WAN Solutions, evaluating candidates' proficiency in orchestrating WAN edge routers to connect to the SD-WAN fabric. This exam is not merely a test of theoretical knowledge but a practical assessment of your ability to deploy, manage, and troubleshoot SD-WAN solutions in real-world scenarios.

Before delving into preparation tips, it's crucial to understand the terrain you'll be navigating. The 300-415 ENSDWI exam typically consists of 55-65 questions; candidates are allotted 90 minutes for completion. The format encompasses a variety of question types, including multiple-choice, drag-and-drop, and simulation-based scenarios.

Cisco 300-415 ENSDWI Exam Objectives

  • Architecture (20%)
  • Controller Deployment (15%)
  • Router Deployment (20%)
  • Policies (20%)
  • Security and Quality of Service (15%)
  • Management and Operations (10%)
Ten Expert Tips for 300-415 ENSDWI Preparation

    1. Create a Study Plan

    Building a structured study plan is the foundation of successful exam preparation. Allocate dedicated time daily to cover specific topics, ensuring a balanced approach to all exam objectives. Consistency is key, and a well-organized study plan will help you stay on track.

    2. Hands-On Practice

    Theory is vital, but practical application is paramount. Set up a virtual lab environment to experiment with SD-WAN configurations. The hands-on experience will reinforce theoretical concepts and enhance your troubleshooting skills—a crucial aspect of the exam.

    3. Utilize Official Cisco Resources

    Cisco provides many official resources, including documentation, whitepapers, and video tutorials. These materials offer insights into SD-WAN technologies and Cisco's expectations for exam candidates. Leverage these resources to complement your study materials.

    4. Join Online Communities

    Engage with fellow candidates and networking professionals in online forums and communities. Discussing concepts, sharing experiences, and seeking clarification on doubts can provide a fresh perspective and fill gaps in your understanding. The collective wisdom of the community is a valuable asset.

    5. Use Cisco 300-415 ENSDWI Practice Test

    Perform Cisco 300-415 ENSDWI practice tests to familiarize yourself with the time constraints and pressure. Timed practice exams assess your knowledge and train you to manage time effectively during the test. This step is crucial for building confidence and reducing exam-day anxiety.

    6. Focus on Weak Cisco 300-415 ENSDWI Syllabus Topics

    Regularly evaluate your progress and identify weak areas. Allocate additional time to reinforce your understanding of these topics. Whether it's troubleshooting, security configurations, or SD-WAN policies, addressing weaknesses proactively will contribute to a more well-rounded preparation.

    7. Stay Updated with Industry Trends

    The world of networking is dynamic, with technologies evolving rapidly. Stay abreast of industry trends, especially those related to SD-WAN. Familiarizing yourself with the latest developments ensures that your knowledge is exam-centric and reflective of real-world scenarios.

    8. Explore Third-Party Study Materials

    While official Cisco resources are indispensable, exploring third-party study materials can offer diverse perspectives. Books, practice exams, and online courses from reputable sources can provide alternative explanations and additional context, enriching your understanding.

    9. Teach to Learn

    The act of teaching reinforces your understanding. Collaborate with study partners or create study guides for specific topics. Explaining concepts in your own words solidifies your knowledge and highlights areas in which you need further clarification.

    10. Review and Revise Cisco 300-415 ENSDWI Exam Topics

    Continuous revision is the key to retention. Periodically revisit previously covered topics to reinforce your memory. The spaced repetition technique, where you review information at increasing intervals, is particularly effective in ingraining knowledge for the long term.

    Beyond 300-415 Exam: The CCNP Enterprise Certification

    Earning the 300-415 ENSDWI certification is a significant accomplishment, but it's just one piece of the puzzle. Combining it with the 350-401 ENCOR (Implementing and Operating Cisco Enterprise Network Core Technologies) exam leads to the coveted CCNP Enterprise certification.

    1: Career Paths with CCNP Enterprise

    The CCNP Enterprise certification opens doors to a myriad of career opportunities. You become a sought-after professional capable of designing and implementing complex enterprise network solutions. Roles such as Network Engineer, Systems Engineer, and Network Administrator are within reach.

    2: Industry Recognition

    CCNP Enterprise certification is globally recognized and respected. It serves as a testament to your expertise in networking technologies and positions you as a qualified professional in the eyes of employers worldwide. This recognition can be a game-changer in job interviews and salary negotiations.

    3: Salary Advancement

    With CCNP Enterprise certification, you're not just acquiring knowledge but investing in your earning potential. Certified professionals often command higher salaries than their non-certified counterparts. The certificate becomes a tangible asset, showcasing your dedication to continuous learning and mastery of your craft.

    Conclusion

    The journey to mastering the 300-415 ENSDWI exam is undoubtedly challenging, but the rewards are commensurate with the effort invested. As you delve into the intricacies of SD-WAN solutions, remember that each concept mastered is a step closer to unlocking a world of career possibilities. The CCNP Enterprise certification, born from the synergy of 300-415 ENSDWI and 350-401 ENCOR, is your passport to a future where your expertise is valued and indispensable in the ever-evolving networking landscape. So, gear up, embrace the challenge, and set forth on the path to CCNP Enterprise mastery. Success awaits those who dare to venture into the realm of possibilities that certification unlocks.

    Thursday, 23 November 2023

    Secure Multicloud Infrastructure with Cisco Multicloud Defense

    It’s a multicloud world!


    Today, applications are no longer restricted to the boundaries of a data center; they are deployed everywhere. This shift brings a need for a solution that can provide end-to-end visibility, control, policy management, and ease of management.

    Market Trend


    Organizations are embracing the power of the public cloud because it provides agile, resilient, and scalable infrastructure, enabling them to maximize business velocity. A recent study shows that 82% of IT leaders have adopted hybrid cloud solutions, combining private and public clouds. Additionally, 58% of these organizations are using between two and three public clouds1, indicating a growing trend towards multicloud environments. As organizations lean further into multicloud deployments, security teams find they are playing catch up, tirelessly attempting to build a security stack that can keep up with the agility and scale of their cloud infrastructure. Teams also face a lack of unified security controls across their environments. By definition, cloud service provider security solutions are not designed to achieve end-to-end visibility and control in the multicloud world, hardening silos and creating greater security gaps. Organizations need a cloud-agnostic solution that unifies security controls across all environments while securing workloads at cloud speed and scale.

    Cisco Multicloud Defense is a highly scalable, on-demand “as-a-Service” solution that provides agile, scalable, and flexible security to your multicloud infrastructure. It unifies security controls across cloud environments, protects workloads from every direction, and drives operational efficiency by leveraging secure cloud networking.

    Secure cloud networking can be broken down into three pillars:

    • Security: Provides a full suite of security capabilities for workload protection
    • Cloud: Integrates with cloud constructs, enabling auto-scale and agility
    • Networking: Seamlessly and accurately inserts scalable security across clouds without manual intervention

    One of the key benefits of Cisco Multicloud Defense is not only its ability to unify security controls across environments but enforce those policies dynamically. With dynamic multicloud policy management, you can:

    • Keep policies up to date in near-real time as your environment changes.
    • Connect continuous visibility and control to discover new cloud assets and changes, associate tag-based business context, and automatically apply the appropriate policy to ensure security compliance.
    • Power and protect your cloud infrastructure with security that runs in the background via automation, getting out of the way of your cloud teams.
    • Mitigate security gaps and ensure your organization stays secure and resilient.

    Another key benefit of Multicloud Defense is how it adds enforcement points (PaaS) in both distributed and centralized architectures.

    Cisco Multicloud Defense Overview


    Cisco Multicloud Defense uses a principle common to public clouds and software-defined networking (SDN): decoupling the control plane from the data plane. These map to the Multicloud Defense Controller and the Multicloud Defense Gateways, respectively.

    The Multicloud Defense Gateway(s) are delivered as Platform-as-a-Service (PaaS) in AWS, Azure, Google Cloud Platform (GCP), and Oracle Cloud Infrastructure (OCI). These gateways are delivered, managed, and orchestrated by a SaaS-based Multicloud Defense Controller.

    Figure 1: Cisco Multicloud Defense Overview

    • Multicloud Defense Controller (Software-as-a-Service): The Multicloud Defense Controller is a highly reliable and scalable centralized controller (control plane) that automates, orchestrates, and secures multicloud infrastructure. It runs as a Software-as-a-Service (SaaS) and is fully managed by Cisco. Customers can access the Multicloud Defense Controller through a web portal, or they may choose to use Terraform to build security into their DevOps/DevSecOps processes (see the sketch after this list).
    • Multicloud Defense Gateway (Platform-as-a-Service): The Multicloud Defense Gateway is an auto-scaling fleet of security software with a patented flexible, single-pass pipelined architecture. These gateways are deployed as Platform-as-a-Service (PaaS) into the customer’s public cloud account(s) by the Multicloud Defense Controller, providing advanced, inline security protections to defend against external attacks, block egress data exfiltration, and prevent the lateral movement of attacks.
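
    As a rough illustration of that Terraform option, driving the controller from code means gateway and policy definitions live in ordinary HCL files that can be versioned alongside the rest of your infrastructure. The provider name, resource type, and arguments below are hypothetical placeholders for illustration only; they are not taken from the Multicloud Defense documentation:

    provider "multicloud_defense" {              # hypothetical provider name
      api_key_file = "controller-api-key.json"   # hypothetical argument
    }

    # Hypothetical resource asking the controller to deploy an egress gateway
    resource "multicloud_defense_gateway" "egress" {
      name         = "egress-gw-1"
      csp_account  = "aws-prod" # hypothetical argument
      gateway_type = "EGRESS"   # hypothetical argument
    }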

    Multicloud Defense Gateways


    In the Cisco Multicloud Defense solution, organizations can use the controller to deploy highly scalable and resilient Egress Gateways or Ingress Gateways into their public cloud account(s).

    Egress Gateway: Protects outbound and east-west traffic. The egress gateway provides security capabilities like FQDN filtering, URL filtering, data loss prevention (DLP), IPS/IDS, antivirus, forward proxy, and TLS decryption.

    Ingress Gateway: Protects inbound traffic and provides security capabilities like web application firewall (WAF), IDS/IPS, Layer-7 protection, DoS protection, antivirus, reverse proxy, and TLS decryption.

    Note: Multicloud Defense Gateways are an auto-scaling fleet of instances across two or more availability zones, providing agility, scalability, and resiliency.

    Figure 2 shows security capabilities of the ingress and egress Multicloud Defense Gateway.

    Figure 2: Cisco Multicloud Defense Gateway

    The gateway uses a single pass architecture to provide:

    • High throughput and low latency
    • Reverse proxy, forward proxy, and forwarding mode
    • Flexibility in selecting relevant advanced network security inspection engines, including TLS decryption and re-encryption, WAF (HTTPS and web sockets), IDS/IPS, antivirus/anti-malware, FQDN and URL filtering, DLP

    Security Models


    This solution provides a flexible way for security insertion in the customer’s infrastructure using three highly scalable and automated deployment models (centralized, distributed, and combined).

    Centralized security model

    In the centralized security model, the Multicloud Defense Controller seamlessly adds gateways in the centralized security VPC/VNet/VCN. In this architecture, ingress and egress traffic is sent to a centralized security VPC/VNet/VCN for inspection before it is sent to the destination. This architecture ensures scalability, resiliency, and agility using cloud deployment best practices.

    Figure 3 shows egress and ingress gateways in a security VPC/VNet/VCN.

    • For scalability, autoscaling is supported.
    • For resiliency, auto-scaled instances are deployed in multi-availability zones.

    Figure 3: Centralized Security Model

    In a centralized security model, gateways are deployed in a hub inside the customer’s cloud account. However, customers can choose to have multiple hubs across accounts/subscriptions.

    Distributed security model

    In the distributed security model, the Multicloud Defense Controller seamlessly adds gateways in each VPC/VNet/VCN. In this architecture, ingress and egress traffic stays local to the VPC/VNet/VCN.

    Based on direction, traffic flow is inspected by egress or ingress gateways. This deployment ensures scalability, resiliency, and agility using cloud deployment best practices.

    Figure 4 shows egress and ingress gateways in each VPC/VNet/VCN.

    • For scalability, autoscaling is supported.
    • For resiliency, auto-scaled instances are deployed in multi-availability zones.

    Figure 4: Distributed Security Model

    Combined security model (Centralized + Distributed)

    This security model combines the centralized and distributed models. In this case, some flows are protected by gateways deployed in the security VPC/VNet/VCN, and some flows are protected by gateways in the application VPC/VNet/VCN.

    Based on the traffic flow, traffic is inspected by egress or ingress gateways. This deployment ensures scalability, resiliency, and agility using cloud deployment best practices.

    Figure 5 shows egress and ingress gateways in a centralized security VPC/VNet/VCN in addition to gateways deployed in the application VCPs/VNets/VCNs.

    • For scalability, autoscaling is supported.
    • For resiliency, auto-scaled instances are deployed in multi-availability zones.

    Figure 5: Centralized + Distributed Security Model

    Use-cases


    Egress security

    Figure 6 shows egress traffic protection in a centralized and distributed security model.

    • In the centralized security model, traffic is inspected by gateways deployed in the security VPC/VNet/VCN.
    • Gateways are auto-scaling and multi-AZ aware.
    • In the distributed security model, traffic is inspected by gateways deployed in the application VPC/VNet/VCN.

    Figure 6: Egress traffic flow

    Ingress security

    Figure 7 shows ingress traffic protection in a centralized and distributed security model.

    • In the centralized security model, traffic is inspected by gateways deployed in the security VPC/VNet/VCN.
    • In the distributed security model, traffic is inspected by gateways deployed in the application VPC/VNet/VCN.
    • Gateways are auto-scaling and multi-AZ aware.

    Figure 7: Ingress traffic flow

    Segmentation (east-west)

    Figure 8 shows intra and inter-VPC/VNet/VCN traffic protection in a centralized and distributed security model.

    • In the centralized security model, intra and inter-VPC/VNet/VCN traffic is inspected by gateways deployed in the security VPC/VNet/VCN.
    • In the distributed security model, intra-VPC/VNet/VCN traffic is inspected by gateways deployed in the application VPC/VNet/VCN.
    • Gateways are auto-scaling and multi-AZ aware.

    Figure 8: Segmentation (East-West) traffic flow

    URL & FQDN filtering for egress traffic

    URL & FQDN filtering prevents exfiltration and attacks that use command-and-control. The Multicloud Defense Gateway enforces URL & FQDN-based filtering in a centralized or distributed deployment model.

    • URL filtering requires TLS decryption on the gateway.
    • FQDN-based filtering can be enforced on encrypted traffic flows.

    Figure 9: URL & FQDN filtering for cloud egress

    Coming soon: Multicloud Networking use cases

    In our upcoming release (2HCY23), we are adding a set of Multicloud Networking use cases that enable secure connectivity, bringing all cloud networks together.

    Multicloud Networking: Cloud-to-Cloud Networking

    An egress gateway with VPN capability provides a secure connection to other cloud infrastructures. The egress gateway is delivered as-a-Service and provides resiliency and autoscaling. This architecture requires deploying the egress gateways with VPN capability “ON.” These gateways use IPsec connectivity for a secure interconnection.

    Figure 10: Cloud-to-Cloud Networking (IPsec)

    Multicloud Networking: Site-to-Cloud Networking

    An egress gateway with VPN capability provides a secure connection to on-premises infrastructure. This architecture requires deploying the egress gateways with VPN capability “ON” in security VPC/VNet/VCN and a device at the data center edge for IPsec termination.

    Figure 11: Site-to-Cloud Networking (IPsec)

    Conclusion

    It is a multicloud world we live in, and organizations need a cloud-agnostic solution that unifies security controls across all environments while securing workloads at cloud speed and scale. With Cisco Multicloud Defense, organizations can leverage a simplified and unified security experience helping them navigate their multicloud future with confidence.

    Source: cisco.com

    Tuesday, 21 November 2023

    Cisco DNA Center Has a New Name and New Features

    Cisco DNA Center is not only getting a name change to Cisco Catalyst Center, it is also getting lots of new features and add-ons in the API documentation. Let me tell you about some of them.

    Version selection menu


    The first improvement I want to mention is the API documentation version selection drop down menu. You’ll find it in the upper left-hand corner of the page. When you navigate to the API documentation website, by default you land on the latest version of the documentation as you can see in the following image:

    You can easily switch between different versions of the API documentation from that drop down menu. Older versions of the API will still be named and referenced as Cisco DNA Center while new and upcoming versions will reflect the new name, Cisco Catalyst Center.

    Event catalog


    The second addition to the documentation that I want to mention is the event catalog. We’ve had several requests from our customers and partners to have the event catalog for each version of Catalyst Center published and publicly available. I am happy to report that we have done just that. You can see in the following image a snippet of the event catalog that can be found under the Guides section of the documentation.

    Not only is there a list of all the events generated by Catalyst Center, but for each event we have general information, tags, channels, model schema, and REST schema as you can see in the following images:


    List of available reports


    Another popular request was to have a list of available reports generated by Catalyst Center published and easily referenced in the documentation. Under the Guides section you can now also find the Reports link that contains a list of all available reports including the report name, description and supported formats. By clicking on the View Name link you can also see samples for each of the reports.

    OpenAPI specification in JSON format


    These are all nice extra features and add-ons. However, my favorite one must be the fact that you can now download the Catalyst Center OpenAPI specification in JSON format! This one has been a long time coming and I’m happy to announce that we finally have it. You can find the download link under the API Reference section.


    Net Promoter Score


    We have also enabled NPS (Net Promoter Score) on the Catalyst Center API documentation site. As you navigate the website, a window will pop up in the lower right-hand corner of the page asking you to rate our docs.

    Your feedback is most welcome


    Please do take time to give us feedback on the documentation and tell us what you liked or what we can improve on.

    Source: cisco.com

    Saturday, 18 November 2023

    The Power of LTE 450 for Critical Infrastructure

    When disaster strikes, a reliable communication network is critical. Emergency centers need to be able to exchange information to coordinate their response in the field. Service providers need to keep their networks live. Power utilities need to be able to keep the electric grid up and running.

    In Europe, the communication networks used to control components of the power grid and other critical infrastructure are required to remain operational for at least 24 hours in the event of a power failure. This is well beyond what most commercial cellular networks can offer.

    The solution identified by the energy industry is LTE 450. Public protection and disaster relief (PPDR) regulations in Germany, Scandinavia, and parts of Africa allow critical industries to reserve the 450 MHz band in their areas to deploy private LTE networks, replacing legacy public safety voice networks with technology capable of data transmission.

    This means LTE 450 can offer privileged access to the network, without public mass market services.

    A key differentiator of the LTE 450 MHz band is its long-range coverage. Higher frequencies can deliver higher data rates to any number of smart devices, but they are affected by rapid signal attenuation and require dense base station coverage. The 450 MHz band, on the other hand, sits at the opposite end of the spectrum.

    With commercial LTE, a complete countrywide network might require tens of thousands of base stations to achieve full geographical coverage. LTE 450 only takes a few thousand base stations to achieve the same coverage and requires less power at the edge. This results in:

    • A reduced number of base stations need to be kept up and running; it’s easier to manage the network.
    • It’s easier to reach rural areas due to the extended coverage.
    • Backup battery power can be used to continue to connect critical devices in the event of a power failure.

    In addition, the reduced attenuation of LTE 450’s low-frequency signals allows increased penetration through walls and other solid materials, bringing obvious advantages for devices deployed indoors, underground, and in other hard-to-reach locations.

    LTE 450 is thus a resilient cellular communication network tailored to the needs of mission- and business-critical use cases. A few examples:

    • a private wireless network to connect thousands of SCADA systems used to control and monitor substations and other renewable energy assets;
    • a public network to serve a broad range of power utilities, including water, gas, heat distribution networks and smart power grids.

    Cisco solution for critical networks


    Cisco has introduced an LTE 450 MHz plug-in module for the popular Cisco Catalyst IR1101 Rugged Router. This platform provides the ability to connect to 450 MHz networks and additionally supports a second module for fallback to private 4G, 5G, or commercial cellular networks.

    Figure 1: The Catalyst IR1101 Rugged Router

    Critical traffic (such as SCADA or other critical control traffic) can be routed via 450 MHz, and non-critical traffic can be routed via the cellular connections.

    The IR1101 rugged router also provides secure encrypted tunnels for critical traffic from the remote site to a secure headend (e.g., Utility control center).

    For management of remotely deployed IR1101 routers, the Cisco Catalyst SD-WAN platform supports secure zero touch onboarding, provisioning, and visibility to allow IR1101 routers to be deployed easily in the field.

    Source: cisco.com

    Thursday, 16 November 2023

    ESG Survey results reinforce the multi-faceted benefits of SSE

    When it comes to protecting a hybrid workforce while simultaneously safeguarding internal resources from external threats, cloud-delivered security with Security Service Edge (SSE) is seen as the preferred method.

    Enterprise Strategy Group (ESG) recently conducted a study of IT and security practitioners, evaluating their views on a number of topics regarding SSE solutions. Respondents were asked for their views on security complexity, user frustration, remote/hybrid work challenges, and their take on the expectations vs. reality when it came to the benefits of SSE. The results provide critical insights into how to protect a hybrid workforce, streamline security procedures, and enhance end-user satisfaction. Some of the highlights from their report include:

    • Remote/hybrid workers were found to be the biggest source of cyber-attacks, accounting for 44% of them.
    • Organizations are moving towards cloud-delivered security, as 75% indicated a preference for cloud-delivered cybersecurity products vs. on-premises security tools.
    • SSE is delivering value, with over 70% of respondents stating they achieved at least 10 key benefits involving operational simplicity, improved security, and better user experience.
    • SecOps teams report significantly fewer attacks, with 56% stating they observed over a 20% reduction in security incidents using SSE.

    Delving further into the report, ESG provides details explaining why organizations have gravitated towards SSE and achieved significant success. SSE simplifies the security stack, substantially improving protection for remote users, while enhancing hybrid worker satisfaction with easier logins and better performance. It helps avert numerous challenges, from stopping malware spread to shrinking the attack surface.

    Here are some of the added benefits that SSE users see.

    Overcome cybersecurity complexity


    Among the respondents, more than two-thirds describe their current cybersecurity environment as complex or extremely complex. The top cited source (83%) involved the accelerated use of cloud-based resources and the need to secure access, protect data, and prevent threats. The second most common source of complexity was the number of security point products required (78%) with an average of 63 cybersecurity tools in use. Number three on the hit parade was the need for more granular access policies to support zero trust principles (77%) and the need to apply least privilege policies with user, application, and device controls. Other factors mentioned by wide margins include an expanded attack surface from work-from-home employees, use of unsanctioned applications and a growing number of more sophisticated attacks.

    Organizations can offset these challenges by deploying SSE. These protective services reside in the cloud, between the end user and the cloud-based resources they utilize, as opposed to on-premises methods that are ‘out of the loop’. SSE consolidates many security features, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), and Cloud Access Security Broker (CASB), with one dashboard to simplify operations. With advanced ZTNA and zero trust access (ZTA), authorized users can only connect to specific, approved apps. Discovery and lateral movement by compromised devices or unauthorized users are prevented.

    Enhance end-user experience


    The report found current application access processes often result in user frustration. Respondents reported their workforce uses a collective average of 1,533 distinct business applications. As these apps typically reside in the cloud, secure usage is no longer straightforward. To support zero trust, many organizations have shifted to more stringent authentication and verification tasks. While good from a security perspective, 52% of respondents indicated their users were frustrated with this practice. Similarly, 50% mentioned user frustration at the number of steps to get to the application they need and 45% at having to choose the method of connection based on the application.

    Performance was also cited as an issue, with 43% indicating user frustration. More than one-third (35%) indicated that latency was impacting the end-user experience. In some cases, this leads to users circumventing the VPN, which was cited by 38% of respondents. Such user noncompliance can introduce additional risk and the potential for malicious actors to view traffic flows.

    VPNs were found to be poorly suited to supporting zero trust principles. They do not allow for granular access policies to be applied (mentioned by 31% of respondents) and are visible on the public internet, allowing attackers a clear entry point to the network and corporate applications (cited by 22%).

    By implementing SSE with ZTA, administrators can give remote users the same type of straightforward, performant experience as when they are in the office, without IT teams being forced to make a trade-off between security and user satisfaction. ZTA allows users to access all, not some, of the potentially thousands of apps needed, and it provides a transparent, seamless ‘one-click’ login process. Backed by advanced protocols, users can obtain HTTP3-level speeds with reduced latency and more resilient connections. Ultra-granular access with one-user-to-one-app ‘micro tunnels’ ensures security while providing resource obfuscation and preventing lateral movement.

    Solve hybrid work security challenges


    It’s challenging to secure hybrid workforces that include remote workers, contractors, and partners. This new hyper-distributed landscape results in an expanded attack surface, as well as an increase in device types and inconsistent performance. Respondents cited the need to ensure malware does not spread from remote devices to corporate locations and resources (55%) as their most critical concern. The second biggest issue mentioned is the need to check device posture (51%) consistently and continuously. In third place, IT listed defending an expanding attack surface due to users directly accessing cloud-based apps (50%). Other items of note include the lack of visibility into unsanctioned apps (45%) and protecting users as they access cloud apps (40%).

    SSE is tailor-made to address these roadblocks to security. Multiple defense-in-depth features delivered from the cloud ensure malware and other malicious activity is rooted out, preventing infection before it starts. Continuous, rich posture checks with contextual insights ensure device compliance. Thorough user identification and authentication procedures combined with granular access control policies prevent unauthorized resource access. CASB provides visibility into what applications are being requested and controls access. Remote Browser Isolation (RBI), DNS filtering, FWaaS, and other features protect end users as they use Internet or public cloud services.

    Benefits derived through SSE


    The survey clearly demonstrates that many organizations utilizing SSE solutions are reaping a broad set of benefits. These can be categorized into three pillars: increased user and resource security, simplified operations, and enhanced user experience. When respondents were asked whether the benefits they initially expected were realized once SSE was deployed, over 73% reported achieving at least ten critical advantages. A partial list of these factors includes:

    • Simplified security operations/increased efficiency with ease of configuration and management
    • Improved security specifically for remote/hybrid workforce
    • Enacting principles of least privilege by allowing remote access only to approved resources
    • Superior end-user access experience
    • Prevention of malware spread from remote users to corporate resources
    • Increased visibility into remote device posture assessment

    Cisco leads the way in SSE


    Cisco’s SSE solution goes way beyond standard protection. In addition to the four principal features previously listed (ZTNA, SWG, FWaaS, CASB), our Cisco Secure Access includes RBI, DNS filtering, advanced malware protection, Intrusion Prevention System (IPS), VPN as a Service (VPNaaS), multimode Data Loss Prevention (DLP), sandboxing and digital experience monitoring (DEM). This feature rich array is backed by the industry-leading threat intelligence group, Cisco Talos, giving security teams a distinct advantage in detecting and preventing threats.

    With Secure Access:

    • Authorized users can access any app, including non-standard or custom, regardless of the underlying protocols involved.
    • Security teams can employ a safer, layered approach to security, with multiple techniques to ensure granular access control.
    • Confidential resources remain hidden from public view with discovery and lateral movement prevented.
    • Performance is optimized with the use of next-gen protocols, MASQUE and QUIC, to realize HTTP3 speeds
    • Administrators can quickly deploy and manage with a unified console, single agent and one policy engine.
    • Compliance is maintained via continuous in-depth user authentication and posture checks.

    Source: cisco.com

    Wednesday, 8 November 2023

    The Evolution of Oil & Gas Industry

    The Oil & Gas industry has changed a lot. From Upstream through to Downstream, advancements in technology have made operations safer and more productive. Those who work in the industry have a front-row seat to these changes, but most of us see the industry through mainstream information channels and miss some of the significant changes happening behind the scenes. Below are just a few examples of how the Oil & Gas industry has changed.

    Exploration and Drilling:


    Past: In the past, oil and gas exploration was largely based on geological surveys, seismic data, and educated guesswork. Drilling technology was less advanced, and there was a higher risk of drilling dry wells.

    Now: Modern technology, such as 3D seismic imaging and advanced drilling techniques, has greatly improved the success rate of exploration. Companies now use more data-driven and scientific approaches to identify and extract hydrocarbons.

    Reserves Replacement:


    Past: Oil and gas companies focused on finding and extracting easily accessible reserves, often in known fields. Reserves replacement was a less pressing concern.

    Now: As existing reserves are depleted, companies are increasingly focused on finding and developing new reserves to replace what they extract. This has led to more extensive exploration efforts and investments in unconventional resources like shale oil and gas.

    Environmental Awareness:


    Past: Environmental concerns and regulations were less prominent. Companies had fewer incentives to minimize their environmental impact, leading to more pollution and ecological damage.

    Now: Environmental considerations are paramount. Companies face stricter regulations and public pressure to reduce their environmental footprint. Many are investing in cleaner technologies, carbon capture, and renewable energy as part of their operations.

    Technology and Automation:


    Past: Manual labor and basic machinery were used for drilling, extraction, and processing. Automation was limited.

    Now: Automation and digital technology play a crucial role in optimizing operations. Robotics, AI, and IoT (Internet of Things) devices are used for drilling, monitoring, and maintenance, improving efficiency and safety.

    Globalization:


    Past: Oil and gas operations were often concentrated in a few key regions, and companies were mainly national or multinational corporations.

    Now: The industry has become more globalized. Companies operate in diverse geographic regions, and the supply chain is highly interconnected, with a more significant presence in emerging markets.

    Energy Transition:


    Past: Oil and gas companies were primarily focused on fossil fuels, with limited diversification into alternative energy sources.

    Now: Many oil and gas companies are investing in renewable energy, such as wind, solar, and hydrogen, as they adapt to the energy transition and a growing demand for cleaner energy sources.

    Social Responsibility:


    Past: Social responsibility was less emphasized, and there was less concern for the social impacts of operations.

    Now: Companies are increasingly expected to contribute positively to the communities where they operate by adhering to ethical and sustainable business practices.

    As the energy sector continues to evolve, from a focus on traditional exploration and drilling to a more technologically advanced, environmentally conscious, and diversified approach that encompasses alternative energy sources, Cisco can be a key partner for customers looking to thrive in this dynamic environment.

    Cisco’s technologies play a pivotal role in ensuring that operations are efficient, secure, and sustainable with a portfolio of business outcomes that reflects the evolving demands of society, technology, and the energy market.

    The Cisco Portfolio Explorer for Oil & Gas is an interactive tool that builds the bridge between business priorities and technology solutions by showcasing use cases and architectures to solve your greatest business challenges. The tool has four themes that cover primary areas of Oil & Gas operations including: Plant and Field Operations, Secure Connected Workforce, Industrial Safety and Security, and Energy Transition. Within each theme you will find three to five use cases that dive deeper, explaining the business and technical application in the industry. It also provides case studies and partners as well as showcasing demos, financing options and links to industry experts so you can transform your business with security and trust.

    Source: cisco.com

    Tuesday, 7 November 2023

    Bridging the IT Skills Gap Through SASE: A Path to Radical Simplification and Transformation

    Imagine a world where IT isn’t a labyrinth of complexity but instead a streamlined highway to innovation. That world isn’t a pipe dream—it’s a SASE-enabled reality.

    As we navigate the complexities of a constantly evolving digital world, a telling remark from a customer onstage with me at Cisco Live in June lingers: “We don’t have time to manage management tools.” This sentiment is universal, cutting across sectors and organizations. An overwhelming 82% of U.S. businesses, according to a Deloitte survey, were prevented from pursuing digital transformation projects because of a lack of IT resources and skills. Without the right experts to get the job done, teams are often entangled in complex, disparate systems and tools that require specific skills to operate.

    The IT talent crunch


    Today’s tech landscape presents a challenge that IT leaders can’t ignore: complex IT needs combined with a fiercely competitive talent market. Internally, teams are overwhelmed, often struggling to keep up with ever-evolving technical demands. In fact, many teams are strapped and rely on early-in-career staff to fill wide gaps left behind by more experienced predecessors. And the problem is only going to get worse.

    For experienced IT workers, it’s an attractive time to entertain new opportunities. According to a global Deloitte study, 72% of U.S. tech employees are considering leaving their jobs for better roles. Interestingly, a mere 13% of employers said they were able to hire and retain the tech talent they most needed.

    Now more than ever, organizations must rethink their approach to talent management and technology adoption to stay ahead of the curve.

    Convergence as a catalyst for transformation


    In an era where time is a premium and complexity is the norm, the need for convergence has never been more apparent. Technical skills, while essential, are not enough. The real game-changers are adaptability, cross-functional collaboration, and strategic foresight. And yet, these “soft skills” can’t be optimally used if teams are entangled in complex, disparate systems and tools that require specialized skills to manage and operate.

    So how do organizations tackle this dilemma? How do they not just keep the lights on but also innovate, improve, and lead? In a word: convergence. Unifying siloed network and security teams as well as systems and tools with a simplified IT strategy is key to breaking through complexity.

    A platform to radically simplify networking and security


    Secure access service edge (SASE) is more than just an architecture; it’s a vision for the future in which the worlds of networking and security are no longer siloed but become one. Cisco takes a unified approach to SASE, where industry-leading SD-WAN meets industry-leading cloud security capabilities in one robust platform to make managing networking and security easy.

    Figure 1. SASE architecture converging networking and security domains

    Unified SASE converges the two domains into one, streamlining operations across premises and cloud. Admins from both domains gain end-to-end visibility into every connection, making it easier to optimize the application experience for users, providing seamless access to critical resources wherever work happens. This converged approach to secure connectivity through SASE delivers real outcomes that matter to resource-strapped organizations.

    Simplify IT operations and increase productivity

    ◉ Administrators find it easier to manage networking and security when they are consolidated
    ◉ 73% reduction in application latency improves collaboration and enhances overall productivity
    ◉ 40% faster performance on Microsoft 365 improves employee experience

    Do more with less

    ◉ 60% lower TCO for zero-trust security enables budget reallocation to strategic initiatives3
    ◉ 65% reduction in connectivity costs helps ease the burden on IT budgets3

    Enhance security without adding complexity

    ◉ Simplify day-2 operations with centralized policy management, which makes it easier for IT teams to execute
    ◉ Improve security posture through consistent enforcement—from endpoints and on-premises infrastructure to cloud—across your organization

    Scale and adapt

    ◉ Cloud-native architecture supports scaling and addresses the challenges of rapidly evolving IT landscapes
    ◉ Prepares your organization for changes, reducing the need for constant upskilling or reskilling in IT teams

    Organizations can use SASE architecture to advance their technological frameworks and strategically address the IT skills gap, leading to long-term business success.

    Shifting gears: Unifying, simplifying, innovating


    SASE is not merely a technological evolution; it’s a paradigm shift in how we approach IT management. This lets IT admins focus less on tool management and more on driving business innovation, enriching user experiences, and evolving in tune with market demands.

    Figure 2. Introducing unified SASE with Cisco+ Secure Connect, a better way to manage networking and security

    The path ahead with unified SASE from Cisco


    Cisco offers a unified, cloud-managed SASE solution, Cisco+ Secure Connect. From on-premises to cloud, this comprehensive SASE solution delivers simplicity and operational consistency, unlocking secure hybrid work for employees wherever they choose to work. The beauty of Cisco’s unified SASE solution lies in the principle of interconnecting everything with security everywhere: if it is connected, it is protected. It’s that easy.

    Source: cisco.com