Monday 18 January 2021

Cisco DNA Center and Cisco Umbrella: Automate your journey towards DNS Security


Introducing Cisco DNA Center Integration with Umbrella

Cisco Umbrella provides the first line of defense against threats on the internet wherever users go. Umbrella delivers complete visibility into internet activity across all locations, devices, and users, and blocks threats before they ever reach your network or endpoints. Cisco Umbrella helps secure traffic using a Secure Internet Gateway (SIG) in the cloud. In this blog, we will look at how the integration of Cisco Umbrella with Cisco DNA Center helps automate and secure WLANs, providing maximum visibility and granularity through the network infrastructure.

Wi-Fi is an expected service, but is your Wi-Fi also a liability?

In the world of connected things, wireless infrastructure plays a major role in connecting people, processes, and things. According to Cisco VNI, 66% of the global population will have internet access by 2023, and this raises a bigger question: how do we secure the endpoints, whether enterprise devices, guest devices, or even IoT endpoints? It’s interesting that I mentioned IoT endpoints, because according to Cisco VNI, by 2023 IoT endpoints will account for 50 percent (14.7 billion) of all global networked devices, and one third of those devices will be wireless. The addition of billions of devices to the network edge drives the need for enterprises to provide actionable insights and scalable solutions to secure employees’ devices, IoT connections, infrastructure, and proprietary data.


Enabling Cisco Umbrella on the Catalyst 9800 WLC brings a whole host of capabilities, such as granular per-SSID policy enforcement, visibility into internet threats, and reporting. Umbrella on the WLAN enforces security at the Domain Name System (DNS) layer, which means you can block requests to malicious domains and IPs before a connection is ever made.

The need for Network Policy Automation

In today’s digital world, the network needs to adapt quickly to changing business requirements. The network needs to support an increasingly diverse and fast-changing set of users, devices, applications, and services. It needs to seamlessly and securely onboard this diverse set of devices and deliver the desired user and application experience.

Cisco DNA Center and Cisco Umbrella

Cisco DNA Center provides an intuitive GUI workflow to enable Umbrella policies on WLAN controllers. Cisco DNA Center supports Umbrella configuration on Cisco Catalyst 9800 Series Wireless Controllers running software version 16.12.x or higher and on Cisco Catalyst 9100 Series Access Points in local and FlexConnect modes, as well as on Mobility Express (ME) APs. The supported Cisco DNA Center release version is 2.1.x.


As a prerequisite, the necessary keys, such as the API key, legacy token, management key, and secret, need to be created in the Umbrella account. To integrate Cisco DNA Center with Umbrella, the Organization ID, Management API keys, and Network Device API key and token need to be entered manually in Cisco DNA Center.


Once integrated, Cisco DNA Center can configure Umbrella policies on the Catalyst 9800 WLCs that it manages and provisions. Cisco DNA Center provides a comprehensive view of all the WLAN controllers in a site that are eligible for Umbrella deployment. If a WLAN controller is not ready for Umbrella deployment, Cisco DNA Center also provides information on why the network device is not ready. The major advantage of the integration is that Cisco DNA Center can retrieve policies created in the Umbrella cloud and assign those policies per SSID to all the eligible WLAN controllers. This way, Umbrella policies can be pushed to multiple SSIDs on multiple WLAN controllers with a few simple clicks.


Cisco DNA Center also provides base assurance capabilities for Total DNS Queries and Blocked DNS Queries in the Umbrella Services Dashboard.


The integration of Cisco DNA Center and Umbrella helps deploy Umbrella policies quickly with minimal disruption to other services and ensures that edge devices are secured at the DNS layer without any added latency. This helps the network infrastructure stay up to date by aligning with dynamic business needs.

Friday 15 January 2021

Cisco’s Data Cloud Transformation: Moving from Hadoop On-Premises Architecture to Snowflake and GCP


The world is seeing an explosion of data growth. There are countless data-generating devices, digitized video and audio content, and embedded devices such as RFID tags and smart vehicles that have become our new global norm. Cisco is experiencing this dramatic shift as more data sources are being ingested into our enterprise platforms and business models are evolving to harness the power of data, driving Cisco’s growth across Marketing, Customer Experience, Supply Chain, Customer Partner Services and more.

Enterprise Data Growth Impact on Cisco

Enterprise data at Cisco has also grown over the years, with the size of legacy on-premises platforms having grown 5x over the past five years alone. The appetite and demand for data-driven insights have also grown exponentially as Cisco realized the potential of driving growth and business outcomes with insights from data, revealing new business levers and opportunities.

Cloud Data Transformation Drivers 

When Cisco started its migration journey several years ago, its data warehouse footprint was entirely on-premises. With the business pivoting towards an accelerated data-to-insights cycle and the demand for analytics exploding, it quickly became apparent that some of the existing technologies would not allow us to scale to meet data demands.


Why Snowflake and GCP?


Key technology leaders and architects within Data & Analytics conducted market assessments of various data warehousing technologies and reviewed Gartner assessments to shortlist products. We then performed comparative capability assessments and performance-benchmarked POCs with representative workloads from Hadoop. Ongoing operational costs are a critical success factor for any solution, which is why cost, weighed against performance and ease of use, was a key decision factor.

After significant evaluation, Snowflake and Google Cloud Platform were the chosen Cloud Platforms; Snowflake for our enterprise data and GCP for unstructured data processing and analytics.

Our early POCs indicated that Snowflake was 2-4 times faster than Hadoop for complex workloads. The fact that this was ANSI SQL-based yielded several advantages, including a larger qualified talent pool, shorter development cycles, and improved time to capability. The platform also offered a higher concurrency and lower latency compared to Hadoop. Snowflake was a clear winner!

GCP, by virtue of the rich set of tools it provides for analytics, was the chosen solution across multiple organizations in the enterprise and was a natural choice for analytics with the data residing in Snowflake.

Journey and key success factors


To migrate to Snowflake and GCP, we had to mobilize the enterprise to migrate out of Hadoop within a six-quarter timeline. From a central program management perspective, monumental effort went into planning, stakeholder engagement, vendor selection, and training and enablement of the entire enterprise.

As of December 2020, 100% of the Hadoop workload has been migrated to Snowflake, with key stakeholders like Marketing, Supply Chain, and CX fully migrated and leveraging the benefits of the Cloud Platform.

Some of the key enablers for our successful migration within such a short timeframe include:

1. Security certification: The first question from all of our enterprise stakeholders was on the security aspects of storing our data in the Cloud. Extensive work was done with InfoSec and the cryptography team on enabling security with IP whitelisting and Cisco’s private key encryption with Snowflake’s tri-secret secure feature. A lot of attention also went into the D&A Data foundation architecture to enable Role-Based Access Control (RBAC) and granular role separation to manage applications safely and securely.

2. Innovation with foundational capabilities: Right from the start, we knew that in order to accelerate migration for the enterprise, the foundations of ingesting data from on-prem sources into the cloud, maintaining data quality in the cloud data warehouse, and automating the onboarding of new users and applications were critical. The innovative enabler we are especially proud of is the custom ingestion framework that ingests data from our on-prem sources to Snowflake at ~240 MB/s, with an average of 12 TB of incremental data ingested into Snowflake each day.

3. Automation, automation, automation: This was our mantra. With a talented team, APIs were developed for security tasks such as token and DB credential rotation and for automating common administration and data-access flows. We also built client-facing tools so application teams could own and meter their performance and costs; cop jobs and self-service warehouse resizing are two such examples.

4. Proactive cost management: One key paradigm shift in the Cloud is the fact that platform costs are no longer someone else’s problem, or something you worry about only every few years when planning for capacity. With the ability to track usage and costs at a granular level by application comes the responsibility to manage costs better. Visibility into these usage patterns is key to enabling actionable insights for each application team. Data & Analytics has enabled several dashboards that display costs, usage trends over time, a prediction of costs based on current trends, and more. Alerts are also sent based on customizable criteria, such as a week-over-week spike.

5. Enterprise enablement: With the monumental task of having to migrate nearly 300 applications, developed over five years in Hadoop, to Snowflake in 6 quarters, it was critical to ensure that the technology barrier was reduced right away. Over 25 training sessions were conducted with over 3000 participants trained over the course of FY20. This, coupled with numerous working sessions with Snowflake and Data & Analytics architects to share best practices and learnings across the teams, enabled a successful migration for all our stakeholders.

6. Enterprise alignment: Lastly (but definitely not the least), ensuring we have stakeholder buy-in early in the game was critical to the success of a transformation at this scale. We worked at the grassroots level with the execution team, the leadership team, and executives to secure commitment and support towards this enterprise wide program.
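The ingestion figures quoted in point 2 above (~240 MB/s sustained, ~12 TB of incremental data per day) can be sanity-checked with a quick back-of-the-envelope calculation; the utilization figure below is derived here, not stated in the original:

```python
# Back-of-the-envelope check of the ingestion figures quoted above:
# ~240 MB/s sustained ingest and ~12 TB of incremental data per day.
SECONDS_PER_DAY = 24 * 60 * 60
rate_mb_per_s = 240
daily_capacity_tb = rate_mb_per_s * SECONDS_PER_DAY / 1_000_000  # MB -> TB (decimal units)
ingested_tb = 12
utilization = ingested_tb / daily_capacity_tb  # derived figure, not from the source
print(f"Theoretical daily capacity: {daily_capacity_tb:.1f} TB")
print(f"Utilization at 12 TB/day: {utilization:.0%}")
```

So a 12 TB day consumes roughly half of the framework's theoretical sustained capacity, leaving headroom for growth.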
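The week-over-week spike alerts described in point 4 can be sketched roughly as follows; this is an illustrative sketch, not Cisco's actual tooling, and the application names and the 50% threshold are invented:

```python
# Illustrative sketch (not Cisco's actual tooling) of a week-over-week
# cost-spike alert: flag any application whose current-week spend exceeds
# the prior week's by more than a configurable threshold.
def weekly_spike_alerts(costs_by_app, threshold=0.5):
    """costs_by_app maps app name -> (last_week_cost, this_week_cost).
    Returns apps whose week-over-week growth exceeds `threshold` (50% default)."""
    alerts = {}
    for app, (last_week, this_week) in costs_by_app.items():
        if last_week > 0 and (this_week - last_week) / last_week > threshold:
            alerts[app] = (this_week - last_week) / last_week
    return alerts

spend = {"marketing-etl": (1000.0, 1800.0),    # +80% week over week -> alert
         "supply-chain-bi": (2500.0, 2600.0)}  # +4% week over week  -> no alert
print(weekly_spike_alerts(spend))
```

In practice the same comparison would be driven from the usage dashboards' data and fed into the alerting channel of choice.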

Results observed and testimonials


As a data warehousing platform, Snowflake has significantly surpassed the legacy platform's performance across multiple dimensions, in both reporting and transformations. Transformation jobs that would take 10 or more hours to run now complete within an hour, a 10x performance improvement. This gives our business teams more current data on their dashboards, allowing for more accurate insights based on the latest data. Reports are now on average 4 times faster, with a 4x concurrency improvement, which gives our analysts the flexibility to run reports in parallel based on business needs.

The simple SQL-based technology has reduced the overall time to develop new capabilities or enhance existing ones. Our enterprise stakeholders report about 30% productivity improvement allowing faster time to capability, a key goal with this journey.

Some Testimonials:

◉ “The Cloud will help us deliver insights to drive business growth, agility needed for faster and for more informed decision making, and improve productivity” — Digital Marketing

◉ “Customer Service agents can immediately pull case reports and support Cisco customers on average 20x faster than Hadoop” — Customer Experience

◉ “Virtual Demand Center users on Snowflake receive more accurate customer and partner data and receive leads that are more likely to buy.” — Sales and Marketing

The Cloud Data Platform’s rapidly evolving features also bring additional avenues to improve data governance, enforce more granular data security and harness the power of data – both public and Cisco data, more effectively partner with our customers and partners, and deliver data-driven outcomes.

Source: cisco.com

Wednesday 13 January 2021

Evolving Threat Landscapes: Learning from the SolarWinds Breach


During 2020 we saw a huge expansion and adoption of online services precipitated by a global pandemic. By all accounts, a good proportion of these changes will become permanent, resulting in greater reliance on resilient, secure services to support activities from online banking and telemedicine to e-commerce, curbside pickup, and home delivery of everything from groceries to apparel and electronics.

While this blog typically focuses on topics specific to financial services, the growth of online services has brought with it new and expanding operational risks that have the potential to impact not just a particular entity or industry, but are a serious concern for all private and public industries alike. Recently we witnessed just how serious and threatening a particular risk – the compromise of a widely used supply chain – can be. When we think about supply chain attacks, we tend to conjure up an image of grocery or pharmaceutical products being deliberately contaminated, or some other physical threat against the things we buy or the components that collectively become a finished product. What the recent SolarWinds breach has starkly highlighted, to a much broader audience, is the threat posed to our digital products and the truly frightening cascade effect of a single breach across the digital supply chain of all industries and, in turn, their end customers.

When we embrace a technology or platform and deploy it on-premises, any threat associated with it is now inside our environment, frequently with administrative rights. Although the threat actors may be external to the company, the threat vector is internal. Essentially, it has become an insider threat that is unfettered by perimeter defenses and, if not contained, may move unchecked within the organization.

To illustrate, consider the potential risk to a software solutions provider compromised by a digital supply chain attack. Unlike most physical supply chain attacks, the compromised systems are not tied to a downstream product. The risk of lateral movement in the digital realm, once inside perimeter defenses, is far greater: in a worst-case scenario, malicious actors could gain access to the source code for multiple products. Viewing the inner workings of an application may reveal undisclosed vulnerabilities and create opportunities for future malicious activity and, in extreme cases, may allow an attacker to modify the source code. This in itself represents a potential future supply chain compromise. The entities that have potentially been breached through their use of SolarWinds include both private and public sector organizations. While many do not rely on SolarWinds directly for their business activities, the nature of a supply chain compromise has exposed them to the possibility that one breach can more easily beget another.

What should private and public institutions do to protect themselves? When we examine organizational risk, we look primarily at two things: How can we reduce the probability of a successful attack? And how do we mitigate damage should an attack be successful?

Preparing the environment

◉ Identify what constitutes appropriate access in the environment – which systems, networks, roles, groups or individuals need access to what and to what degree?

◉ Baseline the environment – ensure we know what “normal” operation looks like so we can identify “abnormal” behavior in the environment.

◉ Ensure an appropriate staffing level, clearly define team and individual roles and responsibilities, and ensure staff are trained appropriately. No amount of technology will prevent a breach if the staff are not adequately trained and/or processes break down.

◉ Implement the tools & processes mentioned in later sections. Test the staff, tools & processes regularly – once an attack is underway, it’s too late.

Reducing the probability

◉ Ensure users are who they claim to be, and employ a least privilege approach, meaning their access is appropriate for their role and no more. This can be accomplished by deploying Multi-Factor Authentication (MFA) and a Zero-Trust model, which means that if you are not granted access, you do not have implicit or inherited access.

◉ Enforce that only validated secure traffic can enter, exit or traverse your environment, including to cloud providers, by leveraging NextGen Firewalls (NGFW), Intrusion Prevention/Detection Systems (IPS/IDS), DNS validation and Threat Intelligence information to proactively safeguard against known malicious actors and resources, to name a few.

◉ For developers, implement code validation and reviews to ensure that the code in the repository is the same code that was developed and checked into the repository and enforce access controls to the repository and compilation resources.

Reducing the impact

Former Cisco Chairman John Chambers famously said, “There are two types of companies: those that have been hacked, and those who don’t know they have been hacked.” You can attempt to reduce the probability of a successful attack; however, the probability will never be zero. Successful breaches are inevitable, and we should plan accordingly. Many of the mechanisms are common to our efforts to reduce the probability of a successful attack and must be in place prior to an attack. To reduce the impact of a breach, we must reduce the amount of time an attacker is in the environment and limit the scope of the attack, such as the value and criticality of what is exposed. According to IBM, the average time to detect and contain a breach in 2019 was 280 days, at an average cost of $3.92M, but reducing that exposure to 200 days could save $1M in breach-related costs.

◉ A least privilege or zero-trust model may prevent an attacker from gaining access to the data they seek. This is particularly true for third party tools that provide limited visibility into their inner workings and that may have access to mission critical systems.

◉ Appropriate segmentation of the network should keep an attacker from traversing the network in search of data and/or from systems to mount pivot attacks.

◉ Automated detection of, and response to, a breach is critical to reducing the time to detect. The longer an attacker is in the environment the more damage and loss can occur.

◉ Encrypt traffic on the network while maintaining visibility into that traffic.

◉ Ensure the capability to retrospectively track where an attacker has been to better remediate vulnerabilities and determine their original attack vector.

The SolarWinds breach is a harsh example of the insidious nature of a digital supply chain compromise. It’s also a reminder of the immeasurable importance of a comprehensive security strategy, robust security solution capabilities, and technology partners with the expertise and skills to help enterprises – including financial services institutions – and public institutions meet these challenges confidently.

Tuesday 12 January 2021

Network Security and Containers – Same, but Different


Introduction

Network and security teams seem to have had a love-hate relationship with each other since the early days of IT. Having worked extensively and built expertise with both for the past few decades, we often notice how each have similar goals: both seek to provide connectivity and bring value to the business. At the same time, there are also certainly notable differences. Network teams tend to focus on building architectures that scale and provide universal connectivity, while security teams tend to focus more on limiting that connectivity to prevent unwanted access.

Often, these teams work together — sometimes on the same hardware — where network teams will configure connectivity (BGP/OSPF/STP/VLANs/VxLANs/etc.) while security teams configure access controls (ACLs/Dot1x/Snooping/etc.). Other times, we find that Security defines rules and hands them off to Networking to implement. Many times, in larger organizations, we find InfoSec also in the mix, defining somewhat abstract policy, handing that down to Security to render into rulesets that then either get implemented in routers, switches, and firewalls directly, or else again handed off to Networking to implement in those devices. These days Cloud teams play an increasingly large part in those roles, as well.

All-in-all, each team contributes important pieces to the larger puzzle albeit speaking slightly different languages, so to speak. What’s key to organizational success is for these teams to come together, find and communicate using a common language and framework, and work to decrease the complexity surrounding security controls while increasing the level of security provided, which altogether minimizes risk and adds value to the business.

As container-based development continues to rapidly expand, both the roles of who provides security and where those security enforcement points live are quickly changing, as well.

The challenge

For the past few years, organizations have begun to significantly enhance their security postures, moving from enforcing security only at the perimeter in a North-to-South fashion to enforcement throughout their internal data centers and clouds alike in an East-to-West fashion. Granular control at the workload level is typically referred to as microsegmentation. This move toward distributed enforcement points has great advantages, but it also presents unique new challenges: where will those enforcement points be located, and how will rulesets be created, updated, and deprecated when necessary, all with the same agility with which the business and its developers move, and with precise accuracy?

At the same time, orchestration systems running container pods, such as Kubernetes (K8s), perpetuate that shift toward new security constructs through methods such as the CNI, or Container Network Interface. CNI provides exactly what it sounds like: an interface through which networking can be provided to a Kubernetes cluster. A plugin, if you will. There are many CNI plugins for K8s, such as pure software overlays like Flannel (leveraging VxLAN) and Calico (leveraging BGP), while others tie the worker nodes running the containers directly into the hardware switches they are connected to, shifting the responsibility for connectivity back into dedicated hardware.

Regardless of which CNI is utilized, the instantiation of networking constructs shifts from traditional CLI on a switch to structured text, in the form of YAML or JSON, which is sent to the Kubernetes cluster via its API server.

Now we have the groundwork laid to where we begin to see how things may start to get interesting.

Scale and precision are key

As we can see, we are talking about having a firewall in between every single workload and ensuring that such firewalls are always up to date with the latest rules.

Say we have a relatively small operation with only 500 workloads, some of which have been migrated into containers with more planned migrations every day.

In the traditional environment, this means 500 firewalls to deploy and maintain, minus the workloads already migrated to containers, which need their own way to enforce the necessary rules as well. Now, imagine that a new Active Directory server has just been added to the forest and holds the role of serving LDAP. This means that a slew of new rules must be added to nearly every single firewall, allowing each protected workload to talk to the new AD server over a range of ports: TCP 389, 636, 88, etc. If the workload is Windows-based, it likely also needs MS-RPC open, meaning ports 49152-65535; whereas if it is not a Windows box, those most certainly should not be opened.
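To make that fan-out concrete, here is a rough sketch (the workload inventory, AD server address, and ACL syntax are invented for illustration) of how many ACL-style rules a single new AD server generates across a 500-workload fleet:

```python
# Rough sketch of the rule fan-out described above: one new AD/LDAP server
# means a batch of new allow rules for every protected workload.
# Port list follows the text (LDAP 389, LDAPS 636, Kerberos 88, plus the
# Windows RPC ephemeral range); the inventory below is invented.
LDAP_PORTS = [389, 636, 88]   # LDAP, LDAPS, Kerberos
RPC_RANGE = "49152-65535"     # opened for Windows workloads only

def rules_for(workloads, ad_server="10.0.0.10"):
    rules = []
    for host, is_windows in workloads:
        for port in LDAP_PORTS:
            rules.append(f"permit tcp host {host} host {ad_server} eq {port}")
        if is_windows:
            rules.append(f"permit tcp host {host} host {ad_server} range {RPC_RANGE}")
    return rules

# 500 workloads, 300 of them Windows: 1,800 new rules for one AD server.
fleet = [(f"10.1.{i // 250}.{i % 250}", i % 5 < 3) for i in range(500)]
print(len(rules_for(fleet)))
```

Nearly two thousand rule updates from a single infrastructure change is exactly the kind of churn that makes manual firewall management untenable at scale.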

It quickly becomes noticeable how physical firewalls are untenable at this scale in traditional environments, and how even dedicated virtual firewalls still present the complex challenge of requiring centralized policy with distributed enforcement. Neither does much to aid our need to secure East-to-West traffic within the Kubernetes cluster, between containers. However, one might accurately surmise that any solution business leaders are likely to consider must handle all scenarios equally from a policy creation and management perspective.

It seems apparent that this centralized policy must be hierarchical in nature, defined using natural human language such as “dev cannot talk to prod” rather than the archaic and unmanageable method of IP/CIDR addressing like “deny ip 10.4.20.0/24 10.27.8.0/24”, and yet the system must still translate that natural language into machine-understandable CIDR addressing.
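As a minimal sketch of that translation step, assuming invented label-to-subnet mappings, an intent like “dev cannot talk to prod” might be rendered into CIDR-level deny rules as follows:

```python
# Minimal sketch of translating a natural-language intent into CIDR rules.
# The label-to-subnet mappings are invented for illustration; a real policy
# engine would resolve labels dynamically from inventory.
SUBNETS = {"dev": ["10.4.20.0/24"],
           "prod": ["10.27.8.0/24", "10.27.9.0/24"]}

def render_deny(src_label, dst_label):
    """Expand 'src cannot talk to dst' into one deny rule per subnet pair."""
    return [f"deny ip {s} {d}"
            for s in SUBNETS[src_label]
            for d in SUBNETS[dst_label]]

print(render_deny("dev", "prod"))
```

Note how a single human-readable intent fans out into one machine rule per source/destination subnet pair, and stays correct as subnets are added or removed from the mappings.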

The only way this works at any scale is to distribute those rules into every single workload running in every environment, leveraging the native and powerful built-in firewall co-located with each. For containers, this means the firewalls running on the worker nodes must secure traffic between containers (pods) within the node, as well as between nodes.

Business speed and agility

Back to our developers.

Businesses must move at the speed of market change, which can be dizzying at times. They must be able to write code, check it into an SCM like Git, and have it pulled, automatically built, tested, and, if it passes, pushed into production. If everything works properly, we’re talking between five minutes and a few hours, depending on complexity.

Whether five minutes or five hours, I have personally never witnessed a corporate environment where a ticket could be submitted to have security policies updated to reflect the new code requirements, and even hope to have it completed within a single day, forgetting for a moment about input accuracy and possible remediation for incorrect rule entry. It is usually between a two-day and a two-week process.

This is absolutely unacceptable given the rapid development process we just described, not to mention the dissonance experienced by disaggregated people and systems. This method is rife with problems and is the reason security is so difficult, cumbersome, and error-prone within most organizations. As we shift to a more remote workforce, the problem becomes even further compounded, as relevant parties cannot so easily congregate in “war rooms” to collaborate through the decision-making process.

The simple fact is that policy must accompany code and be implemented directly by the build process itself, and this has never been truer than with container-based development.

Simplicity of automating policy

With Cisco Secure Workload (Tetration), automating policy is easier than you might imagine.

Think with me for a moment about how developers are working today when deploying applications on Kubernetes. They will create a deployment.yml file, in which they are required to input, at a minimum, the L4 port on which containers can be reached. The developers have become familiar with networking and security policy to provision connectivity for their applications, but they may not be fully aware of how their application fits into the wider scope of an organizations security posture and risk tolerance.

This is illustrated below with a simple example of deploying a frontend load balancer and a simple webapp that’s reachable on port 80 and will have some connections to both a production database (PROD_DB) and a dev database (DEV_DB). The sample policy for this deployment can be seen below in this `deploy-dev.yml` file:

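As a sketch, such a `deploy-dev.yml` might look like the following; everything beyond the port-80 webapp and frontend load balancer described above (names, image, labels, replica count) is an illustrative assumption:

```yaml
# Sketch of a deploy-dev.yml: a webapp reachable on port 80 behind a
# frontend load balancer. Names, image, and labels are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: webapp
    env: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
        env: dev
    spec:
      containers:
      - name: webapp
        image: example/webapp:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
```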

Now think of the minimal effort it would take to code an additional small YAML file specified as kind: NetworkPolicy and have it automatically deployed by our CI/CD pipeline at build time to our Secure Workload policy engine, which is integrated with the Kubernetes cluster, exchanging label information that we use to specify source or destination traffic, indeed even specifying the only LDAP user that can reach the frontend app. A sample policy for the above deployment can be seen below in this ‘policy-dev.yml’ file:

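As a sketch, such a `policy-dev.yml` might look like the following, using the standard Kubernetes NetworkPolicy resource; the label names and the database port are assumptions, and Secure Workload-specific extensions such as LDAP-user matching are omitted here:

```yaml
# Sketch of a policy-dev.yml (kind: NetworkPolicy) for the deployment above:
# allow ingress to the webapp on port 80, and egress only to the dev
# database. Label names and the database port (5432) are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-dev-policy
spec:
  podSelector:
    matchLabels:
      app: webapp
      env: dev
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: dev-db
    ports:
    - protocol: TCP
      port: 5432
```

Because no egress rule matches the production database, traffic from this dev workload to PROD_DB is denied by default once the policy is applied.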

As we can see, the level of difficulty for our development teams is quite minimal, essentially in line with the existing toolsets they are familiar with, yet it yields immense value for our organizations, because the policy will be automatically combined with and checked against all existing security and compliance policy as defined by the security and networking teams.

Key takeaways


Enabling developers to include policy co-located with the software code it is meant to protect, and automating the deployment of that policy with the same CI/CD pipelines that deploy their code, provides businesses with speed, agility, versioning, and policy ubiquity in every environment, and ultimately gives them a strong strategic competitive advantage over legacy methods.

Monday 11 January 2021

McMahons Builders Providers Deliver Exceptional Customer Experiences for Another 190 Years


McMahons Builders Providers is one of Ireland’s largest independent building providers, offering quality building supplies and do-it-yourself materials to the trade and public since 1830.

Reliable and secure WAN connectivity key to continued success

With 14 retail stores spread across the Republic of Ireland and Northern Ireland plus a roof truss manufacturing plant, WAN connectivity is critical to McMahons Builders Providers’ operations. WAN outages result in orders that can’t be taken through its centralized point-of-sale system, disrupting sales and impacting customer experiences. McMahons needs WAN connectivity that is fully redundant, secure, and manageable. And for this, McMahons turned to Logicalis and Cisco.

Logicalis Managed Services provided a one-stop shop to assess, design, and build a new WAN and server environment, greatly improving McMahons’ network reliability and security while simplifying overall manageability.

McMahons was due to upgrade its aging connectivity and server environment. Its new IT manager requested a move to a more centralized environment. McMahons no longer wanted server infrastructure in its retail stores. Instead, management wanted a fully redundant and secure centralized system that would allow expansion while reducing cooling and power needs.  McMahons also wanted offsite backup and disaster recovery, all at an affordable price. Decision makers looked at cloud solutions but favored the Logicalis and Cisco design.

A solution, not boxes


In answer to these requests, Logicalis didn’t just sell McMahons a host of boxes, leaving it up to the company to figure out how to assemble and manage an optimized solution. Rather, Logicalis worked with McMahons Builders Providers to truly understand its business and technical challenges and then designed and implemented a Cisco-based solution to meet its needs.

For example, all site-to-site traffic flows through Cisco Meraki firewalls, adding high availability to the McMahons WAN; at each site, the Meraki solution decides whether to route traffic over MPLS or VPN. The Meraki solution also gives McMahons improved visibility into what is happening across its network, including all store and corporate locations. Security and performance of the network have been greatly enhanced with the adoption of Meraki as the standard site firewall at McMahons.
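The per-site path-selection behavior described above can be sketched in a few lines. This is a deliberately simplified illustration of preferred-path failover, not Meraki's actual algorithm or any real API:

```python
# Simplified sketch of per-site WAN path selection with failover.
# The preference order and path names are illustrative only.
def choose_path(mpls_up: bool, vpn_up: bool, prefer: str = "mpls") -> str:
    """Pick a WAN path, failing over when the preferred link is down."""
    paths = {"mpls": mpls_up, "vpn": vpn_up}
    if paths.get(prefer):
        return prefer                       # preferred link is healthy
    for name, up in paths.items():
        if up:
            return name                     # fail over to any live link
    raise RuntimeError("site is isolated: no WAN path available")

print(choose_path(mpls_up=False, vpn_up=True))  # prints "vpn"
```

The point of the design is that the decision is made independently at each site, so an MPLS outage at one store degrades only that store's preferred path, not the whole WAN.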

As another example, Logicalis implemented a Cisco HyperFlex solution with offsite Cisco servers for backup and disaster recovery. This helped reduce rack space, cooling requirements and power consumption, while minimizing any day-to-day management overhead. With this onsite steady state environment, McMahons gained more control of its IT resources while also reducing overall costs.

Logicalis also leveraged Cisco’s broad network of partners to enhance the overall solution. Consider this scenario: A Cisco server running VMware Hypervisor is located in a remote disaster recovery site, providing offsite disaster recovery. In addition, a separate Cisco server provides all system backups, running Veeam Backup and Replication software. Together, the Cisco and Veeam solution helps keep McMahons applications and data available 24/7, giving the company a reliable backup and recovery solution that simply works, requiring limited IT staff intervention.

Finally, as part of its fully managed service offering, Logicalis also provides on-going Tier 3 support, helping to ensure the reliability and security of the infrastructure.

Laying the foundation for continued years of success

With its new connectivity and server environment, store associates don't experience point-of-sale downtime that might inhibit their ability to process transactions. That means customers can buy merchandise any time the store is open. In addition, McMahons has experienced increased performance and reliability of its infrastructure, along with reduced cooling and power consumption.

The McMahons IT team manages the new Meraki- and HyperFlex-based environments on a day-to-day basis. They are much more easily managed than the earlier environment, which is important to McMahons because it has a small IT department. For example, in the past, IT staff would have had to come in over the weekend to do a firewall upgrade. Now, an IT staff member can perform a firewall upgrade remotely through an app on an iOS device.

IT staff productivity has increased as well. Trips to the computer room are now a rare occurrence, and visits to remote sites to manage IT infrastructure have significantly decreased. All these factors help to reduce ongoing IT costs and enable IT to focus on new projects and customer service improvements.

Staying in business for 190 years is no small feat. No doubt, continuing to satisfy customers day in and day out is key to this success. By partnering with Logicalis Managed Services and Cisco, McMahons Builders Providers is at the cutting edge of its digital journey to providing exceptional customer experiences.

Sunday 10 January 2021

Security Outcomes Report: Top Findings from Around the World


The Security Outcomes Study has been out for a few weeks now and I've had time to sit back and read it over with coffee in hand. The report empirically measures which factors drive the best security outcomes. The part that really caught my attention from the outset was that it was based on a survey in which the respondents did not know it was for Cisco. I think this point absolutely must be highlighted right from the beginning. It was interesting to see how the respondents set themselves apart from each other when a geographic lens was focused on the collected data.

To be quite clear, there were many similarities between the regions. Whether in APJC, EMEAR, or the Americas, the data showed a significant push toward technology refresh in every region. The study shows a significant improvement in security when organizations take a proactive approach to refreshing their IT and security technology. This makes sense: rather than continuing to operate systems and software that may be deprecated, organizations that created refresh projects could mitigate a significant number of security issues that had been lingering for a multitude of reasons, alleviating some of their accumulated security debt.

Now as we break out into different regions, we see that the priorities diverge. The data collected from APJC shows that some of the focal points (the squares in the matrix with the darkest shades of blue), such as building executive confidence in threat detection in order to secure more budget, are a challenge. This is the top-rated point among respondents in Asia for this report.


The data from EMEAR, however, shows an increased focus on proactive tech refresh with the goal of meeting compliance regulations. Here too, as in APJC, cost effectiveness is important. Timely incident response also ranks high for managing the top security risks facing organizations. The top data point for EMEAR is, hands down, working to meet compliance regulations, at 11.2%.


Now as we shift our discussion to the Americas, the priorities shift again. In contrast to the APJC and EMEAR regions, threat detection and security budgeting do not register in the Americas data. Two items leap off the page as priorities in the Americas. The first is a focus on running a cost-effective shop with well-integrated technology. The second, which ranks highest overall, is the need to retain security talent to help manage those well-integrated technology deployments.


This survey was a bit of an eye-opener for me personally, as I did not expect a proactive technology refresh program to be as much of a focus for organizations as it is. However, it does make sense: a tech refresh program goes a long way toward alleviating accrued security debt and closing out issues that risk management processes have not been able to resolve.

This was really rather amazing reading for a survey-driven study, and my hat is off to the team who drove this project and the incredible insights it provides, not only from a sheer statistical point of view but also from the perspective of the regional breakouts.

Saturday 9 January 2021

Trustworthy Networking is Not Just Technological, It’s Cultural – Part 3


Part 3: Developing a Culture of Trust

In my two previous posts on the topic of trustworthy networking, I’ve focused on the multiple technologies Cisco designs and embeds into all our hardware and software and how they work together to defend the network against a variety of attacks. I explored how it’s not just about the trust technologies but also about the culture of trustworthy engineering that is the foundation of all that we do. In this post I’ll focus on how Cisco builds and maintains a culture of trustworthiness.

But first, what is culture? What does trustworthy mean? Just as there is a diversity of human societies, there are different characterizations of culture and trust.

Fusing several definitions, we can summarize culture as:

◉ The quality in a person or society that arises from a concern for what is regarded as excellent in arts, letters, manners, scholarly pursuits, etc. and provides important social and economic benefits.

◉ Culture enhances our quality of life and increases overall well-being for both individuals and communities.

Trustworthy is another word with a variety of implications:

◉ Trust describes something you can rely on, and the word worthy describes something that deserves respect.

◉ Trust is intangible – it is an intellectual asset, a skill, and an influencing power for leaders. Showing trustworthiness by competence, integrity, benevolence, and credibility makes a difference in daily leadership work.

◉ Trustworthy describes something you can believe in — it’s completely reliable.

Therefore, a culture of trustworthiness provides a consistent approach to designing, building, delivering, and supporting secure products and solutions that customers can rely on to “do what they are expected to do in a verifiable way”. When engineers approach product design and development with integrity, build security into product functionality, and ensure the safety of customer data from day one of a project, the outcome has an excellent chance of being trustworthy. Let’s look at how security leadership permeates Cisco culture with reliability and credibility through education, social contracts, and strict adherence to the Cisco Secure Development Lifecycle (CSDL).

A Culture of Trustworthiness Starts with Continuous Security Education

Designing trustworthy networks requires a commitment to professional improvement, with deep learning in secure technologies, threat awareness, and industry-standard principles. At Cisco this education starts with the Cisco Security Space Center program, which every employee and contractor must complete to varying levels of proficiency depending on their jobs. To date, over 75,000 people in the Cisco workforce have completed the required levels of security training. This greatly increases security awareness throughout the organization. It also gives the workforce a common language for discussing the principles of trustworthy design and support.

Pervasive cultural security also requires a legion of advocates, inclusive of Cisco employees, vendors, partners, and customers. For example, embedded in every aspect of engineering are Security Advocates who advise, monitor, and report on the implementation of trustworthy security processes. Advocates pride themselves on having a thorough understanding of Cisco Security Space Center training. Security and Vulnerability Audits provide assurance that CSDL is followed, and problems uncovered during the development and testing cycle cannot be ignored. Audit teams report not to engineering management but to the C-suite, ensuring that problems are completely fixed or a release is red-lighted until they are remediated. This is another example of a culture of trust that permeates across functional departments all the way to the C-level, all in service of protecting the customer.

Threat modeling is another skillset reinforced through training and applied consistently throughout the development lifecycle. It represents a repeatable process for identifying, understanding, and prioritizing solution security risks. Engineers analyze external interfaces, component interactions, and the flow of data through a system to identify potential weaknesses where solutions might be compromised by external threats.
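As a rough illustration of one threat-modeling step, the sketch below enumerates the data flows of a hypothetical system and flags those that cross a trust boundary without encryption. The components and attributes are invented for this example, not taken from any Cisco process:

```python
# Hypothetical threat-modeling sketch: model each data flow and flag
# external exposure that is not mitigated by encryption.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str
    dest: str
    crosses_trust_boundary: bool   # does the flow leave a trusted zone?
    encrypted: bool                # is the flow protected in transit?

flows = [
    DataFlow("browser", "web-frontend", True, True),
    DataFlow("web-frontend", "auth-service", False, True),
    DataFlow("auth-service", "legacy-db", True, False),  # potential weakness
]

def find_weaknesses(flows):
    """Return flows that cross a trust boundary unencrypted."""
    return [f for f in flows if f.crosses_trust_boundary and not f.encrypted]

for f in find_weaknesses(flows):
    print(f"review: {f.source} -> {f.dest} crosses a trust boundary unencrypted")
```

A real threat model weighs many more factors (authentication, data sensitivity, attacker capability), but the repeatable pattern is the same: enumerate interfaces and flows, then mechanically surface the ones that need scrutiny.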

Development security policies not only set the rules for protecting the organization, but also protect investments across people, processes, and technology.

◉ Employee and supplier codes of conduct are signed annually to keep people focused on the importance of trust and their promise to deliver secure products across the value chain and never intentionally do harm.

◉ Enterprise information security and data protection policies are aligned with security standards like ISO 27001.

◉ Using site audits to continuously monitor Cisco and partner development properties ensures that physical security policies—such as camera monitoring, security checkpoints, alarms and electronic or biometric access control—are being maintained.

◉ Data protection and incident response policies are available to customers to help them understand the processes Cisco has in place to protect their data privacy and the actions that will be taken should a data breach occur.

◉ The Product Security Incident Response Team (PSIRT) is independent from engineering and is critical to keeping an unbiased watchful eye on all internally and externally developed code. Anyone at Cisco, customers, and partners can report security issues in shipping code and be assured that they will be logged and addressed appropriately.

Tailoring Cisco Secure Development Lifecycle (CSDL) to Solution Type

We examined the Cisco Secure Development Lifecycle in Part 1 of this series, but considering how rapidly networks are evolving to accommodate “data and applications everywhere” and the dispersal of the workforce from campus environments, it deserves another look in relation to the culture of trust. Development techniques must constantly evolve to address the emerging security threats that result from this increasingly dispersed workplace. The evolving workforce means that secure development processes must be adapted to the type of solution and where it is deployed:

◉ on-premises networking device

◉ appliance running application

◉ network controller/management

◉ application running in the cloud

◉ combination of on-premises and cloud, also known as hybrid cloud

During development, engineers are trained to approach each of these according to the end deployment. For example, standardized toolsets such as the Cisco Cloud Maturity Model (CCMM) provide a consistent method to assess the quality of all of Cisco’s SaaS offerings. It evaluates many quality attributes, such as availability, reliability, security, and scalability. CCMM provides a quantitative and standardized method to gauge the health of all Cisco cloud offerings.

Infusing a Culture of Trust Throughout the Value Chain

If a trustworthy culture stopped at the walls of Cisco and the minds of our employees, there would still be room for bad actors and malicious code to wreak havoc. That’s why Cisco extends our trustworthy principles to partners and suppliers throughout the value chain. We strive to put the right security in the right place at the right time to continually assess, monitor, and improve the security of our value chain throughout the entire lifecycle of Cisco solutions.

Cisco Trust Value Chain

Cisco value chain security continually assesses, monitors, and improves the security of our partners who are third-party providers of hardware components, assembly, and open-source software that are an integral part of our solutions’ life cycles.

We strive to ensure that our solutions are genuine and are not counterfeited or tainted during the manufacturing and shipment processes. The steps Cisco and our partners adhere to help ensure that our solutions operate as customers direct them to and are not controlled or accessed by unauthorized rogue agents or software threats.

These investments in our people and partners, along with services like Technology Verification, help Cisco provide a comprehensive plan that covers how and what we are doing to support the security, trust, privacy, and resiliency of our customers. Earning customer trust is about being transparent and accountable as we strive to connect everything securely.

To understand our complete Trustworthy Networking story, please refer to Part 1: The Technology of Trust and Part 2: How Trustworthy Networking Thwarts Security Attacks of this blog series, as well as the Cisco Trust Center web site.