Sunday, 10 April 2022

Supercharging indoor IoT management – Cisco DNA Spaces IoT Services Policy Engine

IoT Management at scale

Cisco DNA Spaces IoT Services provides tools to manage a myriad of IoT devices easily. However, the management of these IoT devices was still a manual operation. Each IoT device had to be individually onboarded and configured. If there was an error, it needed to be manually reconfigured. This becomes cumbersome as the number of managed devices increases. Furthermore, manual maintenance for a large number of managed devices is equally taxing. How do we know when a device is about to run out of battery? How do we ensure that customer experience is not impacted if someone moves a beacon from one zone to another? How do we roll out firmware upgrades without impacting operation? Even with IoT Management, these problems remained intractable at scale.

IoT Services Policy Engine

Enter Cisco DNA Spaces IoT Services Policy Engine. IoT Service policies are use-case-based and address unique problems that the scale and complexity of a large IoT deployment entail. Devices no longer need to be individually onboarded to deploy a use case. Customized policies can be created beforehand and associated with a class of devices at a specific location. Whenever a new device is turned on, it inherits the policy associated with that location and gets auto-configured. IoT Services even provides policy templates to support single-click use case deployment.

Groups

Policies are configured to act on device groups. Classes of devices can be logically organized into groups. Groups can be created manually or based on some logical criteria such as the beacon location, manufacturer, or MAC address prefixes. Let’s say a customer wants to enable Asset Tracking on all the beacons in a certain zone of a building. In that case, the customer first creates a dynamic group targeting the zone. Whenever DNA Spaces locates a beacon in that zone, it automatically assigns it to the group. Group assignment for a beacon gets propagated through firehose notifications as well.
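The criteria-based matching described above can be sketched in a few lines. This is a hypothetical illustration only: the class names and fields below are assumptions, not the actual DNA Spaces API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Beacon:
    mac: str
    manufacturer: str
    zone: str

@dataclass
class DynamicGroup:
    name: str
    zone: Optional[str] = None
    manufacturer: Optional[str] = None
    mac_prefix: Optional[str] = None

    def matches(self, beacon: Beacon) -> bool:
        # A beacon joins the group only if it satisfies every criterion
        # the group defines; unset criteria are ignored.
        if self.zone and beacon.zone != self.zone:
            return False
        if self.manufacturer and beacon.manufacturer != self.manufacturer:
            return False
        if self.mac_prefix and not beacon.mac.startswith(self.mac_prefix):
            return False
        return True

def assign_groups(beacon: Beacon, groups: list) -> list:
    """Return the names of the groups a newly located beacon is auto-assigned to."""
    return [g.name for g in groups if g.matches(beacon)]
```

With groups like `DynamicGroup("lobby-assets", zone="lobby")`, any beacon DNA Spaces locates in that zone would inherit the group, and with it the associated policy.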

Fig #1: Dynamic Grouping

Policies


Policies help in rolling out use cases across device groups. Each policy solves a specific customer use case and comes with a suggested policy template which helps in rolling out a policy across a group easily. Customers can thus deploy a policy once and then DNA Spaces IoT Services ensures that the use case is always enforced across all the targeted beacons. This completely eliminates the need for manual onboarding or maintaining IoT devices.

Fig #2: Policy Configuration

Once a policy is deployed, IoT Services also displays the number and list of devices on which the policy got applied.

Fig #3: Policy Device count

Alerts


When a policy is applied, or when it fails to apply, an alert is generated. Alerts may be system alerts that can be viewed in the DNA Spaces dashboard or notification alerts such as emails. Notification alerts are batched and delivered every 15 minutes.
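The 15-minute batching of notification alerts can be sketched as below. This is an illustrative model only; the class and method names are assumptions, not product code.

```python
import time

BATCH_INTERVAL = 15 * 60  # deliver notification alerts every 15 minutes

class AlertBatcher:
    def __init__(self, now=time.time):
        self.now = now          # injectable clock, to keep the sketch testable
        self.pending = []
        self.last_flush = now()

    def add(self, alert: str) -> None:
        """Queue an alert for the next batched delivery."""
        self.pending.append(alert)

    def flush_if_due(self):
        """If 15 minutes have elapsed since the last delivery, return all
        pending alerts as one batch (e.g. a single email); otherwise None."""
        if self.now() - self.last_flush < BATCH_INTERVAL or not self.pending:
            return None
        batch, self.pending = self.pending, []
        self.last_flush = self.now()
        return batch
```

A scheduler would call `flush_if_due()` periodically; system alerts, by contrast, would surface in the dashboard immediately.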

Fig #4: Policy Alert

Alerts are especially important for monitoring and security-based policies such as battery monitoring or beacon spoofing.

A New Era


Cisco DNA Spaces IoT Services Policy ushers in a new era of hands-free enterprise IoT Management. It brings together unmatched processing and machine intelligence to deliver a seamless management experience hitherto unseen in enterprise IoT. With new policies being added over time, it is destined to become a bedrock for IoT Management.

Source: cisco.com

Saturday, 9 April 2022

Addressing the noisy neighbor syndrome in modern SANs

The noisy neighbor syndrome on cloud computing infrastructures

The noisy neighbor syndrome (NNS) represents a problematic situation often found in multi-tenant infrastructures. IT professionals associate this figurative expression with cloud computing. It becomes manifest when a co-tenant virtual machine monopolizes resources such as network bandwidth, disk I/O, or CPU and memory, ultimately degrading the performance of other VMs and applications. Without proper safeguards, appropriate and predictable application performance is difficult to achieve, resulting in end-user dissatisfaction.

The noisy neighbor syndrome originates from the unfair sharing of common resources. In a world of finite resources, if someone takes more than their fair share, others are left with the leftovers. To some extent, it is acceptable that some VMs utilize more resources than others. However, this should not come with a reduction in performance for the less demanding VMs. This is arguably one of the main reasons many organizations prefer not to virtualize their business-critical applications: it reduces the risk of exposing those systems to noisy neighbor conditions.

To tackle the noisy neighbor syndrome on hosts, different solutions have been considered. One possibility is reserving resources for applications. The downside is a reduction in average infrastructure utilization. Moreover, it increases cost and imposes artificial limits on the vertical scaling of some workloads. Another possibility is rebalancing and optimizing workloads on hosts in a cluster. Tools exist to resize or reallocate VMs to hosts for better performance. All this happens at the expense of an additional level of complexity.

In other cases, greedy workloads might be best served on a bare metal server rather than virtualized. Using bare metal instead of virtualized applications can address the noisy neighbor challenge at the host level. This is because bare metal servers are single tenant, with dedicated CPU and RAM resources. However, the network and the centralized storage system remain shared resources and so multi-tenant. Infrastructure over-commitment due to greedy workloads remains a possibility and that would limit overall performance.

The noisy neighbor syndrome on storage area networks

Generalizing the concept, the noisy neighbor syndrome can also be associated with storage area networks (SANs). In this case, it is more typically described in terms of congestion. There are four well-categorized situations determining congestion at the network level. They are poor link quality, lost or insufficient buffer credits, slow drain devices and link overutilization.

The noisy neighbor syndrome does not manifest in the presence of poor link quality, lost or insufficient buffer credits, or slow drain devices, because those are essentially underperforming links or devices. The noisy neighbor syndrome is instead primarily associated with link overutilization. At the same time, the noisy neighbor terminology refers to a server, not a disk, because communication, whether reads or writes, originates from initiators, not targets.


The SAN is a multi-tenant environment, hosting multiple applications and providing connectivity and data access to multiple servers. The noisy neighbor effect occurs when a rogue server or virtual machine uses a disproportionate quantity of the available network resources, such as bandwidth. This leaves insufficient resources for other end points on the same shared infrastructure, causing network performance issues.

The treatment for the noisy neighbor syndrome may happen at one or multiple levels, such as host, network, and storage level, depending on the specific circumstances. A common situational challenge presents when a backup application monopolizes bandwidth on ISLs for a long period of time. This may come to the performance detriment of other systems in the environment. In fact, other applications will be forced to reduce throughput or increase their wait time. This challenge is best solved at the network level. Another example is when a virtualized application is monopolizing the shared host connection. In this case, the solution might involve remediation at both the host and network level. Intuitively, this phenomenon becomes more pervasive as the number of hosts and applications increases in data center environments.

Strategies to solve the noisy neighbor syndrome


The solution to the noxious noisy neighbor syndrome is not found by statically assigning resources to all applications in a democratic way. Not all applications need the same quantity of resources or have the same priority. Dividing available resources into equal parts and assigning them to applications would not do justice to the heaviest, often mission-critical, ones. Also, the need for resources may change over time and can be hard to predict with any accuracy.

The true solution for silencing noisy neighbors comes from ensuring any application in a shared infrastructure receives the necessary resources when needed. This is possible by designing and properly sizing the data center infrastructure. It should be able to sustain the aggregate load at any time and include ways to dynamically allocate resources based on needs. In other words, instead of provisioning your datacenter to average load, you should design to deal with the peak load or close to that.

At the storage network level, the best way to solve the noisy neighbor challenge is by doing a proper design and adding bandwidth, as well as frame buffers, to your SAN. At the same time, try making sure storage devices can handle input/output operations per second (IOPS) above and beyond the typical demand. Multiport all flash storage arrays can reach IOPS levels in the range of millions. Their adoption has virtually eliminated any storage I/O contention issues on the controllers and media, shifting the focus onto storage networks.

Overprovisioning of resources is an expensive strategy and not often a possibility. Some companies prefer to avoid this and postpone investments. They strive to find a balance between the cost of infrastructure and an acceptable level of performance. When shared resources are insufficient to satisfy all needs simultaneously, a possible line of defense comes from prioritization. This way, mission-critical applications will be served appropriately, while accepting that less important ones may get impacted.

Features like network and storage quality of service (QoS) can control IOPS and throughput for applications, limiting the noisy neighbor effect. By setting IOPS limits, port rate limits and network priority, we can control the quantity of resources each application receives. Therefore, no single server or application instance monopolizes resources and hinders the performance of others. The drawback of the QoS approach is the accretive administrative burden. It takes time to determine priority of individual applications and to configure the network and storage devices accordingly. This explains the low adoption of this methodology.
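An IOPS cap of the kind such QoS features enforce is commonly modeled as a token bucket: I/Os spend tokens, and tokens refill at the configured rate. The class below is a minimal sketch of that idea, not any vendor's implementation; names and numbers are illustrative.

```python
class IopsLimiter:
    """Token-bucket sketch of a per-application IOPS cap."""

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)  # start with one second's budget
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Admit one I/O if the application is under its IOPS budget."""
        # Refill tokens in proportion to elapsed time, capped at one
        # second's worth of budget.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over budget: the I/O is queued or rejected
```

Setting a lower `max_iops` for less important applications is exactly the manual prioritization work described above, which is why the administrative burden of this approach grows with the number of applications.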

Another consideration is that traffic profile of applications changes over time. The fast detection and identification of SAN congestion might not be sufficient. The traditional methods for fixing SAN congestion are manual and unable to react quickly to changing traffic conditions. Ideally, always prefer a dynamic solution for adjusting the allocation of resources to applications.

Cisco MDS 9000 to the rescue


Cisco MDS 9000 Series of switches provides a set of nifty capabilities and high-fidelity metrics that can help address the noisy neighbor syndrome at the storage network layer. First and foremost, the availability of 64G FC technology coupled with a generous allocation of port buffers proves helpful in eliminating bandwidth bottlenecks, even on long distances. In addition, a proper design can alleviate network contention. This includes the use of a low oversubscription ratio and making sure ISL aggregate bandwidth matches or exceeds overall storage bandwidth.

Several monitoring options, including Cisco Port-Monitor (PMON) feature, can provide a policy-based configuration to detect, notify, and take automatic port-guard actions to prevent any form of congestion. Application prioritization can result from configuring QoS at the zone level. Port rate limits can impose an upper bound to voracious workloads. Automatic buffer credit recovery mechanisms, link diagnostic features and preventive link quality assessment using advanced Forward Error Correction techniques can help to address congestion from poor link quality or lost and insufficient buffer credits. The list of remedies includes Fabric Performance Impact Notification and Congestion Signals (FPIN), when host drivers and HBAs will support that standard-based feature. But there is more.

Cisco MDS Dynamic Ingress Rate Limiting (DIRL) software prevents congestion at the storage network level with an exclusive approach, based on an innovative buffer to buffer credit pacing mechanism. Not only does Cisco MDS DIRL software immediately detect situations of slow drain and overutilization in any network topology, but it also takes proper action to remediate. The goal is to reduce or eliminate the congestion by providing the end device the amount of data it can accept, not more. The result will be a dynamic allocation of bandwidth to all applications. This will eventually eliminate congestion from the SAN. What is exceedingly interesting about DIRL is its being network-centric and not requiring any compatibility with end hosts.
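The feedback idea behind this kind of pacing can be sketched as a simple control loop: back off the congested port's ingress rate quickly, then recover gradually once congestion clears. The percentages and floor below are illustrative assumptions, not Cisco's actual DIRL algorithm.

```python
def adjust_ingress_rate(rate_pct: float, congested: bool,
                        reduce_by: float = 25.0, recover_by: float = 5.0,
                        floor: float = 10.0) -> float:
    """Return the new ingress rate (as % of line rate) for the next interval."""
    if congested:
        # Back off quickly so the end device receives only the amount
        # of data it can accept, not more.
        return max(floor, rate_pct - reduce_by)
    # No congestion observed: creep back toward full line rate.
    return min(100.0, rate_pct + recover_by)
```

Iterating this per monitoring interval is what lets the network self-tune to a steady state where the rogue host is paced and the innocent hosts are unaffected.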

The diagram below shows a noisy neighbor host becoming active and monopolizing network resources, causing throughput degradation for two innocent hosts. Let’s now enable DIRL on the Cisco MDS switches. When repeating the same scenario, DIRL prevents the same rogue host from monopolizing network resources and gradually adjusts it to a performance level where the innocent hosts see no impact. With DIRL, the storage network will self-tune and reach a state where all the neighbors happily coexist.


The trouble-free operation of the network can be verified using the Nexus Dashboard Fabric Controller, the graphical management tool for Cisco SANs. Its slow drain analysis menu can report situations of congestion at the port level and gives administrators an easy-to-interpret color-coded display. Similarly, the deep traffic visibility offered by the SAN Insights feature can expose metrics at the FC flow level in real time. This will further validate optimal network performance or help evaluate possible design improvements.

Final note


In conclusion, the Cisco MDS 9000 Series provides all the capabilities necessary to counter and eliminate the noisy neighbor syndrome at the storage network level. By combining proper network design with high-speed links, congestion avoidance techniques such as DIRL, slow drain analysis, and SAN Insights, IT administrators can deliver an optimal data access solution on a shared network infrastructure. And don’t worry if your network and storage utilization does not come close to 100%; in a way, that is your safeguard against the noisy neighbor syndrome.

Source: cisco.com

Thursday, 7 April 2022

Three Reasons to Prepare for Your Next Broadband Infrastructure Investment


Two years after the COVID-19 pandemic proved the internet invaluable with so many of us working, shopping, educating our children, and accessing health care – all from home – we’re still faced with a digital divide between those who have access to broadband Internet and those who don’t. Efforts by service providers to upgrade their network infrastructure to handle increased load have been both rapid and impressive, but more is needed. There remains a significant percentage of the population lacking sufficient broadband to fully participate in the digital economy and society. This must change, but how?

There are three areas we need to focus on if we hope to expand much-needed internet access to those who lack it: bridging the digital divide, locating and securing available funds, and improved expertise and planning. But first let’s examine the numbers as related to the ever-increasing value of the internet and those who lack full access to its benefits.

In March 2022, Cisco released its Global Broadband Index Report surveying more than 60,000 workers across 30 different markets about their home broadband access, quality, and usage. Below are a few stats that caught my eye:

• 84% use the internet at home for four or more hours each day

• 78% agree that everyone should be able to securely connect to fast and reliable internet regardless of location

• 65% believe access to affordable and reliable broadband will become a major issue in the future

• 58% state that they were unable to access critical services during lockdown due to unreliable internet

In the United States, about 20 million people lack access to high-speed broadband services, and some 17 million school children don’t have internet access at home. Ensuring broadband access and affordability is critical to closing the digital divide. The problem is significantly greater in rural areas, where about 19.3% of the total U.S. population resides. In rural areas, the cost to build and deliver broadband internet services is much higher due to lower population density, harsher environments, and other factors.

Bridging the digital divide is a great idea, but who’s going to pay for it?

The good news is the U.S. Federal Government is providing another $65 billion in grant dollars on top of the $38 billion in pre-pandemic grants for broadband internet buildouts. Along with wireless expansion, the government’s funding focus has also shifted to fiber, and this new money, provided by the Infrastructure Investment and Jobs Act (IIJA), is part of a five-year program. This funding makes it easier to scale your network infrastructure because, with the government helping to fund the last mile, service providers can upgrade their middle mile as well to support additional users and increased bandwidth. Using federal grants helps you build out the network backbone that might otherwise have been too costly.

The additional $65 billion seeks to address the digital divide and specifically focuses on groups of people that are “underserved” and “unserved” as defined in the law. By underserved we’re talking about those who are served by lower speed broadband that doesn’t exceed a certain threshold, for example 100 Mbps download by 20 Mbps upload. Unserved refers to those having internet speeds below 25 Mbps download by 3 Mbps upload.
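The two thresholds named in the law translate directly into a simple classification; the function name below is ours, for illustration.

```python
def classify_broadband(down_mbps: float, up_mbps: float) -> str:
    """Classify a location per the IIJA speed thresholds described above:
    below 25/3 Mbps is unserved; below 100/20 Mbps is underserved."""
    if down_mbps < 25 or up_mbps < 3:
        return "unserved"
    if down_mbps < 100 or up_mbps < 20:
        return "underserved"
    return "served"
```

Both unserved and underserved locations are eligible targets for the grant programs listed below.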

Below are some of the U.S. federal programs that are in the middle of funding broadband deployments, waiting on program rules, or still waiting for funding to be appropriated.


The most significant grant program for both public and private entities is the Broadband Equity Access and Deployment (BEAD) with $42 billion set aside for last-mile broadband deployment. This is where both public and private entities can win grant money to deploy broadband to the unserved and underserved. This also means there’s a need for new affiliations like Public-Private Partnerships (PPP) which are contracts between a private party and a government agency to offer a public asset or service such as municipality-provided broadband through a partnership with an internet service provider. PPPs make obtaining right of ways much easier because you’re directly partnering with cities and counties.

PPPs provide many benefits to public entities such as Wi-Fi access and improved broadband for schools, and they help scale the economy because you’re adding subscribers who will consume content, shop online, and seek out other internet-based services. They need ISP partners in order to deliver these benefits.

Knowledge and expertise are key to success


Yet, funding alone is not enough to close the digital divide. You need to determine the right combination of solutions for a particular use case, region, and implementation to get the results you expect. This may require extensive expertise and answering all the questions ahead of time has proved difficult—until now.

Cisco is delivering a new generation of network infrastructure technologies and innovation that provide more capacity and greater flexibility at a lower cost per subscriber, helping to improve the economics of the internet. Here are a few examples:

• Capacity at lower cost with Cisco Silicon One and Routed Optical Networking
• Lower OpEx with simplified networks and automation
• Improved sustainability and flexibility for remote deployment scenarios
• Flexible consumption and payment methods that enable you to pay as you grow

These technologies can make it much easier and less expensive for service providers to expand their offerings in rural regions. Now you can experience them up close and in person at the Cisco Broadband Innovation Center located in Research Triangle Park, NC. This is a perfect opportunity to expand your knowledge and expertise in rural broadband development. Not only will you see how to model and address your own specific use cases, but service providers can also focus on how to be more prepared for grant applications by understanding ways to benefit from Cisco’s next-generation network innovations. And it’s important to remember that federal grants will be awarded to the service providers with the best solutions, so it’s critical to work with a proven company at the forefront of rural broadband development.

Source: cisco.com

Tuesday, 5 April 2022

Intelligent alert management


The challenge

In cyber security, we all know what alert fatigue is, and we know there is no silver bullet to get out of it. In our previous incarnation, our product was guilty as well. Who wants to go through 20,000 alerts one by one? And this was just from one product.


Building a detection engine


This article is part of a series in which we will explore several features, principles, and the background behind what we consider to be the building blocks of a security detection engine within an extended detection and response (XDR) product.

In this first article, we’ll start with alert fatigue and how we avoid it through the creation of intelligent alerts.

To manage alert fatigue, we are aware of several traditional approaches. “We only pay attention to High and Critical alerts,” some have said. That certainly helps, but at the expense of bringing more problems aboard. Apart from missing a large portion of the sometimes-important findings the security product is trying to convey, the “inbox” of the product becomes a dump of unclosed alerts.

“In your next release, could you add elaborate filters and checkboxes so that I can mass close those alerts?” some have asked. We tried this way, but we found ourselves amidst views containing tables within tables, a very baroque system with the delicacy and simplicity on par with the space shuttle.

“We gave up and got a SIEM and a SOAR!” we heard from others. That is all fine, when one wishes to move their SOC staff from security specialist roles to engineering integrators.

To sum up, we observed that in any case, we were really trading one issue for another. Rather than trying to manage the alert fatigue problem, we switched our approach to avoiding it in the first place. We introduced Alert Fusion.

Alert Fusion


In the Alert Fusion system, the basic unit of work is the alert. Rather than having one alert per each security event, we build the alerts intelligently, to mimic the unit of work of the security analyst.


Here is an example of such a unit of work. It covers two assets, having detected an identical set of threats on both. It’s easy to see that WannaCry, SMB service discovery, and Excessive communication likely go together. While remediating these infections, one might want to have a look at the Emotet infection as well. Altogether, neglecting this unit of work is considered a critical risk, so it easily makes it to the top of the alert list.


The second example has a single ArcadeYum threat spanning a larger base of 78 assets. It is one of those pesky browser-altering, additional-software-promoting things that one might want to eradicate en masse rather than one by one. Admittedly, it isn’t as problematic as WannaCry, so it is considered a medium risk.

Altogether, these two alerts cover nearly a hundred significant security events and many more contextual ones. Apart from removing the need for manual correlation, we can immediately discern the nature, the breadth, and the depth of the risks presented.

To sum up, an alert serves to collate findings that the analyst might want to resolve as an ensemble, either by working on it as an incident or dismissing it for reasons of their choosing. To prioritize their work, each alert carries a risk, and the alerts are ordered by this value.

The risk, as well as the grouping, are determined automatically by the system using what it knows about the detections. Now, let’s dive deeper into the basic ingredients in the cookbook: the threats and the assets.

Threats



A threat is anything we can name as a security risk. In this example, we feature Shlayer. It is important to note that we express threats in the language of threat intelligence and risk management – “what” has been detected as opposed to the technical language of detection means – “why” was it detected. We’ll cover the exact means of detection in a later article. For now, let’s assume that we somehow detected it.

A threat has a severity; in this example it is critical, which serves as a basis for the risk calculation. Threats come with factory default severities which can be changed freely to suit the threat model of each customer. For example, some customers may not care as much about crypto mining on their assets as other customers do.

We realize that detection methods are not infallible, especially in the world of machine learning. So, we assign a confidence value when a threat is detected. Currently, it can be either high or medium. The latter means the detector is not quite sure of the detection, so the risk is dialed down.

Assets



Similarly, we organize assets into Asset Groups that bear a business value. The organization is up to the customer and their threat model. Some customers have more diverse needs, while others have more of a flat structure. Where possible, we offer an educated guess of the default value for an Asset Group.  For example, servers get a high value, while guests get a low value. In any case, the values can be changed freely. The medium business value has no impact on the risk, while others will either increase or decrease it accordingly.
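Putting the three ingredients together, the risk described above (severity as the basis, medium confidence dialing it down, medium asset value having no impact) can be sketched as a multiplicative score. The weights and scale below are our assumptions for illustration, not the product's actual formula.

```python
# Illustrative lookup tables; the real system's values are not published here.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
CONFIDENCE = {"medium": 0.5, "high": 1.0}          # medium confidence dials risk down
ASSET_VALUE = {"low": 0.5, "medium": 1.0, "high": 1.5}  # medium has no impact

def alert_risk(severity: str, confidence: str, asset_value: str) -> float:
    """Combine threat severity, detection confidence, and asset business
    value into a single risk score used to order alerts."""
    return SEVERITY[severity] * CONFIDENCE[confidence] * ASSET_VALUE[asset_value]
```

Under this sketch, a critical threat detected with high confidence on a high-value asset group lands at the top of the alert list, while the same detection at medium confidence on a guest network falls well below it.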

Reactive system



In summary, we see that Alert Fusion presents alerts which act as units of work and are prioritized by their risk, calculated from customer-applied settings such as threat severity and asset value.

It wouldn’t be realistic to expect that all configuration, if any, was done to the system upfront. For example, a detection on a guest network might make one realize that the business value of this asset group might need to be lowered.  So, we provide the option to tweak alerts on the fly. We support a reactive workflow model.

The existing alerts may be reorganized at any time by turning a few knobs, namely the threat severity and asset value. This gives the option to explore safely. When not satisfied with the change, simply turn them back, rinse, and repeat.

Wrap-up


So, have we tackled alert fatigue successfully? As the saying goes, time will tell. It is already beginning to do so.

Since this system was introduced in 2020, we have seen a significant reduction in alerts per customer, usually by a few orders of magnitude. Our UI does not have to work as hard, in terms of checkboxes, pagination, and filtering. Consequently, more customers reach the nice-to-be-in place of a zero-alert inbox, where 100% of the alerts have been viewed and interacted with.

Source: cisco.com

Sunday, 3 April 2022

New in SecureX: Device Insights

Since its release, Cisco SecureX has helped over 10,000 customers gain better visibility into their infrastructure. As the number of devices in many customer environments continues to increase, so does the number of products with information about those devices. Between mobile device managers (MDM), posture agents, and other security products, a wealth of data is being collected but is not necessarily being shared or, more importantly, correlated. With the new device insights feature in Cisco SecureX, now available for all SecureX customers, we’re changing that.

Introducing Device Insights

Device insights, which is now generally available, extends our open, platform approach to SecureX by allowing you to discover, normalize, and consolidate information about the devices in your environment. But this isn’t just another dashboard pulling data from multiple sources. Device insights fetches data from sources you might expect, like your mobile device manager, but also leverages the wealth of data available in your Cisco Secure products such as Cisco Secure Endpoint, Orbital, Duo, and Umbrella. Combining these sources of data allows you to discover devices that may be sneaking through gaps in your normal device management controls and gain a comprehensive view into each device’s security posture and management status. With device insights, you’ll be able to answer these all-important questions:

◉ What types of devices are connected in our environment?

◉ What users have been accessing those devices?

◉ Where are those devices located?

◉ What vulnerabilities are associated with each device?

◉ Which security agents are installed?

◉ Is the security software up to date?

◉ What context do we have from technologies beyond the endpoint?

Supported Data Sources

Now, you might ask: what types of data can I bring into device insights? When we created SecureX, we built a flexible architecture based on modules that anyone can create. Device insights extends this architecture by adding a new capability to our module framework. Here’s a look at what data sources will be supported at launch:


Bringing Everything Together


Once you’ve enabled your data sources, device insights will periodically retrieve data from each source and get to work. Some sources can also publish data in real time to device insights using webhooks. We normalize all of the data and then correlate it between sources so you have one view into each of your devices, not a mess of duplicate information. This results in a single, unified dashboard with easy filtering, a high level view into your environment, and a customizable table of devices (which you can export too!). To see more information about a device, just click on one and you’ll see everything device insights knows, including which source provided which data.
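The normalize-and-correlate step described above can be sketched as a merge keyed on a shared identifier, with each field remembering which source supplied it. This is a simplified illustration; the field names and join key (MAC address) are assumptions, not the feature's actual data model.

```python
def correlate(sources: dict) -> dict:
    """sources maps a source name (e.g. 'mdm', 'umbrella') to a list of
    normalized device records. Returns one merged view per device."""
    devices = {}
    for source, records in sources.items():
        for rec in records:
            key = rec["mac"].lower()  # normalize the join key across sources
            dev = devices.setdefault(key, {"mac": key, "fields": {}})
            for field, value in rec.items():
                if field == "mac":
                    continue
                # Keep both the value and the source that provided it, so the
                # UI can show where each piece of data came from.
                dev["fields"].setdefault(field, (value, source))
    return devices
```

The key point is deduplication: two records for the same MAC collapse into a single device entry rather than showing up twice in the dashboard.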


Source: cisco.com

Saturday, 2 April 2022

Cisco SD-Access in Healthcare: A Comprehensive Secure Access Solution for a Changing Industry


The healthcare industry is undergoing unprecedented change. The pandemic has accelerated the process of digitization and the need for an always available and secure digital infrastructure. In particular, Healthcare IT (HIT) faces several significant challenges:

◉ Prevent security breaches across hospitals, clinics, and research centers

◉ Protect patient and research data through standards, integration, and governance

◉ Understand and support technological innovations in healthcare

◉ Provide simple, secure access to data and analytics to all key stakeholders

To address these challenges and support the connectivity and security needs of hospitals, branch clinics, and telehealth, HIT needs to build and maintain a resilient network architecture that is secure, automated, and provides a continuous feedback loop with rich analytics.

Cisco Software-Defined Access (SD-Access) is a network controller-based solution that helps organizations enable policy-based automation to address access control and segmentation. With its broad adoption in healthcare organizations worldwide, a set of use cases and best practices have emerged that demonstrate how HIT is using Cisco SD-Access to address changing network requirements and meet the needs of the healthcare workforce and patients.

Cisco SD-Access

Simplify Network Expansion


Healthcare networks in the modern all-digital world must provide service-level resiliency and be modifiable on demand. Cisco SD-Access provides ample support for site additions and site expansions and is flexible enough to spin up a new site in hours. It provides full lifecycle management of existing campus and branch environments in a simple and secure manner.

SD-Access starts with providing workflows for automating the physical network underlay using the LAN automation capabilities in Cisco DNA Center. LAN automation simplifies network operations and provides a zero-touch plug-and-play workflow. It can also quickly expand the network using extended nodes to spaces such as parking lots and warehouses.

HIT can build an automated network fabric and seamlessly connect external networks to the fabric borders. The network fabric also provides capabilities for HIT to connect their current networks to the fabric edges and extend security and segmentation benefits. SD-Access enables the creation of new branch and remote sites on demand―from a small Fabric in a Box for branches to extensive deployments with thousands of switches. It provides zero-touch network automation to bring up the routing underlay along with setting up the fabric and managing day-N operations on the network. All of this is possible through an intent-based network interface in Cisco DNA Center.

Built-in Network Fault Tolerance and Service Resiliency


The healthcare network is mission-critical, requiring minimal downtime. The services and the network must be highly resilient to support healthcare workers and patients. Cisco SD-Access is built on a highly fault-tolerant fabric architecture with redundant elements at all critical points. This includes fully redundant network peering points, control plane elements, StackWise Virtual Links (SVLs), and stacking on edge switches. Additionally, the services are always available through a Cisco DNA Center three-node clustered management system and fully distributed multi-node Cisco Identity Service Engine (ISE). The design of the network is flexible to accommodate even the most stringent needs of healthcare networks.

Secure Segmentation Based on Organizational Functions


Healthcare organizations have separate departments performing different and unique functions. HIT has found it highly useful to segment and secure communication among these different organizational entities.

Beyond communications, healthcare systems must safeguard the medical records and financial information of patients. In the U.S., hospitals and medical centers are required to have Health Insurance Portability and Accountability Act (HIPAA)-compliant wired and wireless networks that can provide complete and constant visibility into network traffic. These networks must protect sensitive data and medical devices such as electronic medical records (EMR) servers, vital sign monitors, and nurse workstations from malicious devices that seek to compromise the network. Prescription drug safes should be able to communicate with respective destinations even during a network impact, such as Cisco ISE being temporarily unavailable. Administrators can implement a critical VLAN for fabric edges, where devices like prescription safes reside, when access verification services are unreachable.

Close collaboration between healthcare staff and instantaneous access to a comprehensive view of health-related data, aggregated and collocated from the many disparate segments, is placing increasing demands on the network infrastructure. Cisco SD-Access architecture provides automated network fabric configurations, identity-based policy and segmentation, AI-driven insights, and telemetry services.

Cisco SD-Access addresses the need for complete data and control plane isolation between patient and visitor devices and medical and research facility devices by using macro segmentation. By onboarding devices into different overlay virtual networks (VNs), healthcare facilities can achieve complete data isolation and provide security among different departments and users.

Provide Rich Network Services


One of the biggest demands on the healthcare IT network infrastructure is handling guest and patient traffic separately from staff traffic and sensitive patient data. Mobility and roaming across campus buildings are therefore key requirements for healthcare networks. Cisco SD-Access has a built-in Fabric Enabled Wireless (FEW) architecture that enables seamless mobility for endpoints and devices connected to the edge of the network.

In a healthcare facility, various medical devices sit in different locations but should be managed in a unified manner for proper usage and availability. SD-Access allows IT to place these devices in a separate virtual network and route their traffic to a common border over a tunneled interface. This provides clean and secure segmentation of anchored traffic to a common exit point in the network.

Another important requirement of healthcare networks today is the ability to access medical records, security camera recordings across sites, staff records, and other sensitive data from a central server. In most cases, these data sets need to be accessible on-demand at a subset of branch sites. Cisco SD-Access helps in creating groups of sites that need to receive these types of records through its built-in multicast features.

Improve Network Visibility and Assurance


Network administrators should be able to efficiently manage and monitor their networks to quickly respond to the dynamic needs of healthcare systems. To improve the performance of a network, attached devices, and applications, the deployment should use telemetry to proactively predict performance and security risks.

Cisco DNA Center with Cisco SD-Access Assurance provides a comprehensive solution that addresses not just reactive network monitoring but also enables proactive monitoring with network health and issue dashboards. In addition to the network, client, and application health dashboards, the SD-Access Health Dashboard provides analytics and insights for both network underlay and fabric overlay by correlating actionable insights based on a wide variety of telemetry data ingested from sources throughout the network.

SD-Access provides visibility insights into the fabric, virtual network health, transit, and peer network connectivity health using a health score metric. The health of the fabric is quantified using Key Performance Indicators (KPIs) of the operational state of the fabric. These KPIs are also used to identify issues in the fabric. The operational data is collected from fabric devices using telemetry.
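A simple way to picture such a health score (the KPI names and weights below are hypothetical, not the actual SD-Access metrics) is a weighted average over per-KPI scores, with low-scoring KPIs surfaced as issues:

```python
# Illustrative sketch of a fabric health score: quantify the operational
# state as a weighted average of KPI values (each 0-100), and flag any
# KPI below a threshold as an issue to investigate.

KPI_WEIGHTS = {
    "control_plane_reachability": 0.4,
    "border_connectivity":        0.3,
    "edge_onboarding_success":    0.3,
}

def fabric_health(kpis, issue_threshold=70):
    """Return (overall score, list of KPIs scoring below the threshold)."""
    score = sum(KPI_WEIGHTS[name] * value for name, value in kpis.items())
    issues = [name for name, value in kpis.items() if value < issue_threshold]
    return round(score), issues

score, issues = fabric_health({
    "control_plane_reachability": 100,
    "border_connectivity":        90,
    "edge_onboarding_success":    60,
})
# score -> 85, issues -> ["edge_onboarding_success"]
```

The same per-KPI values that feed the aggregate score double as the issue list, which mirrors how a health dashboard can both summarize and point at the failing component.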

Source: cisco.com

Tuesday, 29 March 2022

Hyperconverged Infrastructure with Harvester: The start of the Journey


Deploying and managing data center infrastructure – compute, networking, and storage – has traditionally been manual, slow, and arduous. Data center staffers are accustomed to doing a lot of command line configuration and spending hours in front of data center terminals. Hyperconverged Infrastructure (HCI) is the way out: it solves the problem of running storage, networking, and compute in a straightforward way by combining the provisioning and management of these resources into one package, and it uses software-defined data center technologies to drive automation of these resources. At least in theory.

Recently, a colleague and I have been experimenting with Harvester, an open source project to build a cloud native, Kubernetes-based Hyperconverged Infrastructure tool for running data center and edge compute workloads on bare metal servers.

Harvester brings a modern approach to legacy infrastructure by running all data center and edge compute infrastructure, virtual machines, networking, and storage, on top of Kubernetes. It is designed to run containers and virtual machine workloads side-by-side in a data center, and to lower the total cost of data center and edge infrastructure management.

Why we need hyperconverged infrastructure

Many IT professionals know about HCI concepts from using products from VMware, or by employing cloud infrastructure like AWS, Azure, and GCP to manage virtual machine applications, networking, and storage. The cloud providers have made HCI flexible by giving us APIs to manage these resources with less day-to-day effort, at least once the programming is done. And, of course, cloud providers handle all the hardware – we don’t need to stand up our own hardware in a physical location.

Multi-node Harvester cluster

However, most of the current products that support converged infrastructure tend to lock customers into the vendor’s own technology, and they also usually come with licensing fees. Now, there is nothing wrong with paying for a technology when it helps you solve your problem. But single-vendor solutions can wall you off from knowing exactly how these technologies work, limiting your flexibility to innovate or react to issues.

If you could use a technology that combines with other technologies you are already required to know today – like Kubernetes, Linux, containers, and cloud native – then you could theoretically eliminate some of the headaches of managing edge compute / data centers, while also lowering costs.

This is what the people building Harvester are attempting to do.

Adapting to the speed of change


Cloud providers have made it easier to deploy and manage the infrastructure surrounding applications. But this has come at the expense of control, and in some cases performance.

HCI, which the cloud providers support and provide, gets us some of that control back. However, the rise of application containers over virtual machines changed again how infrastructure is managed and even thought of: containers abstract the layers of application packaging while keeping that packaging lighter weight than last-generation VM images. Containers also start up faster and are easier to distribute because of their smaller image sizes. Kubernetes takes container technologies like Docker to the next level by adding networking, storage, and resource management between containers in an environment that connects everything together, allowing us to integrate microservice applications with automation and speedy deployments.

Kubernetes offers an improvement on HCI technologies and methodologies. It provides a better way for developers to create cloud agnostic applications, and to spin up workloads in containers more quickly than traditional VM applications. Kubernetes did not aim to replace HCI, but it did make a lot of the goals of software deployment and delivery simpler, from an HCI perspective.

In a lot of environments, Kubernetes runs inside VMs. So you still need external HCI technology to manage the underlying infrastructure for the VMs that are running Kubernetes. The problem now is that if you want to run your application in Kubernetes containers on infrastructure you control, you have different layers of HCI to support. Even if you get better application management with Kubernetes, infrastructure management becomes more complex. You could try to use vanilla Kubernetes for every part of your edge-compute / data center stack and run it as your bare metal operating system instead of traditional HCI technologies, but you have to be okay with migrating all workloads to containers, and in some cases that is a high hurdle to clear – not to mention the HCI networking that you would need to migrate over to Kubernetes.

The good news is that there are IoT and edge compute projects that can help. The Rancher organization, for example, is creating a lightweight version of Kubernetes, k3s, for IoT compute resources like the Raspberry Pi and Intel NUC computers. It helps us push Kubernetes onto more bare metal infrastructure. Other orgs, like KubeVirt, have created technologies to run virtual machines inside containers on top of Kubernetes, which speeds up VM deployment and lets us use Kubernetes for our virtual networking layers and all application workloads, containers and VMs alike. And other technology projects, like Rook and Longhorn, help with persistent storage for HCI through Kubernetes.

If only these could combine into one neat package, we would be in good shape.

Hyperconverged everything


Knowing where we have come from in the world of Hyperconverged Infrastructure for our data centers and our applications, we can now move on to what combines all these technologies together. Harvester packages up k3s (lightweight Kubernetes), KubeVirt (VMs in containers), and Longhorn (persistent storage) to provide Hyperconverged Infrastructure for bare metal compute using cloud native technologies, and wraps an API / Web GUI bow on it for convenience and automation.

Source: cisco.com