Thursday, 16 November 2023

ESG Survey results reinforce the multi-faceted benefits of SSE

When it comes to protecting a hybrid workforce while simultaneously safeguarding internal resources from external threats, cloud-delivered security with Security Service Edge (SSE) is seen as the preferred approach.

Enterprise Strategy Group (ESG) recently conducted a study of IT and security practitioners, evaluating their views on a number of topics regarding SSE solutions. Respondents were asked for their views on security complexity, user frustration, remote/hybrid work challenges, and their take on the expectations vs. reality when it came to the benefits of SSE. The results provide critical insights into how to protect a hybrid workforce, streamline security procedures, and enhance end-user satisfaction. Some of the highlights from their report include:

  • Remote/hybrid workers were found to be the biggest source of cyber-attacks, accounting for 44% of them.
  • Organizations are moving towards cloud-delivered security, as 75% indicated a preference for cloud-delivered cybersecurity products vs. on-premises security tools.
  • SSE is delivering value, with over 70% of respondents stating they achieved at least 10 key benefits involving operational simplicity, improved security, and better user experience.
  • SecOps teams report significantly fewer attacks, with 56% stating they observed over a 20% reduction in security incidents after adopting SSE.

Delving further into the report, ESG provides details explaining why organizations have gravitated towards SSE and achieved significant success. SSE simplifies the security stack, substantially improving protection for remote users, while enhancing hybrid worker satisfaction with easier logins and better performance. It helps avert numerous challenges, from stopping malware spread to shrinking the attack surface.

Here are some of the added benefits that SSE users see.

Overcome cybersecurity complexity


Among the respondents, more than two-thirds describe their current cybersecurity environment as complex or extremely complex. The top cited source (83%) involved the accelerated use of cloud-based resources and the need to secure access, protect data, and prevent threats. The second most common source of complexity was the number of security point products required (78%), with an average of 63 cybersecurity tools in use. The third most cited factor (77%) was the need for more granular access policies to support zero trust principles and to apply least-privilege policies with user, application, and device controls. Other factors mentioned by wide margins include an expanded attack surface from work-from-home employees, use of unsanctioned applications, and a growing number of more sophisticated attacks.

Organizations can offset these challenges by deploying SSE. These protective services reside in the cloud, between end users and the cloud-based resources they utilize, as opposed to on-premises methods that are ‘out of the loop’. SSE consolidates many security features, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), and Cloud Access Security Broker (CASB), with one dashboard to simplify operations. With advanced ZTNA incorporating zero trust access (ZTA), authorized users can connect only to specific, approved apps. Discovery and lateral movement by compromised devices or unauthorized users are prevented.
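
The deny-by-default access model described here can be sketched in a few lines. This is a hypothetical illustration, not an actual SSE implementation: the user names, app names, and `authorize` function are invented for the example.

```python
# Hypothetical sketch of a zero-trust access decision: each user is mapped
# to specific approved apps, and anything else is denied by default, which
# prevents discovery and lateral movement by unauthorized parties.

APPROVED_APPS = {
    "alice": {"crm", "payroll"},
    "bob": {"crm"},
}

def authorize(user: str, app: str) -> bool:
    """Deny-by-default: a user may reach only explicitly approved apps."""
    return app in APPROVED_APPS.get(user, set())

print(authorize("alice", "payroll"))  # True: explicitly approved
print(authorize("bob", "payroll"))    # False: not on bob's allowlist
print(authorize("mallory", "crm"))    # False: unknown users get nothing
```

The key property is the fallback to an empty set: a user who appears nowhere in the policy can reach nothing, rather than everything.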

Enhance end-user experience


The report found current application access processes often result in user frustration. Respondents reported their workforce uses a collective average of 1,533 distinct business applications. As these apps typically reside in the cloud, secure usage is no longer straightforward. To support zero trust, many organizations have shifted to more stringent authentication and verification tasks. While good from a security perspective, 52% of respondents indicated their users were frustrated with this practice. Similarly, 50% mentioned user frustration at the number of steps to get to the application they need and 45% at having to choose the method of connection based on the application.

Performance was also cited as an issue, with 43% indicating user frustration. More than one-third (35%) indicated that latency was impacting the end-user experience. In some cases, this leads to users circumventing the VPN, which was cited by 38% of respondents. Such user noncompliance can introduce additional risk and the potential for malicious actors to view traffic flows.

VPNs were found to be poorly suited to supporting zero trust principles. They do not allow for granular access policies to be applied (mentioned by 31% of respondents) and are visible on the public internet, allowing attackers a clear entry point to the network and corporate applications (cited by 22%).

By implementing SSE with ZTA, administrators can give remote users the same type of straightforward, performant experience as when they are in the office, without IT teams being forced to make a trade-off between security and user satisfaction. ZTA allows users to access all, not some, of the potentially thousands of apps they need. ZTA provides a transparent and seamless ‘one-click’ login process. Backed by advanced protocols, users can obtain HTTP/3-level speeds with reduced latency and more resilient connections. Ultra-granular access with one-user-to-one-app ‘micro tunnels’ ensures security while providing resource obfuscation and preventing lateral movement.

Solve hybrid work security challenges


It’s challenging to secure hybrid workforces that include remote workers, contractors, and partners. This new hyper-distributed landscape results in an expanded attack surface, as well as an increase in device types and inconsistent performance. Respondents cited the need to ensure malware does not spread from remote devices to corporate locations and resources (55%) as their most critical concern. The second biggest issue mentioned is the need to check device posture (51%) consistently and continuously. In third place, IT listed defending an expanding attack surface due to users directly accessing cloud-based apps (50%). Other items of note include the lack of visibility into unsanctioned apps (45%) and protecting users as they access cloud apps (40%).

SSE is tailor-made to address these roadblocks to security. Multiple defense-in-depth features delivered from the cloud ensure malware and other malicious activity is rooted out and infections are prevented before they start. Continuous, rich posture checks with contextual insights ensure device compliance. Thorough user identification and authentication procedures, combined with granular access control policies, prevent unauthorized resource access. CASB provides visibility into which applications are being requested and controls access. Remote Browser Isolation (RBI), DNS filtering, FWaaS, and other features protect end users as they use Internet or public cloud services.
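
A device posture check of the kind described above can be thought of as evaluating several signals together and granting access only while all of them hold. The signal names, minimum OS version, and `posture_compliant` function below are illustrative assumptions, not a real product API.

```python
# A minimal, hypothetical sketch of a continuous device posture check:
# multiple compliance signals are evaluated together, and the device is
# compliant only if every check passes.

REQUIRED_MIN_OS = (14, 2)  # illustrative minimum (major, minor) version

def posture_compliant(device: dict) -> bool:
    """True only while all posture signals pass; missing signals fail closed."""
    checks = [
        device.get("os_version", (0, 0)) >= REQUIRED_MIN_OS,
        device.get("disk_encrypted", False),
        device.get("av_agent_running", False),
    ]
    return all(checks)

laptop = {"os_version": (14, 5), "disk_encrypted": True, "av_agent_running": True}
print(posture_compliant(laptop))   # True: every signal passes

laptop["disk_encrypted"] = False   # posture drifts out of compliance
print(posture_compliant(laptop))   # False: re-evaluation now denies access
```

Because missing signals default to failing values, an unknown or unmanaged device is treated as non-compliant, which mirrors the fail-closed behavior continuous posture checking requires.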

Benefits derived through SSE


The survey clearly demonstrates that many organizations utilizing SSE solutions are reaping a broad set of benefits. These can be categorized in three pillars: increased user and resource security, simplified operations, and enhanced user experience. When respondents were asked whether their initial expected benefits were subsequently realized once SSE was deployed, over 73% reported achieving at least ten critical advantages. A partial list of these factors includes:

  • Simplified security operations/increased efficiency with ease of configuration and management
  • Improved security specifically for remote/hybrid workforce
  • Enacting principles of least privilege by allowing remote access only to approved resources
  • Superior end-user access experience
  • Prevention of malware spread from remote users to corporate resources
  • Increased visibility into remote device posture assessment

Cisco leads the way in SSE


Cisco’s SSE solution goes way beyond standard protection. In addition to the four principal features previously listed (ZTNA, SWG, FWaaS, CASB), our Cisco Secure Access includes RBI, DNS filtering, advanced malware protection, Intrusion Prevention System (IPS), VPN as a Service (VPNaaS), multimode Data Loss Prevention (DLP), sandboxing, and digital experience monitoring (DEM). This feature-rich array is backed by the industry-leading threat intelligence group, Cisco Talos, giving security teams a distinct advantage in detecting and preventing threats.

With Secure Access:

  • Authorized users can access any app, including non-standard or custom, regardless of the underlying protocols involved.
  • Security teams can employ a safer, layered approach to security, with multiple techniques to ensure granular access control.
  • Confidential resources remain hidden from public view with discovery and lateral movement prevented.
  • Performance is optimized with the use of next-gen protocols, MASQUE and QUIC, to realize HTTP/3 speeds.
  • Administrators can quickly deploy and manage with a unified console, single agent and one policy engine.
  • Compliance is maintained via continuous in-depth user authentication and posture checks.

Source: cisco.com

Wednesday, 8 November 2023

The Evolution of the Oil & Gas Industry

The Oil & Gas industry has changed dramatically. From Upstream through to Downstream, advancements in technology have made operations safer and more productive. Those who work in the industry have a front-row seat to these changes, but most of us see the industry through mainstream information channels and miss some of the significant changes happening behind the scenes. Below are just a few examples of how the Oil & Gas industry has changed.

Exploration and Drilling:


Past: In the past, oil and gas exploration was largely based on geological surveys, seismic data, and educated guesswork. Drilling technology was less advanced, and there was a higher risk of drilling dry wells.

Now: Modern technology, such as 3D seismic imaging and advanced drilling techniques, has greatly improved the success rate of exploration. Companies now use more data-driven and scientific approaches to identify and extract hydrocarbons.

Reserves Replacement:


Past: Oil and gas companies focused on finding and extracting easily accessible reserves, often in known fields. Reserves replacement was a less pressing concern.

Now: As existing reserves are depleted, companies are increasingly focused on finding and developing new reserves to replace what they extract. This has led to more extensive exploration efforts and investments in unconventional resources like shale oil and gas.

Environmental Awareness:


Past: Environmental concerns and regulations were less prominent. Companies had fewer incentives to minimize their environmental impact, leading to more pollution and ecological damage.

Now: Environmental considerations are paramount. Companies face stricter regulations and public pressure to reduce their environmental footprint. Many are investing in cleaner technologies, carbon capture, and renewable energy as part of their operations.

Technology and Automation:


Past: Manual labor and basic machinery were used for drilling, extraction, and processing. Automation was limited.

Now: Automation and digital technology play a crucial role in optimizing operations. Robotics, AI, and IoT (Internet of Things) devices are used for drilling, monitoring, and maintenance, improving efficiency and safety.

Globalization:


Past: Oil and gas operations were often concentrated in a few key regions, and companies were mainly national or multinational corporations.

Now: The industry has become more globalized. Companies operate in diverse geographic regions, and the supply chain is highly interconnected, with a more significant presence in emerging markets.

Energy Transition:


Past: Oil and gas companies were primarily focused on fossil fuels, with limited diversification into alternative energy sources.

Now: Many oil and gas companies are investing in renewable energy, such as wind, solar, and hydrogen, as they adapt to the energy transition and a growing demand for cleaner energy sources.

Social Responsibility:


Past: Social responsibility was less emphasized, and there was less concern for the social impacts of operations.

Now: Companies are increasingly expected to contribute positively to the communities where they operate by adhering to ethical and sustainable business practices.

As the energy sector continues to evolve, from a focus on traditional exploration and drilling to a more technologically advanced, environmentally conscious, and diversified approach that encompasses alternative energy sources, Cisco can be a key partner for customers looking to thrive in this dynamic environment.

Cisco’s technologies play a pivotal role in ensuring that operations are efficient, secure, and sustainable with a portfolio of business outcomes that reflects the evolving demands of society, technology, and the energy market.

The Cisco Portfolio Explorer for Oil & Gas is an interactive tool that builds the bridge between business priorities and technology solutions by showcasing use cases and architectures to solve your greatest business challenges. The tool has four themes that cover primary areas of Oil & Gas operations: Plant and Field Operations, Secure Connected Workforce, Industrial Safety and Security, and Energy Transition. Within each theme you will find three to five use cases that dive deeper, explaining the business and technical application in the industry. It also provides case studies and partner information, as well as demos, financing options, and links to industry experts so you can transform your business with security and trust.

Source: cisco.com

Tuesday, 7 November 2023

Bridging the IT Skills Gap Through SASE: A Path to Radical Simplification and Transformation

Imagine a world where IT isn’t a labyrinth of complexity but instead a streamlined highway to innovation. That world isn’t a pipe dream—it’s a SASE-enabled reality.

As we navigate the complexities of a constantly evolving digital world, a telling remark from a customer onstage with me at Cisco Live in June lingers: “We don’t have time to manage management tools.” This sentiment is universal, cutting across sectors and organizations. An overwhelming 82% of U.S. businesses, according to a Deloitte survey, were prevented from pursuing digital transformation projects because of a lack of IT resources and skills. Without the right experts to get the job done, teams are often entangled in complex, disparate systems and tools that require specific skills to operate.

The IT talent crunch


Today’s tech landscape presents a challenge that IT leaders can’t ignore: complex IT needs combined with a fiercely competitive talent market. Internally, teams are overwhelmed, often struggling to keep up with ever-evolving technical demands. In fact, many teams are strapped and rely on early-in-career staff to fill wide gaps left behind by more experienced predecessors. And the problem is only going to get worse.

For experienced IT workers, it’s an attractive time to entertain new opportunities. According to a global Deloitte study, 72% of U.S. tech employees are considering leaving their jobs for better roles. Interestingly, a mere 13% of employers said they were able to hire and retain the tech talent they most needed.

Now more than ever, organizations must rethink their approach to talent management and technology adoption to stay ahead of the curve.

Convergence as a catalyst for transformation


In an era where time is a premium and complexity is the norm, the need for convergence has never been more apparent. Technical skills, while essential, are not enough. The real game-changers are adaptability, cross-functional collaboration, and strategic foresight. And yet, these “soft skills” can’t be optimally used if teams are entangled in complex, disparate systems and tools that require specialized skills to manage and operate.

So how do organizations tackle this dilemma? How do they not just keep the lights on but also innovate, improve, and lead? In a word: convergence. Unifying siloed network and security teams as well as systems and tools with a simplified IT strategy is key to breaking through complexity.

A platform to radically simplify networking and security


Secure access service edge (SASE) is more than just an architecture; it’s a vision for the future in which the worlds of networking and security are no longer siloed but become one. Cisco takes a unified approach to SASE, where industry-leading SD-WAN meets industry-leading cloud security capabilities in one robust platform to make managing networking and security easy.

Figure 1. SASE architecture converging networking and security domains

Unified SASE converges the two domains into one, streamlining operations across premises and cloud. Admins from both domains gain end-to-end visibility into every connection, making it easier to optimize the application experience for users, providing seamless access to critical resources wherever work happens. This converged approach to secure connectivity through SASE delivers real outcomes that matter to resource-strapped organizations.

Simplify IT operations and increase productivity

◉ Administrators find it easier to manage networking and security when they are consolidated
◉ 73% reduction in application latency improves collaboration and enhances overall productivity
◉ 40% faster performance on Microsoft 365 improves employee experience

Do more with less

◉ 60% lower TCO for zero-trust security enables budget reallocation to strategic initiatives
◉ 65% reduction in connectivity costs helps ease the burden on IT budgets

Enhance security without adding complexity

◉ Simplify day-2 operations with centralized policy management, which makes it easier for IT teams to execute
◉ Improve security posture through consistent enforcement—from endpoints and on-premises infrastructure to cloud—across your organization

Scale and adapt

◉ Cloud-native architecture supports scaling and addresses the challenges of rapidly evolving IT landscapes
◉ Prepares your organization for changes, reducing the need for constant upskilling or reskilling in IT teams

Organizations can use SASE architecture to advance their technological frameworks and strategically address the IT skills gap, leading to long-term business success.

Shifting gears: Unifying, simplifying, innovating


SASE is not merely a technological evolution; it’s a paradigm shift in how we approach IT management. This lets IT admins focus less on tool management and more on driving business innovation, enriching user experiences, and evolving in tune with market demands.

Figure 2. Introducing unified SASE with Cisco+ Secure Connect, a better way to manage networking and security

The path ahead with unified SASE from Cisco


Cisco offers a unified, cloud-managed SASE solution, Cisco+ Secure Connect. From on-premises to cloud, this comprehensive SASE solution delivers simplicity and operational consistency, unlocking secure hybrid work for employees wherever they choose to work. The beauty of Cisco’s unified SASE solution lies in the principle of interconnecting everything with security everywhere: if it is connected, it is protected. It’s that easy.

Source: cisco.com

Saturday, 4 November 2023

The myth of the long-tail vulnerability

Modern-day vulnerability management tends to follow a straightforward procedure. From a high level, this can be summed up in the following steps:

  • Identify the vulnerabilities in your environment
  • Prioritize which vulnerabilities to address
  • Remediate the vulnerabilities

When high-profile vulnerabilities are disclosed, they tend to be prioritized due to concerns that your organization will be hammered with exploit attempts. The general impression is that this malicious activity is highest shortly after disclosure, then decreases as workarounds and patches are applied. The idea is that we eventually reach a critical mass, where enough systems are patched that the exploit is no longer worth attempting.

In this scenario, if we were to graph malicious activity and time, we end up with what is often referred to as a long-tail distribution. Most of the activity occurs early on, then drops off over time to form a long tail. This looks something like the following:

[Figure: a long-tail distribution of exploit attempts over time]

A long tail distribution of exploit attempts sounds reasonable in theory. The window of usefulness for an exploit is widest right after disclosure, then closes over time until bad actors move on to other, more recent vulnerabilities.
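
The hypothetical long-tail shape can be contrasted numerically with the roughly flat month-over-month percentages the article goes on to report. The decay rate and the flat 28 percent figure below are illustrative values, not fitted data.

```python
# Illustrative only: what a long-tail distribution of exploit attempts would
# look like if activity decayed after disclosure, versus a roughly flat
# month-over-month series like the one observed for Log4J.

decay = [round(100 * 0.6 ** m, 1) for m in range(12)]  # hypothetical long tail
flat = [28] * 12                                       # roughly what the data shows

print(decay[:4])  # [100.0, 60.0, 36.0, 21.6] -- activity concentrated early
print(flat[:4])   # [28, 28, 28, 28] -- no tail at all
```

In the long-tail series, most activity is packed into the first few months; in the flat series, an attacker is just as active in month twelve as in month one, which is the pattern the data in this article keeps showing.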

But is this how exploitation attempts really play out? Do attackers abandon exploits after a certain stage, moving on to newer and more fruitful vulnerabilities? And if not, how do attackers approach vulnerability exploitation?

Our approach


To answer these questions, we’ll look at Snort data from Cisco Secure Firewall. Many Snort rules protect against the exploitation of vulnerabilities, making this a good data set to examine as we attempt to answer these questions.

We’ll group Snort rules by the CVEs mentioned in the rule documentation, and then look at CVEs that see frequent exploit attempts. Since CVEs are disclosed on different dates, and we’re looking at alerts over time, the specific time frame will vary. In some cases, the disclosure date is earlier than the range our data set covers. While we won’t be able to examine the initial disclosure period for these, we’ll look at a few of these as well for signs of a long tail.

Finally, looking at a count of rule triggers can be misleading—a few organizations can see many alerts for one rule in a short time frame, making the numbers look larger than they are across all orgs. Instead, we’ll look at the percentage of organizations that saw an alert. We’ll then break this out on a month-to-month basis.
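
The per-month metric described above can be sketched as follows. The alert records, CVE, and organization names are invented for the example; the point is that repeated alerts within one organization don't inflate the metric, because we count distinct organizations.

```python
# A sketch of the "percentage of organizations seeing an alert" metric:
# each alert record is (organization, month, cve); for a given CVE we count
# the share of all organizations that saw at least one alert that month.

from collections import defaultdict

alerts = [
    ("org1", "2023-01", "CVE-2021-44228"),
    ("org1", "2023-01", "CVE-2021-44228"),  # repeat within an org: no effect
    ("org2", "2023-01", "CVE-2021-44228"),
    ("org2", "2023-02", "CVE-2021-44228"),
]
total_orgs = 4  # every organization in the data set, alerting or not

orgs_by_month = defaultdict(set)  # sets deduplicate orgs automatically
for org, month, cve in alerts:
    if cve == "CVE-2021-44228":
        orgs_by_month[month].add(org)

for month in sorted(orgs_by_month):
    pct = 100 * len(orgs_by_month[month]) / total_orgs
    print(month, f"{pct:.0f}%")  # 2023-01 50%, 2023-02 25%
```

Using a set per month is what makes the metric robust to the skew described above: one noisy organization contributes exactly one count, no matter how many alerts it generates.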

Log4J: The 800-pound gorilla


The Log4J vulnerability has dominated our vulnerability metrics since it was disclosed in December 2021. However, looking at the percentage of exploit attempts each month since, there was neither a spike in use right after disclosure, nor a long tail afterwards.

[Figure: percentage of organizations seeing Log4J alerts per month]

That first month, 27 percent of organizations saw alerts for Log4J. Since then, alerts have neither dropped off nor skyrocketed from one month to the next. The percentage of organizations seeing alerts ranged from 25 to 34 percent through June 2023, averaging 28 percent per month.

Perhaps Log4J is an exception to the rule. It’s an extremely common software component and a very popular target. A better approach might be to look at a lesser-known vulnerability to see how the curve looks.

Spring4Shell: The Log4J that wasn’t


Spring4Shell was disclosed at the end of March 2022. This was a vulnerability in the Spring Java framework that managed to resurrect an older vulnerability in JDK9, which had initially been discovered and patched in 2010. At the time of Spring4Shell’s disclosure there was speculation that this could be the next Log4J, hence the similarity in naming. Such predictions failed to materialize.

[Figure: percentage of organizations seeing Spring4Shell alerts per month]

We did see a decent amount of Spring4Shell activity immediately after the disclosure, when 23 percent of organizations saw alerts. After this honeymoon period, the percentage did decline. But instead of exhibiting the curve of a long tail, the percentages have remained between 14 and 19 percent a month.

Keen readers will notice the activity in the graph above that occurs prior to disclosure. These alerts are for rules covering the initial, more-than-a-decade-old Java vulnerability, CVE-2010-1622. This is interesting in two ways:

1. The fact that these rules were still triggering monthly on a 13-year-old vulnerability prior to Spring4Shell’s disclosure provides the first signs of a potential long tail.

2. It turns out that Spring4Shell was so similar to the previous vulnerability that the older Snort rules alerted on it.

Unfortunately, the time frame of our alert data isn’t long enough to say what the initial disclosure phase for CVE-2010-1622 looked like. So since we don’t have enough information here to draw a conclusion, what about other older vulnerabilities that we know were in heavy rotation?

ShellShock: A classic


It’s hard to believe, but the ShellShock vulnerability recently turned nine. By software development standards this qualifies it for senior citizen status, making it a perfect candidate to examine. While we don’t have the initial disclosure phase, activity remains high to this day.

[Figure: percentage of organizations seeing ShellShock alerts per month]

Our data set begins approximately seven years after disclosure, but the percentage of organizations seeing alerts ranges from 12 to 23 percent. On average across this time frame, about one in five organizations saw ShellShock alerts in a given month.

A pattern emerges


While we’ve showcased only a few examples here, a pattern emerges when looking at other vulnerabilities, both old and new. For example, here is CVE-2022-26134, a vulnerability discovered in Atlassian Confluence in June 2022.

[Figure: percentage of organizations seeing CVE-2022-26134 alerts per month]

Here is ProxyShell, which was initially discovered in August 2021, followed by two more related vulnerabilities in September 2022.

[Figure: percentage of organizations seeing ProxyShell alerts per month]

And here is another older, commonly targeted vulnerability in PHPUnit, originally disclosed in June 2017.

[Figure: percentage of organizations seeing PHPUnit vulnerability alerts per month]

Is the long tail wagging the dog?


What emerges from looking at vulnerability alerts over time is that, while there is sometimes an initial spike in usage, they don’t appear to decline to a negligible level. Instead, vulnerabilities stick around for years after their initial disclosure.

So why do old vulnerabilities remain in use? One reason is that many of these exploitation attempts are automated attacks. Bad actors routinely leverage scripts and applications that allow them to quickly run exploit code against large swaths of IP addresses in the hopes of finding vulnerable machines.

This is further evidenced by looking at the concentration of alerts by organization. In many cases we see sudden spikes in the total number of alerts seen each month. If we break these months down by organization, we regularly see that alerts at one or two organizations are responsible for the spikes.

For example, take a look at the total number of Snort alerts for an arbitrary vulnerability. In this example, December was in line with the months that preceded it. Then in January, the total number of alerts began to grow, peaking in February, before declining back to average levels.

[Figure: total monthly Snort alerts for an arbitrary vulnerability, with one organization’s spike highlighted]

The cause of the sudden spike, highlighted in light blue, is one organization that was hammered by alerts for this vulnerability. The organization saw little to no alerts in December before a wave hit that lasted from January through March. The wave then disappeared completely by April.

This is a common phenomenon seen in overall counts (and why we don’t draw trends from this data alone). This could be the result of automated scans by bad actors. These attackers may have found one such vulnerable system at this organization, then proceeded to hammer it with exploit attempts in the months that followed.
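
The concentration check described in the last few paragraphs can be sketched as follows. The monthly counts, organization names, and 50 percent threshold are all illustrative assumptions.

```python
# A sketch of breaking a month's alert total down by organization and
# flagging months where a single organization dominates the count --
# the signature of one org being hammered by automated exploit attempts.

monthly_counts = {
    "Dec": {"orgA": 90, "orgB": 110, "orgC": 100},   # alerts spread evenly
    "Jan": {"orgA": 95, "orgB": 105, "orgC": 900},   # orgC is being hammered
    "Feb": {"orgA": 100, "orgB": 95, "orgC": 1500},
}

def dominated_months(counts: dict, threshold: float = 0.5) -> list:
    """Return (month, org) pairs where one org exceeds `threshold` of alerts."""
    flagged = []
    for month, by_org in counts.items():
        total = sum(by_org.values())
        top_org, top = max(by_org.items(), key=lambda kv: kv[1])
        if top / total > threshold:
            flagged.append((month, top_org))
    return flagged

print(dominated_months(monthly_counts))  # [('Jan', 'orgC'), ('Feb', 'orgC')]
```

December passes the check because no organization exceeds half the total, while January and February are flagged: this is exactly why a raw alert count trends upward even though only one organization is affected.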

So is the long tail a myth when it comes to vulnerabilities? It certainly appears so—at least when it comes to the types of attacks that target the perimeter of an organization. The public-facing applications that reside here present a large attack surface. Public proof-of-concept exploits are often readily available and are relatively easy to fold into attackers’ existing automated exploitation frameworks. There’s little risk involved for an attacker in automated exploit attempts, leaving little incentive to remove exploits once they’ve been added to an attack toolkit.

What is left to explore is whether long-tail vulnerabilities exist in other attack surfaces. The fact is that there are different classes of vulnerabilities that can be leveraged in different ways. We’ll explore more of these facets in the future.

It only takes one


Finding that one vulnerable, public-facing system at an organization is a needle-in-a-haystack operation for attackers, requiring regular scanning to find it. But all it takes is one new system without the latest patches applied to give the attackers an opportunity to gain a foothold.

The silver lining here is that a firewall with an intrusion prevention system, like Cisco Secure Firewall, is designed specifically to prevent successful attacks. Beyond IPS prevention of these attacks, the recently introduced Cisco Secure Firewall 4200 appliance and 7.4 OS bring enterprise-class performance and a host of new features, including SD-WAN, ZTNA, and the ability to detect apps and threats in encrypted traffic without decryption.

Also, if you’re looking for a solution to assist you with vulnerability management, Cisco Vulnerability Management has you covered. Cisco Vulnerability Management equips you with the contextual insight and threat intelligence needed to intercept the next exploit and respond with precision.

Source: cisco.com

Thursday, 2 November 2023

Cisco report reveals observability as the new strategic priority for IT leaders

Research from Cisco AppDynamics, The Age of Application Observability, tells us that teams have reached a tipping point as they look to tackle complexity and deal with fractured IT domains, tool sprawl, and ever-growing demands from customers and end users for flawless, performant, and secure digital experiences.

There is also an emerging consensus that observability is a mandatory component of the solution. The report reveals that 85% of IT leaders now see observability as a strategic priority. Observability allows teams to ask questions about the state of IT systems and to get insights into the applications and supporting systems that keep a business running, derived from the telemetry data they collectively produce.

As we look to a new era of insight, one in which teams are evolving from reactive to proactive by correlating that telemetry with business outcomes, Cisco Full-Stack Observability offers a transformative model. It helps teams identify the root cause of digital-experience disrupting scenarios even before they happen – with or without human intervention – and prioritize prescriptive actions in response.

The age of observability


Observability is the path to a more federated view among teams and processes. But do teams see it as worth the necessary investment in time and resources? The Cisco AppDynamics report tells us that they believe it is.

While almost half (44%) of report participants say new innovation initiatives are being delivered with cloud native technologies – and they expect this figure to climb to 58% over the next five years – the vast majority (83%) acknowledge that this is leading to increased complexity.

There is agreement among respondents that observability can help resolve this complexity. And there is almost universal consensus that the journey to observability is rooted in common challenges.

  • According to the report, 92% agree that hybrid environments are here to stay.
  • More than three-quarters (78%) say the volume of data makes manual analysis unmanageable.
  • Cloud costs are climbing, with 81% indicating heightened scrutiny of cloud investments.

The recognition of these and other matters outlined in the report has brought with it an important shift in perspective. Business leaders strongly support observability plans and recognize that the journey to observability is well underway.

This is confirmed by the report, with 89% saying their organization’s expectations around observability are increasing. Even for organizations that are early in the observability journey, steps are being taken toward accelerating full stack observability as a foundation for future success.

Invest in the right tools (not more tools)


Monitoring is not observability, yet 64% of those surveyed admit they find it difficult to differentiate between observability and monitoring solutions. They say it is common to adopt more monitoring tools as more hybrid infrastructure is brought online to support expanded services and reach.

Many IT departments are deploying separate tools, for example, to monitor on-premises and cloud native applications, which means they don’t have a clear view of the entire application path.

The solution is not another tool, adding additional latency, complexity, and cost. The way forward is to invest in an observability solution that has the power to help consolidate, simplify, transform, and automate.

Essentially, by bridging the gap between business and technology worlds, this changes the conversation.

According to the report, 88% of respondents say observability with business context will enable them to become more strategic and spend more time on innovation. They understand that it’s a revenue-generating, organization-optimizing investment.

Adjust processes accordingly


While success requires the right tools, it also requires a change to the processes that underpin them. How departments are structured, staffed, and resourced. How teams communicate, and when. The fact is that processes and behaviors have always lagged the pace of technology.

According to the report, 36% of respondents believe this is contributing to the loss of IT talent. Nearly half say this trend will continue if leaders don’t find new ways to break down silos between IT and business domains.

Eight out of ten surveyed point to an increase in silos due to managing multi-cloud and hybrid environments, and less than one-third (31%) report ongoing collaboration between their IT operations and security teams. The majority agree that the biggest barrier to collaboration between IT teams is the use of technology and tools which reinforce these silos.

Cisco Full-Stack Observability provides a foundation for digital transformation – which respondents agree will continue to accelerate as the driving force behind every enterprise – in part by streamlining IT operations so teams can collaborate to optimize digital experiences.

The new strategic priority


Observability is now understood as a new way to foster cross-domain collaboration, supporting new ways of working together, and incentivizing better business outcomes. This means, for example, that teams can align digital experiences with profit, compliance, growth, and delivery time.

The Cisco report brings into focus the fast pace of hybrid adoption in the enterprise, and the technical challenges that follow. This is where Cisco Observability Platform really shines. It brings together the rich telemetry data generated by normal business operations, making it understandable and correlating it with business objectives in a usable way.

Teams are acutely aware that the road to digital transformation is paved with new challenges, but they also recognize that managing and mitigating these issues means getting on that path sooner. Cisco Full-Stack Observability is the answer.

Source: cisco.com

Tuesday, 31 October 2023

How to Begin Observability at the Data Source

More data does not mean better observability


If you’re familiar with observability, you know most teams have a “data problem.” That is, observability data has exploded as teams have modernized their application stacks and embraced microservices architectures.

If you had unlimited storage, it’d be feasible to ingest all your metrics, events, logs, and traces (MELT data) in a centralized observability platform. However, that is simply not the case. Instead, teams index large volumes of data – some portions being regularly used and others not. Then, teams have to decide whether datasets are worth keeping or should be discarded altogether.

For the past few months I’ve been playing with a tool called Edge Delta to see how it might help IT and DevOps teams to solve this problem by providing a new way to collect, transform, and route your data before it is indexed in a downstream platform, like AppDynamics or Cisco Full-Stack Observability.

What is Edge Delta?


You can use Edge Delta to create observability pipelines or analyze your data from their backend. Typically, observability starts by shipping all your raw data to a central service before you begin analysis. In essence, Edge Delta helps you flip this model on its head: it analyzes your data as it’s created, at the source. From there, you can create observability pipelines that route processed data and lightweight analytics to your observability platform.

Why might this approach be advantageous? Today, teams don’t have a ton of clarity into their data before it’s ingested in an observability platform. Nor do they have control over how that data is treated or flexibility over where the data lives.

By pushing data processing upstream, Edge Delta enables a new kind of architecture where teams can have…

◉ Transparency into their data: “How valuable is this dataset, and how do we use it?”
◉ Controls to drive usability: “What is the ideal shape of that data?”
◉ Flexibility to route processed data anywhere: “Do we need this data in our observability platform for real-time analysis, or archive storage for compliance?”

The net benefit here is that you’re allocating your resources towards the right data in its optimal shape and location based on your use case.

How I used Edge Delta


Over the past few weeks, I’ve explored a couple different use cases with Edge Delta.

Analyzing NGINX log data from the Edge Delta interface

First, I wanted to use the Edge Delta console to analyze my log data. To do so, I deployed the Edge Delta agent on a Kubernetes cluster running NGINX. From here, I sent both valid and invalid HTTP requests to generate log data and observed the output via Edge Delta’s pre-built dashboards.

Among the most useful screens was “Patterns.” This feature clusters together repetitive loglines, so I can easily interpret each unique log message, understand how frequently it occurs, and whether I should investigate it further.

Edge Delta’s Patterns feature makes it easy to interpret data by clustering together repetitive log messages and provides analytics around each event.
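Edge Delta doesn’t publish its clustering internals, but the general log-patterning technique can be sketched in plain Python: collapse the variable tokens in each logline into placeholders, then count how often each resulting template occurs. Everything below is an illustrative sketch, not Edge Delta’s actual implementation.

```python
import re
from collections import Counter

def to_pattern(line: str) -> str:
    """Collapse variable tokens (hex IDs, numbers) into placeholders."""
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

# Hypothetical NGINX-style access log lines for illustration.
logs = [
    "GET /index.html 200 512",
    "GET /index.html 200 498",
    "POST /login 401 77",
    "GET /index.html 200 530",
]

# Four raw lines reduce to two patterns with occurrence counts.
patterns = Counter(to_pattern(line) for line in logs)
for pattern, count in patterns.most_common():
    print(f"{count:>3}  {pattern}")
```

Real implementations use smarter tokenization and similarity scoring, but the payoff is the same: a handful of templates with counts instead of a wall of near-duplicate loglines.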

Creating pipelines with Syslog data

Second, I wanted to manipulate data in flight using Edge Delta observability pipelines. Here, I installed the Edge Delta agent on my Mac. Then I exported Syslog data from my Cisco ISR1100 to my Mac.

From within the Edge Delta interface, I configured the agent to listen on the appropriate TCP and UDP ports. Now, I can apply processor nodes to transform (and otherwise manipulate) my data before it hits my downstream analytics platform.
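If you want to see what the agent receives without standing one up, a minimal syslog-over-UDP listener is easy to build with Python’s standard library. This is a generic sketch (the port choice and handler names are my own, not Edge Delta’s agent); it parses the RFC 3164 priority prefix into facility and severity:

```python
import re
import socketserver

# RFC 3164-style priority prefix, e.g. "<134>" at the start of a datagram.
PRI_RE = re.compile(r"^<(\d{1,3})>")

def parse_priority(message: str):
    """Split an RFC 3164 priority value into (facility, severity)."""
    m = PRI_RE.match(message)
    if not m:
        return None, None
    pri = int(m.group(1))
    return pri // 8, pri % 8  # facility = pri / 8, severity = pri mod 8

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request[0].decode("utf-8", errors="replace")
        facility, severity = parse_priority(data)
        print(f"facility={facility} severity={severity} msg={data!r}")

def run(port: int = 5514) -> None:
    # 5514 avoids needing root for the privileged default syslog port 514.
    with socketserver.UDPServer(("0.0.0.0", port), SyslogHandler) as server:
        server.serve_forever()

# run()  # uncomment to start listening
```

Point the router’s syslog export at the listener’s host and port, and each datagram prints with its decoded facility and severity.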

Specifically, I applied the following processors:

◉ Mask node to obfuscate sensitive data. Here, I replaced social security numbers in my log data with the string ‘REDACTED’.
◉ Regex filter node which passes along or discards data based on the regex pattern. For this example, I wanted to exclude DEBUG level logs from downstream storage.
◉ Log to metric node for extracting metrics from my log data. The metrics can be ingested downstream in lieu of raw data to support real-time monitoring use cases. I captured metrics to track the rate of errors, exceptions, and negative sentiment logs.
◉ Log to pattern node which I alluded to in the section above. This creates “patterns” from my data by grouping together similar loglines for easier interpretation and less noise.
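Functionally, the first three of these processors are ordinary stream transformations, and their behavior can be approximated vendor-neutrally. The function names and sample log lines below are illustrative assumptions, not Edge Delta’s API:

```python
import re
from collections import Counter

# Mask node: obfuscate sensitive data such as social security numbers.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask(line: str) -> str:
    return SSN_RE.sub("REDACTED", line)

# Regex filter node: pass or discard lines; here, drop DEBUG-level logs.
def regex_filter(line: str) -> bool:
    return " DEBUG " not in line

# Log-to-metric node: emit counts instead of raw lines for real-time monitoring.
def log_to_metric(lines):
    metrics = Counter()
    for line in lines:
        if "ERROR" in line:
            metrics["errors"] += 1
        if "Exception" in line:
            metrics["exceptions"] += 1
    return metrics

raw = [
    "2023-10-31 INFO user ssn=123-45-6789 logged in",
    "2023-10-31 DEBUG cache warm-up complete",
    "2023-10-31 ERROR payment failed: TimeoutException",
]

# Chain the processors: filter first, then mask what survives.
processed = [mask(line) for line in raw if regex_filter(line)]
print(processed)
print(log_to_metric(processed))
```

The DEBUG line never reaches downstream storage, the SSN is replaced before indexing, and the metric counters can be shipped in lieu of the raw data.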

Through Edge Delta’s Pipelines interface, you can apply processors to your data and route it to different destinations.

For now, all of this is being routed to the Edge Delta backend. However, Edge Delta is vendor-agnostic, and I can route processed data to different destinations – like AppDynamics or Cisco Full-Stack Observability – in a matter of clicks.

Source: cisco.com

Saturday, 28 October 2023

SD WAN solutions for utility Distribution Automation

Networks are expanding outside traditional office buildings and into industrial fixed and mobile use cases. This results in more devices being connected to the Internet and data centers, as well as increased security exposure. IoT has moved traditional networking far beyond the carpeted spaces and into industries like Fleets, Oil & Gas, Energy & Water Utilities, Remote Condition Monitoring and Control — basically anything that can establish a wide area connection. Moreover, these industrial networks are increasingly being considered critical infrastructure. In response to this expansion, Cisco has ongoing innovations advancing the ways networks operate – and at the forefront of these trends is the way that SD-WAN solutions enable and support industrial use cases.

Cisco Catalyst SD-WAN is already an industry-leading software-defined WAN solution that enables enterprises and organizations to connect users to their applications securely. It provides a software overlay that runs over standard network transports, including MPLS, broadband, and Internet, to deliver applications and services. The overlay network supports on-premises solutions but also extends the organization’s network to Infrastructure as a Service (IaaS) and multi-cloud environments, thereby accelerating the shift to the cloud.

Most utilities are used to building large networks utilizing technologies such as Internet Protocol Security (IPsec) and Dynamic Multipoint Virtual Private Network (DMVPN) to encrypt critical communications, Multiprotocol Label Switching (MPLS) for the underlying transport network, and public or private cellular for remote sites with no other WAN connectivity. Catalyst SD-WAN brings these technologies together and enables automation to greatly simplify deployments.

Automation benefits:

  • Secure Zero Touch deployment of field gateways (i.e., no field staff required to configure a gateway)
  • Simple provisioning of end-to-end service VPNs to segment traffic (SCADA, CCTV, PMU, IP Telephony, etc.)
  • Templated configurations, making it easy to change configurations at scale and push them to gateways in the field
  • Application of unified security policies across a diverse range of remote sites and equipment
  • Managing multiple backhaul connectivity options at the gateway including private MPLS for critical SCADA traffic and cellular for backup and even internet-based connections for non-critical traffic, where appropriate
  • Lifecycle management of gateways (e.g., firmware updates, alarm monitoring and statistics)

Cisco SD-WAN Validated Design for Distribution Automation (DA)


SD-WAN has its origins as an enterprise solution using fixed edge routers of various performance capabilities and predictable enterprise traffic patterns. Utility networks present new challenges, especially when applied to distribution network use cases:

  • Connectivity to legacy serial devices not supporting Ethernet/IP communications (e.g., Modbus RTU, DNP3 over serial, IEC101 or vendor proprietary)
  • Mobility needs for mobile assets to ensure resilient wide area connectivity
  • New WAN interfaces including dual 4G or 5G cellular, DSL, fiber or Ethernet
  • The use of NAT to allow fixed privately addressed equipment to communicate
  • Requirement to encrypt SCADA traffic across the wide area network
  • Applicable to both distribution substations and field area networks
  • Segregation of services via VPNs in flexible topologies (Hub & Spoke, or Meshed [Fully or Partial])
  • Intelligent traffic steering across multiple backhaul interfaces when needed (critical vs. non-critical traffic)


The Cisco SD-WAN solution can address several key Distribution Network use cases.


Cisco IoT Solutions has introduced a new Cisco Validated Design that addresses an SD-WAN architecture for Distribution Automation use cases. It leverages the Cisco Catalyst IR1100 Rugged Series Routers, with flexible modular backhaul capabilities (DSL, fiber, Ethernet, 4G/5G, 450 MHz LTE), operating as SD-WAN-controlled edge routers.


Along the distribution network feeders, the IR1101 should be positioned as a Distribution Automation gateway. It can be easily mounted within a DA device cabinet (e.g., recloser, cap bank controller) and can be powered by the same DC supply (flexible 9–36 VDC input). It also has extended environmental capabilities to cope with variations in temperature, humidity, and vibration.

The new SD-WAN for Utility Distributed Automation Design Guide builds on other existing documents that describe in detail Cisco’s SD-WAN architecture and industrial IoT hardware offerings and shows how they can be combined to provide a scalable, secure network. The new Design Guide is focused on areas that are unique or at least emphasized by DA use cases in general. This document also has detailed configuration examples for many of the DA features.

Source: cisco.com