Saturday, 20 July 2024

Maintaining Digital Compliance with the PCI DSS 4.0


The Payment Card Industry data security standards have evolved since 2002, when the first version was released. The most recent update, version 4.0.1, was released in June 2024. It refines PCI DSS 4.0, which introduced significant updates to both scope and requirements. These requirements are being phased in from now through March 2025.

Cisco has been involved with PCI since the outset, holding a seat on its board of advisors and helping shape the PCI standards through their different evolutions. Cisco has consulted extensively with customers to help them meet the requirements and has provided extensive, user-friendly documentation on how customers can meet them, both in minimizing the scope of the assessment and in ensuring security controls are present. We have released systems that are PCI compliant in both their control planes and data planes, and we have built out-of-the-box audit capabilities into a number of infrastructure-based and security-based solutions.

The purpose of this blog is to walk through PCI DSS 4.0 with a focus on the architects, leaders, and partners who have to navigate this transition. We will discuss what is new and relevant in PCI DSS 4.0, including its goals and changes. We will then explore the products and solutions customers are actively using to meet these requirements, and how our products are evolving to meet the new ones. The first part is targeted at teams that are already on the PCI journey; we then expand on PCI DSS in more detail for teams that are newer to the requirements framework.

One important thing to note about the 4.0 update is that it is a phased rollout. Phase 1 (13 requirements) had a deadline of March 31, 2024. The second phase is much larger; more time was given for it, but the deadline is approaching. Phase 2 covers 51 technical requirements and is due March 31, 2025.

Implementation timelines as per PCI At a Glance

What’s new in PCI DSS 4.0, and what are its goals?


There are many changes in PCI DSS 4.0. These were guided by four overarching goals and themes:

Continue to meet the security needs of the payments industry.

Security is evolving at a rapid clip: the number of public CVEs published annually has doubled in the past seven years (source: Statista). The evolving attack landscape is pushing security controls forward, and new types of attack require new standards. Examples of this evolution are new requirements around multi-factor authentication, new password requirements, and new e-commerce and phishing controls.

Promote security as a continuous process

Point-in-time audits are useful but do not speak to the ongoing rigor and operational hygiene needed to keep the proper security controls in place in a changing security environment. This goal is an important step in recognizing the need for continual service improvement between audits. It means that processes will have additional audit criteria in addition to the application of a security control.

Provide flexibility in maintaining payment security

The standard now allows for risk-based, customized approaches to solving security challenges, reflecting both the changing security environment and the changing financial application environments. If the intent of a security control can be met with a novel approach, that approach can be considered as fulfilling the PCI requirement.

Enhance validation methods and procedures for compliance

“Clear validation and reporting options support transparency and granularity.” (PCI 4.0 At a Glance). The standard articulates clarity in measurement and reporting. This is important for a number of reasons: you can’t improve what you don’t measure, and if you’re not systematically tracking it in well-defined language, it is cumbersome to reconcile. This focus will bring reports such as the attestation of compliance into closer alignment with reports on compliance and self-assessment questionnaires.

How Cisco helps customers meet their PCI Requirements.


Below is a table that briefly summarizes the requirements and the technology solutions customers can leverage to satisfy them. We go deeper into each of the requirements and their technical solutions afterward.

PCI DSS 4.0 Requirement: Cisco Technology/Solution
1. Install and Maintain network security controls: Cisco Firepower Next-Generation Firewall (NGFW), ACI, SDA, Cisco SD-WAN, Hypershield, Panoptica, Cisco Secure Workload
2. Apply secure configurations to all system components: Catalyst Center, Meraki, Cisco SD-WAN, Cisco ACI, Cisco CX Best Practice configuration report
3. Protect stored account data: Cisco Advanced Malware Protection (AMP) for Endpoints
4. Protect cardholder data with strong cryptography during transmission over open, public networks: wireless security requirements satisfied with Catalyst Center and Meraki
5. Protect all systems and networks from malicious software: Cisco AMP for Endpoints
6. Develop and Maintain secure systems and software: Meraki, Catalyst Center, ACI, Firepower, SD-WAN, Cisco Vulnerability Management
7. Restrict access to cardholder data by business need-to-know: Cisco ISE, Cisco Duo, TrustSec, SDA, Firepower
8. Identify users and authenticate access to system components: Cisco Duo for Multi-Factor Authentication (MFA), Cisco ISE, Splunk
9. Restrict physical access to cardholder data: Cisco Video Surveillance Manager, Meraki MV, Cisco IoT product suite
10. Log and monitor all access to system components and cardholder data: ThousandEyes, Accedian, Splunk
11. Test security of systems and networks regularly: Cisco Secure Network Analytics (Stealthwatch), Cisco Advanced Malware Protection, Cisco Catalyst Center, Cisco Splunk
12. Support information security with organizational policies and programs: Cisco CX Consulting and Incident Response, Cisco U

A more detailed look at the requirements and solutions is below:

Requirement 1: Install and Maintain network security controls.

This requirement ensures that appropriate network security controls are in place to protect the cardholder data environment (CDE) from malicious devices, actors, and connectivity from the rest of the network. For network and security architects, this is a major focus of applying security controls. Quite simply, it covers all the technology and process needed to ensure that “network connections between trusted and untrusted networks are controlled.” This includes physical and logical segments, networks, cloud, and compute controls for use cases such as dual-attached servers.

Cisco helps customers meet this requirement through a number of different technologies. Traditional controls include Firepower security, network segmentation via ACI, IPS, SD-WAN, and other network segmentation tools. Newer technologies such as cloud security, Multicloud Defense, Hypershield, Panoptica, and Cisco Secure Workload are helping meet the virtualized requirements. Given the relevance of this control to network security and the breadth of Cisco products, that list is not exhaustive; a number of other products that can help meet this control are beyond the scope of this blog.
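
To make the intent concrete, here is a minimal, vendor-neutral sketch in Python of the kind of check such tooling automates: auditing permit rules so that only pre-approved flows reach the CDE. The rule format and the approved-flow list are illustrative assumptions, not a Cisco API.

    APPROVED_FLOWS = {
        ("dmz-web", "cde-app", 443),  # web tier to payment app over HTTPS
        ("cde-app", "cde-db", 5432),  # payment app to its database
    }

    def audit_rules(rules):
        """Flag permit rules into the CDE that are not on the approved-flow list."""
        violations = []
        for rule in rules:
            if rule["action"] != "permit" or not rule["dst_zone"].startswith("cde-"):
                continue
            flow = (rule["src_zone"], rule["dst_zone"], rule["dst_port"])
            if flow not in APPROVED_FLOWS:
                violations.append(rule)
        return violations

    rules = [
        {"action": "permit", "src_zone": "dmz-web", "dst_zone": "cde-app", "dst_port": 443},
        {"action": "permit", "src_zone": "corp-lan", "dst_zone": "cde-db", "dst_port": 5432},
    ]
    for v in audit_rules(rules):
        print("Unapproved flow into the CDE:", v)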

Requirement 2: Apply secure configurations to all system components.

This requirement ensures processes are in place to apply proper hardening and best-practice configurations to components, minimizing attack surfaces. This includes ensuring unused services are disabled, passwords meet complexity requirements, and best-practice hardening is applied to all system components.

This requirement is met with a number of controller-based assessments of the infrastructure: Catalyst Center can report on configuration drift and best practices not being followed, as can Meraki and SD-WAN. Multivendor solutions such as Cisco NSO can also help ensure configuration compliance is maintained. There are also numerous CX advanced services reports that can be run across the infrastructure to verify that Cisco best practices are being followed, with a corresponding report and artifact that can be used as audit evidence.
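
As a simple illustration of drift detection, the sketch below compares a device's settings against a hardening baseline and reports deviations. The baseline keys and values are invented for the example, not Cisco-published settings.

    BASELINE = {  # illustrative hardening baseline, not Cisco-published values
        "telnet_enabled": False,    # unused and insecure services disabled
        "min_password_length": 12,  # password complexity enforced
        "ssh_version": 2,
    }

    def drift_report(running_config):
        """Compare a device's settings to the baseline and list any drift."""
        findings = []
        for key, expected in BASELINE.items():
            actual = running_config.get(key)
            if actual != expected:
                findings.append(f"{key}: expected {expected!r}, found {actual!r}")
        return findings

    device = {"telnet_enabled": True, "min_password_length": 8, "ssh_version": 2}
    for finding in drift_report(device):
        print(finding)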

Requirement 3: Protect stored account data.

This requirement concerns application and database settings; there isn’t a direct linkage to infrastructure. Analysis of how account data is stored, what is stored, and where it is stored, as well as encryption for data at rest and the processes for managing it, are covered by this requirement.

Requirement 4: Protect cardholder data with strong cryptography during transmission over open, public networks

This requirement ensures encryption of the primary account number when transmitted over open and public networks. Ideally the data should be encrypted prior to transmission, but the scope also applies to wireless network encryption and authentication protocols, as these have been attacked in attempts to enter the cardholder data environment. Catalyst Center and Meraki can ensure the appropriate wireless security settings are enabled.
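
For teams validating this requirement by hand, the short Python sketch below checks that an endpoint negotiates TLS 1.2 or higher with a valid certificate; the host name is a placeholder for the endpoint under test.

    import socket
    import ssl

    def check_tls(host, port=443):
        """Connect to host and return the negotiated TLS version, enforcing >= 1.2."""
        context = ssl.create_default_context()            # validates the certificate
        context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse weaker protocols
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. "TLSv1.3"

    print(check_tls("example.com"))  # substitute the endpoint under test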

Requirement 5: Protect all systems and networks from malicious software

Preventing malware is a critical function for security teams in ensuring the integrity of financial systems. This requirement focuses on malware and phishing security and controls across the breadth of devices that make up the IT infrastructure.

This requirement is met with a number of Cisco security controls: email security, Advanced Malware Protection for networks and for endpoints, NGFW, Cisco Umbrella, Secure Network Analytics, and encrypted traffic analytics are just some of the solutions that can be brought to bear to adequately address it.

Requirement 6: Develop and Maintain secure systems and software

Security vulnerabilities are a clear and present danger to the integrity of the entire payments platform. PCI recognizes the need for the proper people, process, and technology to update and maintain systems on an ongoing basis. Having a process for monitoring and applying vendor security patches, and maintaining strong development practices for bespoke software, is critical for protecting cardholder information.

This requirement is met with a number of controller-based capabilities to assess and deploy software consistently and at speed: Meraki, Catalyst Center, ACI, Firepower, and SD-WAN all have the ability to monitor and maintain software. In addition, Cisco Vulnerability Management is a unique capability that takes into account real-world metrics on publicly disclosed CVEs to prioritize the most important and impactful patches. Given the breadth of software in an IT environment, attempting to do everything at equal priority means you are systematically not addressing the critical risks as quickly as possible. To address your priorities you must first prioritize, and Cisco Vulnerability Management helps financials solve this problem.
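
The idea behind risk-based prioritization can be sketched in a few lines: rank findings by a blend of severity, exploitation evidence, and CDE exposure rather than by severity alone. The weights below are illustrative assumptions, not the scoring model Cisco Vulnerability Management actually uses.

    def risk_score(cve):
        """Blend base severity with exploitation evidence and CDE exposure."""
        score = cve["cvss"]              # base severity, 0-10
        if cve.get("exploited_in_wild"):
            score += 5.0                 # active exploitation dominates the ranking
        if cve.get("asset_in_cde"):
            score += 2.0                 # exposure of the CDE raises urgency
        return score

    findings = [
        {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "asset_in_cde": False},
        {"id": "CVE-B", "cvss": 7.5, "exploited_in_wild": True, "asset_in_cde": True},
    ]
    for f in sorted(findings, key=risk_score, reverse=True):
        print(f["id"], risk_score(f))  # CVE-B outranks CVE-A despite lower CVSS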

Requirement 7: Restrict access to cardholder data by business need-to-know

Authorization and the application of least-privilege access are best practices, and this requirement enforces them. Applied at the network, application, and data levels, access to critical systems must be limited to authorized people and systems based on need to know and according to job responsibilities.

The systems used to meet this requirement are, in many cases, shared with requirement 8. With zero trust and context-based access controls, identification is combined with authorization using role-based and context-based access controls. Some of these can be provided via Cisco Identity Services Engine (ISE), which can take into account a number of factors beyond identity (geography, VPN status, time of day) when making an authorization decision. Cisco Duo is also used extensively by financial institutions for context-based zero trust capabilities. For network security enforcement of job roles accessing the cardholder data environment, Cisco Firepower and Software-Defined Access can make context- and role-based access decisions to help satisfy this requirement. For monitoring the required admin-level controls to prevent privilege escalation and misuse of root or system-level accounts, Cisco Splunk can help teams ensure they are monitoring and able to satisfy these requirements.

Requirement 8: Identify users and authenticate access to system components

Identification of a user is critical to ensuring the authorization components work. A strictly managed lifecycle for accounts and authentication controls is required. To satisfy this requirement, strong authentication controls must be in place, teams must ensure multi-factor authentication is enforced for the cardholder data environment, and strong processes around user identification must exist.

Cisco ISE and Cisco Duo can help teams satisfy the security controls around authentication and MFA. Coupled with that, Cisco Splunk can help meet the logging and auditing requirements, ensuring this security control is acting as expected.
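
For context on what MFA tokens compute under the hood, here is a minimal sketch of the time-based one-time password (TOTP) mechanism from RFC 6238; production deployments should rely on a vetted product such as Duo rather than hand-rolled code.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval  # time-based moving factor
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code, valid for ~30 seconds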

Requirement 9: Restrict physical access to cardholder data

“Physical access to cardholder data or systems that store, process, or transmit cardholder data should be restricted so that unauthorized individuals cannot access or remove systems or hardcopies containing this data.” (PCI QRG). This affects security and access controls for facilities and systems, for personnel and visitors. It also contains guidance for how to manage media with cardholder data.

Though outside the typical remit of traditional Cisco switches and routers, these devices play a supporting role in the infrastructure of cameras and IoT devices used for access controls. Some financials have deployed separate air-gapped IoT networks using the cost efficiencies and simplified stack of Meraki devices, which simplifies audit and administration of these environments. Legacy proprietary camera networks have been IP-enabled with wired and wireless support, and Meraki MV cameras offer affordable ways to scale out physical security controls securely and at speed. For building management systems, Cisco has a suite of IoT devices that support physical building interfaces, hardened environmental capabilities, and IoT protocols used in building management (BACnet). These can integrate together and log to Cisco Splunk for consolidated logging of physical access across all vendors and all access types.

Requirement 10: Log and monitor all access to system components and cardholder data

Financial institutions must be able to validate the fidelity of their financial transaction systems and all supporting infrastructure. Basic security hygiene includes logging and monitoring of all access to systems. This requirement spells out the best-practice processes for how to conduct and manage logging of infrastructure devices, allowing for forensic analysis, early detection, alarming, and root-cause analysis of issues.

Cisco Splunk is the world leader in log analytics for both infrastructure and security teams, and it is deployed at the majority of large financials today to meet these requirements. To complement this, active synthetic testing from Cisco ThousandEyes and Accedian helps financials detect failures in critical security control systems faster, satisfying requirement 10.7.
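
A simple version of the requirement 10.7 idea, detecting that a critical security control has gone quiet, can be sketched as a staleness check over log-source heartbeats. The threshold and source names are illustrative assumptions.

    import time

    MAX_SILENCE_SECONDS = 900  # alert if a source is silent for 15 minutes

    def stale_sources(last_seen, now=None):
        """Return log sources whose most recent event is older than the threshold."""
        now = time.time() if now is None else now
        return [src for src, ts in last_seen.items() if now - ts > MAX_SILENCE_SECONDS]

    last_event = {"cde-firewall": time.time() - 60, "waf": time.time() - 3600}
    for source in stale_sources(last_event):
        print(f"ALERT: no events from {source}; verify the control is operating")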

Requirement 11: Test security of systems and networks regularly

“Vulnerabilities are being discovered continually by malicious individuals and researchers, and being introduced by new software. System components, processes, and bespoke and custom software should be tested frequently to ensure security controls continue to reflect a changing environment.” (PCI QRG)

One of the largest pain points financials face is managing regular security patching across their entire fleet. The rate of CVEs released has doubled in the past seven years, and tools like Cisco Vulnerability Management are critical for prioritizing an effectively infinite security need against a finite amount of resources. Additional Cisco tools that can help satisfy this requirement include: Cisco Secure Network Analytics (11.5), Cisco Advanced Malware Protection (11.5), Cisco Catalyst Center (11.2), and Cisco Splunk (11.6).

Requirement 12: Support information security with organizational policies and programs

People, process, and technology all need to be addressed for a robust security program that can satisfy PCI requirements. This requirement focuses on the people and processes that are instrumental in supporting a secure PCI environment. Items like security awareness training, which can be addressed with Cisco U, are included. Cisco CX has extensive experience consulting with security organizations and can help review and create policies that keep the organization secure. Finally, having a Cisco Incident Response engagement already lined up can help satisfy requirement 12.10, which requires the ability to respond to incidents immediately.

Source: cisco.com

Thursday, 11 July 2024

The Trifecta Effect of Integrating XDR, SIEM, and SOAR


In the ever-evolving landscape of cybersecurity, the integration of cutting-edge technologies has become paramount to stay ahead of sophisticated threats. One such powerful combination that is revolutionizing security operations is the integration of Extended Detection and Response (XDR), Security Information and Event Management (SIEM), and Security Orchestration, Automation, and Response (SOAR). Let’s delve into the trifecta effect of integrating these technologies and how they can enhance your organization’s security posture.

Security Information and Event Management (SIEM)


SIEM solutions play a crucial role in centralizing and analyzing security event data from various sources within an organization. They provide real-time monitoring, threat detection, and incident response capabilities. By aggregating logs and data from disparate security and non-security systems, SIEM enables security teams to detect anomalies, investigate security incidents, and comply with regulatory requirements.

Extended Detection and Response (XDR)


XDR represents a holistic approach to threat detection and response by consolidating multiple security layers into a unified platform. It provides enhanced visibility across endpoints, networks, and cloud environments, enabling security teams to detect and respond to threats more effectively. By leveraging advanced analytics and machine learning, XDR can correlate and analyze vast amounts of data to identify complex threats in real-time.

Security Orchestration, Automation, and Response (SOAR)


SOAR platforms empower security teams to automate repetitive tasks, orchestrate incident response workflows, and streamline security operations. By integrating with XDR and SIEM, SOAR can enhance the efficiency and effectiveness of incident response processes. It enables teams to respond to security incidents rapidly, reduce manual errors, and improve overall response times.

How XDR, SIEM, and SOAR Complement Each Other


The trifecta effect of integrating XDR, SIEM, and SOAR brings together the best of all three worlds, creating a comprehensive and synergistic security solution. Here’s how the components of each technology complement each other:

  • XDR and SIEM: XDR’s advanced analytics, machine learning, and threat detection capabilities are integrated with SIEM’s centralized log management and real-time monitoring. This combination enables organizations to detect and respond to both known and unknown threats more effectively, as well as comply with regulatory requirements. SIEM’s historical pattern recognition can help XDR identify recurring threats, while XDR’s API-based data access and stealthy-threat detection can enhance SIEM’s detection capabilities. XDR and SIEM can work together in a security architecture to provide a more robust and mature security posture: XDR provides real-time visibility, while SIEM provides forensic search, data archival, and customization. XDR can also reduce the number of contextualized alerts sent to the SIEM for prioritized investigations, enabling security teams to respond to security incidents more efficiently.
  • XDR and SOAR: XDR’s response integrations can have similar functionality to SOAR platforms, with the potential to make SOAR a native part of XDR platforms in the future. This integration allows for automated threat response, enabling security teams to automatically remediate threats in their environment without human intervention. SOAR’s orchestration and automation capabilities can also enhance XDR’s response capabilities, providing a more proactive defense posture.
  • SIEM and SOAR: SIEM and SOAR can integrate best-of-breed components without vendor lock-in, allowing for more flexibility in security operations. SOAR’s incident response capabilities, such as use-case-based playbooks, can orchestrate response actions across the environment, assign tasks to personnel, and incorporate user inputs to augment automated actions. This integration can help SOAR platforms focus on incident response, while SIEM solutions can focus on data collection and analysis.

Case Study: Credential Stuffing Attack


Let’s walk through a scenario of a credential stuffing attack and model how this trifecta could come into play:

Phase 1: Attack Initiation and Initial Detection

An attacker begins a credential stuffing attack by using previously breached username and password pairs to gain unauthorized access to the organization’s web applications.

  • XDR Role: XDR monitors the endpoints and detects a high volume of failed login attempts from various IP addresses, which is unusual and indicative of a credential-stuffing attack. XDR can also identify successful logins from suspicious locations or devices, adding this information to the incident details.
  • SIEM Role: The SIEM system, collecting logs from web application firewalls (WAF), authentication servers, and user databases, notices an abnormal spike in authentication requests and login failures. This complements the XDR’s endpoint visibility by providing a network-wide perspective and helps to confirm the scale of the attack. A minimal sketch of this detection logic follows the list.
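
Here is a minimal sketch of the detection logic both tools apply in this phase: flag any source IP whose failed-login count within a sliding window exceeds a threshold. The window size and threshold are illustrative assumptions.

    from collections import defaultdict, deque

    WINDOW_SECONDS = 300   # five-minute sliding window
    THRESHOLD = 50         # failures per source IP before flagging

    class FailedLoginDetector:
        def __init__(self):
            self.failures = defaultdict(deque)  # source IP -> failure timestamps

        def record_failure(self, ip, ts):
            """Record a failed login; return True when the IP crosses the threshold."""
            window = self.failures[ip]
            window.append(ts)
            while window and ts - window[0] > WINDOW_SECONDS:
                window.popleft()  # discard events that fell out of the window
            return len(window) > THRESHOLD

    detector = FailedLoginDetector()
    for i in range(60):  # one IP hammering the login endpoint
        flagged = detector.record_failure("203.0.113.7", 1000.0 + i)
    print("credential stuffing suspected:", flagged)  # True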

Phase 2: Alert Correlation and Confirmation of the Attack

The attack continues as the attacker tries to automate login requests to bypass security controls.

  • XDR Role: XDR correlates the failed authentication attempts with geographic anomalies (such as logins from countries where the company does not operate) and reports these findings to the SIEM.
  • SIEM Role: SIEM cross-references the XDR alerts with its log data, confirming the attack pattern. It leverages its correlation rules to identify legitimate accounts that may have been compromised during the attack, which XDR might not be able to determine on its own.

Phase 3: Automated Response and Mitigation

With the attack confirmed, rapid response is necessary to minimize damage.

  • SOAR Role: Upon receiving alerts from both XDR and SIEM, the SOAR platform triggers a predefined response playbook that automatically enforces additional authentication requirements for the affected accounts, such as multi-factor authentication (MFA), and blocks IP addresses associated with the attack. A simplified sketch of such a playbook follows this list.
  • XDR Role: XDR can automatically enforce endpoint-based security controls, like updating access policies or locking down accounts that have shown suspicious login activities.
  • SIEM Role: SIEM supports the response by providing additional context for the SOAR to execute its playbooks effectively, such as lists of affected user accounts and their associated devices.
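
A SOAR playbook of this kind can be pictured as an ordered list of automated actions driven by the correlated alert. The sketch below uses stub functions; a real platform would call firewall, identity, and ticketing APIs.

    def block_ips(ips):
        print(f"blocking {len(ips)} attacker IPs at the network edge")

    def require_mfa(accounts):
        print(f"step-up MFA enforced for {len(accounts)} accounts")

    def open_ticket(summary):
        print(f"incident ticket opened: {summary}")

    PLAYBOOK = [  # ordered response steps, each driven by the correlated alert
        lambda alert: block_ips(alert["attacker_ips"]),
        lambda alert: require_mfa(alert["affected_accounts"]),
        lambda alert: open_ticket(alert["summary"]),
    ]

    alert = {
        "attacker_ips": ["203.0.113.7", "198.51.100.9"],
        "affected_accounts": ["alice", "bob"],
        "summary": "credential stuffing against web login",
    }
    for step in PLAYBOOK:
        step(alert)  # each step runs without human intervention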

Phase 4: Post-Attack Analysis and Strengthening Defenses

After blocking the immediate threat, a more in-depth analysis is conducted to ensure all compromised accounts are secured.

  • SIEM Role: SIEM facilitates a detailed investigation by querying historical data to uncover the full scope of the attack, identifying compromised accounts, and understanding the methods used by attackers.
  • SOAR Role: SOAR provides workflows and playbooks to automatically reset passwords and notify affected users, while also updating security policies based on the attack vectors used.
  • XDR Role: The XDR platform assists with forensic analysis by leveraging its integrated view across endpoints, network, and cloud to pinpoint how the attacker could bypass existing security measures.

Phase 5: Continuous Improvement and Monitoring

To prevent future attacks, the organization needs to refine its security posture and implement new controls.

  • SOAR Role: SOAR can automate the rollout of new security policies across the organization and conduct simulated phishing exercises to educate employees about security best practices.
  • SIEM Role: SIEM takes charge of long-term data collection and analysis to monitor for new patterns that may indicate a repeat of the attack, ensuring continuous improvement in the organization’s security monitoring capabilities.
  • XDR Role: XDR continuously monitors for any signs of a resurgence of the attack or similar tactics being used, ensuring ongoing vigilance and quick detection of any new threats.

In this scenario, XDR and SIEM play complementary roles where XDR’s real-time analysis and endpoint visibility are enhanced by SIEM’s ability to provide a broader view of the network and historical non-security context. The SOAR platform bridges the gap between detection and response, allowing for quick and efficient mitigation of the attack. This integrated approach ensures that no aspect of the attack goes unnoticed and that the organization can rapidly adapt to and defend against such sophisticated cyber threats.

Impact of Non-Integrated Approach


Removing either SIEM or XDR from the scenario would significantly affect the organization’s ability to effectively detect, respond to, and recover from a credential-stuffing attack. Let’s consider the impact of removing each one individually:

Removing SIEM

  • Loss of Centralized Log Management: Without SIEM, the organization loses centralized visibility into the security data generated by various devices and systems across the network. This makes it more challenging to detect patterns and anomalies that are indicative of a credential stuffing attack, especially when they span across multiple systems and applications.
  • Reduced Correlation and Contextualization: SIEM’s strength lies in its ability to correlate disparate events and provide context, such as flagging simultaneous login failures across different systems. Without SIEM, the organization may not connect related events that could indicate a coordinated attack.
  • Inefficient Incident Management: SIEM platforms often serve as the hub for incident management, providing tools for tracking, investigating, and documenting security incidents. Without it, the organization may struggle with managing incidents effectively, potentially leading to slower response times and less organized remediation efforts.
  • Difficulty in Compliance Reporting: Many organizations rely on SIEM for compliance reporting and audit trails. Without SIEM, they may find it more challenging to demonstrate compliance with various regulations, potentially leading to legal and financial consequences.

Removing XDR

  • Reduced Endpoint and Network Visibility: XDR provides a detailed view of activities on endpoints and across the network. Removing XDR would leave a blind spot in detecting malicious actions occurring on individual devices, which are often the entry points for credential-stuffing attacks.
  • Weakened Real-time Detection: XDR platforms are designed for real-time detection and response. Without XDR, the organization might not be able to detect and respond to threats as quickly, allowing attackers more time to exploit compromised credentials.
  • Limited Automated Response: XDR can automate immediate response actions, such as isolating a compromised endpoint or terminating a malicious process. Without XDR, the organization would have to rely more heavily on manual intervention, potentially allowing the attack to spread further.
  • Loss of Integrated Response Capabilities: XDR often integrates with other security tools to provide a coordinated response to detected threats. Without XDR, the organization may find it more difficult to execute a synchronized response across different security layers.

The Case for an Integrated Approach


The conversation should not be framed as “XDR vs. SIEM & SOAR” but rather as “XDR, SIEM and SOAR.” These three technologies are not mutually exclusive anymore; instead, they complement each other and serve to strengthen an organization’s security posture when integrated effectively.

In essence, the integration of XDR, SIEM, and SOAR technologies is not a competition but a collaboration that brings together the best features of all three worlds.

Source: cisco.com

Tuesday, 9 July 2024

Cisco at NAB 2024: Committed to Delivering Next-Level Experiences That ‘Wow’


We are less than two weeks away from the National Association of Broadcasters (NAB) 101st Show happening once again in Las Vegas, Nevada. As a decade-long returning attendee, I am always excited to be with customers, partners, and colleagues, shaking hands and engaging in stimulating conversations about the next wave of innovation hitting the media and entertainment industry. With new technology features, expanded partnerships, and growing customer momentum, I am looking forward to highlighting Cisco’s cutting-edge solutions and services to our booth visitors this year.

Our leadership in this space continues to grow as we innovate across our portfolio to transform the way broadcasters, content providers, venues, sports teams and leagues are harnessing the power of AI and other emerging technologies and taking audience experiences to the next level.

This year, we will showcase our comprehensive portfolio and demonstrate Cisco’s strategy and innovation across three key areas:

  • Enabling dynamic IP production and workflows   
  • Transforming content delivery, devices, and network assurance   
  • Operationalizing the fan experience with cutting-edge, technology-centric venues

Enabling Dynamic IP Production  


The broadcast industry’s evolution to IP requires sustainable technology that provides strong multicast visibility and the flexibility to integrate multiple multicast and/or unicast, on-premises and cloud domains. Visitors to the Cisco booth can expect to see our flagship solution, Cisco IP Fabric for Media (IPFM), on display at the Cloud Native Media Production Anywhere demo, showing how we are enabling these transitions to IP aligned with SMPTE 2110. With several hundred deployments across the globe, Cisco IP Fabric for Media continues to serve as the foundation for addressing broadcasters’ audio and video requirements. 

Cisco IPFM includes features for Non-Blocking Multicast (NBM), Network Address Translation (NAT), and now a Protocol Independent Multicast flooding mechanism with Source Discovery (PFM-SD). These components provide end-to-end multicast, live traffic visibility, simplified and flexible deployment at scale, innovations in NBM active mode, and both PTP and RTP flow monitoring. With our simplified network management and operations tool for provisioning IP-based media fabrics, Cisco Nexus Dashboard Fabric Controller (NDFC), users can now take advantage of an expansive view of the network with SMPTE ST 2022-7 visibility and an enhanced Events UI tab to capture critical fault notifications. 

Together with Intel, we will be highlighting our innovative solution for Cloud Native Media Production which uses cloud-native architectures with open standards and open-source software to power new media workflows and help speed digital transformation. 

Transforming Content Delivery, Devices, and Network Assurance 


Together with our partner Qwilt, we will be showcasing Global Edge Cloud for Content Delivery, highlighting how together we are creating a more efficient way to deliver highly distributed live and on-demand content. Our joint solution, serving over one billion unique subscribers globally, provides Quality-as-a-Service by pushing content caching and delivery out to the embedded edge of the carrier network. 

Comprised of Cisco’s edge infrastructure and Qwilt’s Open Edge platform, the solution based on open caching is helping meet the ever-increasing demand for content delivery and edge cloud services and improving re-buffering time, time to first frame (TTFF), average bitrate (ABR) and error rate for live-streamed content delivery.  

We’ve seen a growing need for trusted collaboration tools and devices to help drive engaging virtual and hybrid entertainment experiences, and to connect teams across the globe. In the Cisco Devices: Be there, from Anywhere demo, we will highlight intuitive and interoperable device experiences that are designed to make remote and hybrid collaboration seamless and distraction-free, on any meeting platform. The native Microsoft Teams experience on certified Cisco collaboration devices is designed for frictionless employee experiences. 

Accedian will take center stage in our Cisco Critical Network Assurance demo, highlighting how the most recent addition to Cisco’s observability portfolio is providing industry-leading network performance monitoring and assurance to help enable seamless operations for content providers, service providers, and more.  

Operationalizing the Fan Experience 


One of my personal favorites, and arguably the cornerstone of our Sports, Media and Entertainment portfolio, is our best-in-class VisionEDGE solution for IPTV and dynamic digital signage in partnership with Wipro. A solution that truly “wows,” VisionEDGE provides dynamic content management and high-quality experiences to every fan, in every seat, at venues all over the world including (but not limited to) Allegiant Stadium here in Las Vegas, CITYPARK in St. Louis, Santiago Bernabéu Stadium in Madrid, GEODIS Park in Nashville, and SoFi Stadium and Hollywood Park in Los Angeles (host of Super Bowl LVI). 

As you make your way across the show floor at the Las Vegas Convention Center during NAB be sure to check out the Cisco Booth, #W2743 in the West Hall, and see for yourself how Cisco and our impressive ecosystem of partners are forging a path of innovative new experiences for the media and entertainment industry. 

Source: cisco.com

Saturday, 6 July 2024

The AI Revolution and Critical Infrastructure


Artificial intelligence was a central theme at Cisco Live US 2024, and it’s clear AI has already made significant strides in reshaping our world. Cisco’s AI-powered innovations build digital resilience by uniquely combining the power of the network with industry-leading security, observability, and data. They simplify adoption and offer visibility and insight across the entire digital footprint, and for those overseeing critical infrastructure, the potential benefits are clear. Undoubtedly, the latest technology offers the promise of enhanced operations. However, the unpredictability of AI’s outcomes can understandably give pause.

Different Kinds of AI 


There are multiple kinds of AI, and each plays a role in different operational situations. Some AI models produce consistent and predictable results, while others are well suited to identifying relevant information within huge mountains of unstructured data. Choosing the right AI model to address each operational need can be challenging. Cisco’s acquisition of Splunk provides an increasing number of security AI tools to address operational security needs. The vast ecosystem of Cisco’s partners enables a selection of AI tools for various operational use cases.  

Cisco’s Role in AI Solutions


At the heart of every AI solution is data movement and processing. This is where Cisco excels. Cisco’s infrastructure is designed to receive data from sensors and ensure its secure and reliable transport to the applications that require it, making it a key player in the AI landscape. Examples of AI solutions in critical infrastructure include failure detection, failure prediction, pothole detection, process optimization, and analysis queries. The video below of Roland Plett’s Cisco Live Session takes a deeper look at each of these examples.  

Video: The AI Revolution and Critical Infrastructure


Summary

AI is changing the way we engage data in industrial operations. There are multiple kinds of AI models, and the combination of models you need depends on the problem you’re trying to solve. It’s essential to recognize that deep learning AI models, like generative AI, are based on probabilities and don’t have deterministic or repeatable outcomes. This is why choosing the right model for your desired result is critically important.

Source: cisco.com

Friday, 5 July 2024

Mastering Nutanix Hyperconverged Infrastructure on Cisco’s Black Belt Academy

The digital landscape witnessed a significant milestone on August 28, 2023, when Cisco and Nutanix unveiled a global strategic partnership that promises to be a game-changer in the realm of hybrid multi-cloud computing. This alliance is set to fast-track and streamline the hybrid multi-cloud expedition for customers, all while redefining the core principles of data center modernization. With the integration of Nutanix’s premier software platform and Cisco’s cutting-edge server portfolio, this collaboration has produced what is arguably the industry’s most robust and comprehensive hyperconverged infrastructure solution to date.

When Cisco announced the end-of-life for the Cisco HyperFlex Data Platform (HXDP) on September 12, 2023, it left our customers and partners equally overwhelmed. So, when Cisco proposed that Nutanix software running on Cisco hardware would be a direct replacement for HyperFlex, we at Cisco Black Belt Academy made sure that our partners got prompt guidance on the new solution, with thorough details on its enhanced HCI capabilities, topped with direction on transitioning or migrating from HyperFlex to Cisco’s HCI solution with Nutanix.

The ”Chronicle” of Nutanix on Cisco Black Belt Academy

The Nutanix Stage 1 & 2 tracks on Cisco Black Belt Academy have launched for both presales and deployment roles. These tracks cover:

1. Cisco’s Hyperconverged Strategy

Explains how the partnership of Cisco and Nutanix is forged on their combined edge in application, data and infrastructure management.

2. Introduction to Hyperconverged Infrastructure

Acquaints our partners with next-generation HCI, the benefits of HCI, sustained innovations and the HCI vendor landscape.

3. Solution Overview

Details the Cisco Compute Hyperconverged with Nutanix solution and covers its cloud infrastructure, broad data services portfolio, Nutanix Cloud Manager, Nutanix Unified Storage and Prism, the cloud management interface.

4. Architecture Deep Dive

Elucidates why the Nutanix architecture is different from other HCI solutions; shows how Cisco servers, storage, networking, and SaaS operations are combined with the Nutanix Cloud Platform; explains how the controller VMs work to deliver a high-performance structure; and introduces the resiliency and functioning of the storage layer.

5. Configuration & Deployment

In-depth demonstrations and techniques surrounding Nutanix Deployment, covering:

  • Initial Configurations/Guide to getting started
  • Deploying the Nutanix Foundation installer VM to a VMware ESX 7 cluster.
  • Deploying a Nutanix cluster on UCS servers managed by UCS Manager and adding those hosts to vCenter.
  • Expanding a Nutanix cluster with a node that has been previously provisioned.
  • Deploying Prism Central from Prism Element and registering the cluster with the newly created Prism Central.
  • Updating the UCS Server Firmware with Nutanix Life Cycle Manager (LCM) without disrupting overall cluster operations.

6. Migration from HyperFlex

Learn the various options to migrate existing HyperFlex platforms to the new Nutanix platform. Gain an understanding of how migrations of virtual machines between clusters of VMware ESXi servers are most easily accomplished via “shared nothing” vMotion. In addition, learn about a free software tool called Move that Nutanix offers, which acts as an intermediary agent and coordinator to move VMs between two systems.

7. Sizing Cisco HCI with Nutanix

Discusses the Cisco HCI with Nutanix sizing based on:

  • Output files from RVTools and Nutanix Collector tools
  • Existing HyperFlex and Nutanix Bill Of Materials (BOM)
  • VM-based and capacity-based sizing of Cisco HCI with Nutanix using the Nutanix Sizer tool.

8. Winning with Nutanix

Acquire knowledge of Nutanix differentiators in the market and insights on the competitive environment and the edge over VMware, HPE and Lenovo, while covering ways to successfully navigate CI and HCI customer conversations.

9. dCloud/Capture the Flag (CTF)

Hands-on demo with access to a simple Nutanix deployment on Cisco UCS, including Cisco Intersight, Nutanix Prism Element, Nutanix AHV, Prism Central and Cisco UCS Manager. The Capture the Flag (CTF) missions provide a gamified way of understanding what the new Cisco and Nutanix partnership brings to the table.


Where to learn more?


With modern business challenges and ever-changing market dynamics, applications have become the center of attention for every customer, and these applications are growing at a fast pace. IT teams are required to deploy them faster, and with a cloud operating model in place. Hence, it becomes vital to learn and understand how the partnership of Cisco and Nutanix can help deliver infrastructure and applications globally using best-in-class cloud operating models, with added resiliency and flexibility. The “Cisco HCI solution with Nutanix” curriculum on Cisco Black Belt Academy can instill the confidence to handle customer conversations and perform a successful PoC/PoV as a presales SE, or to navigate thorough deployments of the Nutanix solution as a field engineer migrating from an older HyperFlex base.

Source: cisco.com

Thursday, 4 July 2024

Digital Forensics for Investigating the Metaverse

The intriguing realm of the metaverse should not make us overlook its cybersecurity hazards.

Metaverse adoption has been steadily increasing globally, with use cases such as virtual weddings, auctions, and the establishment of government offices and law enforcement agencies. Prominent organizations such as INTERPOL and others are investing considerable time and resources in researching the space, underscoring the importance of the metaverse. While the growth of the metaverse has been accelerating, its full potential has not yet been realized due to the slow development of the computing systems and accessories users need to fully immerse themselves in virtual environments; this is gradually improving with the production of augmented and virtual reality solutions such as HoloLens, Valve Index and HaptX Gloves.

As virtual reality tools and hardware evolve, enabling deeper immersion in virtual environments, we anticipate a broader embrace and utilization of the metaverse.

Significant concerns have arisen regarding criminal activity within this virtual realm. The World Economic Forum, INTERPOL and EUROPOL have highlighted the fact that criminals have already begun exploiting the metaverse. However, due to the early stage of the metaverse’s development, forensic science has not yet caught up, lacking practical methodologies and tools for analyzing adversarial activity within this realm.


Unlike conventional forensic investigations that primarily rely on physical evidence, investigations within the metaverse revolve entirely around digital and virtual evidence. This includes aspects such as user interactions, transactions and behaviors occurring within the virtual world. Complicating matters further, metaverse environments are characterized by decentralization and interoperability across diverse virtual landscapes. There are unique challenges related to the ownership and origin of digital assets, as users can join metaverse platforms with anonymous wallets and interact pseudonymously without revealing their real identities. Such analysis requires advanced blockchain analytics capabilities and large attribution databases linking wallets and addresses to actual users and threat actors. As a result, this new digital realm necessitates the development of innovative methodologies and tools designed for tracking and analyzing digital footprints, which play a crucial role in addressing virtual crime and ensuring security and virtual safety in the metaverse.

The security community needs a practical, real-world forensic framework model and a close examination of the intricacies involved in metaverse forensics.


Case studies


User activity in the metaverse is immersed in digital environments where interactions and transactions are exclusively digital, encompassing moving parts such as chatting, user movements, item exchanges, blockchain backend operations, non-fungible tokens (NFTs), and more. The diverse and multifaceted nature of these environments presents adversaries with numerous opportunities for malicious activities such as virtual theft, harassment, fraud, and virtual violence, which will only grow with the development of more realistic metaverse environments (Figure 1). The distinct aspect of these crimes is that they often lack any physical real-world connection, presenting unique challenges in investigating and understanding the underlying tactics, techniques and procedures leveraged by adversaries.

Occurrences of threats on metaverse platforms already exist, with the most notable to date involving British police launching their first-ever investigation into virtual sexual harassment in the metaverse, stating that although there are no physical injuries, there is an emotional and psychological impact on the victim.

Figure 1. INTERPOL’s outline of potential threats in metaverse.
Here are two other theoretical scenarios that exemplify the importance of metaverse forensics and the need to distinguish it from contemporary forensics.

Scenario 1 – Robbery from an avatar (a metaverse gift): In the metaverse, a character approaches another avatar to present virtual shoes as a gift. The avatar accepts the gift, but a few hours later discovers that all digital assets associated with their metaverse account and digital wallet have disappeared. This incident involving stealing digital assets occurred because the seemingly innocent gift of virtual shoes was, in fact, a malicious NFT embedded with adversarial code that facilitated the theft of the avatar’s digital assets.

Scenario 2 – A metaverse conference: A user attends a cybersecurity conference in the metaverse, not knowing it is organized by cybercriminals. Their aim is to lure high-value stakeholders from the industry to steal their data and digital assets. This event takes place in a well-known conference hall in the metaverse. The registration form for the event includes a smart contract designed to extract personal information from all attendees. Additionally, it embeds a time-triggered malicious code set to steal digital assets from each avatar at random intervals after the conference ends. Investigating such incidents requires a comprehensive multi-dimensional analysis that encompasses marketplaces, metaverse bridges, blockchain activities, individual user behavior in the metaverse, data logs of the conference hall and the platform hosting the event, as well as data from any supporting hardware.

Challenges for forensic investigators and law enforcement


Several challenges exist for metaverse investigators. And as the metaverse evolves, additional challenges are expected to surface. Here are some potential issues law enforcement and cybersecurity investigators may run into.

Decentralization and jurisdictions: The decentralized nature of many metaverse platforms can lead to jurisdictional complexities. Determining which laws apply and which legal authority has jurisdiction over a particular incident can be challenging, especially when the involved parties are spread across different countries. As such, it will be exponentially complex or even impossible in some cases for law enforcement to subpoena criminals or metaverse facilitators.

Anonymity and identity verification: Users in the metaverse often operate anonymously or pseudonymously, with avatars bearing random nicknames, making it difficult to identify their real-world identities. This anonymity can be a significant hurdle in linking virtual actions to criminals. Only a few options for unmasking adversarial activity exist, including tracing IP addresses and analyzing platform logs, which can be a complex undertaking when dealing with truly decentralized metaverse platforms, often leaving blockchain analytics as the only viable analysis methodology.

Complexity and interoperability of virtual environments: The metaverse can contain a myriad of virtual spaces, each with its own set of rules, protocols and types of interactions. Understanding the nuances of these environments is crucial for effective investigation. Compounding this complexity, many metaverse platforms are interconnected, and an investigation may need to span multiple platforms, each with its own set of data formats and access protocols.

Digital asset tracking: Tracking the movement of digital assets, such as cryptocurrencies or NFTs, across different platforms and wallets through blockchain transactions requires specialized knowledge and tools. Without dedicated tools, tracing digital assets is impractical; such tools contain millions of wallet address attributions, enabling the effective tracing of funds and assets.

Lack of international standards: The absence of global standards for metaverse technology development allows for a wide variety of approaches by developers. This diversity significantly affects the investigation of metaverse platforms, as each requires unique methods, tools and approaches for forensic analysis. This situation makes forensic processes time-consuming and difficult to scale. Establishing international standards would aid forensic investigators in creating tools and methodologies that are applicable across various metaverse platforms, streamlining forensic examinations.

Blockchain immutability: The immutable nature of blockchain ensures that all recorded data remain unaltered, preserving evidence integrity. However, this same feature can also limit certain corrective actions, such as removing online leaks or inappropriate data and reversing transactions involving stolen funds or NFTs.

Correlation of diverse data sources: Data correlation plays a crucial role in investigations, aiming to merge various data types from disparate sources to provide a more comprehensive insight into an incident. Examples include correlating events across different systems, combining end-host data with associated network data, or linking different user accounts. In the context of the metaverse, the challenge lies in the sheer volume of data sources associated with metaverse technologies. This abundance makes data correlation a complex task, necessitating an in-depth understanding of the diverse technologies supporting metaverse platforms and the ability to link disparate data sets meaningfully.
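
To illustrate the correlation idea, the sketch below merges platform, blockchain, and device telemetry into a single avatar-centric timeline. The event records and field names are invented for the example.

    from operator import itemgetter

    platform_events = [{"avatar": "av-42", "ts": 100, "event": "accepted NFT gift"}]
    chain_events = [{"avatar": "av-42", "ts": 160, "event": "wallet drained"}]
    device_events = [{"avatar": "av-42", "ts": 95, "event": "headset session start"}]

    def timeline(avatar, *sources):
        """Merge events about one avatar from many sources into one ordered timeline."""
        merged = [e for src in sources for e in src if e["avatar"] == avatar]
        return sorted(merged, key=itemgetter("ts"))

    for e in timeline("av-42", platform_events, chain_events, device_events):
        print(e["ts"], e["event"])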

Lack of forensic automation: Investigators commonly use automated tools in the initial stages of their forensic analysis to speed up repetitive operations. These tools are crucial for identifying signs of compromise efficiently and accurately. Without them, the scope, efficiency, and depth of the analysis can be greatly impacted: manual analysis requires more time and heightens the risk of overlooking critical signs of compromise or other malicious activities. The emerging and complex nature of metaverse environments currently lacks such tools, and there is no anticipation of their availability soon.

Metaverse investigation approach


The forensic approach for the metaverse is distinct from traditional approaches, which typically begin with investigations focusing on physical devices for telemetry extraction. Investigating the metaverse is a challenging task because it involves more than just examining various files across multiple systems. Instead, it requires the analysis of diverse systems within different environments and the correlation of that data to draw meaningful conclusions.

An example illustrating metaverse forensic complexities: a rare digital painting goes missing from a virtual museum. A forensic system should undertake a comprehensive investigation that includes reviewing security logs in the virtual museum, tracing blockchain transactions, and examining interactions within interconnected virtual worlds and marketplaces. The investigation should also analyze recent data from devices like haptic gloves and virtual reality goggles to confirm any related malicious user activity. The analysis of virtual logs or hardware depends on the logs recorded by providers or vendors and whether such logs are made available for analysis. If such information is not present, there is little that can be done in terms of forensic analysis.

In this example, if the metaverse platform and virtual museum did not maintain logs, it would be impossible to verify the activities preceding the theft, including information about the adversary. If logs from haptic gloves or virtual reality goggles are also absent, the activities described by the user during the adversarial activity would be impossible to verify. This leaves a forensic investigator unable to perform in-depth analysis beyond monitoring on-chain data and the transfer of the painting between the museum wallet and adversarial wallet addresses.
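
Even in that constrained case, on-chain data still yields a chain of custody. Below is a minimal sketch that reconstructs the custody chain for the stolen painting from transfer records; the inline records stand in for data that would come from a blockchain indexer.

    TRANSFERS = [  # (token_id, from_address, to_address, block_height)
        ("painting-7", "0xMUSEUM", "0xATTACKER1", 9001),
        ("painting-7", "0xATTACKER1", "0xMIXER", 9014),
        ("other-nft", "0xALICE", "0xBOB", 9010),
    ]

    def custody_chain(token_id):
        """Return the ordered chain of custody for one token."""
        hops = sorted((t for t in TRANSFERS if t[0] == token_id), key=lambda t: t[3])
        return [(sender, receiver, block) for _, sender, receiver, block in hops]

    for sender, receiver, block in custody_chain("painting-7"):
        print(f"block {block}: {sender} -> {receiver}")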


Metaverse platforms vary in their approach to logging and data capture, significantly influenced by the method through which users access these environments. There are primarily two access methods: through a web browser and via client-based software. Web browser-based access to metaverse platforms, like Roblox and Sandbox, requires users to navigate to the platform using a browser. In contrast, client-based platforms such as Decentraland necessitate downloading and installing a software application to enter the metaverse.

This distinction has profound implications for forensic analysis. For browser-based platforms, analysis is generally limited to network-based approaches, such as capturing network traffic, which may only be feasible when the traffic is not encrypted. On the other hand, client-based platforms can provide a richer set of data for forensic scrutiny: the software client may generate additional log files that record user activities, which, alongside conventional forensic methods like analyzing the registry or Master File Table (MFT), can offer deeper insights into the application’s use and user interactions within the metaverse.

Regardless of the access method, the potential for forensic analysis can be further expanded based on the types of logs and data recorded by the metaverse environment itself and made available by the provider. This means that within each metaverse platform, the scope and depth of forensic analysis can vary based on the specific logs kept by the environment, offering a range of analytical possibilities.

Forensic systems suited to metaverse environments should start their investigation in the digital realm and use physical devices as supporting data sources. These forensic systems must connect to user avatars, their accounts, and related data to facilitate initial triage and investigation. Forensic solutions for the metaverse should be capable of conducting triage, data collection, analysis and data enrichment, paralleling the requirements for examining current software and systems. The following three features would greatly benefit forensic investigators when analyzing the metaverse:

1. Triage collection: Collection of forensic artefacts starts within the metaverse environment or platform, extending to other supporting software and hardware devices that enable users to interface with the metaverse.
2. Analysis: Processing the captured data to link relevant data and activity based on the reported incident, aiming to identify anomalies and indicators of compromise (IOCs). Machine learning can be leveraged to automate the investigation by analyzing relevant telemetry based on the reported IOCs or incident outcomes, drawing on similar past incidents and the analysis and resolution provided by forensic analysts.
3. Data enrichment: Based on the IOCs identified, forensic systems must be capable of searching diverse sources such as blockchains, metaverse platforms and other associated information to identify relevant data for added context.
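
To make the three phases concrete, below is a minimal pipeline sketch in Python. All class and function names here are hypothetical illustrations, not part of any existing forensic tool; a real system would plug platform-specific collectors, detection logic and enrichment sources into this flow.

    # Minimal sketch of the three-phase forensic pipeline described above.
    # All names are hypothetical illustrations, not an existing tool's API.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Artefact:
        source: str   # e.g., "metaverse_platform", "haptic_glove", "blockchain"
        kind: str     # e.g., "login_history", "chat_log", "tx_record"
        payload: Dict = field(default_factory=dict)

    @dataclass
    class Incident:
        description: str                       # user-reported summary
        artefacts: List[Artefact] = field(default_factory=list)
        iocs: List[str] = field(default_factory=list)

    def suspicious(artefact: Artefact) -> bool:
        # Hypothetical detection logic; a real system would apply rules or ML.
        return bool(artefact.payload.get("flagged"))

    def lookup_external(ioc: str) -> Dict:
        # Hypothetical enrichment lookup (blockchain analytics, threat intel).
        return {"ioc": ioc, "context": "pending analyst review"}

    def triage_collect(incident: Incident, sources: List[str]) -> None:
        """Phase 1: gather artefacts from the platform, devices and chain."""
        for source in sources:
            incident.artefacts.append(Artefact(source, "raw_dump"))

    def analyze(incident: Incident) -> None:
        """Phase 2: link artefacts to the incident and flag IOCs."""
        for artefact in incident.artefacts:
            if suspicious(artefact):
                incident.iocs.append(f"{artefact.source}:{artefact.kind}")

    def enrich(incident: Incident) -> Dict[str, Dict]:
        """Phase 3: query external sources for context on each IOC."""
        return {ioc: lookup_external(ioc) for ioc in incident.iocs}

The skeleton only fixes the flow between the three phases; each collector, detector and lookup would be specialized per platform and data source.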

Forensic systems for the metaverse should be able to directly interact with a user's avatar (Figure 2), which may take the form of a non-player character (NPC) providing assistance. When activated, the NPC avatar should be able to engage with the user's avatar, requesting access to the avatar's data, the metaverse platform, and all associated software and hardware implicated in an incident. This includes the metaverse console, IoT devices, networking devices and blockchain addresses. To ensure privacy and security, NPC forensic analysts should be able to access user data only when activated or requested by the user, and should obtain read-only access.

The forensic NPC avatar should meticulously record relevant logs and document any detected indicators of compromise (e.g., suspicious metaverse interactions) along with the observed impact (e.g., NFT or crypto token theft) and the estimated timeframe of the incident, as reported by the user's avatar. Given the inherent complexity of metaverse environments, these forensic systems should be able to operate across multiple layers to gather data, including:

1. Blockchain to analyze transactions and exchanges performed on-chain.
2. Metaverse Bridges to analyze activities across linked metaverse environments.
3. Metaverse Platforms, including different apps and digital assets in the metaverse.
4. Networking, including connections related to the metaverse platform as well as supporting sensors and devices (haptic gloves, body sensors, computational units, etc.).

Figure 2. Metaverse forensics framework outline

During analysis, malicious or anomalous activities should, optimally, be reported in an automated manner to guide the forensic analysts and speed up investigations. After analysis, any detected signs of compromise, such as cryptocurrency addresses, user activities, or files, should undergo data enrichment. This involves conducting searches across different data sources to find relevant information, which helps provide more detail and context for the analyst.

In the following sections of the blog, we provide a deeper view of how each of the three proposed phases operates, noting the data sources that can be leveraged for each, where applicable.

Triage and artefact collection


Forensic systems can analyze various threat types using multiple data sources. As the fields of forensics and the metaverse develop, the demand for new data sources will grow. It’s important to acknowledge that the available telemetry data can vary based on the platform and hardware in use. The absence of international standards and protocols for the metaverse compounds this complexity. With this in mind, we identify the following data sources as potential telemetry that should be logged to allow the effective analysis of metaverse environments. In addition to the telemetry presented below, forensic triage collection should be performed by capturing the memory and disk image from systems involved in an incident.

Authentication and access data:

◉ User login history, IP addresses, timestamps and successful/failed login attempts.
◉ Session tokens and authentication tokens used for access.

Third-party integration data:

◉ Data from third-party integrations or APIs used in the metaverse platform.
◉ Permissions and authorizations granted to third-party apps.

Error and debug logs:

◉ Logs of software errors, crashes or debugging information.
◉ Error messages, stack traces and core dumps.

Script and code data:

◉ Source code or scripts used within the virtual environment.
◉ Execution logs and debug information.
◉ Smart contracts in relevant blockchain wallets.

Marketplace, commerce data and blockchain:

◉ Records of virtual goods or services bought and sold on the platform’s marketplace.
◉ Payment information, such as credit card transactions or cryptocurrency payments.

User account and user behavior:

◉ Profile username, avatar image, account creation time, account status, blockchain address used to open the metaverse account.
◉ User interactions, friendships, groups, locations, and social networks, while preserving privacy.
◉ User activity logs, including participation in events and in-world gatherings.

User device forensics:

◉ User devices for the extraction of supporting data, such as device activity, configuration files, locally stored chat logs, images, etc.
◉ All ingoing and outgoing network activity reaching devices relevant to a metaverse incident.

Asset provenance data:

◉ Detailed asset provenance information with the complete history of ownership and modifications.
◉ Blockchain addresses and wallets, including a copy of their transaction history. Verification of the “from” address (creator or previous owner) and the “to” address (current owner) is required.
◉ If the asset is digital or represented as a token (e.g., an NFT), examine the smart contract that created it. Smart contracts contain rules and history about the asset.
◉ Ensure the asset is not a copy or fake by verifying that the smart contract and token ID are recognized by the creator or issuing authority.

System and platform configuration:

◉ Details of the platform’s architecture, configurations and version history.

Behavioral biometrics:

◉ Behavioral patterns of user interactions and in-game actions to help identify users based on unique behavior. Although such data can be useful for identifying adversaries when very little is known about their activities, it is not expected to be widely available.

Telemetry analysis


The goal of the telemetry analysis process is to detect unusual or potentially malicious behavior through a semi- or fully automated processing of data and logs, thereby aiding forensic experts and expediting the investigation process.

This can be accelerated by leveraging deep learning techniques to identify harmful patterns using a database of historically analyzed events. Additionally, incorporating reinforcement learning, refined by forensic experts, could enhance the system’s ability to offer better incident response suggestions. For effective training, these machine-learning algorithms would need access to a large repository of forensic strategies and actions taken by professionals in various investigative scenarios, including those spanning across different metaverse environments and artefacts. Utilizing this data allows the algorithms to match current incidents with similar past cases based on the user input provided.
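
As a simple illustration of this case-matching idea, the sketch below ranks past incidents by text similarity to a new report using TF-IDF. It assumes scikit-learn is installed, and the case texts are invented; a production system would more likely use learned embeddings or the deep learning techniques described above.

    # Minimal sketch: match a new incident against past cases by text similarity.
    # Uses scikit-learn TF-IDF; the incident texts are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_cases = [
        "NFT stolen after phishing link sent in metaverse chat",
        "Crypto drained via malicious smart contract approval",
        "Account hijack following credential stuffing on platform login",
    ]

    new_incident = "User approved unknown contract, tokens transferred out"

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(past_cases + [new_incident])

    # Compare the new incident (last row) against every past case.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    best = scores.argmax()
    print(f"Closest past case: {past_cases[best]!r} (score={scores[best]:.2f})")

Once the closest past case is found, the actions and resolution recorded for it can be surfaced to the analyst as a starting point.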

Given the diverse range of threats and types of incidents, along with the emerging state of the metaverse and its insufficient logging features, devising a comprehensive forensic methodology that is universally applicable to all metaverse platforms or systems presents significant challenges. Should metaverse operators provide telemetry data, the analytical process can be simplified by focusing on artefacts that are most pertinent to a specific incident. Nonetheless, the presence of such artefacts in existing metaverse platforms cannot be assured.

To overcome this issue and offer practical guidance, we suggest a hybrid forensic strategy that integrates traditional operating system forensics (emphasizing Windows, given its prevalent use for client-side metaverse platforms) with specialized analyses that address the unique aspects of the metaverse and blockchain technologies. For better understanding, we categorize each analytical technique as per the divisions used in the triage and artefact collection section of this blog.

Authentication and access data

Metaverse platforms often store records of successful authentication attempts, including the dates, in local log files. If these logs are unavailable, analyzing DNS records and process executions associated with the metaverse platform can provide insights into when a user accessed it.

One approach to uncovering such information involves examining browser records (e.g., Chrome) and the history of visited URLs to identify when a user visited and connected to a specific metaverse platform via a web browser. Additionally, routers may maintain traffic logs by default, offering further insight into DNS activity.

For process-related investigation, artefacts like Amcache and Prefetch are valuable for determining when the metaverse platform client was executed. These artefacts can help trace the usage patterns and activities associated with user interactions with the metaverse.
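
As a concrete illustration of the browser-record approach above, the sketch below queries a forensic copy of a Chrome History database (a SQLite file) for visits to a platform domain. The file path and target domain are illustrative assumptions; the timestamp conversion reflects Chrome's epoch of 1601-01-01.

    # Minimal sketch: query a copied Chrome History database for visits to a
    # metaverse platform. The path and domain below are illustrative assumptions.
    import sqlite3
    from datetime import datetime, timedelta

    HISTORY_DB = "History"       # work on a forensic copy, never the live file
    DOMAIN = "roblox.com"        # hypothetical platform of interest

    conn = sqlite3.connect(HISTORY_DB)
    rows = conn.execute(
        "SELECT url, title, last_visit_time FROM urls WHERE url LIKE ?",
        (f"%{DOMAIN}%",),
    )

    for url, title, last_visit_time in rows:
        # Chrome stores timestamps as microseconds since 1601-01-01 (UTC).
        visited = datetime(1601, 1, 1) + timedelta(microseconds=last_visit_time)
        print(visited.isoformat(), title, url)

    conn.close()

The same pattern applies to other Chromium-based browsers, whose History databases share this schema.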

Third-party integration data

Acquiring such data can be challenging because these operations usually occur on the backend of servers, and logs related to this activity are typically not accessible to users. To obtain this information, which depends on the architecture and API usage of a metaverse platform, one could use network capture tools like Wireshark. This method allows investigators to monitor any API requests made while using a metaverse platform and inspect the contents of these communications, provided they are not encrypted. This approach helps in understanding the interaction between the client and the server during the operation of metaverse platforms.
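
A minimal sketch of this approach is shown below, assuming the pyshark wrapper around tshark/Wireshark is installed and the capture contains unencrypted HTTP traffic; the capture file name is an illustrative assumption. Encrypted (HTTPS) sessions would expose only metadata such as hostnames.

    # Minimal sketch: inspect unencrypted HTTP API calls from a capture taken
    # while a metaverse client was running. Assumes the pyshark package.
    import pyshark

    capture = pyshark.FileCapture("metaverse_session.pcap",
                                  display_filter="http.request")

    for packet in capture:
        # Each packet here is an HTTP request the client sent in cleartext.
        print(packet.http.host, packet.http.request_method,
              packet.http.request_uri)

    capture.close()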

Error and debug logs

Metaverse platforms commonly record client and connectivity issues in local log files. When these logs are not accessible, one can analyze the Windows Application log to identify any errors issued by the application and any software problems that prevent it from either logging in or functioning properly. However, it is important to note that errors occurring specifically within the metaverse environment are not captured by Windows’ native logs, thus remaining invisible to analysts using these tools.
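
As a rough illustration, the sketch below scans an exported Application.evtx file for error events mentioning a metaverse client. It assumes the python-evtx package is installed; the client name is a hypothetical placeholder, and matching on raw XML is a deliberate simplification.

    # Minimal sketch: scan an exported Windows Application log for errors
    # raised by a metaverse client. Assumes the python-evtx package.
    from Evtx.Evtx import Evtx

    CLIENT_NAME = "MetaverseClient"   # hypothetical application name

    with Evtx("Application.evtx") as log:
        for record in log.records():
            xml = record.xml()
            if CLIENT_NAME in xml and "Error" in xml:
                print(xml[:300])  # print a preview of the matching event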

Script and code data

In certain environments, snippets of scripts and other code that serve various functionalities can be accessed through reverse engineering, allowing analysts to determine whether a metaverse feature is functioning properly and safely. However, it is important to note that reverse engineering software may be illegal depending on jurisdiction and licensing terms, and is generally advised against.

Despite these limitations in directly analyzing metaverse code, it is still feasible to examine publicly available smart contract code. This code governs on-chain transactions and facilitates exchanges of value between players in metaverse environments. To analyze the smart contract associated with a specific metaverse, one must first identify the blockchain it utilizes. Then, by finding the smart contract's address, one can inspect its code using a blockchain explorer. For instance, to review the smart contract of UNI (the governance token of the Uniswap decentralized exchange), which operates on the Ethereum blockchain, one would use an Ethereum blockchain explorer to locate and examine the contract's code at the Ethereum address (0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984) used by UNI.
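
For a programmatic alternative to a blockchain explorer, the sketch below confirms that the UNI address hosts contract code by fetching its bytecode with web3.py. It assumes a recent web3.py version and access to an Ethereum RPC endpoint; the endpoint URL is an illustrative placeholder.

    # Minimal sketch: verify that an address hosts deployed contract code.
    # The RPC endpoint is a placeholder; the address is the UNI contract above.
    from web3 import Web3

    RPC_URL = "https://eth.example-rpc.com"   # hypothetical RPC endpoint
    UNI = Web3.to_checksum_address("0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984")

    w3 = Web3(Web3.HTTPProvider(RPC_URL))

    code = w3.eth.get_code(UNI)
    print(f"Contract code present: {len(code) > 0}, size: {len(code)} bytes")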

Marketplace, commerce data and blockchain

Transaction records of virtual goods or services exchanged on a metaverse platform can be tracked by examining a user’s account to review the NFTs and other items they possess. Additionally, by conducting on-chain transaction analysis, one can retrieve a complete history of item ownership, including details of items or NFTs bought and sold by users. Thanks to the transparency of public blockchains, this process is straightforward. It only requires the wallet address used by the user to access the metaverse platform. This address can be searched in the relevant blockchain explorer to analyze the user’s historical transactions and items purchased or sold.
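
This lookup can also be automated. The sketch below pulls a wallet's transaction list via Etherscan's documented account API; the wallet address and API key are illustrative placeholders.

    # Minimal sketch: pull a wallet's transaction history from a public
    # block explorer API (Etherscan's documented account endpoint).
    import requests

    ADDRESS = "0x0000000000000000000000000000000000000000"  # wallet under review
    API_KEY = "YourApiKeyToken"                              # hypothetical key

    resp = requests.get(
        "https://api.etherscan.io/api",
        params={
            "module": "account",
            "action": "txlist",
            "address": ADDRESS,
            "sort": "asc",
            "apikey": API_KEY,
        },
        timeout=30,
    )

    # On success, "result" is a list of transaction records.
    for tx in resp.json().get("result", [])[:10]:
        print(tx["hash"], tx["from"], "->", tx["to"], tx["value"], "wei")

Similar account APIs exist on explorers for other chains, though endpoints and rate limits vary.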

User accounts and behavior

Currently, the logging and analytics of user behavior within metaverse environments are largely undeveloped. Basic information like profile usernames and avatar images are stored locally in the metaverse client’s directory. More detailed information about user interactions, friendships, groups, and visited locations can be retrieved from a user’s account, provided the data has not been deleted by the user. Analyzing a user’s social networks may offer deeper insights into their participation in metaverse events and related in-world gatherings.

User device forensics

Various devices enable interaction with the metaverse, including VR headsets, smartphones, gaming consoles and haptic gloves. The extent of data logging varies by device. For example, VR headsets may record details such as connected social networks, usernames, profile pictures and chat logs. It is essential to analyze the specific vendor and device to determine the availability of such logs. As the technology landscape evolves, it is anticipated that more vendors and devices will emerge, further complicating the environment. This dynamic nature will necessitate more sophisticated tools and greater expertise for effective forensic analysis in the future.

Asset provenance data

Detailed information about the provenance of assets in the metaverse, including the complete history of ownership and modifications, can be obtained through on-chain analysis. This process involves examining transactions between blockchain addresses of interest, the non-fungible tokens (NFTs) and other tokens they possess, and their interactions with smart contracts. Because public blockchains are immutable — meaning that once data is recorded, it cannot be deleted or changed — it is relatively straightforward to track asset provenance. By searching for a known wallet address in the appropriate blockchain explorer, one can easily trace the history associated with that address.
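
One way to automate such provenance tracing, sketched below, is to read the standard ERC-721 Transfer events for a given token ID with web3.py. The RPC endpoint, contract address and token ID are illustrative placeholders, and many public RPC providers limit how large a block range a single query may scan.

    # Minimal sketch: trace one NFT's ownership history from its ERC-721
    # Transfer events. Endpoint, contract address and token ID are placeholders.
    from web3 import Web3

    # keccak256("Transfer(address,address,uint256)"), the standard ERC-721 topic
    TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

    RPC_URL = "https://eth.example-rpc.com"                    # placeholder endpoint
    NFT_CONTRACT = Web3.to_checksum_address("0x" + "00" * 20)  # placeholder address
    TOKEN_ID = 1234                                            # placeholder token ID

    w3 = Web3(Web3.HTTPProvider(RPC_URL))

    logs = w3.eth.get_logs({
        "fromBlock": 0,            # real queries should use narrower block ranges
        "toBlock": "latest",
        "address": NFT_CONTRACT,
        # Indexed topics: [event signature, from, to, tokenId]
        "topics": [TRANSFER_TOPIC, None, None, f"0x{TOKEN_ID:064x}"],
    })

    for entry in logs:
        sender = "0x" + entry["topics"][1].hex()[-40:]
        receiver = "0x" + entry["topics"][2].hex()[-40:]
        print(entry["blockNumber"], sender, "->", receiver)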

When analyzing blockchain data for provenance, it is critical to verify that the addresses interacting with the target address are legitimate. This includes ensuring that addresses claiming to belong to entities like metaverse providers or NFT issuers are in fact their official addresses. Verification can be achieved by visiting the official website of the token or metaverse provider to find and confirm their official blockchain addresses. This step is crucial to ensure that the address in question belongs to the entity it claims to represent. An illustrative case would be investigating the purchase of an expensive plot in the metaverse. Suppose an analysis of a user's blockchain address reveals an NFT transaction from another address, purportedly representing a plot identical to the one purchased. However, the source address sending the NFT is not the official one used by the metaverse provider for NFTs. If this discrepancy goes unchecked, it could obscure potential fraud or suspicious activity.

Another key factor in asset provenance is linking blockchain addresses to actual user identities. While blockchain technology typically provides pseudonymity, there are services that offer extensive databases capable of associating specific addresses with various entities and exchanges. This capability enhances an investigator’s ability to trace asset flows more effectively. For instance, WalletExplorer is a website that provides free services for attributing addresses on the Bitcoin network.

System and platform configuration

To effectively investigate a metaverse platform, it’s essential to gather detailed information about its system, architecture, and configuration. However, obtaining this information can be challenging as it is often limited. When available, key sources include official websites, developer documentation, user forums, and community pages. Additionally, valuable insights into the platform’s configuration can often be gleaned from debug and error logs, where these are accessible.

Behavioral biometrics

Behavioral patterns, such as user interactions and in-game actions, are key in identifying users based on their unique behaviors and detecting potential account hijacks. These behaviors can include movement and gesture recognition, voice recognition and the patterns of typing and communication. Additional metrics may involve how users interact with in-game items and other participants.

Currently, most systems used to interact with the metaverse do not extensively log such information, which limits the capacity for in-depth behavioral analysis. What is typically available for analysis includes communication patterns derived from chat logs and basic interaction patterns. These interactions are often analyzed through chats, the groups users join, events they attend, and on-chain analytics for transactions and engagements within the virtual space. This level of analysis, while helpful, only scratches the surface of what could potentially be achieved with more comprehensive behavioral data collection and analysis.

Data enrichment


Following analysis, it is crucial to correlate and analyze diverse data types from multiple sources, including blockchain transactions, IPFS storage, internet-of-things (IoT) devices and activities within the metaverse. Drawing from existing research, a forensic framework could use APIs from diverse data repositories to aggregate pertinent information. Such information can be retrieved from blockchain analytics vendors for the identification of malicious wallet addresses, or from traditional threat intelligence databases for malicious IP addresses and file hashes. The gathered data can then be processed through Named Entity Recognition (NER) to extract relevant information and reduce data clutter in larger datasets, ensuring analysts receive concise and clear insights. Enriching threat intelligence here demands considerably more effort than conventional practice, extending beyond mere checks of IPs, URLs, file hashes and online adversarial behavior. It also encompasses the analysis of blockchain transactions, the provenance of digital assets, and the scrutiny of entities within the metaverse, such as casinos and conference venues, provided that logs are available for analysis.
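
As a small illustration of the enrichment step, the sketch below pulls named entities and wallet-address strings out of an incident note so they can be queried against external sources. It assumes spaCy with its small English model installed (python -m spacy download en_core_web_sm); the note text is invented.

    # Minimal sketch: extract named entities and wallet addresses from an
    # incident note to drive enrichment queries. Assumes spaCy is installed.
    import re
    import spacy

    nlp = spacy.load("en_core_web_sm")

    note = ("Victim reports NFT moved to "
            "0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984 after a phishing "
            "message in the Decentraland casino lobby.")

    doc = nlp(note)
    entities = [(ent.text, ent.label_) for ent in doc.ents]

    # Ethereum-style addresses follow a fixed 0x + 40 hex chars pattern.
    wallets = re.findall(r"0x[a-fA-F0-9]{40}", note)

    print("Entities:", entities)
    print("Wallets to enrich:", wallets)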

The insights gained from each case should be meticulously documented in public databases, outlining the tactics, techniques and procedures employed by adversaries within the metaverse. This documentation aids in refining the forensic capabilities of metaverse systems and provides forensic examiners with intelligence for more effective and precise attribution. The selection of data sources for threat intelligence augmentation can be tailored based on investigative needs and emerging developments in the field. While it is crucial to continue employing conventional threat intelligence strategies to address the more traditional and legacy aspects of investigations, for metaverse-specific inquiries, relevant data sources might include:

  • The source code of blockchains or smart contracts (e.g., from GitHub).
  • IPFS (Interplanetary File System) frameworks.
  • Blockchain analytics tools.
  • Social media and community monitoring for relevant discussions and trends.

Source: cisco.com

Tuesday, 2 July 2024

Security Is Essential (Especially in the Cloud)

In an era where cloud computing has become the backbone of enterprise IT infrastructure, we cannot overstate the significance of a robust security posture that evolves with emerging technologies.

Cisco recognizes the multifaceted nature of today’s cloud environments and has taken a step forward with three new certifications designed to empower IT professionals across the full lifecycle of multicloud ecosystems.

These groundbreaking certifications are created to address the three pillars of cloud mastery: connecting to the cloud, securing the cloud, and monitoring the cloud. In this blog, I’ll focus on the certification that involves securing the cloud.

Securing the cloud


The new Cisco Secure Cloud Access (SCAZT) Specialist Certification dives into the heart of cloud security. As threats become more sophisticated and regulatory demands become stricter, this certification underscores the importance of a security-first approach.

As Cisco’s first-ever Professional-level cloud security certification, this certification is aimed at network engineers, cloud administrators, security analysts, and other IT professionals. And it validates the skills necessary to secure cloud environments effectively.

While the SCAZT exam covers the basics of cloud architecture (concepts you can find in most cloud deployments), what makes this certification unique is its use of the Cisco equipment and portfolio that many organizations already have in their networks to secure the cloud.

Plus, the certification is part of the cloud lifecycle—connecting, securing, and monitoring the infrastructure. Most companies cover a single component. But Cisco covers all three elements. So, when you are certified in the security aspect in conjunction with the other two cloud certifications, you can be assured you’re covering the whole cloud lifecycle.

CCNP Security certification alignment


This new cloud security certification is also part of the CCNP Security certification track. This means you can receive a standalone Specialist certification, or combine this cert with the Implementing and Operating Cisco Security Core Technologies (SCOR) exam to earn the CCNP Security certification, which also counts toward recertification and Continuing Education (CE) credits.

Inside the 300-740 SCAZT exam 


Cisco certification exam topics are designed to group subjects logically. When you follow the domains and tasks during your studies, you'll build a comprehensive understanding, and the domains map naturally to the chapters you need to study.

The SCAZT 300-740 exam covers cloud security architecture, user and device security, network and cloud security, application and data security, visibility and assurance, and threat response.

Cisco exam topics emphasize hands-on technical questions, theoretical concepts, and critical thinking, always from a job role perspective. The certification focuses primarily on the protocols, architectures, technologies, and platforms most relevant to securing cloud access.

Training from Cisco U.


Cisco U. has launched a new Learning Path designed to match the SCAZT exam and provide you with the best possible experience. It takes around 48 hours to complete and is eligible for 40 CE credits.

You can watch presentations about concepts, complete hands-on labs, and review designs and examples. At the end of each topic, an assessment is available to test your knowledge.

Cloud Security job roles


Since most applications and infrastructures are moving to the cloud, if you’re working in a role where cloud concepts are included (whether in an on-premises or hybrid environment), you’re going to need security in every shape and form.

Network security engineers will especially find this certification valuable because it focuses on protocols, architectures, technologies, and platforms relevant to their jobs.

Possible job roles where this certification applies are:

◉ Cloud Security Architect
◉ Cloud Security Engineer
◉ Cloud Security Advisor
◉ Cloud Solutions Architect
◉ Cloud Architect
◉ Cloud Associate
◉ Cloud Engineer
◉ Security Administrator
◉ Security Architect
◉ Security Consultant
◉ Security Engineer
◉ Security Manager
◉ Systems Architect
◉ Systems Engineer
◉ Network Security Engineer
◉ Security Project Manager

Source: cisco.com