Thursday, 4 July 2024

Digital Forensics for Investigating the Metaverse

The intriguing realm of the metaverse should not make us overlook its cybersecurity hazards.

Metaverse adoption has been steadily increasing globally, with use cases such as virtual weddings, auctions, and the establishment of government offices and law enforcement agencies. Prominent organizations such as INTERPOL and others are investing considerable time and resources researching this space, underscoring the importance of the metaverse. While the growth of the metaverse has been accelerating, its full potential has not yet been realized due to the slow development of the computing systems and accessories necessary for users to fully immerse themselves in virtual environments; this is gradually improving with the production of augmented and virtual reality solutions such as HoloLens, Valve Index and HaptX Gloves.

As virtual reality tools and hardware evolve, enabling deeper immersion in virtual environments, we anticipate a broader embrace and utilization of the metaverse.

Significant concerns have arisen regarding criminal activity within this virtual realm. The World Economic Forum, INTERPOL and EUROPOL have highlighted that criminals have already begun exploiting the metaverse. However, because the metaverse is still at an early stage of development, forensic science has not yet caught up, lacking practical methodologies and tools for analyzing adversarial activity within this realm.

Unlike conventional forensic investigations that primarily rely on physical evidence, investigations within the metaverse revolve entirely around digital and virtual evidence. This includes aspects such as user interactions, transactions and behaviors occurring within the virtual world. Complicating matters further, metaverse environments are characterized by decentralization and interoperability across diverse virtual landscapes. There are unique challenges related to the ownership and origin of digital assets, as users can join metaverse platforms with anonymous wallets and interact pseudonymously without revealing their real identities. Such analysis requires advanced blockchain analytics capabilities and large attribution databases linking wallets and addresses to actual users and threat actors. As a result, this new digital realm necessitates the development of innovative methodologies and tools designed for tracking and analyzing digital footprints, which play a crucial role in addressing virtual crime and ensuring security and safety in the metaverse.

The security community needs a practical, real-world forensic framework model and a close examination of the intricacies involved in metaverse forensics.

Case studies


User activity in the metaverse is immersed in digital environments where interactions and transactions are exclusively digital, encompassing different moving parts such as chatting, user movements, item exchanges, blockchain backend operations, non-fungible tokens (NFTs), and more. The diverse and multifaceted nature of these environments presents adversaries with numerous opportunities for malicious activities such as virtual theft, harassment, fraud, and virtual violence, which will only be amplified as more realistic metaverse environments are developed (Figure 1). The distinct aspect of these crimes is that they often lack any physical real-world connection, presenting unique challenges in investigating and understanding the underlying tactics, techniques and procedures leveraged by adversaries.

Occurrences of threats in metaverse platforms already exist, the most notable to date being the British police launching their first-ever investigation into a case of virtual sexual harassment in the metaverse, stating that although there are no physical injuries, there is an emotional and psychological impact on the victim.

Figure 1. INTERPOL’s outline of potential threats in the metaverse.
Here are two other theoretical scenarios that illustrate the importance of metaverse forensics and the need to distinguish it from contemporary forensics.

Scenario 1 – Robbery from an avatar (a metaverse gift): In the metaverse, a character approaches another avatar to present virtual shoes as a gift. The avatar accepts the gift, but a few hours later discovers that all digital assets associated with their metaverse account and digital wallet have disappeared. The theft occurred because the seemingly innocent gift of virtual shoes was, in fact, a malicious NFT embedded with adversarial code that facilitated the theft of the avatar’s digital assets.

Scenario 2 – A metaverse conference: A user attends a cybersecurity conference in the metaverse, not knowing it is organized by cybercriminals. Their aim is to lure high-value stakeholders from the industry to steal their data and digital assets. This event takes place in a well-known conference hall in the metaverse. The registration form for the event includes a smart contract designed to extract personal information from all attendees. Additionally, it embeds a time-triggered malicious code set to steal digital assets from each avatar at random intervals after the conference ends. Investigating such incidents requires a comprehensive multi-dimensional analysis that encompasses marketplaces, metaverse bridges, blockchain activities, individual user behavior in the metaverse, data logs of the conference hall and the platform hosting the event, as well as data from any supporting hardware.

Challenges for forensic investigators and law enforcement


Several challenges already exist for metaverse investigators, and as the metaverse evolves, additional challenges are expected to surface. Here are some potential issues law enforcement and cybersecurity investigators may run into.

Decentralization and jurisdictions: The decentralized nature of many metaverse platforms can lead to jurisdictional complexities. Determining which laws apply and which legal authority has jurisdiction over a particular incident can be challenging, especially when the involved parties are spread across different countries. As such, it will be exponentially more complex, or even impossible in some cases, for law enforcement to subpoena criminals or metaverse facilitators.

Anonymity and identity verification: Users in the metaverse often operate in an anonymous or pseudonymous manner, with avatars bearing random nicknames, making it difficult to identify their real-world identities. This anonymity can be a significant hurdle in linking virtual actions to criminals. Only a few options for unmasking adversarial activity exist, including tracing IP addresses and analyzing platform logs, which can be a complex undertaking when dealing with truly decentralized metaverse platforms, often leaving blockchain analytics as the only viable analysis methodology.

Complexity and interoperability of virtual environments: The metaverse can contain a myriad of virtual spaces, each with its own set of rules, protocols and types of interactions. Understanding the nuances of these environments is crucial for effective investigation. Compounding this complexity, many metaverse platforms are interconnected, and an investigation may need to span multiple platforms, each with its own set of data formats and access protocols.

Digital asset tracking: Tracking the movement of digital assets, such as cryptocurrencies or NFTs, across different platforms and wallets through blockchain transactions requires specialized knowledge and tools. Without such dedicated tools, tracing digital assets is effectively impossible; these tools contain millions of wallet address attributions, enabling the effective tracing of funds and assets.

Lack of international standards: The absence of global standards for metaverse technology development allows for a wide variety of approaches by developers. This diversity significantly affects the investigation of metaverse platforms, as each requires unique methods, tools and approaches for forensic analysis. This situation makes forensic processes time-consuming and difficult to scale. Establishing international standards would aid forensic investigators in creating tools and methodologies that are applicable across various metaverse platforms, streamlining forensic examinations.

Blockchain immutability: The immutable nature of blockchain ensures that all recorded data remain unaltered, preserving evidence integrity. However, this same feature can also limit certain corrective actions, such as removing online leaks or inappropriate data and reversing transactions involving stolen funds or NFTs.

Correlation of diverse data sources: Data correlation plays a crucial role in investigations, aiming to merge various data types from disparate sources to provide a more comprehensive insight into an incident. Examples include correlating events from different systems, combining end-host data with associated network data, or linking different user accounts. In the context of the metaverse, the challenge lies in the sheer volume of data sources associated with metaverse technologies. This abundance makes data correlation a complex task, necessitating an in-depth understanding of the diverse technologies supporting metaverse platforms and the ability to link disparate data sets meaningfully.
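As a simple illustration of this kind of correlation, the sketch below joins hypothetical platform session logs with exported on-chain transaction records on approximate timestamps using pandas; the file names, column names and the five-minute tolerance are illustrative assumptions, not artifacts of any specific platform.

```python
import pandas as pd

# Hypothetical inputs: metaverse platform session logs and exported on-chain
# transactions. File paths and column names are assumptions for demonstration.
sessions = pd.read_csv("platform_sessions.csv", parse_dates=["timestamp"])      # avatar_id, wallet, action, timestamp
transactions = pd.read_csv("chain_transactions.csv", parse_dates=["timestamp"]) # wallet, tx_hash, value, timestamp

# merge_asof requires both frames to be sorted on the join key.
sessions = sessions.sort_values("timestamp")
transactions = transactions.sort_values("timestamp")

# Match each on-chain transaction to the nearest preceding platform event from
# the same wallet, within an arbitrary five-minute window.
correlated = pd.merge_asof(
    transactions,
    sessions,
    on="timestamp",
    by="wallet",
    direction="backward",
    tolerance=pd.Timedelta("5min"),
)

print(correlated[["tx_hash", "wallet", "action", "timestamp"]].head())
```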

Lack of forensic automation: Investigators commonly use various automated tools in the initial stages of their forensic analysis to automate repetitive, painstaking operations. These tools are crucial for identifying signs of compromise efficiently and accurately. Without them, the scope, efficiency, and depth of the analysis can be greatly impacted. Manual analysis requires more time and heightens the risk of overlooking critical signs of compromise or other malicious activities. Emerging and complex metaverse environments currently lack these tools, and they are not expected to become available soon.

Metaverse investigation approach


The forensic approach for the metaverse is distinct from traditional approaches, which typically begin with investigations focusing on physical devices for telemetry extraction. Investigating the metaverse is a challenging task because it involves more than just examining various files across multiple systems. Instead, it requires the analysis of diverse systems within different environments and the correlation of such data to draw meaningful conclusions.

An example illustrating metaverse forensic complexities: a rare digital painting goes missing from a virtual museum. A forensic system should undertake a comprehensive investigation that includes reviewing security logs in the virtual museum, tracing blockchain transactions, and examining interactions within interconnected virtual worlds and marketplaces. The investigation should also analyze recent data from devices like haptic gloves and virtual reality goggles to confirm any related malicious user activity. The analysis of virtual logs or hardware depends on the logs recorded by providers or vendors and whether such logs are made available for analysis. If such information is not present, there is little that can be done in terms of forensic analysis.

In this example, if the metaverse platform and virtual museum did not maintain logs, it would be impossible to verify the activities preceding the theft, including information about the adversary. If logs from haptic gloves or virtual reality goggles are also absent, the activities described by the user during the adversarial activity would be impossible to verify. This leaves a forensic investigator unable to perform in-depth analysis beyond monitoring on-chain data and the transfer of the painting between the museum wallet and adversarial wallet addresses.

Metaverse platforms vary in their approach to logging and data capture, significantly influenced by the method through which users access these environments. There are primarily two access methods: through a web browser and via client-based software. Web browser-based access to metaverse platforms, like Roblox and Sandbox, requires users to navigate to the platform using a browser. In contrast, client-based platforms such as Decentraland necessitate downloading and installing a software application to enter the metaverse.

This distinction has profound implications for forensic analysis. For browser-based platforms, analysis is generally limited to network-based approaches, such as capturing network traffic, which may only be feasible when the traffic is not encrypted. On the other hand, client-based platforms can provide a richer set of data for forensic scrutiny. The software client may generate additional log files that record user activities, which, alongside conventional forensic methods like analyzing the registry or Master File Table (MFT), can offer deeper insights into the application’s use and user interactions within the metaverse.

Regardless of the access method, the potential for forensic analysis can be further expanded based on the types of logs and data recorded by the metaverse environment itself and made available by the provider. This means that within each metaverse platform, the scope and depth of forensic analysis can vary based on the specific logs kept by the environment, offering a range of analytical possibilities.

Forensic systems suited for metaverse environments should start their investigation in the digital realm and use physical devices for their supporting data. These forensic systems must connect to user avatars, their accounts, and related data to facilitate initial triage and investigation. Forensic solutions for the metaverse should be capable of conducting triage, data collection, analysis and data enrichment, paralleling the requirements for examining current software and systems. The following three features would greatly benefit forensic investigators when analyzing the metaverse:

1. Triage collection: Collection of forensic artefacts starts within the metaverse environment or platform, extending to other supporting software and hardware devices that enable users to interface with the metaverse.
2. Analysis: Processing the captured data to link relevant data and activity based on the reported incident, aiming to identify anomalies and indicators of compromise (IOCs). Machine learning can be leveraged to automate the investigation by analyzing relevant telemetry based on the reported indicators of compromise or incident outcomes, drawing on similar past incidents and the analysis and resolution provided by forensic analysts (a minimal anomaly-detection sketch follows this list).
3. Data enrichment: Based on the IOCs identified, forensic systems must be capable of searching diverse sources such as blockchains, metaverse platforms and other associated information to identify relevant data for added context.
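As a hedged illustration of the analysis step (item 2 above), the sketch below applies scikit-learn's IsolationForest to simple numeric features derived from hypothetical per-session metaverse activity logs and flags outliers for an analyst to review; the feature names, file path and contamination rate are assumptions rather than a prescribed model.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features extracted during triage collection.
# The file path and column names are illustrative assumptions.
features = pd.read_csv("session_features.csv")

X = features[["items_traded", "msgs_sent", "session_minutes", "wallets_contacted"]]

# Unsupervised outlier detection; contamination is a rough guess at the anomaly rate.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
features["anomaly"] = model.fit_predict(X)  # -1 marks outliers, 1 marks inliers

suspicious = features[features["anomaly"] == -1]
print(f"{len(suspicious)} sessions flagged for manual review")
print(suspicious.head())
```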

Forensic systems for the metaverse should be able to directly interact with a user’s avatar (Figure 2), which may take the form of a non-player character (NPC) assistant. When activated, the NPC avatar should be able to engage with the user’s avatar, requesting access to the avatar’s data, the metaverse platform, and all associated software and hardware implicated in an incident. This includes the metaverse console, IoT devices, networking devices and blockchain addresses. To ensure enhanced privacy and security, NPC forensic analysts should be able to access user data only when activated or requested by a user, and should obtain read-only access.

The forensic NPC avatar should meticulously record relevant logs and document any detected indicators of compromise (e.g., suspicious metaverse interactions) along with the observed impact (e.g., NFT or crypto token theft) and the estimated timeframe of the incident from the user’s avatar. Given the inherent complexity of metaverse environments, these forensic systems should possess the ability to operate on multiple layers to gather data, including:

1. Blockchain to analyze transactions and exchanges performed on-chain.
2. Metaverse Bridges to analyze activities across linked metaverse environments.
3. Metaverse Platforms, including different apps and digital assets in the metaverse.
4. Networking, including connections related to the metaverse platform as well as supporting sensors and devices.
5. Supporting devices (haptic gloves, body sensors, computational units, etc.).

Figure 2. Metaverse forensics framework outline

During analysis, malicious or anomalous activities should, optimally, be reported in an automated manner to guide the forensic analysts and speed up investigations. After analysis, any detected signs of compromise, such as cryptocurrency addresses, user activities, or files, should undergo data enrichment. This involves conducting searches across different data sources to find relevant information, which helps provide more detail and context for the analyst.

In the following sections of the blog, we provide a deeper view of how each of the three proposed phases operates, along with the data sources that can be leveraged for each, where applicable.

Triage and artefact collection


Forensic systems can analyze various threat types using multiple data sources. As the fields of forensics and the metaverse develop, the demand for new data sources will grow. It’s important to acknowledge that the available telemetry data can vary based on the platform and hardware in use. The absence of international standards and protocols for the metaverse compounds this complexity. With this in mind, we identify the following data sources as potential telemetry that should be logged to allow the effective analysis of metaverse environments. In addition to the telemetry presented below, forensic triage collection should be performed by capturing memory and disk images from the systems involved in an incident.

Authentication and access data:

◉ User login history, IP addresses, timestamps and successful/failed login attempts.
◉ Session tokens and authentication tokens used for access.

Third-party integration data:

◉ Data from third-party integrations or APIs used in the metaverse platform.
◉ Permissions and authorizations granted to third-party apps.

Error and debug logs:

◉ Logs of software errors, crashes or debugging information.
◉ Error messages, stack traces and core dumps.

Script and code data:

◉ Source code or scripts used within the virtual environment.
◉ Execution logs and debug information.
◉ Smart contracts in relevant blockchain wallets.

Marketplace, commerce data and blockchain:

◉ Records of virtual goods or services bought and sold on the platform’s marketplace.
◉ Payment information, such as credit card transactions or cryptocurrency payments.

User account and user behavior:

◉ Profile username, avatar image, account creation time, account status, blockchain address used to open the metaverse account.
◉ User interactions, friendships, groups, locations, and social networks, while preserving privacy.
◉ User activity logs, including participation in events and in-world gatherings.

User device forensics:

◉ User devices for the extraction of supporting data, such as device activity, configuration files, locally stored chat logs, images, etc.
◉ All inbound and outbound network activity reaching devices relevant to a metaverse incident.

Asset provenance data:

◉ Detailed asset provenance information with the complete history of ownership and modifications.
◉ Blockchain addresses and wallets, including a copy of their transaction history. Verification of the “from” address (creator or previous owner) and the “to” address (current owner) is required.
◉ If the asset is digital or represented as a token (e.g., an NFT), examine the smart contract that created it. Smart contracts contain rules and history about the asset.
◉ Ensure the asset is not a copy or fake by verifying that the smart contract and token ID are recognized by the creator or issuing authority.

System and platform configuration:

◉ Details of the platform’s architecture, configurations and version history.

Behavioral biometrics:

◉ Behavioral patterns of user interactions and in-game actions to help identify users based on unique behavior. Although such activity can be useful for identifying adversaries when very little is known about their activities, such information is not expected to be widely available.

Telemetry analysis


The goal of the telemetry analysis process is to detect unusual or potentially malicious behavior through a semi- or fully automated processing of data and logs, thereby aiding forensic experts and expediting the investigation process.

This can be accelerated by leveraging deep learning techniques to identify harmful patterns using a database of historically analyzed events. Additionally, incorporating reinforcement learning, refined by forensic experts, could enhance the system’s ability to offer better incident response suggestions. For effective training, these machine-learning algorithms would need access to a large repository of forensic strategies and actions taken by professionals in various investigative scenarios, including those spanning across different metaverse environments and artefacts. Utilizing this data allows the algorithms to match current incidents with similar past cases based on the user input provided.

Given the diverse range of threats and types of incidents, along with the emerging state of the metaverse and its insufficient logging features, devising a comprehensive forensic methodology that is universally applicable to all metaverse platforms or systems presents significant challenges. Should metaverse operators provide telemetry data, the analytical process can be simplified by focusing on artifacts that are most pertinent to a specific incident. Nonetheless, the presence of such artifacts in existing metaverse platforms cannot be assured. To overcome this issue and offer practical guidance, we suggest a hybrid forensic strategy that integrates traditional operating system forensics (emphasizing Windows-based platforms, given their prevalent use for client-side metaverse software) with specialized analyses that address the unique aspects of the metaverse and blockchain technologies. For better understanding, we categorize each analytical technique according to the divisions used in the triage and artifact collection section of this blog.

Authentication and access data

Metaverse platforms often store records of successful authentication attempts, including the dates, in local log files. If these logs are unavailable, analyzing DNS records and process executions associated with the metaverse platform can provide insights into when a user accessed it.

One approach to uncovering such information involves examining browser records (e.g., Chrome) and the history of visited URLs to identify when a user visited and connected to a specific metaverse platform via a web browser. Additionally, routers may maintain traffic logs by default, offering further insight into DNS activity.
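A minimal sketch of the browser-history approach, assuming a Windows host, a copy of the Chrome "History" SQLite database taken from a forensic image, and a hypothetical platform domain to search for:

```python
import sqlite3
from datetime import datetime, timedelta
from pathlib import Path

# Work on a copy of the Chrome "History" database extracted from the image,
# never on the live file. The path and search term below are assumptions.
history_db = Path("evidence/Chrome/History")
search_term = "%decentraland%"  # hypothetical metaverse platform domain

def chrome_time(webkit_microseconds: int) -> datetime:
    # Chrome stores timestamps as microseconds since 1601-01-01 (WebKit epoch).
    return datetime(1601, 1, 1) + timedelta(microseconds=webkit_microseconds)

with sqlite3.connect(history_db) as conn:
    rows = conn.execute(
        "SELECT url, title, visit_count, last_visit_time "
        "FROM urls WHERE url LIKE ? ORDER BY last_visit_time",
        (search_term,),
    ).fetchall()

for url, title, visits, last_visit in rows:
    print(f"{chrome_time(last_visit).isoformat()}  visits={visits}  {url}")
```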

For process-related investigation, artifacts like Amcache and Prefetch are valuable for determining when the metaverse platform client was executed. These artifacts can help trace the usage patterns and activities associated with user interactions with the metaverse.

Third-party integration data

Acquiring such data can be challenging because these operations usually occur on backend servers, and logs related to this activity are typically not accessible to users. To obtain this information, which depends on the architecture and API usage of a metaverse platform, one could use network capture tools like Wireshark. This method allows users to monitor any API requests made while using a metaverse platform and inspect the contents of these communications, provided they are not encrypted. This approach helps in understanding the interaction between the client and the server during the operation of metaverse platforms.
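As one hedged sketch of this idea, the snippet below uses scapy to list the DNS queries recorded in a packet capture taken while the metaverse client was running (for example, with Wireshark or tcpdump). DNS queries reveal which platform and API hosts were contacted even when the subsequent traffic is encrypted; the capture path is an assumption, and encrypted API payloads remain opaque to this approach.

```python
from collections import Counter
from scapy.all import rdpcap, DNS, DNSQR  # pip install scapy

# Hypothetical capture taken while the metaverse client was in use.
packets = rdpcap("evidence/metaverse_session.pcap")

queried_hosts = Counter()
for pkt in packets:
    # qr == 0 marks a DNS query (as opposed to a response).
    if pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        queried_hosts[qname] += 1

for host, count in queried_hosts.most_common(20):
    print(f"{count:4d}  {host}")
```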

Error and debug logs

Metaverse platforms commonly record client and connectivity issues in local log files. When these logs are not accessible, one can analyze the Windows Application log to identify any errors issued by the application and any software problems that prevent it from either logging in or functioning properly. However, it is important to note that errors occurring specifically within the metaverse environment are not captured by Windows’ native logs, thus remaining invisible to analysts using these tools.

Script and code data

In certain environments, snippets of scripts and other code that serve various functionalities can be accessed through reverse engineering, allowing analysts to determine if a metaverse feature is functioning properly and safely. However, it’s important to note that reverse engineering software may be illegal and is generally advised against.

Despite these limitations in directly analyzing metaverse code, it is still feasible to examine publicly available smart contract code. This code governs on-chain transactions and facilitates exchanges of value between players in metaverse environments. To analyze the smart contract associated with a specific metaverse, one must first identify the blockchain it utilizes. Then, by finding the smart contract’s address, one can inspect its code using a blockchain explorer. For instance, to review the smart contract of UNI (the governance token of the Uniswap decentralized exchange), which operates on the Ethereum blockchain, one would use an Ethereum blockchain explorer to locate and examine the contract’s code at the Ethereum address (0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984) used by UNI.
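For a programmatic variant of that lookup, the sketch below uses web3.py (v6 API assumed) to confirm that the address above actually holds deployed contract code; the RPC endpoint is a placeholder to be replaced with any Ethereum JSON-RPC provider.

```python
from web3 import Web3  # pip install web3

# Placeholder endpoint; substitute any Ethereum JSON-RPC provider you have access to.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.com"))

# UNI token contract address referenced above.
address = Web3.to_checksum_address("0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984")

bytecode = w3.eth.get_code(address)
if bytecode and bytecode != b"":
    print(f"{address} is a contract ({len(bytecode)} bytes of deployed bytecode)")
else:
    print(f"{address} holds no code (externally owned account)")
```

Note that get_code returns the deployed EVM bytecode; reviewing human-readable Solidity still depends on source code that has been verified on a blockchain explorer such as Etherscan.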

Marketplace, commerce data and blockchain

Transaction records of virtual goods or services exchanged on a metaverse platform can be tracked by examining a user’s account to review the NFTs and other items they possess. Additionally, by conducting on-chain transaction analysis, one can retrieve a complete history of item ownership, including details of items or NFTs bought and sold by users. Thanks to the transparency of public blockchains, this process is straightforward. It only requires the wallet address used by the user to access the metaverse platform. This address can be searched in the relevant blockchain explorer to analyze the user’s historical transactions and items purchased or sold.
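A minimal sketch of pulling that history programmatically, assuming the wallet lives on Ethereum and using the public Etherscan API (a free API key is required; the wallet address shown is a placeholder):

```python
import requests

ETHERSCAN_API = "https://api.etherscan.io/api"
API_KEY = "YOUR_ETHERSCAN_API_KEY"                     # assumption: a free Etherscan key
WALLET = "0x0000000000000000000000000000000000000000"  # placeholder wallet under investigation

def fetch(action: str) -> list:
    """Query an Etherscan 'account' endpoint for the wallet under investigation."""
    resp = requests.get(
        ETHERSCAN_API,
        params={
            "module": "account",
            "action": action,   # "txlist" = normal transactions, "tokennfttx" = ERC-721 transfers
            "address": WALLET,
            "startblock": 0,
            "endblock": 99999999,
            "sort": "asc",
            "apikey": API_KEY,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("result", [])

for tx in fetch("txlist"):
    print(tx["timeStamp"], tx["hash"], tx["from"], "->", tx["to"], tx["value"])

for transfer in fetch("tokennfttx"):
    print(transfer["timeStamp"], transfer["tokenName"], transfer["tokenID"], transfer["from"], "->", transfer["to"])
```

The same pattern applies to other chains through their respective explorers, though endpoint names and response fields will differ.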

User accounts and behavior

Currently, the logging and analytics of user behavior within metaverse environments are largely undeveloped. Basic information like profile usernames and avatar images are stored locally in the metaverse client’s directory. More detailed information about user interactions, friendships, groups, and visited locations can be retrieved from a user’s account, provided the data has not been deleted by the user. Analyzing a user’s social networks may offer deeper insights into their participation in metaverse events and related in-world gatherings.

User device forensics

Various devices enable interaction with the metaverse, including VR headsets, smartphones, gaming consoles and haptic gloves. The extent of data logging varies by device. For example, VR headsets may record details such as connected social networks, usernames, profile pictures and chat logs. It is essential to analyze the specific vendor and device to determine the availability of such logs. As the technology landscape evolves, it is anticipated that more vendors and devices will emerge, further complicating the environment. This dynamic nature will necessitate more sophisticated tools and greater expertise for effective forensic analysis in the future.

Asset provenance data

Detailed information about the provenance of assets in the metaverse, including the complete history of ownership and modifications, can be obtained through on-chain analysis. This process involves examining transactions between blockchain addresses of interest, the non-fungible tokens (NFTs) and other tokens they possess, and their interactions with smart contracts. Because public blockchains are immutable — meaning that once data is recorded, it cannot be deleted or changed — it is relatively straightforward to track asset provenance. By searching for a known wallet address in the appropriate blockchain explorer, one can easily trace the history associated with that address.

When analyzing blockchain data for provenance, it is critical to verify that the addresses interacting with the target address are legitimate. This includes ensuring that entities like metaverse providers or NFT issuers are not being impersonated by addresses posing as official ones. Verification can be achieved by visiting the official website of the token or metaverse provider to find and confirm their official blockchain addresses. This step is crucial to ensure that the address in question belongs to the entity it claims to represent. An illustrative case would be investigating the purchase of an expensive plot in the metaverse. Suppose an analysis of a user’s blockchain address reveals an NFT transaction from another address, which purportedly represents a plot identical to the one purchased. However, the source address sending the NFT is not the official one used by the metaverse provider for NFTs. If this discrepancy goes unchecked, it could obscure potential fraud or suspicious activities.

Another key factor in asset provenance is linking blockchain addresses to actual user identities. While blockchain technology typically provides pseudonymity, there are services that offer extensive databases capable of associating specific addresses with various entities and exchanges. This capability enhances an investigator’s ability to trace asset flows more effectively. For instance, WalletExplorer is a website that provides free services for attributing addresses on the Bitcoin network.

System and platform configuration

To effectively investigate a metaverse platform, it’s essential to gather detailed information about its system, architecture, and configuration. However, obtaining this information can be challenging as it is often limited. When available, key sources include official websites, developer documentation, user forums, and community pages. Additionally, valuable insights into the platform’s configuration can often be gleaned from debug and error logs, where these are accessible.

Behavioral biometrics

Behavioral patterns, such as user interactions and in-game actions, are key in identifying users based on their unique behaviors and detecting potential account hijacks. These behaviors can include movement and gesture recognition, voice recognition and the patterns of typing and communication. Additional metrics may involve how users interact with in-game items and other participants.

Currently, most systems used to interact with the metaverse do not extensively log such information, which limits the capacity for in-depth behavioral analysis. What is typically available for analysis includes communication patterns derived from chat logs and basic interaction patterns. These interactions are often analyzed through chats, the groups users join, events they attend, and on-chain analytics for transactions and engagements within the virtual space. This level of analysis, while helpful, only scratches the surface of what could potentially be achieved with more comprehensive behavioral data collection and analysis.

Data enrichment


Following analysis, it is crucial to correlate and analyze diverse data types from multiple sources, including blockchain transactions, IPFS storage, internet-of-things (IoT) devices and activities within the metaverse. Drawing from research, a forensic framework could use APIs from diverse data repositories to aggregate pertinent information. Such information can be retrieved from blockchain analytics vendors for the identification of malicious wallet addresses, or from traditional databases containing threat intelligence for malicious IP addresses and file hashes. The gathered data can then be processed through Named Entity Recognition (NER) to cleanse it, extract relevant information and diminish data clutter in larger datasets, ensuring analysts receive concise and clear insights. Enriching threat intelligence demands considerably more effort than conventional practice, extending beyond mere checks of IPs, URLs, file hashes and online adversarial behavior. It also encompasses the analysis of blockchain transactions, the provenance of digital assets, and the scrutiny of entities within the metaverse, such as casinos and conference venues, provided that logs are available for analysis.
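As a hedged sketch of the NER step described above, the snippet below uses spaCy's small English model to pull named entities out of a free-text incident note and a simple regular expression to catch Ethereum-style wallet addresses, which off-the-shelf NER models will not label; the model choice, the example note and the regex are illustrative assumptions.

```python
import re
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

# Hypothetical free-text incident note produced during triage.
incident_note = (
    "On 14 March the avatar NeonFox attended the CryptoExpo hall hosted by "
    "MetaVentures and later sent 2 ETH to 0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984."
)

doc = nlp(incident_note)
entities = [(ent.text, ent.label_) for ent in doc.ents]

# Generic NER does not tag wallet addresses, so add a simple pattern for them.
eth_address = re.compile(r"0x[a-fA-F0-9]{40}")
wallets = eth_address.findall(incident_note)

print("Named entities:", entities)
print("Wallet addresses to enrich:", wallets)
# Each extracted entity or address can then be queried against threat intelligence
# feeds or blockchain analytics services for added context.
```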

The insights gained from each case should be meticulously documented in public databases, outlining the tactics, techniques and procedures employed by adversaries within the metaverse. This documentation aids in refining the forensic capabilities of metaverse systems and provides forensic examiners with intelligence for more effective and precise attribution. The selection of data sources for threat intelligence augmentation can be tailored based on investigative needs and emerging developments in the field. While it’s crucial to continue employing conventional threat intelligence strategies to address more traditional and legacy aspects of investigations, for metaverse-specific inquiries, relevant data sources might include:

  • The source code of blockchains or smart contracts (e.g., from GitHub).
  • IPFS (Interplanetary File System) frameworks.
  • Blockchain analytics tools.
  • Social media and community monitoring for relevant discussions and trends.

Source: cisco.com

Tuesday, 2 July 2024

Security Is Essential (Especially in the Cloud)

In an era where cloud computing has become the backbone of enterprise IT infrastructure, we cannot overstate the significance of a robust security posture that evolves with emerging technologies.

Cisco recognizes the multifaceted nature of today’s cloud environments and has taken a step forward with three new certifications designed to empower IT professionals across the full lifecycle of multicloud ecosystems.

These groundbreaking certifications are created to address the three pillars of cloud mastery: connecting to the cloud, securing the cloud, and monitoring the cloud. In this blog, I’ll focus on the certification that involves securing the cloud.

Securing the cloud


The new Cisco Secure Cloud Access (SCAZT) Specialist Certification dives into the heart of cloud security. As threats become more sophisticated and regulatory demands become stricter, this certification underscores the importance of a security-first approach.

As Cisco’s first-ever Professional-level cloud security certification, this certification is aimed at network engineers, cloud administrators, security analysts, and other IT professionals. And it validates the skills necessary to secure cloud environments effectively.

While the SCAZT exam covers the basics of cloud architecture (concepts you can find in most cloud deployments), what makes this certification unique is that it focuses on securing the cloud using the Cisco equipment and portfolio that many organizations already have in their networks.

Plus, the certification is part of the cloud lifecycle—connecting, securing, and monitoring the infrastructure. Most companies cover a single component. But Cisco covers all three elements. So, when you are certified in the security aspect in conjunction with the other two cloud certifications, you can be assured you’re covering the whole cloud lifecycle.

CCNP Security certification alignment


This new cloud security certification is also part of the CCNP Security certification track. This means you can receive a standalone Specialist certification, or combine this cert with the Implementing and Operating Cisco Security Core Technologies (SCOR) exam to earn the CCNP Security certification, which also counts toward recertification and Continuing Education (CE) credits.

Inside the 300-740 SCAZT exam 


Cisco certification exams are designed to group topics logically. When you follow the domains and tasks during your studies, you’ll build a comprehensive understanding and see how the chapters you need to study connect.

The SCAZT 300-740 exam covers cloud security architecture, user and device security, network and cloud security, application and data security, visibility and assurance, and threat response.

Cisco exam topics emphasize hands-on technical questions, theoretical concepts, and critical thinking, always from a job role perspective. The certification focuses primarily on the following protocols, architectures, technologies, and platforms:

Training from Cisco U.


Cisco U. has launched a new Learning Path that’s designed to match the SCAZT exam and provide you with the best possible experience. It requires around 48 hours to complete and is eligible for 40 CE credits.

You can watch presentations about concepts, complete hands-on labs, and review designs and examples. At the end of each topic, an assessment is available to test your knowledge.

Cloud Security job roles


Since most applications and infrastructures are moving to the cloud, if you’re working in a role where cloud concepts are included (whether in an on-premises or hybrid environment), you’re going to need security in every shape and form.

Network security engineers will especially find this certification valuable because it focuses on protocols, architectures, technologies, and platforms relevant to their jobs.

Possible job roles where this certification applies are:

◉ Cloud Security Architect
◉ Cloud Security Engineer
◉ Cloud Security Advisor
◉ Cloud Solutions Architect
◉ Cloud Architect
◉ Cloud Associate
◉ Cloud Engineer
◉ Security Administrator
◉ Security Architect
◉ Security Consultant
◉ Security Engineer
◉ Security Manager
◉ Systems Architect
◉ Systems Engineer
◉ Network Security Engineer
◉ Security Project Manager

Source: cisco.com

Saturday, 29 June 2024

Cisco Enhances Zero Trust Access with Google

Cisco Secure Access provides a broad set of security functions in one unified solution to make both users and the IT team more productive, but no single solution can cover all security requirements. With this perspective, Secure Access is actively building a strong technology ecosystem to more efficiently serve wider needs in the market. This week, Cisco announced an additional collaboration with Google to bring browser-based threat and data protection from Chrome Enterprise to web apps secured by Cisco Secure Access. As more work activities happen on web applications, a secure enterprise browser can strengthen and simplify endpoint security as part of a broader zero trust approach.

Combined, Google’s Chrome Enterprise and Cisco’s Security Cloud can help customers protect against, detect, and remediate a broad range of cyber-attacks by combining browser- and cloud-based protection. Organizations can more easily mitigate security risks while increasing user productivity (including employees, partners, and contractors), and reducing administrative tasks.

As a critical component of a comprehensive security strategy, Cisco Secure Access, an AI-first Security Service Edge (SSE) solution built on Cisco Security Cloud, provides a converged set of cloud security services. These include Zero Trust Access for private applications, Secure Web Gateway for the web, Cloud Access Security Broker for Software-as-a-Service (SaaS), Browser Isolation for web-based threats, Digital Experience Monitoring to optimize user productivity, Domain Name System security, and more. Chrome Enterprise offers browser-based threat and data protection, policy and access controls and critical security insights.

The combination of Cisco Secure Access and Chrome Enterprise offers enterprises the benefits of both cloud-based and browser-based security. Users are protected across multiple device types, applications, and networks with end-to-end zero trust access, including device trust, strong authorization, and secure application access for both managed and unmanaged devices. Cisco and Google are collaborating to deliver:

Advanced, granular, zero trust security 


The solution protects users, data, and apps through streamlined zero trust access to enterprise applications from managed and unmanaged devices with granular controls.

In addition, independent user-to-app traffic streams and hidden app locations provide unmatched protection against reconnaissance, active threats, and lateral movement. Lastly, an efficient combination of browser and cloud-based DLP controls secure sensitive data and protect against inappropriate copying, pasting, and printing. This includes blocking content transfers to and from GenAI sites when they violate DLP policies.

Frictionless user experience


A good user experience is critical to preventing user subversion of security controls. This solution significantly simplifies the user experience by removing the need for a manual, multi-step agent and VPN connection process. It provides a one-step, fast connection to private applications through Chrome Enterprise, making it easier for users to access work resources. Users’ devices go through an automated and seamless trust process instantly at login, which ensures they have a strong security posture.

Simplified management


Simplifying the administrative experience is another focus of this collaboration. It starts with an easier deployment process that is enabled by agentless activation of device trust capabilities. Setting access policies for Chrome through Cisco Secure Access centralizes administrative tasks and allows for more consistent policy enforcement across applications. To improve detection times, we allow for security events from Chrome to be collected, analyzed, and extracted, including password changes, unapproved password reuse, data exfiltration, unsafe site visits, extension events and malware transfer events.

Source: cisco.com

Friday, 28 June 2024

200-901 DEVASC Certification: Unlocking New Opportunities

The 200-901 DEVASC (Developing Applications and Automating Workflows using Cisco Platforms) certification is a highly regarded credential for IT professionals aiming to excel in network automation and application development. This certification demonstrates your proficiency in creating and managing applications on Cisco platforms, positioning you as a valuable asset in the tech industry. In this article, we will delve into ten key career benefits of obtaining the 200-901 DEVASC certification and how it can propel your professional growth to new heights.

What is the 200-901 DEVASC Certification?

The 200-901 DEVASC certification, offered by Cisco, is designed for professionals aiming to gain expertise in software development and network automation. This certification focuses on essential skills such as using Cisco APIs, implementing network programmability, and automating network tasks. It is ideal for those seeking roles in network engineering, software development, and DevOps.

Cisco 200-901 Exam Details:

  • Exam Price: USD 300
  • Duration: 120 minutes
  • Number of Questions: 90-110
  • Passing Score: Variable (750-850 / 1000 Approx.)

What Are the Prerequisites for the Cisco 200-901 DEVASC Exam?

There are no formal prerequisites for taking the 200-901 DEVASC exam. However, Cisco recommends having a foundational understanding of programming concepts and networking basics. Experience with Python programming, REST APIs, and an understanding of network fundamentals can be beneficial.

What Topics Are Covered in the 200-901 DEVASC Exam?

The 200-901 DEVASC exam covers a range of topics essential for developing and automating workflows on Cisco platforms. According to the Cisco 200-901 certification exam syllabus, the primary topics include:

  • Software Development and Design: Understanding software development processes, data formats, and data encoding.
  • Understanding and Using APIs: Knowledge of REST APIs, CRUD operations, and API authentication methods.
  • Cisco Platforms and Development: Familiarity with Cisco platforms and their capabilities.
  • Application Deployment and Security: Techniques for deploying applications and ensuring their security.
  • Infrastructure and Automation: Implementing network automation using tools like Ansible and Puppet.
  • Network Fundamentals: Basic networking concepts and IP addressing.

For a detailed breakdown, refer to the Cisco 200-901 certification exam syllabus.

How Difficult is the 200-901 DEVASC Exam?

The difficulty of the 200-901 DEVASC exam depends on your background and preparation. Candidates with a solid understanding of programming, networking basics, and hands-on experience with Cisco platforms typically find the exam manageable. However, it is still a challenging certification that requires thorough preparation and practice.

What Are the Best Study Materials and Resources for Preparing for the 200-901 DEVASC Exam?

To prepare effectively for the 200-901 DEVASC exam, consider using the following study materials and resources:

Official Cisco Study Guides:

  • Cisco offers official study guides and e-learning courses tailored to the DEVASC exam. These resources are designed by Cisco experts and provide comprehensive coverage of all exam objectives. The guides include detailed explanations, practical examples, and hands-on labs to reinforce learning.

200-901 Practice Tests and Mock Exams:

  • 200-901 Practice tests and mock exams are invaluable for familiarizing yourself with the exam format and identifying areas where you need improvement. They simulate the actual exam environment and help you gauge your readiness.
  • Websites like Nwexam.com offer quality practice tests. Additionally, Cisco's practice exams can be a good benchmark.
Try this Practice test: https://quiz.tryinteract.com/#/60debd4fd5240f001761f1c7

Cisco DevNet:

  • Utilize resources from the Cisco DevNet community for practical insights and tutorials. Cisco DevNet provides a wealth of learning labs, sandboxes, and documentation to help you gain hands-on experience with Cisco technologies.
  • Engage with the DevNet community to ask questions, share knowledge, and learn from others who are also preparing for the exam.

Books:

  • Books such as "Developing Applications for Cisco Webex and Webex Devices" are excellent resources for in-depth learning. They provide detailed information and practical examples that are crucial for understanding the topics covered in the exam.
  • Other recommended books include "Cisco Certified DevNet Associate DEVASC 200-901 Official Cert Guide" and "Programming and Automating Cisco Networks: A Guide to Network Programmability and Automation in the Data Center, Campus, and WAN."

Supplementary Resources:

  • Forums and Study Groups: Join forums and study groups on platforms like Reddit and Cisco Learning Network. Interacting with others preparing for the same exam can provide additional insights and support.
  • Webinars and Videos: Many websites, including Cisco's own training portal, offer webinars and video tutorials that can be helpful.

Top 10 Career Benefits of Earning the 200-901 DEVASC Certification

1. Enhanced Technical Skills

The 200-901 DEVASC certification focuses on developing your technical skills in network automation and programming. You'll gain hands-on experience with Cisco APIs, network programmability, and the integration of software and hardware systems. This expertise is highly sought after in the IT industry, where the demand for skilled professionals in automation and application development continues to grow.

2. Competitive Edge in the Job Market

In a competitive job market, having the 200-901 DEVASC certification on your resume sets you apart from other candidates. Employers recognize the value of this certification and often prioritize candidates who possess it. By demonstrating your knowledge and proficiency in network automation, you increase your chances of landing lucrative job offers and promotions.

3. Higher Earning Potential

Certified professionals often command higher salaries compared to their non-certified counterparts. The 200-901 DEVASC certification can significantly boost your earning potential by validating your specialized skills in network automation and application development. Employers are willing to pay a premium for employees who can streamline processes and improve efficiency through automation.

4. Career Advancement Opportunities

The 200-901 DEVASC certification opens doors to various career advancement opportunities. With this credential, you can pursue roles such as Network Automation Engineer, Software Developer, DevOps Engineer, and more. These positions often come with greater responsibilities, higher salaries, and the potential for leadership roles within organizations.

5. Recognition and Credibility

Earning the 200-901 DEVASC certification enhances your professional credibility and recognition in the industry. It demonstrates your commitment to staying updated with the latest technologies and best practices in network automation and application development. This recognition can lead to increased trust and respect from peers, employers, and clients.

6. Skill Validation and Confidence

The certification process involves rigorous training and examinations that validate your skills and knowledge. Successfully earning the 200-901 DEVASC certification boosts your confidence in your abilities to tackle complex network automation tasks and develop robust applications. This confidence translates into better job performance and career satisfaction.

7. Networking Opportunities

Pursuing the 200-901 DEVASC certification provides you with opportunities to network with other professionals in the field. You can connect with peers, mentors, and industry experts through certification courses, study groups, and professional events. These connections can be valuable for career growth, job referrals, and staying informed about industry trends.

8. Access to Exclusive Resources

As a certified professional, you gain access to exclusive resources provided by Cisco. These resources include advanced training materials, webinars, technical support, and community forums. Leveraging these resources can help you stay ahead of the curve, continuously improve your skills, and solve complex problems more efficiently.

9. Contribution to Organizational Success

With the 200-901 DEVASC certification, you can make significant contributions to your organization’s success. Your expertise in network automation and application development can streamline operations, reduce costs, and enhance overall productivity. Organizations value employees who can drive innovation and deliver tangible results.

10. Personal and Professional Growth

The journey to earning the 200-901 DEVASC certification is challenging and rewarding. It requires dedication, continuous learning, and problem-solving skills. This process not only contributes to your professional growth but also fosters personal development. You become more adept at critical thinking, time management, and adapting to new technologies.

Conclusion

Achieving career growth with the 200-901 DEVASC certification is a strategic move for any IT professional. Cisco Certified DevNet Associate certification offers numerous benefits, including enhanced technical skills, increased earning potential, and greater job opportunities. By investing in this credential, you position yourself as a valuable asset to any organization and set the stage for a successful and fulfilling career in network automation and application development.

Thursday, 27 June 2024

Cisco API Documentation Is Now Adapted for Gen AI Technologies

Developer experience changes rapidly. Many developers and the Cisco DevNet community utilize Generative AI tools and language models for code generation and troubleshooting.

Better data = better model completion

The main challenge for GenAI users is finding valid data for their prompts or Vector Databases. Developers and engineers need to care about the data they plan to use for LLMs/GenAI interaction.

OpenAPI documentation is now available to download


OpenAPI is a specification that defines a standard way to describe RESTful APIs, including endpoints, parameters, request/response formats, and authentication methods, promoting interoperability and ease of integration.

We at Cisco DevNet care about the developer experience and want to make your work with Cisco APIs efficient, with minimal development and testing costs.

You can find links to OpenAPI documentation in JSON/YAML format on the OpenAPI Documentation page. You can also search for the related product API and navigate to the API Reference -> Overview section in the left-side menu.

Note: Some API documentation can contain multiple OpenAPI Documents

Purposes for which you can use the related OpenAPI documentation as part of a prompt or RAG pipeline:

  • Construct code or script that utilizes related Cisco API
  • Find related API operations or ask to fix existing code using the information in the API documentation
  • Create integrations with Cisco products through API
  • Create and test AI agents
  • Utilize related Cisco OpenAPI documentation locally or using approved AI tools in your organization.

Structured vs Unstructured data


I’ve compared two LLM model completions with a prompt that contains two parts. The first part of the prompt was the same and contained the following information:

Based on the following API documentation, please write step-by-step instructions that can help automatically tag roaming computers using Umbrella API.
High-level workflow description:

  1. Add API Key
  2. Generate OAuth 2.0 access token
  3. Create tag
  4. Get the list of roaming computers and identify related ‘originId’
  5. Add tag to devices.

API documentation:

Second part:

In one case, the second part contained data copied and pasted directly from the documentation; in the other, it contained LLM-friendly structured data, with the OpenAPI documents pasted one by one.
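A minimal sketch of the second approach, assembling the prompt from a downloaded OpenAPI document rather than from pasted documentation text; the file name and the model-call placeholder are assumptions, and any approved LLM client in your organization can take the resulting prompt.

```python
import json
from pathlib import Path

# Assumption: the relevant OpenAPI document has already been downloaded
# from the OpenAPI Documentation page on Cisco DevNet.
openapi_doc = json.loads(Path("umbrella_openapi.json").read_text())

task = (
    "Based on the following API documentation, please write step-by-step "
    "instructions that can help automatically tag roaming computers using Umbrella API."
)

workflow = """High-level workflow description:
1. Add API Key
2. Generate OAuth 2.0 access token
3. Create tag
4. Get the list of roaming computers and identify related 'originId'
5. Add tag to devices."""

# Re-serialize the structured document so the model receives well-formed,
# LLM-friendly API definitions instead of copy-pasted prose.
prompt = f"{task}\n{workflow}\n\nAPI documentation:\n{json.dumps(openapi_doc, indent=2)}"

# send_to_model() is a placeholder for whichever approved LLM client you use.
# completion = send_to_model(prompt)
print(prompt[:500])
```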

Part of CDO OpenAPI documentation

Claude 3 Sonnet model completion. Prompt with OpenAPI documents 

Claude 3 Sonnet model completion. Prompt with copy and paste data

Benefits of using LLM-friendly documentation as a part of the prompt


I’ve found that the model output was more accurate when OpenAPI documents were used as part of the prompt. The API endpoints provided in each step were more accurate, and recommendations in sections like “Get List of Roaming Computers” contained better, more optimal instructions and API operations.

Source: cisco.com

Tuesday, 25 June 2024

Security Cloud Control: Pioneering the Future of Security Management

Organizations face a critical challenge today: attackers are exploiting the weakest links in their networks, such as unsecured users, devices, and workloads. This threat landscape is complicated by the shift from traditional data centers to a distributed environment, where protecting dispersed data across multiple touchpoints becomes complex.

To address these threats, many organizations resort to using multiple security tools, leading to siloed teams, tech stacks, and management systems that hinder effective security. This fragmented approach results in unnecessary costs, longer deployment times, inconsistent security, and critical gaps.

Security products that do not integrate or benefit from each other exacerbate these issues. For example, Network Security Admins struggle to navigate disparate teams and tools for effective policy deployment. Additionally, customers often under-utilize security tools, resulting in poor security hygiene and misconfigurations that increase the risk of a breach. Manual monitoring of multiple tools makes it impossible for organizations to proactively predict issues that lead to operational challenges. Consequently, the burden has been pushed onto the customer to understand the gaps and figure out how to best use the tools.

Inconsistent security policies, siloed management, lack of unified visibility, misconfiguration risks, and a cybersecurity skills shortage are all significant challenges organizations face. The urgency is underscored by findings from the IBM X-Force Threat Intelligence report: the average time from initial access to ransomware deployment dropped from 1637 hours (about two months and one week) to just 92 hours (less than four days) in 2023. This dramatic reduction means organizations now have much less time to respond to threats, making effective and integrated security solutions more critical than ever.

Without a centralized platform, gaining a holistic view of security is challenging. Manual identification of misconfigurations is error-prone and can lead to breaches. There is a lack of skills, time, and resources to fully utilize security features and maximize ROI. Customers must implement best practices themselves, which requires specialized knowledge and time. Resolving access or policy issues is lengthy due to diverse security products. Admins spend excessive time crafting similar policies across different platforms. Operational issues are often addressed reactively, leading to downtime and suboptimal performance. Non-actionable alerts and overwhelming data cause analysis paralysis, hinder decision-making, and dull the sense of urgency. While we will never fully move away from having distributed enforcement points, there is a significant opportunity for the security industry to provide consistent security across these varied touchpoints.

A unified security platform aims to alleviate these issues by providing a comprehensive view of the security landscape, enabling consistent policy enforcement, simplifying troubleshooting, and offering actionable insights with the help of AI. Thus, it reduces the cognitive load and the dependency on specialized skills. When considering Unified Security Management (USM), the goal is a seamless management experience.

To meet the unique needs of various organizations and support diverse network firewall configurations, our strategy focuses on three core objectives: simplifying operations, enhancing security, and improving clarity. We aim to streamline security management processes, strengthen defenses with advanced Zero Trust and vulnerability protection, and offer clear, actionable insights through AI-driven intelligence. These focused efforts are designed to deliver a more intuitive, robust, and user-friendly security solution.

Customer Outcomes with Security Cloud Control



We are excited to launch AIOps, offering a game-changing way to enhance operational efficiency and bolster security. AIOps addresses critical IT challenges such as misconfigurations and traffic spikes, preventing downtime and reinforcing network performance. AIOps provides predictive insights and automation to help administrators improve security and reduce costs. We are introducing key features, such as policy analysis and optimization, best practice recommendations, traffic insights, and capacity forecasting. By incorporating AIOps into our services, we are adopting a more intelligent and proactive methodology to safeguard and optimize the performance and security of your network infrastructure.

Best Practice Recommendations: Nudging admins to get to better security state


Predictive Insights with AIOps


Benefits of AIOps


Our solution is designed to accommodate the management of a wide array of firewall form factors, ensuring comprehensive security from the ground up to the cloud. It seamlessly integrates with various deployment models, including physical and virtual firewalls (Cisco Secure Firewall Threat Defense), Multicloud Defense, Hypershield, and Adaptive Security Appliances (ASA).

This versatility simplifies the management of your security infrastructure, making it easier to maintain a robust and adaptive defense system across your entire network all from a single place.

Our partnership with Splunk represents a significant leap forward in streamlining security operations. By integrating with Splunk, we enhance the oversight and monitoring capabilities of both cloud-based and on-site firewalls. Utilizing Splunk’s powerful data processing, analytics, and real-time logging strengths, we deliver an enriched, responsive, and comprehensive view of your security posture.

This collaborative effort simplifies the management of security operations, providing Security Operations Center (SOC) teams with a superior, streamlined, and more effective method for protecting their digital landscapes.

We are introducing a unified dashboard that enables our customers to gain a real-time, holistic perspective of their entire network and cloud security ecosystem. Customers can efficiently manage tens of thousands of security devices, coordinating multiple tenants under a centralized global administrator.

Unified Dashboard: A Comprehensive view of firewall and security services


We are further simplifying operations for our admins with the Firewall AI Assistant. It revolutionizes network security by tackling the complexity of firewall rule management. With many organizations handling over a thousand rules, some outdated or conflicting, firewall maintenance becomes a security risk. Gartner predicted that misconfigurations would cause 99% of firewall breaches through 2023, highlighting the need for this AI-driven simplification. Customers can ask the Assistant to explain the intent of policies and to assist with creating rules.

AI Assistant for Firewall: Rule Analysis


AI Assistant for Firewall: Rule Creation


A key breakthrough in our security strategy is the implementation of seamless object sharing, which plays a pivotal role in maintaining consistent protection across hybrid networks. This feature facilitates the distribution of network objects across both on-premises firewalls and multi-cloud defenses. Its primary objective is safeguarding application and workload data wherever they reside, by enabling our admins to build a consistent policy across different environments. This approach fortifies the security posture of your hybrid environment, streamlines change management processes, and reduces the opportunity for errors, thereby contributing to a more secure, effective, and resilient IT ecosystem.

Consistent Policy Enforcement: Sharing Network Objects across on-prem and Cloud environments


We are committed to continuously enhancing our services and expanding our global footprint to better serve our customers. In conclusion, our vision extends beyond merely supplying tools—we strive to revolutionize the user experience.

Through the fusion of cutting-edge technology and intuitive design, our goal is to foster a supportive environment for administrators, where operations are efficient, and security is strong. We are dedicated to alleviating the customer’s burden by offering a Unified Security Platform that empowers them to achieve the best state of security.

Source: cisco.com

Saturday, 22 June 2024

Up your Quality of Life with Secure MSP Hub and Secure MSP Center


All the technology around us is meant to increase our productivity through tools and automation so that our quality of life can be improved. The reality can be very different, especially if you are an MSP. There are so many factors affecting your quality of life: stress from client emergencies, tight deadlines, unpredictable working hours, and end-of-month challenges with customer billing and invoicing. Above all, getting ahead of breaches and staying ahead of hackers can add to a reduced quality of life.


I know that we cannot take away all the stress-inducing factors for our MSPs, though that is our vision. For now, I want to talk about how we are making it easy for our MSPs to do business with MSP Center and have an easier time managing their Cisco security products with MSP Hub.

MSP Center is our simplified, usage-based, post-paid buying model where you as an MSP can sign up once to get access to the Cisco Security portfolio. There are no long forms to fill in, no training requirements to pass, and no need to chat with several sales reps to get access to the products. If your customer needs a security offer, you can provide it from our portfolio in a few minutes.

Once you sign up, you get access to MSP Hub, which, as the name suggests, is a dashboard for MSPs to manage all Security products, customers, billing, and invoicing, along with ecosystem integrations, in a single pane. Several hundred partners are currently using the hub and are absolutely loving it. One of our partners remarked, “This is exactly the dashboard we want as an MSP, single pane of glass across all Cisco products for MSPs”.

I want to detail a few use cases which can save a lot of time for MSPs.

◉ Customer Management – The customer management feature on MSP Hub streamlines the customer onboarding process for multiple products in a single place. The Bulk Import feature also lets our partners import their end customers easily saving multiple clicks and reducing mundane tasks for MSPs.


◉ Billing and Invoicing – This feature enables easy access to historical billing, the ability to change payment information, and a detailed breakdown of usage, which in turn helps you as an MSP reduce the hours spent invoicing customers and resolving billing and invoicing issues. We also plan to build integrations that can further simplify your life.


◉ Technical Integrations – We are simplifying how our Cisco Security products can integrate with ecosystem partners in a simple three-click process. This will further save our MSPs from tedious and elaborate integrations. We are working with some of your favorite RMM vendors. Reach out to us to learn more.


◉ Apart from this, there is a simplified on-demand training portal that your sellers or engineers can use to sell and deploy the products easily.

Source: cisco.com