Tuesday 12 March 2024

Dashify: Solving Data Wrangling for Dashboards

This post is about Dashify, the Cisco Observability Platform (COP) dashboarding framework. We are going to describe how AppDynamics and partners use Dashify to build custom product screens, and then we are going to dive into the details of the framework itself, describing the specific features that make it the most powerful and flexible dashboard framework in the industry.

What are dashboards?


Dashboards are data-driven user interfaces that are designed to be viewed, edited, and even created by product users. Product screens themselves are also built with dashboards. For this reason, a complete dashboard framework provides leverage both for end users looking to share dashboards with their teams and for the product engineers of COP solutions like Cisco Cloud Observability.

In the observability space most dashboards are focused on charts and tables for rendering time series data, for example “average response time” or “errors per minute”. The image below shows the COP EBS Volumes Overview Dashboard, which is used to understand the performance of Elastic Block Store (EBS) volumes on Amazon Web Services. The dashboard features interactive controls (dropdowns) that are used to further refine the scenario from all EBS volumes to, for example, unhealthy EBS volumes in US-WEST-1.

[Image: COP EBS Volumes Overview Dashboard]

Several other dashboards are provided by our Cisco Cloud Observability app for monitoring other AWS systems. Here are just a few examples of the rapidly expanding use of Dashify dashboards across the Cisco Observability Platform.

  • EFS Volumes
  • Elastic Load Balancers
  • S3 Buckets
  • EC2 Instances

Why dashboards?


No observability product can “pre-imagine” every way that customers want to observe their systems. Dashboards allow end-users to create custom experiences, building on existing in-product dashboards, or creating them from scratch. I have seen large organizations with more than 10,000 dashboards across dozens of teams.

Dashboards are a cornerstone of observability, forming a bridge between a remote data source and the local display of data in the user’s browser. Dashboards are used to capture “scenarios” or “lenses” on a particular problem. They can serve a relatively fixed use case, or they can be ad-hoc creations for a troubleshooting “war room.” A dashboard performs many steps and queries to derive the data needed to address the observability scenario, and to render the data into visualizations. Dashboards can be authored once and used by many different users, leveraging the know-how of the author to enlighten the audience. Dashboards play a critical role both in low-level troubleshooting and in rolling up high-level business KPIs to executives.


The goal of dashboard frameworks has always been to provide a way for users, as opposed to ‘developers’, to build useful visualizations. Inherent to this “democratization” of visualizations is the notion that building a dashboard must somehow be easier than a pure JavaScript app development approach. After all, dashboards cater to users, not hardcore developers.

The problem with dashboard frameworks


The diagram below illustrates how a traditional dashboard framework allows the author to configure and arrange components but does not allow the author to create new components or data sources. The dashboard author is stuck with whatever components, layouts, and data sources are made available. This is because the areas shown in red are developed in JavaScript and are provided by the framework. JavaScript is neither secure nor easy to learn, so it is rarely exposed directly to authors. Instead, dashboards expose a JSON- or YAML-based DSL. This typically leaves field teams, SEs, and power users waiting for the engineering team to release new components, and there is almost always a deep feature backlog.

[Diagram: a traditional dashboard framework, in which components, layouts, and data sources are fixed in JavaScript]

I have personally seen this scenario play out many times. To take a real example, a team building dashboards for IT services wanted rows in a table to be colored according to a “heat map”. This required a feature request to be logged with engineering, and the core JavaScript-based Table component had to be changed to support heat maps. Over time, the core JS components became a mishmash of domain-driven spaghetti code. Eventually the code for Table itself was hard to find amidst the dozens of props and hidden behaviors like “heat maps”. Nobody was happy with the situation, but it was typical, and core component teams mostly spent their sprint cycles building domain behaviors and trying to understand the spaghetti. What if dashboard authors on the power-user end of the spectrum could be empowered to create components themselves?

Enter Dashify


Dashify’s mission is to remove the barrier of “you can’t do that” and “we don’t have a component for that”. To accomplish this, Dashify rethinks some of the foundations of traditional dashboard frameworks. The diagram below shows that Dashify shifts the boundaries around what is “built in” and what is made completely accessible to the author. This radical shift allows the core framework team to focus on “pure” visualizations, and empowers domain teams, who author dashboards, to build domain-specific behaviors like “IT heat maps” without being blocked by the framework team.

[Diagram: Dashify shifts the boundary between what is built in and what is accessible to the author]

To accomplish this breakthrough, Dashify had to solve the key challenge of how to simplify and expose reactive behavior and composition without cracking open the proverbial can of JavaScript worms. To do this, Dashify leveraged a new JSON/YAML meta-language, created at Cisco as an open source project, for declarative, reactive state management. This new meta-language is called “Stated,” and it is being used to drive dashboards, as well as many other JSON/YAML configurations within the Cisco Observability Platform. Let’s take a simple example to show how Stated enables a dashboard author to insert logic directly into a dashboard’s JSON/YAML.

Suppose we receive data from a data source that provides “health” information about AWS availability zones. Assume the health data is updated asynchronously. Now suppose we wish to bind the changing health data to a table of “alerts” according to some business rules:

1. Only show alerts if the percentage of unhealthy instances is greater than 10%.
2. Show alerts in descending order based on the percentage of unhealthy instances.
3. Update the alerts every time the health data is updated (in other words, declare a reactive dependency between alerts and health).
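Although the real dashboard expresses this with a Stated rule, the derivation that the three rules describe can be sketched in plain Python (the field names zone, healthy, and unhealthy are assumptions for illustration, not Dashify’s actual schema):

```python
def derive_alerts(health, threshold=0.10):
    """Derive alert rows from per-zone health data."""
    alerts = []
    for zone in health:
        total = zone["healthy"] + zone["unhealthy"]
        pct = zone["unhealthy"] / total if total else 0.0
        if pct > threshold:  # rule 1: only alert above 10% unhealthy
            alerts.append({"zone": zone["zone"],
                           "percentUnhealthy": round(pct * 100)})
    # rule 2: descending order by percentage unhealthy
    alerts.sort(key=lambda a: a["percentUnhealthy"], reverse=True)
    return alerts
```

Rule 3 is the part Dashify adds for free: Stated re-evaluates the rule whenever the health data changes, whereas this plain function would have to be called again by hand.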

This snippet illustrates a desired state that adheres to the rules.

[Snippet: the desired alerts state derived from the health data]

But how can we build a dashboard that continuously adheres to the three rules? If the health data changes, how can we be sure the alerts will be updated? These questions get to the heart of what it means for a system to be Reactive. This Reactive scenario is at best difficult to accomplish in today’s popular dashboard frameworks.

Notice we have framed this problem in terms of the data and relationships between different data items (health and alerts), without mentioning the user interface yet. In the diagram above, note the “data manipulation” layer. This layer allows us to create exactly these kinds of reactive (change driven) relationships between data, decoupling the data from the visual components.

Let’s look at how easy it is in Dashify to create a reactive data rule that captures our three requirements. Dashify allows us to replace *any* piece of a dashboard with a reactive rule, so we simply write a reactive rule that generates the alerts from the health data. The Stated rule, beginning on line 12 of the snippet, is a JSONata expression. Feel free to try it yourself here.

[Snippet: the Stated rule that generates alerts, written as a JSONata expression]

One of the most interesting things is that you don’t have to “tell” Dashify what data your rule depends on. You just write your rule. This simplicity is enabled by Stated’s compiler, which analyzes all the rules in the template and produces a reactive change graph. If you change anything that the ‘alerts’ rule is looking at, the ‘alerts’ rule will fire and recompute the alerts. Let’s quickly prove this out using the Stated REPL, which lets us run and interact with Stated templates like Dashify dashboards. Let’s see what happens if we use Stated to change the first zone’s unhealthy count to 200. The screenshot below shows execution of the command “.set /health/0/unhealthy 200” in the Stated JSON/YAML REPL. Dissecting this command, it says “set the value at JSON pointer /health/0/unhealthy to the value 200”. We see that the alerts are immediately recomputed, and that us-east-1a is now present in the alerts with 99% unhealthy.

[Screenshot: Stated REPL session running .set /health/0/unhealthy 200, with alerts recomputed]
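To make the idea of a change graph concrete, here is a toy sketch in Python. It is emphatically not Stated’s implementation: Stated’s compiler infers the dependencies by analyzing the JSONata expressions themselves, whereas this toy version declares them by hand.

```python
class ToyReactiveStore:
    """Toy change graph: setting a value recomputes every rule that depends on it."""

    def __init__(self):
        self.data = {}
        self.rules = {}  # output key -> (list of input keys, compute function)

    def add_rule(self, key, depends_on, fn):
        self.rules[key] = (depends_on, fn)

    def set(self, key, value):
        self.data[key] = value
        # Fire any rule whose declared inputs include the changed key
        for out, (deps, fn) in self.rules.items():
            if key in deps:
                self.data[out] = fn(self.data)
```

In Stated, the equivalent of add_rule is simply writing an expression in the template; the dependency list is discovered for you by the compiler.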

By recasting much of dashboarding as a reactive data problem, and by providing a robust in-dashboard expression language, Dashify allows authors to do traditional dashboard creation, advanced data binding, and reusable component creation. Although trivial, this example clearly shows how Dashify differentiates its core technology from other frameworks that lack reactive, declarative data bindings. In fact, Dashify is the first and only dashboard framework to feature declarative, reactive data bindings.

Let’s take another example, this time fetching data from a remote API. Let’s say we want to fetch data from the Star Wars REST API. Business requirements:

  • Developer can set how many pages of planets to return
  • Planet details are fetched from the Star Wars API (https://swapi.dev)
  • A list of planet names is extracted from the returned planet details
  • The user should be able to select a planet from the list of planets
  • Resident URLs are extracted from the selected planet’s details, and resident details are fetched for each URL
  • Full names of the inhabitants are extracted from the resident details and presented as a list
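Before looking at the dashboard snippet, the same requirements can be sketched as ordinary Python against the Star Wars API. This is an illustrative sketch, not Dashify’s API; the function names and the injectable fetch parameter are assumptions made so the logic can be exercised without network access.

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    # Default fetcher: HTTP GET and decode the JSON body
    with urlopen(url) as resp:
        return json.load(resp)

def load_planets(pages, fetch=fetch_json):
    # Fetch up to `pages` pages of planet details
    planets, url = [], "https://swapi.dev/api/planets/"
    for _ in range(pages):
        if not url:
            break
        page = fetch(url)
        planets.extend(page["results"])
        url = page.get("next")  # swapi paginates via a "next" URL
    return planets

def planet_names(planets):
    # Extract the list of planet names
    return [p["name"] for p in planets]

def residents_of(planets, selected_planet, fetch=fetch_json):
    # Resolve the selected planet's resident URLs to full names
    planet = next(p for p in planets if p["name"] == selected_planet)
    return [fetch(url)["name"] for url in planet["residents"]]
```

In Dashify the same dependency (residents follows selectedPlanet) is declared once and maintained reactively, instead of being re-run by hand.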

Again, we see that before we even consider the user interface, we can cast this problem as a data-fetching and reactive-binding problem. The dashboard snippet below shows how a value like “residents” is reactively bound to selectedPlanet and how map/reduce-style set operators are applied to the entire result of a REST query. As before, all the expressions are written in the grammar of JSONata.

[Snippet: dashboard JSON binding residents to selectedPlanet via JSONata expressions]

To demonstrate how you can interact with and test such a snippet, this GitHub gist shows a REPL session where we:

1. Load the JSON file and observe the default output for Tatooine
2. Display the reactive change plan for planetName
3. Set the planet name to “Coruscant”
4. Call the onSelect() function with “Naboo” (this demonstrates that we can create functions accessible from JavaScript, for use as click handlers; it produces the same result as directly setting planetName)

From this concise example, we can see that dashboard authors can easily handle fetching data from remote APIs, and perform extractions and transformations, as well as establish click handlers. All these artifacts can be tested from the Stated REPL before we load them into a dashboard. This remarkable economy of code and ease of development cannot be achieved with any other dashboard framework.

If you are curious, these are the inhabitants of Naboo:

[Screenshot: the list of Naboo inhabitants returned by the dashboard]

What’s next?


We have shown a lot of “data code” in this post. This is not meant to imply that building Dashify dashboards requires “coding”. Rather, it is to show that the foundational layer, which supports our dashboard-building GUIs, is built on a very solid foundation. Dashify recently made its debut in the CCO product with the introduction of AWS monitoring dashboards and Data Security Posture Management screens. Dashify dashboards are now a core component of the Cisco Observability Platform and have been proven out over many complex use cases. In calendar Q2 2024, COP will introduce a dashboard editing experience that provides authors with built-in, visual, drag-and-drop editing of dashboards. Also in calendar Q2, COP introduces the ability to bundle Dashify dashboards into COP solutions, allowing third-party developers to unleash their dashboarding skills. So, whether you skew to the “give me a GUI” end of the spectrum or the “let me code” lifestyle, Dashify is designed to meet your needs.

Summing it up


Dashboards are a key, perhaps THE key technology in an observability platform. Existing dashboarding frameworks present unwelcome limits on what authors can do. Dashify is a new dashboarding framework born from many collective years of experience building both dashboard frameworks and their visual components. Dashify brings declarative, reactive state management into the hands of dashboard authors by incorporating the Stated meta-language into the JSON and YAML of dashboards. By rethinking the fundamentals of data management in the user interface, Dashify allows authors unprecedented freedom. Using Dashify, domain teams can ship complex components and behaviors without getting bogged down in the underlying JavaScript frameworks. Stay tuned for more posts where we dig into the exciting capabilities of Dashify: Custom Dashboard Editor, Widget Playground, and Scalable Vector Graphics.

Source: cisco.com

Saturday 9 March 2024

Protect Your Cloud Environments with Data Security Observability


Data is the new fuel for business growth


Data is at the heart of seemingly everything these days, from the smart devices in our homes to the mobile apps we use on the go every day. This wealth of information at our fingertips allows us to correlate data points and determine patterns and outcomes faster than humanly possible — enabling us to predict and quickly thwart adverse events on the horizon. We know that the volume of data collected by organizations is a goldmine of information that, when leveraged correctly, can empower growth. We also know that clean data, free of sensitive information, is critical for fueling GenAI initiatives across the globe. However, this uptick in data creation and usage also amplifies the need for organizations to ensure that they handle data responsibly and adhere to increasingly stringent data regulatory standards.

With astronomical amounts of data constantly being generated, tracked, and stored – it’s become more important than ever to secure it and be able to answer several key questions: Where is my data? Who is accessing my data? And is my data secure?

Introducing Observability for Data Security Posture Management (DSPM)


The new Data Security module announced at Cisco Live 2024 Amsterdam is now generally available. It expands our business risk observability capabilities for cloud environments and delivers automated data discovery and classification in data stores like Snowflake and AWS S3. The new module provides real-time data insights that help visualize, prioritize, and act on security issues before they become revenue-impacting.

A quick look at the new data security capabilities:

◉ Discovery and classification of sensitive data: Easily identify all data stores and data entities, to quickly focus on securing sensitive data.

◉ Data access control: Understand which users, roles, and applications are accessing your data and who has access to personally identifiable information. Seamlessly adopt a least-privilege approach by detecting unused privileges and locking down access to your data stores.

◉ Exfiltration attempt detection: Unlock GenAI-based detection and remediation guidance for data exfiltration attempts to stop attackers in their tracks.

◉ Identify security risks: Efficiently detect unencrypted buckets, dormant risky users, and siloed unused data entities to reduce your overall security risk.


The future of data security


With data being created and moving at the speed of light every day, it can be overwhelming to keep track of exactly where the data is and how it’s being stored – let alone comprehensively securing it. Automation is imperative to keep up, and choosing the right tool will enable you to continue leveraging data and innovating while knowing your data is secure. The Data Security module provides teams with deep visibility and actionable insights to effortlessly protect data at scale. The future of data security relies on our ability to put adequate security controls in place now, so we can embrace the full potential of data and the unlimited capabilities that it unlocks.

Source: cisco.com

Thursday 7 March 2024

Using the Power of Artificial Intelligence to Augment Network Automation

Talking to your Network


Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In 2015, after attending Cisco Live in San Diego, I gained a new appreciation of the realm of the possible. Leveraging tools like Ansible and Cisco pyATS, I began to streamline processes and enhance efficiencies within network operations, setting a foundation for what would become a career-long pursuit of innovation. This initial foray into automation was not just about simplifying repetitive tasks; it was about envisioning a future where networks could be more resilient, adaptable, and intelligent. As I navigated through the complexities of network systems, these technologies became indispensable allies, helping me to not only manage but also to anticipate the needs of increasingly sophisticated networks.


In recent years, my exploration has taken a pivotal turn with the advent of generative AI, marking a new chapter in the story of network automation. The integration of artificial intelligence into network operations has opened up unprecedented possibilities, allowing for even greater levels of efficiency, predictive analysis, and decision-making capabilities. This blog, accompanying the CiscoU Tutorial, delves into the cutting-edge intersection of AI and network automation, highlighting my experiences with Docker, LangChain, Streamlit, and, of course, Cisco pyATS. It’s a reflection on how the landscape of network engineering is being reshaped by AI, transforming not just how we manage networks, but how we envision their growth and potential in the digital age. Through this narrative, I aim to share insights and practical knowledge on harnessing the power of AI to augment the capabilities of network automation, offering a glimpse into the future of network operations.

In the spirit of modern software deployment practices, the solution I architected is encapsulated within Docker, a platform that packages an application and all its dependencies in a virtual container that can run on any Linux server. This encapsulation ensures that it works seamlessly in different computing environments. The heart of this dockerized solution lies within three key files: the Dockerfile, the startup script, and the docker-compose.yml.

The Dockerfile serves as the blueprint for building the application’s Docker image. It starts with a base image, ubuntu:latest, ensuring that all the operations have a solid foundation. From there, it outlines a series of commands that prepare the environment:

FROM ubuntu:latest

# Set the noninteractive frontend (useful for automated builds)
ARG DEBIAN_FRONTEND=noninteractive

# A series of RUN commands to install necessary packages
RUN apt-get update && apt-get install -y wget sudo ...

# Python, pip, and essential tools are installed
RUN apt-get install python3 -y && apt-get install python3-pip -y ...

# Specific Python packages are installed, including pyATS[full]
RUN pip install "pyats[full]"

# Other utilities like dos2unix for script compatibility adjustments
RUN sudo apt-get install dos2unix -y

# Installation of LangChain and related packages
RUN pip install -U langchain-openai langchain-community ...

# Install Streamlit, the web framework
RUN pip install streamlit

In the full Dockerfile, each command is preceded by an echo statement that prints out the action being taken, which is incredibly helpful for debugging and understanding the build process as it happens.

The startup.sh script is a simple yet crucial component that dictates what happens when the Docker container starts:

cd streamlit_langchain_pyats
streamlit run chat_with_routing_table.py

It navigates into the directory containing the Streamlit app and starts the app using streamlit run. This is the command that actually gets our app up and running within the container.

Lastly, the docker-compose.yml file orchestrates the deployment of our Dockerized application. It defines the services, volumes, and networks to run our containerized application:

version: '3'
services:
  streamlit_langchain_pyats:
    image: [Docker Hub image]
    container_name: streamlit_langchain_pyats
    restart: always
    build:
      context: ./
      dockerfile: ./Dockerfile
    ports:
      - "8501:8501"

This docker-compose.yml file makes it incredibly easy to manage the application lifecycle, from starting and stopping to rebuilding the application. It binds the host’s port 8501 to the container’s port 8501, which is the default port for Streamlit applications.

Together, these files create a robust framework that ensures the Streamlit application — enhanced with the AI capabilities of LangChain and the powerful testing features of Cisco pyATS — is containerized, making deployment and scaling consistent and efficient.

The journey into the realm of automated testing begins with the creation of the testbed.yaml file. This YAML file is not just a configuration file; it’s the cornerstone of our automated testing strategy. It contains all the essential information about the devices in our network: hostnames, IP addresses, device types, and credentials. But why is it so crucial? The testbed.yaml file serves as the single source of truth for the pyATS framework to understand the network it will be interacting with. It’s the map that guides the automation tools to the right devices, ensuring that our scripts don’t get lost in the vast sea of the network topology.

Sample testbed.yaml


---
devices:
  cat8000v:
    alias: "Sandbox Router"
    type: "router"
    os: "iosxe"
    platform: Cat8000v
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.48
        port: 22
        arguments:
          connection_timeout: 360

With our testbed defined, we then turn our attention to the _job file. This is the conductor of our automation orchestra, the control file that orchestrates the entire testing process. It loads the testbed and the Python test script into the pyATS framework, setting the stage for the execution of our automated tests. It tells pyATS not only what devices to test but also how to test them, and in what order. This level of control is indispensable for running complex test sequences across a range of network devices.

Sample _job.py pyATS Job


import os
from genie.testbed import load

def main(runtime):

    # ----------------
    # Load the testbed
    # ----------------
    if not runtime.testbed:
        # If no testbed is provided, load the default one.
        # Load default location of Testbed
        testbedfile = os.path.join('testbed.yaml')
        testbed = load(testbedfile)
    else:
        # Use the one provided
        testbed = runtime.testbed

    # Find the location of the script in relation to the job file
    testscript = os.path.join(os.path.dirname(__file__), 'show_ip_route_langchain.py')

    # run script
    runtime.tasks.run(testscript=testscript, testbed=testbed)

Then comes the pièce de résistance, the Python test script — in our case show_ip_route_langchain.py, the script loaded by the job file. This script embodies the intelligence of our automated testing process. It’s where we’ve distilled our network expertise into a series of commands and parsers that interact with the Cisco IOS XE devices to retrieve the routing table information. But it doesn’t stop there; this script is designed to capture the output and elegantly transform it into a JSON structure. Why JSON, you ask? Because JSON is the lingua franca for data interchange, making the output from our devices readily available for any number of downstream applications or interfaces that might need to consume it. In doing so, we’re not just automating a task; we’re future-proofing it.

Excerpt from the pyATS script


    @aetest.test
    def get_raw_config(self):
        raw_json = self.device.parse("show ip route")

        self.parsed_json = {"info": raw_json}

    @aetest.test
    def create_file(self):
        with open('Show_IP_Route.json', 'w') as f:
            f.write(json.dumps(self.parsed_json, indent=4, sort_keys=True))

By focusing solely on pyATS in this phase, we lay a strong foundation for network automation. The testbed.yaml file ensures that our script knows where to go, the _job file gives it the instructions on what to do, and the show_ip_route_langchain.py script does the heavy lifting, turning raw data into structured knowledge. This approach streamlines our processes, making it possible to conduct comprehensive, repeatable, and reliable network testing at scale.


Enhancing AI Conversational Models with RAG and Network JSON: A Guide


In the ever-evolving field of AI, conversational models have come a long way. From simple rule-based systems to advanced neural networks, these models can now mimic human-like conversations with a remarkable degree of fluency. However, despite the leaps in generative capabilities, AI can sometimes stumble, providing answers that are nonsensical or “hallucinated” — a term used when AI produces information that isn’t grounded in reality. One way to mitigate this is by integrating Retrieval-Augmented Generation (RAG) into the AI pipeline, especially in conjunction with structured data sources like network JSON.

What is Retrieval-Augmented Generation (RAG)?


Retrieval-Augmented Generation is a cutting-edge technique in AI language processing that combines the best of two worlds: the generative power of models like GPT (Generative Pre-trained Transformer) and the precision of retrieval-based systems. Essentially, RAG enhances a language model’s responses by first consulting a database of information. The model retrieves relevant documents or data and then uses this context to inform its generated output.

The RAG Process


The process typically involves several key steps:

  • Retrieval: When the model receives a query, it searches through a database to find relevant information.
  • Augmentation: The retrieved information is then fed into the generative model as additional context.
  • Generation: Armed with this context, the model generates a response that’s not only fluent but also factually grounded in the retrieved data.
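The steps above can be sketched minimally in Python. This is a toy illustration under stated assumptions: retrieval here scores documents by word overlap rather than vector-embedding similarity, and the generation step is represented only by the augmented prompt, since the actual LLM call depends on your model provider.

```python
def retrieve(query, documents, k=1):
    # Retrieval: rank documents by word overlap with the query
    # (a toy stand-in for real embedding similarity search)
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def augment(query, context_docs):
    # Augmentation: feed the retrieved text to the generator as context
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

# Generation (not shown): send the augmented prompt to an LLM, which
# grounds its answer in the retrieved context.
```

The LangChain-based class shown later in this post performs the same three steps with real embeddings and a real LLM.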

The Role of Network JSON in RAG


Network JSON refers to structured data in the JSON (JavaScript Object Notation) format, often used in network communications. Integrating network JSON with RAG serves as a bridge between the generative model and the vast amounts of structured data available on networks. This integration can be critical for several reasons:
  • Data-Driven Responses: By pulling in network JSON data, the AI can ground its responses in real, up-to-date information, reducing the risk of “hallucinations.”
  • Enhanced Accuracy: Access to a wide array of structured data means the AI’s answers can be more accurate and informative.
  • Contextual Relevance: RAG can use network JSON to understand the context better, leading to more relevant and precise answers.

Why Use RAG with Network JSON?


Let’s explore why one might choose to use RAG in tandem with network JSON through a simplified example using Python code:

  • Source and Load: The AI model begins by sourcing data, which could be network JSON files containing information from various databases or the internet.
  • Transform: The data might undergo a transformation to make it suitable for the AI to process — for example, splitting a large document into manageable chunks.
  • Embed: Next, the system converts the transformed data into embeddings, which are numerical representations that encapsulate the semantic meaning of the text.
  • Store: These embeddings are then stored in a retrievable format.
  • Retrieve: When a new query arrives, the AI uses RAG to retrieve the most relevant embeddings to inform its response, thus ensuring that the answer is grounded in factual data.

By following these steps, the AI model can drastically improve the quality of the output, providing responses that are not only coherent but also factually correct and highly relevant to the user’s query.

# Imports assumed by this class (LangChain 0.1.x-era paths; adjust for your version)
import streamlit as st
from langchain_community.document_loaders import JSONLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

llm = ChatOpenAI()  # the LLM consumed by the retrieval chain

class ChatWithRoutingTable:
    def __init__(self):
        self.conversation_history = []
        self.load_text()
        self.split_into_chunks()
        self.store_in_chroma()
        self.setup_conversation_memory()
        self.setup_conversation_retrieval_chain()

    def load_text(self):
        self.loader = JSONLoader(
            file_path='Show_IP_Route.json',
            jq_schema=".info[]",
            text_content=False
        )
        self.pages = self.loader.load_and_split()

    def split_into_chunks(self):
        # Create a text splitter
        self.text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=1000,
            chunk_overlap=100,
            length_function=len,
        )
        self.docs = self.text_splitter.split_documents(self.pages)

    def store_in_chroma(self):
        embeddings = OpenAIEmbeddings()
        self.vectordb = Chroma.from_documents(self.docs, embedding=embeddings)
        self.vectordb.persist()

    def setup_conversation_memory(self):
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def setup_conversation_retrieval_chain(self):
        self.qa = ConversationalRetrievalChain.from_llm(llm, self.vectordb.as_retriever(search_kwargs={"k": 10}), memory=self.memory)

    def chat(self, question):
        # Format the user's prompt and add it to the conversation history
        user_prompt = f"User: {question}"
        self.conversation_history.append({"text": user_prompt, "sender": "user"})

        # Format the entire conversation history for context, excluding the current prompt
        conversation_context = self.format_conversation_history(include_current=False)

        # Concatenate the current question with conversation context
        combined_input = f"Context: {conversation_context}\nQuestion: {question}"

        # Generate a response using the ConversationalRetrievalChain
        response = self.qa.invoke(combined_input)

        # Extract the answer from the response
        answer = response.get('answer', 'No answer found.')

        # Format the AI's response
        ai_response = f"Cisco IOS XE: {answer}"
        self.conversation_history.append({"text": ai_response, "sender": "bot"})

        # Update the Streamlit session state by appending new history with both user prompt and AI response
        st.session_state['conversation_history'] += f"\n{user_prompt}\n{ai_response}"

        # Return the formatted AI response for immediate display
        return ai_response

Conclusion

The integration of RAG with network JSON is a powerful way to supercharge conversational AI. It leads to more accurate, reliable, and contextually aware interactions that users can trust. By leveraging the vast amounts of available structured data, AI models can step beyond the limitations of pure generation and towards a more informed and intelligent conversational experience.

Source: cisco.com

Tuesday 5 March 2024

Improved Area Monitoring with New Meraki Smart Cameras


Meraki’s smart cameras offer businesses an easy-to-deploy way to monitor their physical security, with the added benefit of being managed entirely on the cloud. Various Meraki cameras are deployed in the Cisco Store, including the outdoor smart cameras MV63 and MV93, which have long been useful in the Cisco Store. The MV63’s wide-angle, fixed-focused lens monitors the entrances and exits of the store, while the MV93’s 360° fish-eye lens offers panoramic wide area coverage, enhancing surveillance capabilities even in low lighting. Both cameras have helped keep the Cisco Store secure by using important features such as intelligent object detection using machine learning, motion search, and motion recap.

Now, these two cameras have indoor counterparts. Launched in February 2024, the Meraki MV13 and MV33 cameras will continue to improve security measures with even clearer footage, high performance, and stronger analytics. Meraki’s latest camera features, attribute search and presence analytics, will further improve these cameras’ capabilities.

Introducing the newest indoor smart cameras, Meraki MV13 and MV33


The new Meraki MV13 has a fixed lens and is ideal for monitoring indoor hallways and spaces. It is easy to deploy and offers excellent visuals, including 8.4 MP image quality and up to 4K video resolution.

Meraki MV13 smart camera

Meanwhile, the Meraki MV33 has a 360° fish-eye lens and 12.4 MP image quality, and can be used to monitor general indoor retail, hospitality, education, and healthcare spaces.

Meraki MV33 smart camera

Faster search, smarter insights


Meraki simultaneously launched two new features: attribute search and presence analytics.

The attribute search feature is an easier and faster way of parsing through video footage based on a person’s clothing color (both top and bottom) as well as a vehicle’s color and make. In the event there is a suspicious person or theft, this feature would allow security teams to quickly filter through footage by these attributes from up to four cameras, thus improving store security measures.

Meanwhile, the new presence analytics feature includes area occupancy analytics and line-crossing analytics. These will allow security teams to define areas to be analyzed and then accurately gain insights on people movement in those spaces.

Both the MV13 and MV33 will add to Meraki’s broader portfolio of cameras, giving organizations more flexibility and ways to monitor all areas of their buildings with ease, including in the Cisco Store. Attribute search has been incorporated into both the indoor Meraki MV13 and outdoor Meraki MV63, while presence analytics is now available on all second and third generation cameras. By creating tracking areas and easily being able to adjust those lines, security teams can customize what they monitor and then receive analytics that help them identify suspicious activity and gain insights into crowds.

Source: cisco.com

Saturday 2 March 2024

Showcasing Powerful Private 5G Use Cases at Cisco Live EMEA!


For those who joined us at Cisco Live! Amsterdam earlier this month, you might not have noticed that the venue featured a Private 5G Network established in partnership with NTT DATA.

Spanning two halls at RAI Amsterdam, or roughly 26,000 square meters, the seamless integration of this Private 5G network augmented the existing Wi-Fi network, pushing the boundaries of traditional connectivity, and creating a smart venue—a first for Cisco Live!

Built with the support of Intel, the Cisco Country Digital Acceleration team, and RAI Amsterdam—a conference and exhibition space that hosts millions of visitors annually—NTT DATA’s Private 5G network included four radios supporting mission critical and latency-sensitive applications. RAI also had over one hundred Wi-Fi access points supporting the user experience in the same location.

The entire ecosystem performed flawlessly. During busy hours with a full load on the network, Private 5G latency was a speedy 21.9 milliseconds, and Wi-Fi latency was 86 milliseconds. It was incredibly exciting to be part of the future of multi-access connectivity—wired and wireless, Wi-Fi, 4G and 5G, all brought together to enable a seamless digital experience.

The NTT DATA Private 5G-powered communication and streaming services were featured at Cisco Live! Amsterdam as part of the NTT DATA’s Smart Venue Solution, and included the following use cases:

  • Mobile broadcasting – wireless video crews roamed the exhibition halls with low latency and high bandwidth, delivering a streamlined multi-camera environment.
  • Visitor traffic management – Cisco’s Meraki cameras and NTT DATA’s Smart Management Platform tracked visitor movements and congestion, enabling operations and security teams to communicate real-time, data-driven crowd control decisions.
  • Emergency Response Vehicle (ERV) – Pre-packaged, flexible FWA Private 5G connectivity was set up and used to mimic rural cellular/satellite backhaul.
  • Premium Booth Connectivity – When the booth is already built and the floor is laid, network cable cannot be raised. P5G provided booth broadband for the exhibitor.
  • NTT Coffee Booth – Cisco’s Meraki cameras and the NTT DATA’s Smart Management Platform monitored and managed queues and seating to optimize the on-site experience.
  • Enhanced exhibitor experiences – Cisco’s Meraki cameras embedded throughout the venue and in booths captured anonymized data including the number of visitors and time spent in the booth to use for planning and to create better customer experiences.
  • Out of Band management – The Private 5G network, backhaul connectivity, and network operations center were integrated to provide the Cisco Live! events team with faster coordination and emergency response capabilities.
  • Venue Safety – Machine vision detected whether individuals were wearing Personal Protection Equipment (PPE) through a real-time alert system, helping to ensure safety throughout the convention center’s facilities.

Figure 1. NTT DATA Smart Venue Dashboard

Beyond the experience for event attendees, RAI benefited from the as-a-Service (aaS) model, which made it easy for them to “turn up” and support large amounts of data and real-time insights on the fly, seamlessly augmenting onsite experiences. Turning up 5G capabilities on an ad hoc basis is the ideal solution for conference centers that host large numbers of exhibitors and visitors.

Outfitting RAI with the ability to support advanced connectivity experiences was just the first step; our goal at Cisco is to provide our Service Provider customers with the seamless and flexible technology they need to create business outcomes that deliver on the bottom line.

According to Shahid Ahmed, Group Executive Vice President of New Ventures and Innovation at NTT DATA: “Private 5G and advanced analytics play a pivotal role in accelerating digital transformation across industries and serve as a driving force to create smarter cities and venues. We are thrilled to partner with Cisco on this unique project. Private 5G excels in a complex environment like this one, and together with our Smart Management Platform will be the catalyst that accelerates the digital transformation journey for RAI and the City of Amsterdam.”

And the next steps at RAI? Cisco and NTT DATA plan to extend 5G coverage following Cisco Live to the venue’s vast 112,000 square meter footprint.

Source: cisco.com

Thursday 29 February 2024

Evolution to 5G-Advanced and Beyond: A Blueprint for Mobile Transport


The rapid rollout of 5G technology has marked a historic milestone in the evolution of mobile connectivity. According to research firm Omdia, 5G subscriptions surged from 1.4 billion in the middle of 2023 to a projected 8 billion by 2028, representing a compound annual growth rate (CAGR) of roughly 40%. Despite this impressive uptake, Omdia’s data also reveals that overall mobile revenue is growing at a modest rate of about 2%, and average revenue per user (ARPU) is experiencing a decline.

Wireless trends and opportunities


Communication service providers (CSPs) are responding by scaling their 5G networks to accommodate the soaring bandwidth demands, foster revenue growth, reduce total cost of ownership (TCO), and enhance network efficiency and agility.

The industry has seen significant investments from CSPs, with tens of billions of dollars spent on 5G spectrum and more on radio access network (RAN) infrastructure to support 5G. CSPs’ current focus is monetizing 5G for both consumer and enterprise services (see Figure 1).

Figure 1. Opportunities and Trends

On the consumer front, fixed wireless access (FWA) has emerged as a leading 5G application. For instance, in 2022, FWA accounted for 90% of net broadband additions in the U.S., surpassing traditional cable and DSL. However, this shift brings its own complexities, including the need for enhanced xHaul transport bandwidth, increased data center resources, and greater demand for spectrum resources.

For businesses, private wireless networks represent a crucial area of growth. These networks are particularly relevant in the manufacturing, transportation, logistics, energy, and mining sectors. The advent of 5G-Advanced technologies could help expand these opportunities further. Network slicing, introduced by the 3rd Generation Partnership Project (3GPP), will be pivotal in deploying private 5G networks and other differentiated services.

Partnerships are becoming increasingly important in network monetization strategies, especially with hyperscalers. Collaborations with satellite operators are also gaining traction, as fresh investment and dramatically reduced launch costs enable the deployment of low Earth orbit (LEO) constellations and the transition of satellites from proprietary silos toward integration with terrestrial and 5G networks. Driven by the need for comprehensive reachability and the development of standardized connectivity, as outlined in 3GPP Release 17, this collaboration allows mobile and fixed operators to expand coverage to remote locations and satellite operators to tap into new customer bases.

Operators are also focusing on technical advancements to monetize their 5G networks effectively. This includes transitioning from non-standalone (NSA) to standalone (SA) mobile cores, which is essential for enabling advanced 5G capabilities. 5G SA cores are required to launch many capabilities supporting ultra-reliable low latency communications (URLLC), massive machine-type communications (mMTC), and network slicing.

Preparations are underway for 5G-Advanced (3GPP Release 18), with features like non-terrestrial networks (NTN), extended reality (XR), and advanced MIMO. The investment will be fundamental for advancing to 6G.

Another critical development is RAN decomposition and virtualization, which involves breaking down the RAN into individual components and running functions on commercial off-the-shelf hardware. Benefits include better utilization, greater scalability and flexibility, and cost reductions. Implementing decomposition and virtualization using O-RAN promises these benefits while breaking RAN vendor lock-in due to standardized, open interfaces.

Edge infrastructure investment is increasing to support new enterprise applications, integral to 5G SA and 5G-Advanced, by moving processing closer to end users, thereby reducing latency and serving as a critical driver for cloud-native technology adoption. This approach requires flexible deployment of network functions either on-premises or in the cloud, leading to a decentralization of network traffic that was once concentrated. This evolving trend has become more pronounced with increasing traffic demands, blending network roles and boundaries, and creating a versatile network “edge” within the CSP’s framework.

Operational savings, including cost reduction and sustainability initiatives, are top priorities for CSPs to meet budgetary and carbon footprint goals.

Preparing your mobile transport for 5G Advanced and beyond


Mobile packet transport is critical in these initiatives and network transformation, leading to rapid changes in CSP transport networks. Traditionally, these networks relied on dedicated circuits and data communication appliances. However, modern transport is shifting toward a logical construct using any accessible hardware and connectivity services. Successful network architecture now hinges on the ability to seamlessly integrate a variety of appliances, circuits, and underlying networks into a unified, feature-rich transport network.

The Cisco converged, cloud-ready transport network architecture is a comprehensive solution designed to meet the evolving demands of 5G-Advanced and beyond. The architecture is particularly important for operators to navigate the complexities of 5G deployment, including the need for greater flexibility, scalability, and efficiency. Here’s a detailed look at its essential components:

  • Converged infrastructure: Cisco’s approach involves a unified infrastructure seamlessly integrating various network services across wireline and wireless domains. This convergence is essential for supporting diverse customer types and services, from consumer-focused mobile broadband to enterprise-level solutions. The infrastructure is designed to handle all kinds of access technologies on a single network platform, including 4G, 5G, FWA, and the emerging direct satellite-to-device connectivity outlined in 3GPP’s NTN standards.
  • Programmable transport and network slicing services: At the heart of Cisco’s architecture are advanced transport technologies like Border Gateway Protocol (BGP)-based VPNs and segment routing (SR), crucial for a unified, packet-switched 5G transport. These technologies enable a flexible services layer and an efficient underlay infrastructure. This layering provides essential network services like quality of service (QoS), fast route convergence, and traffic-engineered forwarding. Network slicing is also a key feature, allowing operators to offer customized, intent-based services to different user segments. This capability is vital for monetizing 5G by enabling diverse and innovative use cases.
  • Cloud-ready infrastructure: Recognizing the shift toward cloud-native applications and services, Cisco’s architecture is designed to support a variety of cloud deployments, including public, private, and hybrid models. This flexibility ensures that the transport network can adapt to different cloud environments, whether workloads are on-premises or colocated. Virtual routers in the public cloud play a significant role here, providing required IP networking functions (including BGP-VPN, SR, and QoS).
  • Secure and simplified operations model: Security and operational simplicity with service assurance are essential components in Cisco’s architecture. The network is designed for easy programmability and automation, which is essential for operational efficiency and cost reductions. This includes extensive telemetry and open APIs for easy integration with orchestration tools and controllers. Additionally, AI and machine learning technologies can potentially be used for real-time network visibility and actionable insights for optimizing user experience across both wireline and wireless networks.

The architecture is about current 5G capabilities and future readiness. Preparations for 5G-Advanced and the eventual transition to 6G are integral. The architecture’s design ensures operators can evolve their networks without major overhauls, thereby protecting their investment.

Cisco’s converged, cloud-ready transport network architecture offers a blend of technological innovation, operational efficiency, and flexibility, enabling operators to navigate the challenges of 5G deployment while preparing for the subsequent phases of network evolution.

Source: cisco.com

Tuesday 27 February 2024

The Real Deal About ZTNA and Zero Trust Access


ZTNA hasn’t delivered on the full promise of zero trust


Zero Trust has been all the rage for several years: “never trust, always verify,” assuming every attempt to access the network or an application could be a threat. Over that time, zero trust network access (ZTNA) has become the common term to describe this type of approach for securing remote users as they access private applications. While I applaud the progress that has been made, major challenges remain in the way vendors have addressed the problem and organizations have implemented solutions. To start with, the name itself is fundamentally flawed. Zero trust network access is based on the logical security philosophy of least privilege. Thus, the objective is to verify a set of identity-, posture-, and context-related elements and then provide the appropriate access to the specific application or resource required…not network-level access.

Most classic ZTNA solutions on the market today can’t gracefully provide this level of granular control across the full spectrum of private applications. As a result, organizations have to maintain multiple remote access solutions and, in most scenarios, they still grant access at a much broader network or network-segment level. I believe it’s time to drop the “network” from ZTNA and focus on the original goal of least-privilege, zero trust access (ZTA).

Classic ZTNA drawbacks


As with much in life, things are easier said than done, and that applies to ZTNA and secure remote access. When I talk to IT executives about their current ZTNA deployments or planned initiatives, a set of concerns and limitations comes up on a regular basis. As a group, they are looking for a cloud or hybrid solution that provides a better user experience, is easier for the IT team to deploy and maintain, and provides a flexible and granular level of security…but many are falling short.

With that in mind, I pulled together a list of considerations to help people assess where they are and where they want to be in this technology space. If you have deployed some form of ZTNA or are evaluating solutions in this area, ask yourself these questions to see if you can, or will be able to, meet the promise of a true zero trust remote access environment.

  • Is there a method to keep multiple, individual user to app sessions from piggybacking onto one tunnel and thus increasing the potential of a significant security breach?
  • Does the reverse proxy utilize next-generation protocols with the ability to support per-connection, per-application, and per-device tunnels to ensure no direct resource access?
  • How do you completely obfuscate your internal resources so only those allowed to see them can do so?
  • When do posture and authentication checks take place? Only at initial connection or continuously on a per session basis with credentials specific to a particular user without risk of sharing?
  • Can you obtain awareness into user activity by fully auditing sessions from the user device to the applications without being hindered by proprietary infrastructure methods?
  • If you use Certificate Authorities that issue certs and hardware-bound private keys with multi-year validity, what can be done to shrink this timescale and minimize risk exposure?

While the security and architecture elements mentioned above are important, they don’t represent the complete picture when developing a holistic strategy for remote, private application access. There are many examples of strong security processes that failed because they were too cumbersome for users or a nightmare for the IT team to deploy and maintain. Any viable ZTA solution must streamline the user experience and simplify the configuration and enforcement process for the IT team. Security is ‘Job #1’, but overworked employees with a high volume of complex security tools are more likely to make provisioning and configuration mistakes, get overwhelmed with disconnected alerts, and miss legitimate threats. Remote employees frustrated with slow multi-step access processes will look for shortcuts and create additional risk for the organization.

To ensure success, it’s important to assess whether your planned or existing private access process meets the usability, manageability and flexibility requirements listed below.

  • The solution has a unified console enabling configuration, visibility and management from one central dashboard.
  • Remote and hybrid workers can securely access every type of application, regardless of port or protocol, including those that are session-initiated, peer-to-peer or multichannel in design.
  • A single agent enables all private and internet access functions including digital experience monitoring functions.
  • The solution eliminates the need for on-premises VPN infrastructure and management while delivering secure access to all private applications.
  • The login process is user friendly with a frictionless, transparent method across multiple application types.
  • The ability to handle both traditional HTTP/2 traffic and newer, faster, and more secure HTTP/3 methods with MASQUE and QUIC.

Cisco Secure Access: A modern approach to zero trust access


Secure Access is Cisco’s full-function Security Service Edge (SSE) solution and it goes far beyond traditional methods in multiple ways. With respect to resource access, our cloud-delivered platform overcomes the limitations of legacy ZTNA. Secure Access supports every factor listed in the above checklists and much more, to provide a unique level of Zero Trust Access (ZTA). Secure Access makes online activity better for users, easier for IT, and safer for everyone.


Here are just a few examples:

  • To protect your hybrid workforce, our ZTA architectural design has what we call ‘proxy connections’ that connect one user to one application: no more. If the user has access to several apps at once, each app connection has its own ‘private tunnel’. The result is true network isolation, as the connections are completely independent. This eliminates resource discovery and potential lateral movement by rogue users.
  • We implement per session user ID verification, authentication and rich device compliance posture checks with contextual insights considered.
  • Cisco Secure Access delivers a broad set of converged, cloud-based security services. Unlike alternatives, our approach overcomes IT complexity through a unified console with every function, including ZTA, managed from one interface. A single agent simplifies deployment with reduced device overhead. One policy engine further eases implementation as once a policy is written, it can be efficiently used across all appropriate security modules.
  • Hybrid workers get a frictionless process: once authenticated, they go straight to any desired application with just one click. This capability will transparently and automatically connect them with least-privilege concepts, preconfigured security policies, and adaptable enforcement measures that the administrator controls.
  • Connections are quicker and provide high throughput. Highly repetitive authentication steps are significantly reduced.

With this type of comprehensive approach, IT and security practitioners can truly modernize their remote access. Security is greatly enhanced, IT operations work is dramatically simplified, and hybrid worker satisfaction and productivity are maximized.

Source: cisco.com