Showing posts with label cloud security. Show all posts

Thursday, 4 April 2024

Balancing agility and predictability to achieve major engineering breakthroughs


I shared the progress we’re making toward building the Cisco Security Cloud, an open, integrated security platform capable of tackling the rigors of securing highly distributed, multicloud environments. That was an honest assessment of what we have achieved, celebrating the significant accomplishments that move the needle forward on our vision. Here, I want to share how we approach research, development, and execution, and the core principles that drive innovation at scale.

In any large organization with a diverse, enterprise-grade portfolio varying in adoption levels, solution longevity, and product-category maturity, you must continuously look for ways to drive efficiency and excellence. We are fortunate to have loyal customers who trust that with Cisco they can both secure their organization and manage its risk. Our focus has been to meet customers where they are, and that involves delivering security solutions in various form factors and platforms for a hybrid, multicloud world.

To do this, we are evolving our engineering organization to deliver on ambitious goals through higher levels of agility. Agility requires the courage to break down organizational silos and embrace the notion of failing fast and learning even faster from those failures. But engineering organizations like ours also have their “day jobs,” with the reality that constantly changing customer and business environments can wreak havoc on engineering roadmaps. This leads to the inevitable, difficult decision of whether to focus on the backlog of customer-requested features or to deliver new, innovative features that move the industry forward.

Another way to say this is that as much as engineering organizations strive for agility, we have to be cognizant of how much our customers crave predictability in their security operations and in feature delivery from vendors like Cisco. Let’s look at this through the lens of a customer-impacting factor that can make security operations less predictable: security incidents.


These numbers are meaningful because cybersecurity is a critical part of any business and of business resilience plans, which can involve public disclosures. Cybersecurity is also a critical operations function and can cause major disruptions for the entire business when it fails. That is the high-stakes nature of the balancing act in front of us: one end of the see-saw is our desire to achieve agility; the other is our responsibility to keep our customers’ security operations predictable, operations that are ever more critical to the viability of their businesses.

A pragmatic approach to balancing agility and predictability


Leading a large engineering organization in charge of one of the broadest security product portfolios has challenged me to think about this critically. There are many ways to balance agility and predictability, but we’ve been able to distill this down to a pragmatic approach that I believe works best for us.

Careful short and long-term planning.

This is a critical step that provides the framework for building an engineering org that is both agile and predictable. It starts with iterative planning that allows for reviewing and adjusting plans based on market feedback and changing conditions. This includes meeting shorter-term commitments and regular updates to maintain customer confidence while allowing for adjustments. We also use agile retrospectives and adaptive planning to ensure forward progress and our ability to incrementally improve.

Resource allocation and ruthless prioritization play a key role. We achieve this through segmentation and portfolio management, segmenting a product portfolio into different categories based on levels of predictability and innovation. We exercise scenario planning for risk mitigation and management, developing scenarios that explore different market conditions with strategies for responding to ensure we make informed decisions in uncertain conditions. This helps us identify and mitigate risks that may impact our agility and predictability, account for potential disruptions, prioritize appropriately, and manage expectations.

Clear and consistent communication.

One of the most important aspects of this is the need for clear and consistent communication. As a leader, it is my responsibility to clearly articulate the benefits of agility and explain the steps we need to take to ensure the predictability and delivery needed for stable operations. My philosophy is that shared outcomes involve “shared code,” which leads to a platform-centric development approach and an inner-source execution model that accelerate feature development and delivery velocity.

An org culture willing to adapt.

Even the best of plans will fail without capable people who can and are willing to execute on them. For us, this involves an ongoing evolution across our large, highly distributed engineering organization to foster a culture that values both agility and predictability, aligned with one of Cisco’s core values: accountability. A few of the ways we’ve seen success are by:
  • Encouraging cross-functional collaboration and open dialogue about the challenges and benefits of both approaches.
  • Ensuring leadership is aligned with the organization’s approach to balancing agility and predictability.
  • Creating opportunities, like Hackathons, to fail fast and learn even faster, explore the art of the possible, and to dive into technology to solve unexpected challenges.
  • Ensuring consistent messaging and support for team members.

Effective processes, not bureaucracies.

Processes often get a bad rap because they are associated with bureaucracies that can hinder speed and progress. But processes are critical to making sure we execute our plans in the intended ways, with the ability to measure progress and adapt as necessary. In our goal to balance agility with predictability, we have implemented some specific aspects of processes that work best for us.

  • We blend agile methodologies with more traditional project management approaches (e.g., agile for new features, waterfall for foundational infrastructure). Our processes allow us to take a “dual plane” approach to innovation with one plane focusing on predictable, stable delivery while the other explores innovative, experimental initiatives.
  • As the aphorism goes, “you can’t manage what you can’t measure”. We have implemented an outcome-focused approach toward metrics that shifts the focus from output (deliverables) to outcomes (business value). This allows us to demonstrate how agility enhances the ability to deliver value quickly and adapt to market changes, solving some of the toughest challenges for our customers.
  • We take a customer-centric approach in all things we do. This means we use customer feedback and market insights to prioritize and guide innovation efforts. This includes dedicated customer advisory boards, and programs built around the voice of our customers like NPS surveys. This helps ensure that agility is directed toward meeting customer needs and not innovating for innovation’s sake.

Our processes involve adaptive governance and continuous learning that accommodates both agility and predictability. This includes providing guidelines for making decisions in dynamic situations, continuously assessing what’s working and what’s not, and encouraging a learning mindset and adjusting strategies accordingly.

Innovating to win


Taking a customer-centric approach to all we do, we’ll continue focusing on the breakthrough successes that showcase our ability to be both agile and predictable, meeting market demands and delivering customer outcomes. One example is how we, as the official cybersecurity partner of the NFL, helped secure this year’s Super Bowl, the most-watched telecast in the game’s history. We also continue our incredible work with AI and generative AI, such as the Cisco AI Assistant for Security to simplify policy, and AI-enabled security operations, innovating in both AI for security and security for AI. When we strike the balance of agility and predictability, we innovate to win.

Source: cisco.com

Tuesday, 20 December 2022

Cisco Secure Cloud Analytics – What’s New

Nowadays, “cybersecurity” is the buzzword du jour, infiltrating every organization, invited or not. Furthermore, this is the case around the world, where an increasing proportion of all services now have an online presence, prompting businesses to reconsider the security of their systems. This, however, is not news to Cisco, as we anticipated it and were prepared to serve and assist clients worldwide.

Secure Cloud Analytics, part of the Cisco Threat Detection and Response (TD&R) portfolio, is an industry-leading tool for tackling core Network Detection and Response (NDR) use cases. These workflows focus primarily on threat detection and on how security teams can identify the most critical issues, supporting hunting and forensic investigations to improve their mean time to respond.

Over the last year, the product team worked tirelessly to strengthen the NDR offering. New telemetry sources, more advanced detections, and observations add context around essential infrastructure, alongside usability and interoperability improvements. Additionally, the long-awaited Cisco Telemetry Broker is now available, providing a richer SecOps experience across the product.

MITRE ATT&CK framework alerting capabilities


As part of our innovation story on alerting capabilities, Secure Cloud Analytics now features new detections tied to the MITRE ATT&CK framework, such as Worm Propagation, Suspicious User Agent, and Azure OAuth Bypass.

Additionally, various new roles and observations were added to Secure Cloud Analytics to improve user alerting; roles and observations are foundational pieces of our detections. Alerts now include a direct link to AWS assets and their VPCs, as well as direct access to Azure Security Groups, enabling further investigation through simplified workflows. In addition, the public cloud providers are now included in coverage reports that provide a gap analysis to determine which accounts are covered. Alert Details offers new device information, such as host names, subnets, and role metrics that emphasize detection techniques. To better configure alerts, we are adding telemetry to give contextual reference on their priority. Furthermore, the ingest process has grown more robust thanks to data from the Talos intelligence feed and ISE.


NDR: A Force Multiplier to Cisco XDR Strategy


The highly anticipated SecureX integration is now available in a single click, with no API credentials required and smooth interaction between the two platforms. Most importantly, Secure Cloud Analytics alerts may now be configured to automatically publish as incidents to the SecureX Incident Manager. The Talos Intelligence Watchlist Hits Alert is on by default due to its prominence among the many alert types.


Among other enhancements to graphs and visualizations, the Encrypted Traffic widget now allows an hourly breakdown of data. Meanwhile, the Device Report contains traffic data for a specific timestamp, which may be downloaded as a CSV. Furthermore, the Event Viewer now displays bi-directional session traffic to give even more context to Secure Cloud Analytics flows, along with additional columns to help with telemetry log comprehension: Cloud Account, Cloud Region, Cloud VPC, Sensor, and Exporter.

New Sensor Data to Quickly Detect and Hunt Threats


On-premises sensors now provide additional telemetry on the overview page and a dedicated page where users can look further into the telemetry flowing through them in Sensor Health. To optimize the Secure Cloud Analytics deployment and improve the user experience, sensors may now be deleted from the interface.


Regarding telemetry, Cisco Telemetry Broker can now serve as a sensor in Secure Cloud Analytics, so users can identify and respond to threats faster with the additional context it sends. In addition, support for telemetry types beyond IPFIX and NetFlow is coming soon.

As we can see from the vast number of new additions to Secure Cloud Analytics, the product team has been working hard to understand the latest market trends, listen to customers’ requests, and build one of the finest SaaS products in the NDR industry segment. These efforts strongly underline how Secure Cloud Analytics can solve some of the most important challenges in the NDR space, around visibility, fidelity of alerts, and deployment complexity, by providing a cloud-hosted platform that offers insights into on-premises and cloud environments simultaneously from the same dashboard.

Source: cisco.com

Thursday, 10 November 2022

Cisco Secure Firewall on AWS: Build resilience at scale with stateful firewall clustering


Organizations embrace the public cloud for the agility, scalability, and reliability it offers when running applications. But just as organizations need these capabilities to ensure their applications operate where and as needed, they also require that their security do the same. Organizations may introduce multiple individual firewalls into their AWS infrastructure to produce this outcome. In theory, this may be a good decision, but in practice it can lead to asymmetric routing issues. Complex SNAT configuration can mitigate asymmetric routing, but this isn’t practical for sustaining public cloud operations. Organizations are looking out for their long-term cloud strategies by ruling out SNAT and are calling for a more reliable and scalable way to connect their applications and security for always-on protection.

To solve these challenges, Cisco created stateful firewall clustering with Secure Firewall in AWS.

Cisco Secure Firewall clustering overview


Firewall clustering for Secure Firewall Threat Defense Virtual provides a highly resilient and reliable architecture for securing your AWS cloud environment. This capability lets you group multiple Secure Firewall Threat Defense Virtual appliances together as a single logical device, known as a “cluster.”

A cluster provides all the conveniences of a single device (management and integration into a network) while taking advantage of the increased throughput and redundancy you would expect from deploying multiple devices individually. Cisco uses the Cluster Control Link (CCL) for forwarding asymmetric traffic across devices in the cluster. Clusters can have up to 16 members, and the CCL runs over VXLAN.

In this case, clustering has the following roles:

Figure 1: Cisco Secure Firewall Clustering Overview

The above diagram shows the traffic flow between the client and the server with the firewall cluster inserted in the network. The following defines the clustering roles and how packets flow at each step.

Clustering roles and responsibilities 


Owner: The Owner is the node in the cluster that initially receives the connection.

◉ The Owner maintains the TCP state and processes the packets. 
◉ A connection has only one Owner. 
◉ If the original Owner fails, the new node receives the packets, and the Director chooses a new Owner from the available nodes in the cluster.

Backup Owner: The node that stores TCP/UDP state information received from the Owner so that the connection can be seamlessly transferred to a new owner in case of failure.

Director: The Director is the node in the cluster that handles owner lookup requests from the Forwarder(s). 

◉ When the Owner receives a new connection, it chooses a Director based on a hash of the source/destination IP address and ports. The Owner then sends a message to the Director to register the new connection. 
◉ If packets arrive at any node other than the Owner, the node queries the Director. The Director then identifies the Owner node so that the Forwarder can redirect packets to the correct destination. 
◉ A connection has only one Director. 
◉ If a Director fails, the Owner chooses a new Director.

Forwarder: The Forwarder is a node in the cluster that redirects packets to the Owner. 

◉ If a Forwarder receives a packet for a connection it does not own, it queries the Director to seek out the Owner.  
◉ Once the Owner is defined, the Forwarder establishes a flow, and redirects any future packets it receives for this connection to the defined Owner.

Fragment Owner: For fragmented packets, cluster nodes that receive a fragment determine a Fragment Owner using a hash of the fragment source IP address, destination IP address, and the packet ID. All fragments are then redirected to the Fragment Owner over Cluster Control Link.  
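The Director and Fragment Owner selections above both rely on deterministic flow hashing, so every node independently computes the same answer without consulting its peers. A minimal sketch of the idea, with hypothetical node names and hash inputs (this is an illustration, not Cisco's actual implementation):

```python
import hashlib

def _hash_to_node(key: str, nodes: list) -> str:
    # Deterministic hash: every cluster node computes the same result
    # for the same key, so no coordination is needed.
    digest = hashlib.sha256(key.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

def choose_director(src_ip, src_port, dst_ip, dst_port, nodes):
    # The Director is derived from the connection's source/destination
    # IP addresses and ports, per the description above.
    return _hash_to_node(f"{src_ip}:{src_port}->{dst_ip}:{dst_port}", nodes)

def choose_fragment_owner(src_ip, dst_ip, packet_id, nodes):
    # Fragments lack L4 ports, so only the IPs and the IP packet ID
    # are hashed to pick a Fragment Owner.
    return _hash_to_node(f"{src_ip}->{dst_ip}#{packet_id}", nodes)

nodes = ["node-1", "node-2", "node-3"]
d1 = choose_director("10.0.0.5", 40000, "192.168.1.9", 443, nodes)
d2 = choose_director("10.0.0.5", 40000, "192.168.1.9", 443, nodes)
assert d1 == d2  # all nodes agree on the same Director
```

Because the hash is a pure function of packet fields, a Forwarder receiving a stray packet can locate the Director immediately, which is what keeps asymmetric flows workable across the cluster.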

Integration with AWS Gateway Load Balancer (GWLB)


Cisco added support for the AWS Gateway Load Balancer (Figure 2). This feature enables organizations to scale their firewall presence as needed to meet demand.

Figure 2: Cisco Secure Firewall and AWS Gateway Load Balancer integration 

Cisco Secure Firewall clustering in AWS


Building off the previous figure, organizations can take advantage of the AWS Gateway Load Balancer with Secure Firewall’s clustering capability to evenly distribute traffic at the Secure Firewall cluster. This enables organizations to maximize the benefits of clustering capabilities including increased throughput and redundancy. Figure 3 shows how positioning a Secure Firewall cluster behind the AWS Gateway Load Balancer creates a resilient architecture. Let’s take a closer look at what is going on in the diagram.

Figure 3: Cisco Secure Firewall clustering in AWS

Figure 3 shows an Internet user looking to access a workload. Before the user can access the workload, the user’s traffic is routed to Firewall Node 2 for inspection. The traffic flow for this example includes:

User -> IGW -> GWLBe -> GWLB -> Secure Firewall (2) -> GWLB -> GWLBe -> Workload

In the event of failure, the AWS Gateway Load Balancer cuts off existing connections to the failed node, making the above solution non-stateful.

Recently, AWS announced a new feature for their load balancers known as Target Failover for Existing Flows. This feature enables forwarding of existing connections to another target in the event of failure.

Cisco is an early adopter of this feature and has combined Target Failover for Existing Flows with Secure Firewall clustering capabilities to create the industry’s first stateful cluster in AWS.

Figure 4: Cisco Secure Firewall clustering rehashing existing flow to a new node

Figure 4 shows a firewall failure event and how the AWS Gateway Load Balancer uses the Target Failover for Existing Flows feature to switch the traffic flow from Firewall Node 2 to Firewall Node 3. The traffic flow for this example includes:

User -> IGW -> GWLBe -> GWLB -> Secure Firewall (3) -> GWLB -> GWLBe -> Workload

Source: cisco.com

Friday, 30 July 2021

Full Stack Observability Driving Customer Experience in a Multi-Cloud Environment


Application is the Business & Level of Digitalization is the Brand

In our ever-changing world, where the application represents the business itself and the level of digitalization it provides is directly related to the perception of the brand, enterprises must stand differentiated by providing an exceptional user experience for customers and employees alike. When the pandemic hit, customer and employee expectations were initially tempered by empathy, with disruptions to services expected. But 18 months on, everyone expects the same level of service they got pre-pandemic, irrespective of where people are working from. This places a higher level of expectation on infrastructure and teams alike to provide an exceptional digital experience.

It is evident that application services are becoming increasingly distributed and reimagining applications through customer priorities is a key differentiator going ahead. A recent study on Global Cloud adoption by Frost & Sullivan has indicated a 70% jump in multi-cloud adoption in the Financial Services space. This is driven by a renewed focus towards innovation, along with the digitalization and streamlining of the businesses. On average, financial firms have placed more than half of their workloads in the cloud (public or private hosted) and that number is expected to grow faster than other industries over the next five years.

Digital Experience Visibility

In today’s world of applications moving to the edge, applications moving to the cloud, and data everywhere, we really need to be able to manage IT irrespective of where we work and of where applications are hosted or consumed from. It’s relatively easy to write code for a new application; the complexity we are solving for is deploying that code in today’s heterogeneous environment, like that of a bank. The traditional networks we currently use to deploy into data centers predate cloud, SASE, colocation, IoT, and 5G, and certainly predate COVID and working from home.


In today’s world, cloud is the new data center and the internet is the new WAN, removing the concept of an enterprise perimeter and making identity the new perimeter. To provide that seamless experience, IT needs to not just monitor application performance, but also enable application resource monitoring and application dependency monitoring, holistically. This should enable the organization to figure out the business impact of an issue, be that a drop in conversion rate or a degradation in a service, and decide proactively, if not predictively, what resources to allocate toward fixing that problem and curbing the business impact.

Observability rather than Visibility


In today’s world, operations are complex, with various teams relying on different tools to troubleshoot and support their respective domains. This visibility across individual silos still leaves the organization miles away, left to collate information and insights via war rooms before it can identify the root cause of a problem. What is required is the ability to troubleshoot more holistically, via a data-driven operating model.


Thus, it is important to use the network as a central nervous system and utilize Full Stack Observability to look at visibility and telemetry from every networking domain, every cloud, the application, the code, and everything in between, then use AI/ML to consume the various data elements in real time, figure out dynamically how to troubleshoot, and get to the root cause of a problem faster and more accurately.

An FSO platform’s end goal is a single pane of glass that is able to:

◉ Ingest anything: any telemetry, from any third party, from any domain, into a learning engine with a flexible metadata model, so that it knows what kind of data it is ingesting

◉ Visualize anything: end to end, in a unified, connected data format

◉ Query anything: providing cross-domain analytics that connect the dots, and closed-loop analytics for faster, pinpointed root-cause analysis before the user experience is impacted, which is critical
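The “ingest anything” goal implies a telemetry store whose schema is not fixed up front. A minimal sketch of such a flexible metadata model, with all class, field, and source names purely illustrative (this is not an actual FSO API):

```python
from dataclasses import dataclass, field

@dataclass
class TelemetryRecord:
    source: str                    # which domain/third party emitted it
    kind: str                      # e.g. "metric", "trace", "flow"
    payload: dict = field(default_factory=dict)

class TelemetryStore:
    def __init__(self):
        self.records = []

    def ingest(self, source, kind, **payload):
        # Flexible metadata model: payload keys are not fixed up front,
        # so any telemetry shape can be accepted.
        self.records.append(TelemetryRecord(source, kind, payload))

    def query(self, **filters):
        # Cross-domain query: match on fixed fields or payload metadata.
        def matches(r):
            return all(
                getattr(r, k, r.payload.get(k)) == v
                for k, v in filters.items()
            )
        return [r for r in self.records if matches(r)]

store = TelemetryStore()
store.ingest("app-monitor", "metric", service="checkout", latency_ms=480)
store.ingest("network-probe", "path", service="checkout", hop_count=14)
slow_checkout = store.query(service="checkout")  # records from both domains
```

Because both records share the `service` key, a single query correlates application and network telemetry for the same business service, which is the “connecting the dots” step described above.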

AI to tackle Experience Degradation


AI within an FSO platform is used not just to identify the dependencies across the various stacks of an application, but also to correlate the data, address issues, and right size the resources as they relate to performance and costs across the full life cycle of the application.

It is all about utilizing the Visibility Insights Architecture across a hybrid environment, balancing performance and costs through real-time analytics powered by AI. The outcome to solve for is experience degradation, which cannot be solved individually in each domain (application, network, security, infrastructure) but only by intelligently taking a holistic approach, with the ability to drill down as required.

Cisco is ideally positioned to provide this FSO platform with AppDynamics™ and Secure App at the core, combined with ThousandEyes™ and Intersight™ Workload Optimizer, providing a true end-to-end view for analyzing, and in turn curbing, the business impact of any issue in real time. This enables the infrastructure operators and the application operators of the enterprise to work closely together, breaking down silos and enabling the closed-loop operating model that is paramount in today’s heterogeneous environment.

Download the report: Agents of Transformation: The Rise of Full Stack Observability, to learn more about Business Observability and the challenges technologists are facing.

Source: cisco.com

Tuesday, 21 April 2020

Keeping applications safe and secure in a time of remote work

Businesses around the world have quickly moved to remote-work initiatives, with more users accessing critical workloads outside the traditional workplace than ever before. New attack vectors are inadvertently being introduced as users access their workloads outside the traditional four walls of the workplace and the security protection those walls provide.

To combat the uncertainty and risks introduced by mobilizing a larger-than-normal remote workforce, it is critical that IT maintains visibility into network and application behavior from both the users’ remote-access machines and the critical workloads they are accessing in the data center, the cloud, or both (Figure 1). Additionally, it is critical for cybersecurity operators to be able to move to a whitelist/zero-trust segmentation model for network traffic they deem critical for the business to function, and to do so in a way that can be implemented in a matter of minutes.


Figure 1 – Example of hybrid deployment with remote access to critical workloads
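The whitelist/zero-trust model mentioned above inverts the default: traffic is dropped unless a rule explicitly allows it. A minimal sketch of the idea, using a hypothetical policy format and scope names (not Tetration’s actual policy model):

```python
# Allowlist rules as (source scope, destination scope, port) tuples.
# Anything not explicitly matched is denied by default -- the core of
# a zero-trust segmentation posture.
ALLOW = {
    ("remote-users", "web-tier", 443),   # remote workforce -> web apps
    ("web-tier", "db-tier", 5432),       # web apps -> database
}

def is_permitted(src_scope: str, dst_scope: str, port: int) -> bool:
    # Default-deny: permit only if an allowlist rule matches exactly.
    return (src_scope, dst_scope, port) in ALLOW

assert is_permitted("remote-users", "web-tier", 443)
assert not is_permitted("remote-users", "db-tier", 5432)  # no direct DB access
```

The design choice worth noting is the default: a traditional blacklist permits everything not forbidden, while this model forbids everything not permitted, which is what limits the blast radius when a remote-access machine is compromised.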

Cisco Tetration and Cisco AnyConnect are now paired together to provide comprehensive security for the workload-protection needs customers have during these volatile times. These technologies allow IT operators to mitigate many risks to their critical workloads introduced by an increased attack surface at the “access” layer of the network, and to enforce policies that secure both the edge and the workloads. Let’s take a look at the two most relevant use cases:

Use Case 1 – Gain visibility into the network and application behavior of the remote workforce and the workloads they’re accessing. Figure 2 shows exactly how AnyConnect and Tetration work together, sharing telemetry to provide granular-level visibility:


Figure 2 – Tetration and AnyConnect Integration

Use Case 2 – Easily implement whitelist network policies that protect access to the workloads themselves. Figure 3 demonstrates Tetration enforcing enterprise-wide policies that affect the organization as a whole. Figure 4 shows Tetration enforcing policies that keep application and workload behavior compliant. Having these policies across workloads running anywhere (on-prem, cloud, or both) adds protection that stretches beyond perimeter security. With workloads being accessed remotely, micro-segmentation prevents lateral movement of threats, reducing the attack surface.


Figure 3 – Enterprise wide policies on Tetration


Figure 4 – Policies on workload based on workload behavior

Now let us dive into the critical elements that help you maintain full visibility and monitor your security policies as your environment evolves. Note that all images below are demonstrations derived from a running Tetration deployment.

1. Visibility is key—quickly see what applications are being accessed by remote users (Fig.5).


Figure 5 – Applications accessed by remote users

2. Gain control—with deeper insights, you have more power to make better IT decisions. Get an understanding of your workload data flows without the added overhead of manual interrogation (Fig. 6). With the Tetration agent running on each workload, you also have a log of the processes that have been accessed (Fig. 7).


Figure 6 – Detailed flow data


Figure 7 – Get the accessed process using Tetration Agent

3. Search optimization—get granular search results using user details. Historically this has been a challenge, but this capability saves you the time of deeper intervention (Fig. 8). Go further by filtering the allowed communication policies among workloads by searching AD groups (Fig. 9).


Figure 8 – Filter based on AD user


Figure 9 – Filter allowed communications based on AD Group

Cisco Tetration and AnyConnect can help you ramp up your remote access goals securely.

Wednesday, 24 April 2019

Connected Car – What is Your Vehicle Reporting?


Previously, we looked at in-vehicle infotainment, focusing on streamed media and navigation services. In this blog, we’ll look at connected-vehicle services and telemetry. What are they, and what data volume do they represent?

Vehicle services


Some vehicle manufacturers, such as Jaguar, provide additional information services to the vehicle owner/user. For example, the InControl service can report completed journeys, giving the customer information including journey distance, real-time location, duration, average speed, and data about the efficiency of the journey.

The information required to offer this function is derived from existing vehicle telemetry that is collected by the vehicle manufacturer. Such information forms a small part of the overall vehicle telemetry that is sent over a cellular connection to the vehicle manufacturer.

A growing number of manufacturers offer ‘remote-control functions’ over a cellular connection, enabling users to turn on the heating/air-conditioning, lock or unlock the vehicle, sound the horn, flash the headlamps, check the fuel level or battery charge and effective range, check the current location and so on. More advanced functions include ‘summoning’ the vehicle. Even so, these services require only a relatively small data exchange between the vehicle and the vehicle manufacturer’s data center.

Some vehicle manufacturers, such as Tesla, deliver software and firmware updates over the air. In some cases these updates are delivered via a cellular connection; in others, WiFi can be used as an alternative delivery method. Anecdotal reports from various driver forums suggest that for Tesla vehicles, full version updates take place roughly every 6 months, with the version 9.0 update requiring a download of approximately 1GB. Periodic firmware updates also occur, but these are unannounced and much smaller in size (100-150MB). Over-the-air updates are of significant value to vehicle manufacturers in addressing potential defects or delivering new capabilities to a vehicle post-sale. Discussions with a small sample of vehicle manufacturers have identified that some are currently reluctant to use over-the-air updates for anything other than non-safety-related software such as infotainment services, due to concerns about managing the associated risk.

What is your vehicle reporting?


Vehicle manufacturers are increasingly building their vehicles to be ‘connected’. While some manufacturers gather such information for a limited period of time (typically covering the warranty period) others gather information throughout the lifetime of the vehicle.

BMW collects information including vehicle status information (e.g. mileage, battery voltage, door and hatch status, etc.), position and movement data (e.g. time, position, speed, etc.), vehicle service data (e.g. due date of next service visit, oil level, brake wear, etc.), dynamic traffic information (e.g. traffic jams, obstacles, signs, parking spaces, etc.), environmental information (e.g. temperature, rain, etc.), user profile (personal profile picture/ avatar, settings as navigation, media, communication, driver’s position, climate/light, driver assistance, etc.) and sensor information (e.g. radar, ultrasonic devices, gestures, voice, etc.).

In cases such as a detected fault condition, the information, including Diagnostic Trouble Codes (DTC), will be recorded to local storage within the vehicle. This can subsequently be used by service engineers to determine the fault condition that was encountered. Some vehicles will send a summary fault report to the vehicle manufacturer as well. As more sensors are added to vehicles, vehicle manufacturers will gather not only information about the performance and operation of the vehicle itself but potentially also data generated by the sensors themselves. This does not mean that such data is gathered continuously. Vehicle systems may transmit a form of the sensor data in cases of ‘interest’, such as an accident or an unexpected set of telemetry data being recorded. Such information is of interest not only to the vehicle makers but potentially to organisations such as insurance companies.

As one can see from the information collection details, the manufacturers are collecting far more information than just fault conditions. The position and movement information can include details such as braking and acceleration styles. Traction-control indications can help determine road conditions at a location. Some vehicle makers and mapping service providers are starting to use such information to identify roadway hazards such as potholes.

Such services are designed, of course, on the premise of having cellular connectivity coverage. However, very few countries are able to provide ubiquitous coverage. A 2017 report noted that the United Kingdom had 91% coverage of national highways but a much lower 58% coverage of non-highway roadways. Another 2017 report indicates that most major urban areas in the United States have good cellular coverage, but given the large geography covered by the US highway system, there are still many locations where cellular service is patchy at best.

From a vehicle manufacturer’s perspective, one cannot rely on universal cellular coverage. As a result, applications need to be designed to operate on the premise that connectivity may or may not be available and therefore vehicle systems need to include the ability to store critical data locally, transmitting valuable information when connectivity is restored.
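The store-and-forward behaviour described above can be sketched in a few lines. This is a minimal illustration, not any manufacturer's actual telemetry stack; the connectivity check and uplink function are assumptions:

```python
from collections import deque

class TelemetryUplink:
    """Buffer telemetry locally; transmit only when connectivity is available."""

    def __init__(self, send, is_connected):
        self.buffer = deque()          # local storage for critical data
        self.send = send               # uplink function (e.g. a cellular POST)
        self.is_connected = is_connected

    def record(self, reading):
        self.buffer.append(reading)    # always persist locally first
        self.flush()

    def flush(self):
        # Drain the backlog only while connectivity holds.
        while self.buffer and self.is_connected():
            self.send(self.buffer.popleft())

# Simulate a coverage gap: nothing is sent until connectivity returns.
sent, online = [], [False]
uplink = TelemetryUplink(sent.append, lambda: online[0])
uplink.record({"odometer_km": 42150})
uplink.record({"battery_pct": 81})
print(len(sent))   # 0 — still in a coverage gap, readings buffered locally
online[0] = True
uplink.flush()
print(len(sent))   # 2 — backlog delivered once connectivity is restored
```

The key design point is that recording and transmitting are decoupled, so a drive through a coverage dead zone loses no data.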

Data volume today

How much information is the vehicle transmitting to the vehicle manufacturer and when is it taking place? The data volume varies from manufacturer to manufacturer and will also depend on the type and model of the vehicle.

A study performed by ADAC in 2016 identified that the BMW i3 electric vehicle transmits the ‘Last State Call’ automatically every time the driver switches off the car and locks the doors (vehicle is not in motion). This call includes the content of the error memory, battery details including cell temperatures and charge level, the driving mode (eco, eco plus, sport), operational data of the range extender, the mileage at various driving operations, quality of the charging point including malfunctions and the position of the last 16 charging points used.

It is key to note in the BMW case that some information is obtained while the vehicle is in motion, with other information collected at the end of the journey. Information provided by OEM A (a Japanese auto-maker) indicates that its personal light vehicles generate a report of ~10-15MB per duty cycle, collected monthly in an upload over a cellular LTE connection. Information from OEM B (a Japanese auto-maker) indicates a volume of 15-20MB per duty cycle, collected while the vehicle is in operation; the average ‘driven day’ in Japan is ~90 minutes, equating to a US duty-cycle volume of ~12MB.


How does this compare to the typical smartphone user? According to a 2018 report, monthly mobile data traffic per smartphone in North America reached 8.6GB (286MB per day) by the end of 2018.
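A quick back-of-the-envelope calculation puts these figures side by side. The data points come from the text above; assuming one duty cycle per day is my own simplification:

```python
# Smartphone: 8.6 GB/month (2018 North America figure), decimal GB.
smartphone_mb_per_month = 8.6 * 1000
smartphone_mb_per_day = smartphone_mb_per_month / 30

# Vehicle: ~12 MB per US duty cycle (OEM B), assuming one duty cycle per day.
vehicle_mb_per_day = 12 * 1

print(int(smartphone_mb_per_day))                          # 286 MB/day
print(round(smartphone_mb_per_day / vehicle_mb_per_day))   # 24
```

In other words, under these assumptions a smartphone generates roughly 24 times the daily data volume of a connected vehicle's telemetry.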


Wednesday, 24 October 2018

Cloud Covered – Are You Insured?

Security is a topic that is top-of-mind for every CIO out there. Interestingly, according to a study by 451 Research, 64% of enterprises report that information security is not a separate budget line; instead, it is part of their IT infrastructure budget.


In other words, most of us take security for granted until something bad happens. Pete Johnson (“Cloud Unfiltered”) even referred to it as “insurance”, and I believe that is the appropriate term for it.

We all know we need insurance, but what is the right coverage for me? Well, it really depends on the types of assets you are trying to protect and how your business would be impacted if something happened.

If we think about our daily lives, imagine having 20 doors and windows wide open and then locking, or adding video surveillance to, only the one in the backyard (because your neighbor just told you he was robbed the night before and the thief broke into his house through the backyard door). Well, that’s a good start; however, there are still 19 doors and windows wide open and vulnerable for anybody to access, right?

Well, that’s pretty much what happens in IT, and securing only a few “doors” is called “black-listing”. Let me explain: every server has 65,535 TCP ports (and the same number of UDP ports) that could potentially be open. Most of the time we don’t know which ports our apps actually need, so with the black-listing approach we just block a few ports based on knowledge of common vulnerabilities while permitting all the rest.

In today’s Multicloud world, constant and more sophisticated threats are a fact and black-listing security is definitely not enough.

All we must do is install a Tetration software sensor on top of operating systems like Windows, Linux, Solaris and AIX, among others; it does not matter whether they are running bare-metal, virtualized, container-based, on any public cloud or on non-Cisco hardware. Once installed, the sensors continuously feed every flow going in and out of that host to the Tetration engine, which shows us the Application Dependency Mappings.

Think of the sensors as continuous-feed cameras, while the Tetration engine performs as that person in the SOC watching 24×7, reporting any process-level or network anomalies, and keeping all the recordings from the past available for you to analyze when needed. Before, we could only rely on “video samples” from specific places at specific times (using things like NetFlow or SPAN sessions).

This provides us with great value: now we know which ports our apps really need, and we can close the rest, which is called “white-listing” or “zero-trust policies”. We can use that information to execute our zero-trust policies either manually or even automatically, as shown in the video below.
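The difference between the two models is easy to see in a toy calculation. The “ports the app really needs” below are an assumption for illustration:

```python
# All possible TCP ports on a host (1-65535).
ALL_PORTS = set(range(1, 65536))

# Black-listing: block a handful of known-bad ports, permit everything else.
blocked = {23, 135, 445}                    # e.g. telnet, RPC, SMB
blacklist_exposure = ALL_PORTS - blocked
print(len(blacklist_exposure))              # 65532 ports still reachable

# White-listing (zero trust): permit only what the app is known to need.
allowed = {443, 5432}                       # assumed app ports: HTTPS, Postgres
whitelist_exposure = allowed
print(len(whitelist_exposure))              # 2 ports reachable
```

The attack surface drops from tens of thousands of reachable ports to exactly the handful the application depends on, which is why knowing the real dependency map matters so much.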

Tetration supports enforcing those policies at the sensor level, turning the software sensor into an enforcement agent and executing segmentation at the OS level. We could also automate the configuration of those policies on ACI or on your own firewall using tools like Tufin.

Tetration software sensors log every flow at the process level; therefore, they can help us identify anomalies or deviations from the norm (like privilege escalation, changes in binary files, failed logons and many more).
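As a toy sketch of what process-level anomaly flagging can look like, consider the rules below. The event names and threshold are assumptions for illustration, not Tetration's actual detection logic:

```python
from collections import Counter

# Hypothetical process-level events as an agent might report them.
events = [
    {"host": "web-01", "event": "failed_logon", "user": "root"},
    {"host": "web-01", "event": "failed_logon", "user": "root"},
    {"host": "web-01", "event": "failed_logon", "user": "root"},
    {"host": "db-01",  "event": "binary_changed", "path": "/usr/bin/sshd"},
]

FAILED_LOGON_THRESHOLD = 3  # assumed cutoff for suspicious repetition

def anomalies(log):
    flagged = []
    # Repeated failed logons for the same host/user suggest brute forcing.
    fails = Counter((e["host"], e["user"])
                    for e in log if e["event"] == "failed_logon")
    flagged += [f"possible brute force: {h}/{u}"
                for (h, u), n in fails.items() if n >= FAILED_LOGON_THRESHOLD]
    # Any change to a monitored binary is flagged outright.
    flagged += [f"binary change on {e['host']}"
                for e in log if e["event"] == "binary_changed"]
    return flagged

print(len(anomalies(events)))   # 2
```

Real sensors of course correlate far richer signals, but the principle is the same: per-process visibility turns raw flows and events into actionable deviations from the baseline.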

There are many other types of coverage we may need for IT and our apps, and a comprehensive solution may be needed. This is where Stealthwatch and Stealthwatch Cloud (which report potential attacks), ACI (which can execute and complement our security strategy at the multicloud network level while encrypting VXLAN communications), and an effective next-generation firewall like the Firepower family, among others, can further reduce blind spots and help us react faster to potential threats.

Having multiple homes (in this case, clouds) where our applications may live would normally force us into holding multiple insurance policies. With solutions like these, we can have a single, continuous and consistent one, which should help us get some extra hours of quality sleep at night!

Friday, 23 March 2018

Serverless Security for Public Cloud Workloads with Stealthwatch Cloud

Each year goes by and we find more ways to own less and less of what it takes to operate our digital infrastructure. Information technology began with businesses having to build data centers, owning everything from the real estate all the way up to the applications. It quickly moved to public clouds, whereby the infrastructure itself was a service managed by the provider and you only needed to manage the virtual servers up through your applications. The latest step in this trend is serverless computing. As you would guess, this is the evolution where the service provider owns and operates everything up to the application, and you don’t even manage the servers running your code (thus the name “serverless”).

Friday, 16 February 2018

DevSecOps: Security at the Speed of Business

We will describe another key aspect of DevSecOps: developing security guardrails with a hands-on approach via Agile hackathons.

DevSecOps is about bridging DevOps workflows with Information Security (Infosec) operations by embedding security as code during development, validating it during testing, and leveraging automation to run continuous operations. From many years in IT, we know that it’s a good idea to first prove ideas manually before we automate. An Agile security hackathon is how we bring together participants from the relevant disciplines within Information Security and application teams to first walk through a set of implementation steps that configure the most important security requirements: the guardrails. With the Winter Olympics in progress, these are akin to the guardrails that help a gravity-powered bobsled travel faster along iced tracks in a safe manner.
DevSecOps is about bridging DevOps workflows with Information Security (Infosec) Operations by embedding security as code during development, validation during testing and leveraging automation to run continuous operations. From many years in IT, we know that it’s a good idea to first prove ideas manually before we automate. Agile security hackathon is how we bring in participants from relevant disciplines within Information Security and application teams to first go through a set of implementation steps to configure the most important security requirements – the guardrails. With the winter Olympics in progress, this is akin to the guardrails that help a gravity powered Bobsled go faster along iced tracks in a safe manner.