Showing posts with label API. Show all posts

Thursday, 26 October 2023

Driving API Security Forward: Protecting Vehicle-to-Cloud Communications


In the rapidly evolving automotive landscape, vehicles are no longer just machines that move us from point A to point B; they have transformed into complex, interconnected systems on wheels. With this transformation, API (Application Programming Interface) security has surged to the forefront of automotive design and engineering. APIs serve as critical gatekeepers, managing the flow of data between the vehicle’s Electronic Control Units (ECUs), external devices, and the cloud. But with great power comes great responsibility. If an API is compromised, the implications can range from privacy breaches to threats to passenger safety. Automotive API security, therefore, is not just a technological challenge; it’s a safety imperative. It’s a thrilling yet daunting frontier, where the lines between software engineering, cybersecurity, and automotive design blur, and where the stakes are high: ensuring the safe and secure transportation of millions, if not billions, of people around the world.

In parallel, the Internet of Vehicles (IoV) and Vehicle-to-Everything (V2X) continue to evolve. The IoV encompasses a range of technologies and applications, including connected cars with internet access, Vehicle-to-Vehicle (V2V) and Vehicle-to-Cloud (V2C) communication, autonomous vehicles, and advanced traffic management systems. The timeline for widespread adoption of these technologies is presently unclear and will depend on a variety of factors; however, adoption will likely be a gradual process.

Over time, the number of Electronic Control Units (ECUs) in vehicles has seen a significant increase, transforming modern vehicles into complex networks on wheels. This surge is attributed to advancements in vehicle technology and the increasing demand for innovative features. Today, luxury passenger vehicles may contain 100 or more ECUs. Another growing trend is the virtualization of ECUs, where a single physical ECU can run multiple virtual ECUs, each with its own operating system. This development is driven by the need for cost efficiency, consolidation, and the desire to isolate systems for safety and security purposes. For instance, a single ECU could host both an infotainment system running QNX and a telematics unit running on Linux.

ECUs run a variety of operating systems depending on the complexity of the tasks they perform. For tasks requiring real-time processing, such as engine control or ABS (anti-lock braking system) control, Real-Time Operating Systems (RTOS) built on AUTOSAR (Automotive Open System Architecture) standards are popular. These systems can handle strict timing constraints and guarantee the execution of tasks within a specific time frame. On the other hand, for infotainment systems and more complex systems requiring advanced user interfaces and connectivity, Linux-based operating systems like Automotive Grade Linux (AGL) or Android Automotive are common due to their flexibility, rich feature sets, and robust developer ecosystems. QNX, a commercial Unix-like real-time operating system, is also widely used in the automotive industry, notably for digital instrument clusters and infotainment systems due to its stability, reliability, and strong support for graphical interfaces.

The unique context of ECUs presents several distinct challenges regarding API security. Unlike traditional IT systems, many ECUs have to function in a highly constrained environment with limited computational resources and power, and often have to adhere to strict real-time requirements. This can make it difficult to implement robust security mechanisms, such as strong encryption or complex authentication protocols, which are computationally intensive. Furthermore, ECUs need to communicate with each other and external devices or services securely. This often leads to compromises in vehicle network architecture where a high-complexity ECU acts as an Internet gateway that provides desirable properties such as communications security. On the other hand, in-vehicle components situated behind the gateway may communicate using methods that lack privacy, authentication, or integrity.

Modern Vehicle Architecture


ECUs, or Electronic Control Units, are embedded systems in automotive electronics that control one or more of the electrical systems or subsystems in a vehicle. These can include systems related to engine control, transmission control, braking, power steering, and others. ECUs are responsible for receiving data from various sensors, processing this data, and triggering the appropriate response, such as adjusting engine timing or deploying airbags.


DCUs, or Domain Control Units, are a relatively new development in automotive electronics, driven by the increasing complexity and interconnectivity of modern vehicles. A DCU controls a domain, which is a group of functions in a vehicle, such as the drivetrain, body electronics, or infotainment system. A DCU integrates several functions that were previously managed by individual ECUs. A DCU collects, processes, and disseminates data within its domain, serving as a central hub.

The shift towards DCUs can reduce the number of separate ECUs required, simplifying vehicle architecture and improving efficiency. However, it also necessitates more powerful and sophisticated hardware and software, as the DCU needs to manage multiple complex functions concurrently. This centralization can also increase the potential impact of any failures or security breaches, underscoring the importance of robust design, testing, and security measures.

Direct internet connectivity is usually restricted to only one or two Electronic Control Units (ECUs). These ECUs are typically part of the infotainment system or the telematics control unit, which require internet access to function. The internet connection is shared among these systems and possibly extended to other ECUs. The remaining ECUs typically communicate via an internal network like the CAN (Controller Area Network) bus or automotive ethernet, without requiring direct internet access.

The increasing complexity of vehicle systems, the growing number of ECUs, and pressure to bring cutting edge consumer features to market have led to an explosion in the number of APIs that need to be secured. This complexity is compounded by the long lifecycle of vehicles, requiring security to be maintained and updated over a decade or more, often without the regular connectivity that traditional IT systems enjoy. Finally, the critical safety implications of many ECU functions mean that API security issues can have direct and severe consequences for vehicle operation and passenger safety.

ECUs interact with cloud-hosted APIs to enable a variety of functionalities, such as real-time traffic updates, streaming entertainment, finding suitable charging stations and service centers, over-the-air software updates, remote diagnostics, telematics, usage based insurance, and infotainment services.

Open Season


In the fall of 2022, security researchers discovered and disclosed vulnerabilities affecting the APIs of a number of leading car manufacturers. The researchers were able to remotely access and control vehicle functions, including locking and unlocking, starting and stopping the engine, honking the horn and flashing the headlights. They were also able to locate vehicles using just the vehicle identification number or an email address. Other vulnerabilities included being able to access internal applications and execute code, as well as perform account takeovers and access sensitive customer data.

The research is historically significant because security researchers would traditionally avoid targeting production infrastructure without authorization (e.g., as part of a bug bounty program). Most researchers would also hesitate to publish such sweeping, albeit responsible, disclosures that pull no punches. It seems the researchers were emboldened by recent revisions to the CFAA, and this activity may represent a new era of Open Season Bug Hunting.

The revised CFAA, announced in May of 2022, directs that good-faith security research should not be prosecuted. Further, “Computer security research is a key driver of improved cybersecurity,” and “The department has never been interested in prosecuting good-faith computer security research as a crime, and today’s announcement promotes cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”

These vulnerability classes would not surprise your typical cybersecurity professional; they are fairly pedestrian. Anyone familiar with the OWASP API Security Project will recognize the core issues at play. What may be surprising is how prevalent they are across different automotive organizations. It can be tempting to chalk this up to a lack of awareness or poor development practices, but the root causes are likely much more nuanced and not at all obvious.

Root Causes


Despite the considerable experience and skills possessed by Automotive OEMs, basic API security mistakes can still occur. This might seem counterintuitive given the advanced technical aptitude of their developers and their awareness of the associated risks. However, it’s essential to understand that, in complex and rapidly evolving technological environments, errors can easily creep in. In the whirlwind of innovation, deadlines, and productivity pressures, even seasoned developers might overlook some aspects of API security. Such issues can be compounded by factors like communication gaps, unclear responsibilities, or simply human error.

Development at scale can significantly amplify the risks associated with API security. As organizations grow, different teams and individuals often work concurrently on various aspects of an application, which can lead to a lack of uniformity in implementing security standards. Miscommunication or confusion about roles and responsibilities can result in security loopholes. For instance, one team may assume that another is responsible for implementing authentication or input validation, leading to vulnerabilities. Additionally, the context of service exposure, whether on the public internet or within a Virtual Private Cloud (VPC), necessitates different security controls and considerations. Yet, these nuances can be overlooked in large-scale operations. Moreover, the modern shift towards microservices architecture can also introduce API security issues. While microservices provide flexibility and scalability, they also increase the number of inter-service communication points. If these communication points, or APIs, are not adequately secured, the system’s trust boundaries can be breached, leading to unauthorized access or data breaches.

Automotive supply chains are incredibly complex. This is a result of the intricate network of suppliers involved in providing components and supporting infrastructure to OEMs. OEMs typically rely on tier-1 suppliers, who directly provide major components or systems, such as engines, transmissions, or electronics. However, tier-1 suppliers themselves depend on tier-2 suppliers for a wide range of smaller components and subsystems. This multi-tiered structure is necessary to meet the diverse requirements of modern vehicles. Each tier-1 supplier may have numerous tier-2 suppliers, leading to a vast and interconnected web of suppliers. This complexity can make it difficult to manage the cybersecurity requirements of APIs.

While leading vehicle cybersecurity standards like ISO/SAE 21434, UN ECE R155 and R156 cover a wide range of aspects related to securing vehicles, they do not specifically provide comprehensive guidance on securing vehicle APIs. These standards primarily focus on broader cybersecurity principles, risk management, secure development practices, and vehicle-level security measures. The absence of specific guidance on securing vehicle APIs can potentially lead to the introduction of vulnerabilities in vehicle APIs, as the focus may primarily be on broader vehicle security aspects rather than the specific challenges associated with API integration and communication.

Things to Avoid


Darren Shelcusky of Ford Motor Company explains that while many approaches to API security exist, not all prove to be effective within the context of a large multinational manufacturing company. For instance, playing cybersecurity “whack-a-mole,” where individual security threats are addressed as they pop up, is far from optimal. It can lead to inconsistent security posture and might miss systemic issues. Similarly, the “monitor everything” strategy can drown the security team in data, leading to signal-to-noise issues and an overwhelming number of false positives, making it challenging to identify genuine threats. Relying solely on policies and standards for API security, while important, is not sufficient unless these guidelines are seamlessly integrated into the development pipelines and workflows, ensuring their consistent application.

A strictly top-down approach, with stringent rules and fear of reprisal for non-compliance, may indeed ensure adherence to security protocols. However, this could alienate employees, stifle creativity, and lose valuable lessons learned from the ground-up. Additionally, over-reliance on governance for API security can prove to be inflexible and often incompatible with agile development methodologies, hindering rapid adaptation to evolving threats. Thus, an effective API security strategy requires a balanced, comprehensive, and integrated approach, combining the strengths of various methods and adapting them to the organization’s specific needs and context.

Winning Strategies


Cloud Gateways

Today, Cloud API Gateways play a vital role in securing APIs, acting as a protective barrier and control point for API-based communication. These gateways manage and control traffic between applications and their back-end services, performing functions such as request routing, composition, and protocol translation. From a security perspective, API Gateways often handle important tasks such as authentication and authorization, ensuring that only legitimate users or services can access the APIs. They can implement various authentication protocols like OAuth, OpenID Connect, or JWT (JSON Web Tokens). They can enforce rate limiting and throttling policies to protect against Denial-of-Service (DoS) attacks or excessive usage. API Gateways also typically provide basic communications security, ensuring the confidentiality and integrity of API calls. They can help detect and block malicious requests, such as SQL injection or Cross-Site Scripting (XSS) attacks. By centralizing these security mechanisms in the gateway, organizations can ensure a consistent security posture across all their APIs.
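Rate limiting in particular is conceptually simple. As a rough illustration of the kind of policy a gateway enforces, here is a minimal token-bucket limiter sketch in Python; the class and parameter names are illustrative, not taken from any particular gateway product:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows `rate` requests per second
    on average, with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A real gateway would keep one bucket per API key or client identity and return HTTP 429 when `allow()` is false, but the refill-and-spend logic is the same.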

Cloud API gateways also assist organizations with API management, inventory, and documentation. These gateways provide a centralized platform for managing and securing APIs, allowing organizations to enforce authentication, authorization, rate limiting, and other security measures. They offer features for tracking and maintaining an inventory of all hosted APIs, providing a comprehensive view of the API landscape and facilitating better control over security measures, monitoring, and updates. Additionally, cloud API gateways often include built-in tools for generating and hosting API documentation, ensuring that developers and consumers have access to up-to-date and comprehensive information about API functionality, inputs, outputs, and security requirements. Some notable examples of cloud API gateways include Amazon API Gateway, Google Cloud Endpoints, and Azure API Management.

Authentication

In best-case scenarios, vehicles and cloud services mutually authenticate each other using robust methods that include some combination of digital certificates, token-based authentication, or challenge-response mechanisms. In the worst-case, they don’t perform any authentication at all. Unfortunately, in many cases, vehicle APIs rely on weak authentication mechanisms, such as a serial number being used to identify the vehicle.

Certificates

In certificate-based authentication, the vehicle presents a unique digital certificate issued by a trusted Certificate Authority (CA) to verify its identity to the cloud service. While certificate-based authentication provides robust security, it does come with a few drawbacks. First, certificate management can be complex and cumbersome, especially in large-scale environments like fleets of vehicles, as it involves issuing, renewing, and revoking certificates, often for thousands of devices. Second, setting up a secure and trusted Certificate Authority (CA) to issue and validate certificates requires significant effort and expertise, and any compromise of the CA can have serious security implications.

Tokens

In token-based authentication, the vehicle includes a token (such as a JWT or OAuth token) in its requests once its identity has been confirmed by the cloud service. Token-based authentication, while beneficial in many ways, also comes with certain disadvantages. First, tokens, if not properly secured, can be intercepted during transmission or stolen from insecure storage, leading to unauthorized access. Second, tokens often have a set expiration time for security purposes, which means systems need to handle token refreshes, adding extra complexity. Lastly, token validation requires a connection to the authentication server, which could potentially impact system performance or lead to access issues if the server is unavailable or experiencing high traffic.
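To make the mechanics concrete, the following sketch signs and verifies an HS256 JWT using only the Python standard library. This is for illustration only: a production system would use a maintained library (and typically asymmetric keys), and the claim values shown are hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> bytes:
    # JWT uses URL-safe base64 with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, key: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(
        hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    ).decode()
    # Constant-time comparison guards against timing attacks.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    claims = json.loads(
        base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    # Enforce the expiration claim mentioned above.
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return claims
```

Note how expiration is just another claim the verifier checks; this is what forces the token-refresh handling described above.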

mTLS

For further security, these methods can be used in conjunction with Mutual TLS (mTLS) where both the client (vehicle) and server (cloud) authenticate each other. These authentication mechanisms ensure secure, identity-verified communication between the vehicle and the cloud, a crucial aspect of modern connected vehicle technology.

Challenge / Response

Challenge-response authentication mechanisms are best implemented with the aid of a Hardware Security Module (HSM). This approach provides notable advantages including heightened security: the HSM provides a secure, tamper-resistant environment for storing the vehicle’s private keys, drastically reducing the risk of key exposure. In addition, the HSM can perform cryptographic operations internally, adding another layer of security by ensuring sensitive data is never exposed. Sadly, there are also potential downsides to this approach. HSMs can increase complexity throughout the vehicle lifecycle. Furthermore, HSMs also have to be properly managed and updated, requiring additional resources. Lastly, in a scenario where the HSM malfunctions or fails, the vehicle may be unable to authenticate, potentially leading to loss of access to essential services.
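The protocol itself is straightforward; the HSM's job is to keep the key and the HMAC computation inside tamper-resistant hardware. This sketch shows the message flow with the key simply held in memory, so treat it as a model of the exchange, not of the HSM:

```python
import hashlib
import hmac
import secrets

# In a real vehicle this key never leaves the HSM, and vehicle_respond
# would be a call into the HSM rather than a Python function.
DEVICE_KEY = secrets.token_bytes(32)

def server_issue_challenge() -> bytes:
    # A fresh random nonce per attempt prevents replaying old responses.
    return secrets.token_bytes(16)

def vehicle_respond(challenge: bytes, key: bytes) -> bytes:
    # The vehicle proves key possession without revealing the key.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is single-use, an attacker who records one exchange learns nothing useful for the next one.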

Hybrid Approaches

Hybrid approaches to authentication can also be effective in securing vehicle-to-cloud communications. For instance, a server could verify the authenticity of the client’s JSON Web Token (JWT), ensuring the identity and claims of the client. Simultaneously, the client can verify the server’s TLS certificate, providing assurance that it’s communicating with the genuine server and not a malicious entity. This multi-layered approach strengthens the security of the communication channel.

Another example hybrid approach could leverage an HSM-based challenge-response mechanism combined with JWTs. Initially, the vehicle uses its HSM to securely generate a response to a challenge from the cloud server, providing a high level of assurance for the initial authentication process. Once the vehicle is authenticated, the server issues a JWT, which the vehicle can use for subsequent authentication requests. This token-based approach is lightweight and scalable, making it efficient for ongoing communications. The combination of the high-security HSM challenge-response with the efficient JWT mechanism provides both strong security and operational efficiency.

JWTs (JSON Web Tokens) are highly convenient when considering ECUs coming off the manufacturing production line. They provide a scalable and efficient method of assigning unique, verifiable identities to each ECU. Given that JWTs are lightweight and easily generated, they are particularly suitable for mass production environments. Furthermore, JWTs can be issued with specific expiration times, allowing for better management and control of ECU access to various services during initial testing, shipping, or post-manufacturing stages. This means ECUs can be configured with secure access controls right from the moment they leave the production line, streamlining the process of integrating these units into vehicles while maintaining high security standards.

Source: cisco.com

Tuesday, 14 March 2023

Perform Web GUI Administration and Configuration with the AXL API

The AXL Philosophy and Purpose


We, as programmers, often look at an API with wild dreams about building dazzling user-facing applications that inspire jaw-dropping amazement. That’s just how we’re built. And the AXL API has the power to let you do that.

One word… DON’T.

AXL is not an API for user-facing applications. It’s an administration and configuration API. You don’t want to push an end-user application built on AXL to 1,000 users. And if you do, you’re going to have a bad time.

Think of AXL as a programmatic way to perform web GUI administration and configuration tasks. For example, in the web GUI, you add an end user this way.

1. Select the User Management menu
2. Select End User
3. Click on +Add New
4. Fill out the form
5. Save.

Now, programming that might seem silly and more work than using the web GUI. But think of it this way. You have a text file with a list of names, email addresses, phone numbers, assigned company phone extension and other personal data of new employees. Now you can write an application that reads the file and creates and configures an end-user account for each of the persons and creates and configures lines and phones entries for them. That’s automating an administration and configuration task in a way that makes your life as an administrator easier.
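A sketch of that idea in Python: parse the new-hire file, then map each row to the shape of an AXL addUser request. The column names and AXL field names here are illustrative; check them against your schema version before using anything like this:

```python
import csv
import io

def parse_employees(text: str) -> list:
    """Parse a CSV of new hires into dicts, one per employee.
    Assumed columns: userid,first,last,email,extension."""
    return list(csv.DictReader(io.StringIO(text)))

def to_add_user_payload(emp: dict) -> dict:
    # Field names follow the general shape of an AXL <addUser>
    # request; a real call wraps this in a SOAP envelope and
    # POSTs it to the AXL endpoint.
    return {
        "userid": emp["userid"],
        "firstName": emp["first"],
        "lastName": emp["last"],
        "mailid": emp["email"],
        "telephoneNumber": emp["extension"],
    }
```

Looping `to_add_user_payload` over `parse_employees(...)` and sending one request per row is the whole automation: minutes of scripting instead of an afternoon of form-filling.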

The Basics


AXL is a SOAP-based API. There’s no REST for the wicked here.

The most often used AXL APIs fall into the following groups:

1. addSomething (e.g., add a phone)
2. getSomething (e.g., get a phone’s info and settings)
3. updateSomething (e.g., change a phone’s info and settings)
4. applySomething (e.g., apply the changes you made for the phone)
5. removeSomething (e.g., remove a phone)
6. listSomething (e.g., list all phones)

There are a few other AXL APIs not in those groups that you’ll need at times, but those are the most frequently used operations.

Getting Started: Preparation


The best way to get familiar with AXL is to use a free, open-source tool called SoapUI. SoapUI makes it easy to experiment with the AXL API. But first, you need to download the files you’ll use with SoapUI.

Log into Call Manager as an administrator. Under the Application menu, select Plugins.


Click the Find button (not shown in this screen shot). The first item is the Cisco AXL Toolkit. Click on Download and save it somewhere.


The saved file should look like this:


Open the zip file to see its contents


Open the schema directory.


Pick the version of Call Manager you are using. In this sample, we’ll pick current.


Copy the three files above to a working directory. I chose C:\SOAP.


Download and install the open-source SoapUI from this page. You’re done with preparation. Now, it’s time to create an AXL project to play with the API.

Set Up a SoapUI AXL Project


Click on the File menu and choose New SOAP Project.


Pick a name for your project. Set the Initial WSDL to point to the AXLAPI.wsdl file you saved to a working directory earlier. Click OK.


In the left column, you should see this (assuming you used the name New AXL Test, otherwise look for the name you chose).


Right click on AXLAPIBinding and select Show Interface Viewer. You should see this Dialog Box.


Click on the Service Endpoints tab and you’ll see where you can enter information for AXLAPI binding.


Type what you see in the Endpoint field, except point to your server where it says YOURSERVER. Assuming it’s safe for your work environment to do, enter your Administrator username and password in the appropriate fields. You can create an Administrator account in Call Manager specifically for use with the AXL API, or you can use your primary Administrator account.

You can close this dialog box now.

Now let’s play with one of the requests. In the left column, find listPhone and click on its plus sign. Then double-click on Request 1. You should see all the XML for this request pop up in a new dialog.


The listPhone request has a few potential hangups that are good to learn how to avoid. Any listSomething request is going to return, well, a list of things. Scroll down to the bottom of the request XML and you’ll see these options. These give you the option to skip a number of results, or define the starting point. We don’t want to mess with those options right now, so select them and delete them.


At the top, look for what I have selected here, select it and delete it. This attribute can be useful, and you don’t always have to delete it, but in this case, you’ll need to remove the ‘sequence=”?”’ for the request to work properly.


There’s one more thing. Get rid of what you see selected in this screen shot. Select it and delete it.


There are way too many values to specify, so let’s chop down the request to look like this. Make sure to put a percent sign in the <name></name> tag. This is a wild card, which means it will list ALL the phones. You want to start simple, so this is a simplified listPhone operation.
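Under the hood, SoapUI is just POSTing an XML envelope like the one you trimmed down, so you can build the same simplified request in a script. A sketch in Python; note the AXL namespace version in the URL must match your Call Manager release, and the returned tags shown are just examples:

```python
def build_list_phone(name_pattern: str = "%") -> str:
    """Build a simplified listPhone request envelope.
    The '%' wildcard in <name> matches all phone names."""
    return f"""<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:ns="http://www.cisco.com/AXL/API/11.5">
  <soapenv:Header/>
  <soapenv:Body>
    <ns:listPhone>
      <searchCriteria>
        <name>{name_pattern}</name>
      </searchCriteria>
      <returnedTags>
        <name/>
        <description/>
      </returnedTags>
    </ns:listPhone>
  </soapenv:Body>
</soapenv:Envelope>"""
```

Sending this is an HTTPS POST to `https://YOURSERVER:8443/axl/` with basic authentication, exactly what SoapUI does for you behind the scenes.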


Now’s the time to try it out. Click on the green “run” icon in the upper left. You should see the right side of the request change to this:


This is an unfortunate bug in the current version of SoapUI. It should show you the XML response by default, but it instead shows you raw information. Until the app is fixed, you’ll have to click on the upper left XML tab to view the response.

The response might look something like this:


With that, you now have enough basic knowledge to experiment with any of the AXL APIs. Hey now, you’re an all-star, get your game on, go play.

Programming Tip


And if you really want to run with the big boys, here’s a tip for running multiple AXL requests sequentially. Every time you make an AXL request, Call Manager launches a Tomcat session. When you make many requests in a row, Call Manager will launch multiple Tomcat sessions, which use up CPU and RAM.

Here’s a way around that. At the bottom of the response, open up the headers and you’ll see a cookie named JSESSIONID and its value.


If you set the JSESSIONID cookie and use the same value for your next AXL request, Call Manager will re-use the Tomcat session instead of launching a new one.
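In a script, that means grabbing the cookie from the first response and sending it back on every subsequent request. A small sketch using only the standard library (header values are illustrative):

```python
from http.cookies import SimpleCookie
from typing import Optional

def extract_jsessionid(set_cookie_header: str) -> Optional[str]:
    """Pull the JSESSIONID value out of a Set-Cookie response header."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    morsel = cookie.get("JSESSIONID")
    return morsel.value if morsel else None

def session_headers(jsessionid: str) -> dict:
    # Sending the cookie back makes Call Manager reuse the existing
    # Tomcat session instead of launching a new one.
    return {"Cookie": f"JSESSIONID={jsessionid}"}
```

Most HTTP client libraries will do this for you automatically if you reuse a session object across requests; the point is simply to not create a fresh connection context per AXL call.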

What to Avoid and Common Mistakes


Many requests have a list of optional search parameter tags, commonly <name> and <uuid>. You will usually have to choose one and delete the others.

As logical as it may seem, you can’t perform a getPhone, change some values, and then copy and paste the modified XML into an updatePhone request. getPhone and updatePhone XML tags are not mirror images.

Be careful when using APIs that give you direct access to the Call Manager database, like executeSqlQuery. Complicated joins may be clever, but they can also suck up CPU and memory the size of a spy balloon, and that eats into the performance of every other operation.

Source: cisco.com

Tuesday, 23 August 2022

Cisco Project, “An-API-For-An-API,” Wins Security Award

Enterprise software developers are increasingly using a variety of APIs in their day-to-day work. With this increase in use, however, it is becoming more difficult for organizations to have a full understanding of those APIs. Are the APIs secure? Do they adhere to the organization’s policies and standards?  It would be incredibly helpful to have a suite of solutions that provides insights to these questions and more. Fortunately, Cisco has introduced our An-API-For-An-API project to address these concerns.

Introducing

An-API-For-An-API (AAFAA) is a project that controls the end-to-end cycle for enterprise API services and helps developers from code creation to deployment into a cloud, provisioning of API gateways, and live tracking of API use while the application is in production. Leveraging APIx Manager, an open-source project from Cisco, it combines CI/CD pipelines where API interfaces are tested against enterprise (security) policies, automatic deployment of applications behind an API gateway in a cloud system, and dynamic assessment of the API service through APIClarity.

Figure 1. provides an overview of how the various pieces of the AAFAA solution fit and work together. Let’s look at the pieces and what insights they each provide the developer.

Figure 1. AAFAA Suite

APIx Manager

The central piece of the AAFAA solution suite is an open-source solution, APIx Manager, which provides API insights to developers in the day-to-day developer workflow. APIx Manager creates a browser-based view that can be shared with the DevSecOps team for a single source of truth on the quality and consistency of the APIs – bridging a critical communication gap. All these features help to manage the API life cycle to provide a better understanding of changes to the APIs we use every day. These can be viewed either through the browser or through an IDE Extension for VS Code. APIx Manager can also optionally integrate with and leverage the power of APIClarity, which brings Cloud Native visibility for APIs.

By creating dashboards and reports that integrate with the CI/CD pipeline and bring insights into APIs, developers and operations teams can have a single view of APIs. This allows them to have a common frame of reference when discussing issues such as security, API completeness, REST guideline compliance, and even inclusive language.

APIClarity

APIClarity adds another level of insights into the AAFAA solution suite by providing a view into API traffic and Kubernetes clusters. By using a Service Mesh framework, APIClarity adds the ability to compare runtime specifications of your API to the OpenAPI specification. For applications that don’t yet have a defined specification, developers can compare an API specification against the OpenAPI or company specifications or reconstruct the Spec if it is not published.

Tracking the usage of Zombie or Shadow APIs in your applications is another critical security step. By implementing APIClarity with APIx Manager, Zombie and Shadow API usage is seen within the IDE extension for VS Code. Seeing when APIs drift out of sync with OpenAPI specifications or start to use Zombie and Shadow APIs at runtime, especially in a Cloud Native application, is vital for the improvement of the security posture of your application.

Panoptica

Adding Panoptica to your AAFAA tool kit brings even more insights into your API usage and security posture. Panoptica provides visibility into possible threats, vulnerabilities, and policy enforcement points for your Cloud Native applications. Panoptica is an important solution as well for being a bridge between development and operations teams to bring security into the CI/CD cycle earlier in the process.

Let’s think about what this means from a practical, day-to-day standpoint.

AAFAA in Practice


As enterprise application developers, we are tasked with building and deploying secure applications. Many companies today have defined rules for applications, especially Cloud Native ones. These rules include things like using quality components (e.g., third-party APIs) and not deploying applications with known vulnerabilities. These vulnerabilities can arise in a wide variety of areas: the cloud security posture, application build images, application configuration, the application code itself, or the way APIs are implemented.

There isn’t anything new about this goal. How we achieve it, however, has changed dramatically in the past several years, with the potential for vulnerabilities ever increasing. This is where AAFAA comes into service.

AAFAA utilizes three main components in providing insights from the very beginning all the way until the end of an application development lifecycle:

- APIx Manager
- CI/CD pipelines & automatic deployment of applications, and
- dynamic assessments of the API service through APIClarity.

APIx Manager

With its built-in integration into development tools, such as VS Code, APIx Manager is the start of the journey into AAFAA for the developer. It allows developers to gain API security and compliance insights when they are needed the most: at the beginning of the development cycle. Bringing these topics to the attention of developers earlier in the development lifecycle, shifting them left, makes them a priority in the application design and coding process. There are many advantages to implementing a Shift-Left Security design practice for the development team. It is also a tremendous benefit for the Ops teams, as they can now see, through APIx Manager’s comparison functionality, when issues were addressed, whether an issue was a developer, Ops, or joint problem, and whether anything still needs attention. From the beginning of the software development cycle to the end, APIx Manager is a key component of AAFAA.

CI/CD Pipeline & Automatic Deployment

With the speed at which applications are being produced and updates being rolled out as part of the Agile development cycle, CI/CD pipelines are how developers are used to working. When we thought about our API solutions, we wanted to bring insights into the workflow that developers already use and are comfortable with. Introducing another app that developers must check wasn’t a realistic option. By incorporating APIx Manager, for example, into the CI/CD pipeline, we allow developers to gain insights into API security, completeness, standard compliance, and language inclusivity in their already established work stream.

There continues to be tremendous growth in Cloud Native applications. Gartner estimates that by 2025, just a short three years away, more than 95% of new digital workloads will be deployed on cloud platforms. That’s an impressive number. However, as applications move to the cloud and away from platforms that are wholly controlled by internal teams, we lose a bit of insight and control over our applications. Don’t get me wrong, there are many great things about moving to the cloud, but as developers and operation professionals, we need to be vigilant about the applications and experiences we provide to our end users.

Dynamic Assessments

APIClarity is designed to provide observability into API traffic in Kubernetes clusters. As developers make the move to Cloud Native applications and rely more and more on APIs and clusters, the visibility of our application’s security posture becomes more obscured. Tools like APIClarity improve that visibility through a Service Mesh framework which captures and analyzes API traffic to identify potential risks.

When combined with APIx Manager, we bring the assessment level right to the developer’s workflow and into the CI/CD pipeline and the IDE, currently through a VS Code extension. By providing these insights in platforms developers are already using, we are helping to shift security to the left in the development process and provide visibility directly to developers. In addition to security matters, APIx Manager provides valuable insights into other areas such as API completeness and adherence to API standards, as well as flagging violations of company inclusive language policies.

As part of the An-API-For-An-API suite of tools, APIClarity provides dynamic analysis and Cloud Native API environment visibility, while APIx Manager brings those insights into the developer workflow.

What Else?


Several teams here at Cisco have worked side-by-side to create AAFAA. It’s been great to see it all come together as a solution that will help developers and operations with visibility into the APIs they use. The AAFAA project has also been recognized with a prestigious CSO50 Award for “security projects or initiatives that demonstrate outstanding business value and thought leadership.” Please join me in congratulating the team for such a high honor for a job well done.

Source: cisco.com

Saturday, 13 March 2021

Migrating PnP API from APIC-EM to Cisco DNA Center


I have had a number of questions about the best way to migrate from APIC-EM to DNA Center for Plug and Play (PnP) provisioning of devices.  The PnP API was very popular in APIC-EM, and both the PnP functionality and the API have been enhanced in DNA Center. While the older workflow-style APIs still exist in DNA Center, they are relatively complex and have no UI component.

Transition approaches

I used to have two answers to the “how do I migrate” question. One approach was to transition (just use the workflow API), and the other was to transform (move to the new site-based API).


If you had static configuration files for devices (e.g. some other tool to generate them) you would typically choose the first option.  If you were more interested in templates with variables, you would choose the second.

There is now a hybrid option, using the new site-claim API, with a “fixed” configuration template.

PnP API


First, a look at the PnP APIs and how they work.  There are two steps: 1) add a device, and 2) claim the device.


Step one adds the device to the PnP device table.  This can happen in two ways, unplanned (where the device discovers DNA Center), or pre-planned (where the device is inserted into DNA Center).  This step is unchanged from APIC-EM and uses the /import API endpoint.  All that is required is the model of the device and the serial number.

Once the device is part of the PnP table, it can then be claimed.   In the past, the workflow-based API used the /claim API endpoint.  The newer /site-claim API endpoint is now recommended.   This requires a device (from step 1) and a site.  There are optional attributes for image upgrade and configuration template.
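The two steps above can be sketched as plain Python payload builders. This is a minimal illustration, not the dnacentersdk scripts themselves; the function names are hypothetical, but the endpoint paths and payload fields match the examples later in this post.

```python
# Sketch of the two-step PnP flow: build the bodies for
# POST dna/intent/api/v1/onboarding/pnp-device/import (step 1) and
# POST dna/intent/api/v1/onboarding/pnp-device/site-claim (step 2).

def build_import_payload(serial, pid, hostname=None, stack=False):
    """Step 1: add the device to the PnP table (pre-planned)."""
    device = {"serialNumber": serial, "pid": pid, "stack": stack}
    if hostname:
        device["hostname"] = hostname
    return [{"deviceInfo": device}]

def build_site_claim_payload(device_id, site_id, template_id, image_id=None):
    """Step 2: claim the device to a site, with optional image upgrade."""
    payload = {
        "deviceId": device_id,
        "siteId": site_id,
        "type": "Default",
        "configInfo": {"configId": template_id, "configParameters": []},
    }
    # Skip the image upgrade when no image is supplied.
    payload["imageInfo"] = {"skip": image_id is None, "imageId": image_id or ""}
    return payload
```

The deviceId, siteId, templateId, and imageId values come from the lookup calls shown in the rest of this post.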

These steps are seen in the UI.  The first device (“pre-planned”) has been added to DNA Center, but not claimed. The second device was added and claimed to a site.  The source of both devices was “User” which indicates they were pre-planned as opposed to “Network” which indicates an un-planned device.


Using the Scripts


These scripts are available on github. The readme has instructions on installing them. The scripts use the dnacentersdk I described in this post.

The first step is to upload the configuration files as templates.  These should be stored in the “Onboarding Configuration” folder.

$ ./00load_config_files.py --dir work_files/configs
Processing:switch1.cfg: adding NEW:Commiting e156e9e6-653d-4016-85bd-f142ba0659f8
Processing:switch3.cfg: adding NEW:Commiting 9ae1a187-422d-41b9-a363-aafa8724a5b2

The second step is to edit a CSV file containing the devices to be uploaded and their configuration files. This file deliberately contains some errors (a missing config file and a missing image) as examples.

$ cat work_files/devices.csv 
name,serial,pid,siteName,template,image
adam123,12345678902,c9300,Global/AUS/SYD5,switch1.cfg,cat3k_caa-universalk9.16.09.05.SPA.bin
adam124,12345678902,c9300,Global/AUS/SYD5,switch2.cfg,cat3k_caa-universalk9.16.09.05.SPA.bin
adam_bad_image,12345678902,c9300,Global/AUS/SYD5,switch2.cfg,cat3k_caa-universalk9.16.09.10.SPA.bin

The third step is to use the script to upload the devices into DNA Center. The missing configuration and missing image are flagged.

$ ./10_add_and_claim.py --file work_files/devices.csv 
Device:12345678902 name:adam123 siteName:Global/AUS/SYD5 Status:PLANNED
##ERROR adam124,12345678902: Cannot find template:switch2.cfg
##ERROR adam_bad_image,12345678902: Cannot find image:cat3k_caa-universalk9.16.09.10.SPA.bin
adam_bad_image,12345678902,c9300,Global/AUS/SYD5,switch2.cfg,cat3k_caa-universalk9.16.09.10.SPA.bin

This will be reflected in the PnP page in DNA Center.


Under the Covers


Using the SDK abstracts the API.  For those that want to understand the payloads in more detail, here is a deeper dive into the payloads.

Templates

The following API call will get the projectId for the “Onboarding Configuration” folder.

GET dna/intent/api/v1/template-programmer/project?name=Onboarding%20Configuration
The result provides the UUID of the project. It also provides a list of the templates, so it could be used to find a template.   A different call is required to get the template body, as templates are versioned.  The “id” below is the master template “id”.

[
    {
        "name": "Onboarding Configuration",
        "id": "bfbb6134-8b1a-4629-9f5a-435a13dba75a",
        "templates": [

            {
                "name": "switch1.cfg",
                "composite": false,
                "id": "e156e9e6-653d-4016-85bd-f142ba0659f8"
            },

A better way to get the template is to call the template API with a projectId as a query parameter. It is not possible to look up a template by name; the only option is to iterate through the list of results.

GET dna/intent/api/v1/template-programmer/template?projectId=bfbb6134-8b1a-4629-9f5a-435a13dba75a
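Since templates cannot be looked up by name directly, a small helper can iterate over the list returned by this call and pick out the master templateId. This is an illustrative sketch; the response shape follows the JSON examples in this post, and the function name is hypothetical.

```python
# Find the master templateId for a given template name in the list returned
# by GET dna/intent/api/v1/template-programmer/template?projectId=<uuid>.

def find_template_id(templates, name):
    """Return the master templateId for the named template, or None."""
    for template in templates:
        if template.get("name") == name:
            return template.get("templateId")
    return None
```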

Templates have versions. There is the master template Id, as well as an Id for each version. The example below only has one version “id”: “bd7cfeb9-3722-41ee-bf2d-a16a8ea6f23a”

{
        "name": "switch1.cfg",
        "projectName": "Onboarding Configuration",
        "projectId": "bfbb6134-8b1a-4629-9f5a-435a13dba75a",
        "templateId": "e156e9e6-653d-4016-85bd-f142ba0659f8",
        "versionsInfo": [ 
            {
                "id": "bd7cfeb9-3722-41ee-bf2d-a16a8ea6f23a", 
                "author": "admin",
                "version": "1",
                "versionTime": 1590451734078
            }
        ],
        "composite": false
    }

To get the body of the template (to compare SHA hash), use the template API call, for the specific version.

GET dna/intent/api/v1/template-programmer/template/bd7cfeb9-3722-41ee-bf2d-a16a8ea6f23a
This will return the body. Templates apply to a productFamily and softwareType. These will be used when creating or updating templates.

{
    "name": "switch1.cfg",
    "tags": [],
    "author": "admin",
    "deviceTypes": [
        {
            "productFamily": "Switches and Hubs"
        }
    ],
    "softwareType": "IOS-XE",
    "softwareVariant": "XE",
    "templateContent": "hostname switch1\nint g2/0/1\ndescr nice  one\n",
    "templateParams": [],
    "rollbackTemplateParams": [],
    "composite": false,
    "containingTemplates": [],
    "id": "bd7cfeb9-3722-41ee-bf2d-a16a8ea6f23a",
    "createTime": 1590451731775,
    "lastUpdateTime": 1590451731775,
    "parentTemplateId": "e156e9e6-653d-4016-85bd-f142ba0659f8"
}

To add a new template, there are two steps. The template has to be created, then committed. The second step is the same as updating an existing template, which creates a new version. Notice the deviceTypes and softwareType are required.

POST dna/intent/api/v1/template-programmer/project/bfbb6134-8b1a-4629-9f5a-435a13dba75a/template
{
     "deviceTypes": [{"productFamily": "Switches and Hubs"}],
     "name": "switch4.cfg",
     "softwareType": "IOS-XE",
     "templateContent": "hostname switch4\nint g2/0/1\ndescr nice  four\n"
}

This will return a task, which needs to be polled.

{
       "response": {
                 "taskId": "f616ef87-5174-4215-b5c3-71f50197fe72",
                 "url": "/api/v1/task/f616ef87-5174-4215-b5c3-71f50197fe72"
        },
        "version": "1.0"
}

Polling the task

GET dna/intent/api/v1/task/f616ef87-5174-4215-b5c3-71f50197fe72
The status is successful and the templateId is “57371b95-917b-42bd-b700-0d42ba3cdcc2”

{
  "version": "1.0", 
  "response": {
    "username": "admin", 
    "rootId": "f616ef87-5174-4215-b5c3-71f50197fe72", 
    "serviceType": "NCTP", 
    "id": "f616ef87-5174-4215-b5c3-71f50197fe72", 
    "version": 1590468626572, 
    "startTime": 1590468626572, 
    "progress": "Successfully created template with name switch4.cfg", 
    "instanceTenantId": "5d817bf369136f00c74cb23b", 
    "endTime": 1590468626670, 
    "data": "57371b95-917b-42bd-b700-0d42ba3cdcc2", 
    "isError": false
  }
}
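The polling step above can be sketched as a small loop: fetch the task until endTime appears, then return the “data” field (here, the new templateId). This is a hedged sketch, not the SDK’s built-in task handling; the fetch function is injected so the loop itself contains no HTTP specifics.

```python
import time

# Poll GET dna/intent/api/v1/task/<task_id> until the task finishes.
# fetch_task is any callable that returns the parsed task JSON.

def poll_task(fetch_task, task_id, timeout=30, interval=1):
    """Return the task's "data" field once it completes, or raise on error."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = fetch_task(task_id)["response"]
        if "endTime" in task:  # endTime is set only when the task is done
            if task.get("isError"):
                raise RuntimeError(task.get("progress", "task failed"))
            return task.get("data")  # e.g. the created templateId
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not complete in {timeout}s")
```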

The final step is to commit the change to the template.

POST dna/intent/api/v1/template-programmer/template/version
{
  "templateId": "57371b95-917b-42bd-b700-0d42ba3cdcc2"
}

To update an existing template, it is a PUT rather than POST. Again, the deviceTypes and softwareType are required.

PUT dna/intent/api/v1/template-programmer/template
{
 "deviceTypes": [ { "productFamily": "Switches and Hubs" } ],
 "id": "57371b95-917b-42bd-b700-0d42ba3cdcc2",
 "name": "switch4.cfg",
 "softwareType": "IOS-XE",
 "templateContent": "hostname switch4\nint g2/0/1\ndescr nice four **\n"
}

Again, a task is returned, which needs to be polled.

{
  "version": "1.0", 
  "response": {
    "username": "admin", 
    "rootId": "52689b1e-e9b8-4a60-8ae9-a574bb6b451c", 
    "serviceType": "NCTP", 
    "id": "52689b1e-e9b8-4a60-8ae9-a574bb6b451c", 
    "version": 1590470080172, 
    "startTime": 1590470080172, 
    "progress": "Successfully updated template with name switch4.cfg", 
    "instanceTenantId": "5d817bf369136f00c74cb23b", 
    "endTime": 1590470080675, 
    "data": "57371b95-917b-42bd-b700-0d42ba3cdcc2", 
    "isError": false
  }
}

The final step is to commit the change, as when first creating the template.  The UI will show two versions of this template.


Site

To find the siteId, a simple lookup is used, with the name as a query parameter.  This is the fully qualified name of the site.

GET dna/intent/api/v1/site?name=Global/AUS/SYD5
This will return the siteId.

{
  "response" : [ {
    "parentId" : "ace74caf-6d83-425f-b0b6-05faccb29c06",
    "systemGroup" : false,
    "additionalInfo" : [ {
      "nameSpace" : "Location",
      "attributes" : {
        "country" : "Australia",
        "address" : "177 Pacific Highway, North Sydney New South Wales 2060, Australia",
        "latitude" : "-33.837053",
        "addressInheritedFrom" : "d7941b24-72a7-4daf-a433-0cdfc80569bb",
        "type" : "building",
        "longitude" : "151.206266"
      }
    }, {
      "nameSpace" : "ETA",
      "attributes" : {
        "member.etaCapable.direct" : "2",
        "member.etaReady.direct" : "0",
        "member.etaNotReady.direct" : "2",
        "member.etaReadyNotEnabled.direct" : "0",
        "member.etaEnabled.direct" : "0"
      }
    } ],
    "groupTypeList" : [ "SITE" ],
    "name" : "SYD5",
    "instanceTenantId" : "5d817bf369136f00c74cb23b",
    "id" : "d7941b24-72a7-4daf-a433-0cdfc80569bb",
    "siteHierarchy" : "80e81504-0deb-4bfd-8c0c-ea96bb958805/ace74caf-6d83-425f-b0b6-05faccb29c06/d7941b24-72a7-4daf-a433-0cdfc80569bb",
    "siteNameHierarchy" : "Global/AUS/SYD5"
  } ]
}
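The site lookup returns a list under “response”; a small helper can pull out the site UUID for the fully qualified site name. Again, this is an illustrative sketch with a hypothetical function name; the response shape matches the JSON above.

```python
# Extract the site id from the response of
# GET dna/intent/api/v1/site?name=<fully-qualified-site-name>.

def find_site_id(response, site_name_hierarchy):
    """Return the id of the site whose siteNameHierarchy matches, or None."""
    for site in response.get("response", []):
        if site.get("siteNameHierarchy") == site_name_hierarchy:
            return site.get("id")
    return None
```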

Image

To find the imageId for upgrading software, search for the image by imageName. Note that on some platforms this is different from name.

GET dna/intent/api/v1/image/importation?imageName=cat3k_caa-universalk9.16.09.05.SPA.bin
This returns the imageUuid, along with a lot of other information about the image, including supported models.

{
    "response": [
        {
            "imageUuid": "04d69fe0-d826-42e9-82c0-45363a2b6fc7",
            "name": "cat3k_caa-universalk9.16.09.05.SPA.bin",
            "family": "CAT3K_CAA",
            "version": "16.9.5",
            "md5Checksum": "559bda2a74c0a2a52b3aebd7341ff96b",
            "shaCheckSum": "a01d8ab7121e50dc688b9a2a03bca187aab5272516c0df3cb7e261f16a1c8ac355880939fd0c24cc9a79e854985af786c430d9b704925e17808353d70bf923f4",
            "createdTime": "2020-05-26 04:20:42.904",
            "imageType": "SYSTEM_SW",
            "fileSize": "450283034 bytes",
            "imageName": "cat3k_caa-universalk9.16.09.05.SPA.bin",
            "applicationType": "",
            "feature": "",
            "fileServiceId": "94eccf65-a1dd-47ca-b7c4-f5dd1a8cdeb7",
            "isTaggedGolden": false,
            "imageSeries": [
                "Switches and Hubs/Cisco Catalyst 3850 Series Ethernet Stackable Switch",
                "Switches and Hubs/Cisco Catalyst 3650 Series Switches"
            ],
            
Add Device

To add the device, supply a serialNumber and pid. The name is optional. The aaa parameters are not used prior to DNAC 1.3.3.7; they are used to solve an issue with “aaa command authorization”.

POST dna/intent/api/v1/onboarding/pnp-device/import
[
  {
    "deviceInfo": {
      "serialNumber": "12345678902", 
      "aaaCredentials": {
        "username": "", 
        "password": ""
      }, 
      "userSudiSerialNos": [], 
      "hostname": "adam123", 
      "pid": "c9300", 
      "sudiRequired": false, 
      "stack": false
    }
  }
]

The response contains the deviceId, other attributes have been removed for brevity. At this point the device appears in PnP, but is unclaimed.

{
                 "successList": [
                     {
                         "version": 2,
                         "deviceInfo": {
                             "serialNumber": "12345678902",
                             "name": "12345678902",
                             "pid": "c9300",
                             "lastSyncTime": 0,
                             "addedOn": 1590471982430,
                             "lastUpdateOn": 1590471982430,
                             "firstContact": 0,
                             "lastContact": 0,
                             "state": "Unclaimed",

                         "tenantId": "5d817bf369136f00c74cb23b",
                         "id": "5eccad2e29da7c0008613b69"
                     }

Site-Claim

To claim the device to a site, use the siteId, imageId, and templateId (configId) from the earlier steps. Notice the master templateId is used, rather than a specific version; the master resolves to the latest version of the template by default. The type should be “Default”. If you are claiming a stack, the type is “StackSwitch”, and wireless access points use a type of “AccessPoint”.

POST dna/intent/api/v1/onboarding/pnp-device/site-claim
{
  "configInfo": {
    "configId": "e156e9e6-653d-4016-85bd-f142ba0659f8", 
    "configParameters": []
  }, 
  "type": "Default", 
  "siteId": "d7941b24-72a7-4daf-a433-0cdfc80569bb", 
   "deviceId": "5eccad2e29da7c0008613b69", 
  "imageInfo": {
    "skip": false, 
    "imageId": "04d69fe0-d826-42e9-82c0-45363a2b6fc7"
  }
}

The response shows success, and this is reflected in the PnP UI.

{
      "response": "Device Claimed",
      "version": "1.0"
}

More on Stacks


One major innovation in DNAC PnP is support for stack renumbering. Prior to this, it was recommended that stack members be powered on two minutes apart, from top to bottom. This was to ensure a deterministic interface numbering. Stack renumbering is a much better solution to this problem. One of two stack cabling methods can be used, and the serial number of the top-of-stack switch is required.

There are two implications for API calls for the pre-planned workflow. The first is for the add device call. The stack parameter needs to be set to True.

POST dna/intent/api/v1/onboarding/pnp-device/import
[
  {
    "deviceInfo": {
      "serialNumber": "12345678902", 
      "aaaCredentials": {
        "username": "", 
        "password": ""
      }, 
      "userSudiSerialNos": [], 
      "hostname": "adam123", 
      "pid": "c9300", 
      "sudiRequired": false, 
      "stack": true 
    }
  }
]

The second is the site-claim. The type needs to be changed to “StackSwitch” and two extra attributes are required.

Note: The topOfStackSerialNumber has to be the same as the serial number used to add the device. In other words, add the device with the serial number you intend to use for the top of stack. It does not matter which switch in the stack initiates contact, as the stack will provide all serial numbers to DNAC.

POST dna/intent/api/v1/onboarding/pnp-device/site-claim
{
  "configInfo": {
    "configId": "e156e9e6-653d-4016-85bd-f142ba0659f8", 
    "configParameters": []
  }, 
  "type": "StackSwitch",
  "topOfStackSerialNumber":"12345678902",
  "cablingScheme":"1A", 
 
  "siteId": "d7941b24-72a7-4daf-a433-0cdfc80569bb", 
  "deviceId": "5eccad2e29da7c0008613b69", 
  "imageInfo": {
    "skip": false, 
    "imageId": "04d69fe0-d826-42e9-82c0-45363a2b6fc7"
  }
}