Thursday, 14 December 2023

The Technology That’s Remaking OU Health into a Top-Tier Medical Center


I hold daily status meetings with various groups within our technology team, and the question almost always drifts to: “What’s today’s challenge?” Sometimes that challenge might be, “We can’t print at one of our ambulatory care facilities,” or “Today, we can’t send diagnostic images to our remote radiologists.” These meetings help us focus our attention, and as the CTO of OU Health in Oklahoma City, my job is to resolve the issues that surface in them and to keep those issues from recurring.

To do so, we needed to tackle the root of the problem. This, along with our desire to replace our electronic health record and revenue cycle system, contributed to OU Health’s decision to completely overhaul our IT infrastructure in support of our long-term organizational needs.

OU Health strives to bring innovation to our patients. As an academic health system, we operate one of 71 National Cancer Institute-designated cancer centers in the country, Oklahoma’s only Level I trauma center and the state’s highest level NICU, offering high-quality patient care and running clinical trials leading to exciting new treatments. More than solving our daily headaches, our IT overhaul will fundamentally transform how we manage our infrastructure and administer the enterprise and clinical systems we use to support our healthcare professionals and patients.

A complete revamp of a health system’s IT infrastructure is a daunting task. To use a well-worn cliché, it is like trying to work on the engine of a moving train. The process has required dedication, careful planning, the work of my team, and the support of technology partners like Cisco to assemble and roll out a solution across OU Health’s sites.

A Fresh Start, a Greenfield Network, and Some Limitations


I have had the privilege of working in healthcare IT for over 13 years. My previous experience includes serving as the VP of Technology Architecture at Cablevision for seven years and two years at Discovery as the SVP of Enterprise Architecture, where I gained a wealth of knowledge in telecommunications and entertainment. Following a six-month sabbatical, I had the opportunity to return to my previous role, but I decided to use my expertise to assist others in a much more impactful way. Watching my wife struggle for years with her organization’s IT as a family practice provider and educator made me realize the difficulties that arise when dealing with healthcare IT systems. Thanks to a friend I had worked with earlier in my career, I joined a healthcare consulting company, launching my career in healthcare IT to help physicians and patients achieve the best possible outcomes in medical care.

OU Health officially launched in July 2021, following a historic merger that combined the University of Oklahoma College of Medicine faculty practice and OU Medicine, Inc. (sole member, University Hospitals Trust) to create OU Health — Oklahoma’s first fully integrated academic health system. The merger aligned the OU Health clinical enterprise with national best practices across the healthcare industry and enabled the hospitals and clinics to become one unified and cohesive organization.

Since my arrival at OU Health in March of 2022, I have worked closely with our IT leadership team on the Epic migration project. This initiative moved OU Health to a new EHR system. To achieve this, we deployed a greenfield solution, which involved setting up new networks, systems, data centers and applications, while keeping our legacy systems running.

Unfortunately, our environment was not in the best shape. It featured outdated and unpatched equipment, overlapping solutions, legacy code, and more, which caused numerous challenges. Our network was also very slow: it took 15 to 30 minutes for a radiologist to download X-rays and CAT scans at our remote locations.

This hindered workflows and caused frustration among our clinicians and hospital staff. They had to call us whenever something went wrong, because we had no way to proactively monitor and restore our systems. When a link went down, we often didn’t know until a user reported it. It was clear that an overhaul was necessary so that our healthcare professionals could focus on caring for their patients instead of worrying about technical issues.

We Wanted Redundancy, Resilience, and Performance—That Meant Cisco


Healthcare organizations are somewhat conservative when procuring IT. We don’t look for the newest solution, nor do we look for the cheapest. We don’t cut costs, because patients’ lives are at stake. Instead, we look for the most advanced tried-and-true systems. We take this approach when selecting lab equipment, imaging and diagnostic systems, surgical supplies, and more.

At OU Health, we have a long-standing relationship with Cisco, and our lead engineer holds CCIE Enterprise Infrastructure certification. Our internal project manager also brings decades of experience in managing large-scale Cisco deployments. We all know that Cisco can deliver highly performant, redundant and resilient infrastructure. However, my job requires me to be objective, so I attend events like Gartner Summits and the annual HIMSS Global Health Conference & Exhibition to see what’s out there. I have an excellent grasp of the technology landscape, and I’ve yet to find a partner that can deliver on its promises like Cisco.

Hospitals are 24/7 institutions, and hospital infrastructure must fully support a never-down scenario.

When we created the specifications for our new environment, we looked at three things: high redundancy, high resilience, and high performance. OU Health, like most healthcare organizations, runs 24 hours a day, seven days a week, and our infrastructure must fully support a never-down scenario. When it came time to build our new environment, we didn’t ask, “Why Cisco?” The real question was, “Why not Cisco?”


Redefining Our Network with Software-Defined Networking


When it came time to pitch our new infrastructure, the team worked with Cisco’s network architects, external partners, and our internal lead architects. We put together a funding request and presented it to our board. We expected our executives to approve less than our budgeted request, but instead we were given the green light to build everything we had asked for – a state-of-the-art network and system environment that would put OU Health on the map as a top-tier medical center.

We built our new network on Cisco technology. At its core is Cisco ACI (Application Centric Infrastructure), a software-defined networking solution that helps us segment our network. We built redundant high-speed links throughout our Wide Area Network (WAN) and multiple paths connecting our core network to our hospitals, clinics, and ambulatory systems. Multiple routers handle our software-defined network and segmentation, so if a router or segment fails, we divert traffic to an alternative path.

We use Cisco ACI to route traffic to specific destinations within our network. A great example is lab results. Our lab equipment doesn’t communicate with the outside world, but our technicians must send results to our EMR. So, we’ve segmented the network to transfer results only to specific servers that then upload them to our records systems.
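In ACI terms, segmentation like this is expressed as tenants, endpoint groups, and contracts pushed to the APIC controller as JSON over its REST API. As a rough, purely illustrative sketch (the tenant, contract, and filter names below are hypothetical, and a real deployment would POST the payload to an APIC rather than just inspect it):

```python
import json

# Hypothetical ACI policy: allow lab devices to talk only to the
# results servers, and only over HTTPS. The object class names
# (fvTenant, vzBrCP, vzSubj) follow the ACI management information
# model; the tenant/contract/filter names are made up for this sketch.
policy = {
    "fvTenant": {
        "attributes": {"name": "lab-systems"},
        "children": [
            {"vzBrCP": {  # contract: the only permitted communication
                "attributes": {"name": "lab-to-results"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "https-only"},
                        "children": [
                            {"vzRsSubjFiltAtt": {
                                "attributes": {"tnVzFilterName": "https"}
                            }}
                        ],
                    }}
                ],
            }}
        ],
    }
}

# Serialize the policy as it would be sent to the APIC REST API.
payload = json.dumps(policy)
print("vzBrCP" in payload)  # the contract object is present in the payload
```

Traffic between endpoint groups that has no matching contract is simply dropped, which is what confines lab results to the designated servers.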

Protecting our patients includes protecting their healthcare data, so we prioritize the security around our patients’ protected health information (PHI). We used ACI to create a firewalled zone for PHI and other sensitive data per HIPAA regulations. Users can only access that environment when they’re performing a transaction requiring that information.

We also use Cisco UCS servers for scalable efficiency and agility, Cisco Identity Services Engine (ISE) for endpoint management, and we go end-to-end on Cisco wireless solutions, consisting of Cisco Catalyst 9130AX and 9166 access points and 9800-80 controllers. In addition, we adopted Cisco VDI to integrate some of our legacy systems (running on ancient machines) into our new infrastructure. Virtualizing these allows us to keep them running until we’re able to replace the systems without disrupting care.

Migrating Our Data Centers to Ensure IT Transparency


Our three new data centers carry the load of our new network. Two of them run active/active and mirror each other, splitting our workloads with each facility running at 50% capacity. Our third data center is for disaster recovery (DR) and has redundant links to all our hospitals, clinics, and ambulatory systems. Should the unthinkable happen and our primary data centers fail, the third data center will ensure we remain operational.

One of our biggest challenges and opportunities was migrating our systems and infrastructure to new, more sustainable data centers. Instead of building our own on-prem data centers, we decided to partner with a top-tier colocation provider that could host our data centers and dramatically lower our footprint. We partnered with TierPoint and were one of the first clients at their new facility, which was built with efficiency in mind: the power, cooling, HVAC, and building materials were all architected to reduce its carbon footprint. We benefit from robust redundancy and backup features, and the energy-efficient technology helps lower our operating expenses while protecting the environment. Moving our data centers off-prem was one of our best business and technology decisions.

On June 3, 2023, OU Health went live with Epic and our new infrastructure. We had already moved to our new primary network in March 2023, but this was the final test. We are already reaping the rewards of our new Cisco infrastructure. We’re no longer fielding complaints about slow Wi-Fi speeds, connection failures, network segment outages, and VPN issues. Our staff can access what they need quickly, getting patients what they need faster. I look forward to sitting down with our care teams to find ways to expand, innovate, and build on our new platform.

Cisco Is Helping OU Health Elevate Our Organization


Before long, my IT team will be able to leverage our new Cisco network to enable and execute OU Health’s business vision. We can integrate new departments, locations, and facilities into our network faster because we anticipated the need to build more WAN connections and have a comprehensive map of both current and future state. In the past, every network addition was a one-off event. But now, we can expand our platform as needed to add powerful new applications like data analytics and population health management (PHM) tools to aggregate patient information across multiple systems and technologies. Now that we have rolled out Epic, we have a solid foundation to help us push the envelope of quality patient care and cutting-edge research.

The best minds gravitate to organizations that let them add value by supporting a business vision instead of fixing things.

Our new Cisco infrastructure is a valuable retention and recruitment tool. Clinicians and researchers also want to work in high-tech environments. They would prefer user-friendly systems over struggling with electronic forms, drowning in endless emails, or waiting hours for medical images, test results, and trial and patient data. When they have the most efficient tools and IT works flawlessly, they can focus on their patients and research.

OU Health is on a mission to elevate and transform our organization into a top-tier academic health system. Our leadership sees technology as key to delivering on our vision and business strategy. We are expanding our services and research initiatives and will continue to innovate in diabetes and cancer care, pediatrics, and geriatrics. We want to make a difference in Oklahoma and beyond by bringing the best medical care to the populations we serve.

Source: cisco.com

Tuesday, 12 December 2023

Bringing Simplicity to Security: The Journey of the Cisco Security Cloud


In June of 2022 at the RSA Conference, we announced our vision for the Cisco Security Cloud Platform. We set out to provide an integrated experience to securely connect people and devices everywhere to applications and data anywhere. We focused on providing an open platform for threat prevention, detection, response, and remediation capabilities at scale. Since the announcement, we’ve been working hard to deliver, and the core of what we’ve accomplished has been rooted in how we can bring simplicity to security, and simplicity for our customers.


Our platform vision was founded on a set of key design goals: cloud-native, multicloud, unified, simplified, AI-first, and open and extensible. Here’s how we have executed on our vision since we launched the Cisco Security Cloud:

  • We delivered Cisco Secure Access, a cloud-delivered security service edge (SSE) solution, grounded in zero trust, that provides our customers exceptional user experience and protected access from any device to anywhere.
  • We improved zero-trust functionality with an integrated client experience (Secure Client) and industry-first partnerships with Apple and Samsung, using modern protocols to deliver user-friendly, zero-trust access to private applications and improved network traffic visibility.
  • We delivered our Extended Detection and Response (XDR) solution with first-of-its-kind capabilities for automatically recovering from ransomware attacks, which cost businesses billions of dollars annually.
  • We have made significant investments in advanced technologies and top talent in strategic areas like multicloud defense, artificial intelligence, and identity with the acquisitions of Valtix, ArmorBlox, and Oort.
  • We simplified how customers can procure tightly integrated solutions from us with our first set of Security Suites (User, Cloud, and Breach Protection) that are powered by AI, built on zero trust principles, and delivered by our Security Cloud platform.
  • We have taken a major step in making artificial intelligence pervasive in the Security Cloud with the new Cisco AI Assistant for Security and the introduction of our AI Assistant for Firewall Policy. Managing, updating, and deploying policies is one of the most complex and time-consuming tasks in security, and one fraught with human error. Our AI Assistant tackles the complexity of setting and maintaining these policies and firewall rules.

Our goal continues to be lifting the complexity tax for customers


While I’m certainly proud of the tremendous progress we have made in the last two years, I know there’s still work to be done. It’s a well-known pattern within the security industry: every time a new problem appears, a cluster of security companies springs up to solve it. This whack-a-mole approach can certainly challenge efficiency but, more importantly, it puts the burden on the customer to constantly deploy a new vendor and a new tool, and to manage siloed data. I refer to this as customers paying the “complexity tax”.

This has created fatigue among security practitioners and increased interdependencies, blind spots, and unpredictability as evidenced by the eye-opening data from Gartner showing that 75% of organizations today are pursuing security vendor consolidation. Customers should not have to spend time deciphering what products they need in order to solve their specific security challenges. That should be our job and I take this responsibility to heart.

What’s crucial to our success is listening to the voice of our customers, which is a powerful force in helping us steer in the right direction. We always appreciate the candid feedback we get from customers. A couple of recent reminders from customers include:

  • Customers value things that will minimize disruption when migrating to a new solution or platform. They need our help to simplify and make this process easier through features like the Cisco Secure Firewall Migration Tool and the Cisco AI Assistant for Security.
  • We must be mindful that there are associated operational and business costs, and that vendor or software consolidation may not always be as easy as the technology migration itself – for example, factoring in the cost of existing software licenses for decommissioned products.
  • Hybrid cloud is the de facto operating model for companies today, and security is no exception. We must continue to deliver the benefits of the cloud operating model and SaaS-like functionality to on-premises security environments.

The Road Ahead


As we mentioned at launch, fulfilling the Security Cloud vision is a multi-year commitment and journey. From the Cisco Security Engineering standpoint, our go-forward strategy and priorities include:

  • A major priority for us is to optimize the user experience and simplify management across our portfolio for the features and products we have shipped. We will continue to focus on delivering innovation from a customer-centric approach, shifting our focus from deliverables to outcomes: the business value we can provide and the problems we can solve.
  • Working closely with our customers to prioritize customer-found defects or security vulnerabilities as we develop new features. In general, security efficacy continues to be one of our top objectives for Cisco Security engineering.
  • Harnessing the incredible power and potential of generative AI technology to revolutionize threat response and simplify security policy management. Solving these problems is one of the first “killer applications” for AI, and we’re only scratching the surface of what we can do with AI-driven innovation.
  • With Oort’s identity-centric technology, we will enhance user context telemetry and incorporate their capabilities across our portfolio, including our Duo Identity Access Management (IAM) technology and Extended Detection and Response (XDR) portfolios.
  • Leveraging our cloud-native expertise and decades of on-premises experience to reimagine and redefine how security appliances are deployed and used.

We are making big moves, and our Cisco Security Cloud journey continues. Our vision is realized through innovation, and innovation comes from new technology, new concepts for mature technologies, and new ways to build, buy, or use our capabilities. Stay tuned for more news from us as we continue to deliver some of the most exciting innovation for Cisco and the security industry at large.

Source: cisco.com

Saturday, 9 December 2023

How Cisco Black Belt Academy Learns from Our Learners

Cisco Black Belt Academy offers the latest in technology enablement to our partners, distributors, and Cisco employees. With ever-changing industry trends and market dynamics, an in-depth understanding of end-users’ requirements is of supreme importance, and we strive to offer the best in Partner Experience.

Learning from our learners


Listen-Learn-Act-Repeat is a never-ending cycle at Black Belt Academy. We endeavor to engage our partners in a variety of ways and to offer Black Belt Academy courses that help them succeed. With an exhaustive enablement catalogue, we make sure we stay in sync with what our learners want. Our Partner Experience team works tirelessly behind the scenes to support learners when they need help.


In principle, we take a symbiotic approach to the way we do business: we are only as good as our learners on any given day.

Constantly refining based on learners’ input


We have identified specific touch points to include our partners’ input in refining our Learning Plans:

Voice of the Partner: This initiative is part of Global Partner Routes and Sales. It is designed to understand the perspective of our consumers across all Cisco verticals. Such input helps us innovate and upgrade our offerings.

Partner Listening: Taking a cue from the Voice of the Partner initiative, we have set up a Partner Listening activity that caters specifically to Black Belt partners and distributors. We take pride in the fact that the initiative has increased our engagement levels with our consumers. Our platform refinement and the revamped framework of our learning courses exemplify our commitment to developing user-oriented products.

Review and Feedback: Finally, we have a Review/Feedback option attached to each of our courses. It may be a conventional tool, but it is nevertheless very effective at resolving escalations early.

Your voice, our actions


The feedback received is diligently addressed by our entire Black Belt Team. We make deliberate efforts to accommodate the requested changes from our partners, especially those that affect our overall engagement and user experience. Our Annual Refresh is dedicated to integrating these changes into both the Platform Experience and the curated content. The Black Belt Content BDMs collaborate to enhance the quality of assets and the context of trainings each year, ensuring we provide what you need in the manner you need it.

An experience based on learners’ needs


We take pride in being among the handful of organizations whose product orientation is based on customers’ wants. Through the implementation of both proactive and reactive measures, we ensure our learners have the best experience. As you learn from us, rest assured that we are continuously learning from you.

Source: cisco.com

Wednesday, 6 December 2023

Why You Should Pass the Cisco 350-701 SCOR Exam

The CCNP Security credential confirms your expertise in security solutions. Achieving the CCNP Security certification involves successfully completing two exams: one covering core security technologies and another focusing on a security concentration of your choosing. This article will concentrate on the core examination known as "Implementing and Operating Cisco Security Core Technologies" (350-701 SCOR).

What Is the Cisco 350-701 SCOR Exam?

The 350-701 SCOR exam by Cisco assesses a wide range of competencies, encompassing network, cloud, and content security, as well as endpoint protection and detection. It also evaluates skills in ensuring secure network access, visibility, and enforcement.

The SCOR 350-701 exam, titled "Implementing and Operating Cisco Security Core Technologies v1.0," lasts 120 minutes and includes 90 to 110 questions. It is linked to certifications such as CCNP Security, Cisco Certified Specialist - Security Core, and CCIE Security. The exam covers the following objectives:

  • Security Concepts (25%)
  • Network Security (20%)
  • Securing the Cloud (15%)
  • Content Security (15%)
  • Endpoint Protection and Detection (10%)
  • Secure Network Access, Visibility, and Enforcement (15%)

Tips and Tricks to Pass the Cisco 350-701 SCOR Exam

When dealing with Cisco exams, it's essential to be clever and strategic. Here are some tips and techniques you can employ to excel in your Cisco 350-701 exam:

1. Have a Good Grasp of the Cisco 350-701 SCOR Exam Content

Initially, it's crucial to have a well-defined understanding of the examination format. You must comprehend the expectations placed on you, enabling you to confidently provide the desired responses without hesitating among seemingly comparable choices.

2. Familiarize Yourself With the Exam Topics

Gaining insight into the goals of the Cisco SCOR 350-701 exam can be highly advantageous. It allows you to identify the key concepts within the course, enabling a more concentrated effort to acquire expertise in those specific areas.

3. Develop a Study Schedule

Having a study schedule is crucial, as it enhances organization and ensures comprehensive coverage. It provides a clear overview of the time available before the exam, allowing you to determine the necessary study and practice duration.

4. Perform Cisco 350-701 SCOR Practice Exams

Engaging in practice exams assists in identifying areas of deficiency, areas for improvement, and whether there's a need to improve your speed. You can access dependable Cisco 350-701 SCOR practice exams on the nwexam website. Repeat the practice sessions, pinpoint your weaker areas, monitor your results, and ultimately build confidence in your knowledge and skills.

5. Engage in Online Forums

Numerous online communities are specifically focused on Cisco certifications and exams. By becoming a part of these communities, you can connect with individuals possessing relevant experience or working as professionals in the field. Their insights and recommendations will assist you in steering clear of errors and optimizing your work efficiency.

6. Brush Up on Your Knowledge Right Before the Exam

Having a concise set of notes that you can review just before the exam is beneficial. This aids in activating your memory and bringing essential knowledge to the forefront of your mind, saving valuable time that might otherwise be spent trying to recall information.

7. Strategies for Multiple-Choice Questions

Multiple-choice strategies are beneficial when you're uncertain about the correct answer. For instance, the method of eliminating incorrect options can be effective. It's also advisable to skip questions that are challenging, proceeding to the others without spending excessive time on them. Complete the remaining Cisco 350-701 questions and return to the challenging ones later.

Why Should You Pass the Cisco 350-701 SCOR Exam?

Examinations are commonly undertaken to acquire the knowledge and skills necessary to address real challenges. Successfully completing the Cisco 350-701 SCOR exam offers more than just that – it grants the CCNP Security certification and additional advantages, including:

1. Set Yourself Apart From the Crowd

The job landscape for IT professionals is intensely competitive. At the same time, hiring managers are compelled to seek exceptionally skilled candidates. Consequently, individuals who have demonstrated dedication and commitment to their careers through examinations and certifications stand out. Passing the Cisco 350-701 SCOR exam also signifies your enthusiasm and practical expertise in your professional domain.

2. Official Validation

Consider the perspective of the hiring manager: asserting your proficiency in network security technologies through words alone may not be highly persuasive. However, when your resume is backed by an industry-standard certification from a reputable vendor, additional explanations become unnecessary. Cisco's esteemed reputation in the networking field goes a long way toward securing a job.

3. Showcase Your Professional Relevance

Many employers prefer to recruit versatile professionals capable of undertaking diverse responsibilities within a company. Successfully completing the Cisco 350-701 SCOR exam demonstrates your accurate understanding of workplace technologies and your capacity to contribute to organizational empowerment. In essence, certification assures your employer that your skill set aligns with the requirements of their job position.

4. Boost Your Earnings

Attaining the CCNP Security certification opens doors to lucrative opportunities for increased earnings. You also become eligible for job positions that offer higher salaries than those available to non-certified professionals.

5. Propel Your Career Advancement Swiftly

If you've been aiming for promotions within your company, obtaining the CCNP Security certification can be instrumental in reaching even managerial positions. Routing and switching technologies are widely deployed in many organizations because they provide secure communication and data sharing.

6. Reinforce Your Confidence

Positions in networking are typically hands-on, demanding consistent performance. Successfully completing the Cisco 350-701 SCOR exam instills confidence in executing tasks, as the acquired skills provide a robust comprehension of network security. Certifications also hold significance for employers during the hiring process, serving as proof that they have selected a qualified and capable professional.

Conclusion

Ascending to higher levels and achieving your aspirations in the IT field can be challenging without certifications validating your capabilities. Although various organizations provide numerous certifications, it's crucial to identify the one that aligns most effectively with your objectives. Otherwise, the investment of time and money could prove futile. Act promptly and secure your Cisco 350-701 SCOR certification using the best available resources tailored for you.

Tuesday, 5 December 2023

Integrated Industrial Edge Compute

Predicting the future of new technology is often a gamble. Predicting the path of a massive locomotive on a railway track, however, is easy. The future of edge compute is more like the locomotive: predictable. It is already impacting the energy and mining industry today, and there’s much more to come in the next few years.

Let’s start with some general enterprise market numbers for context. In early 2023, Grand View Research identified that the edge compute market had grown from $1.9B in 2020 to $11.24B in 2022. Their research predicts an exponential growth curve that will continue at a 37.9% compound annual growth rate and reach $155.9B in 2030. That’s a very big number – almost a 100X increase in 10 years. It’s safe to say this technology will touch almost every enterprise.

Market Growth Projection

Industry Trends


These edge compute numbers may explain my boldness, but the business value of specific outcomes is what drives this growth in energy and mining. The following trends provide fuel for this growth in the near term.

Digital Instrumentation

I’m often reminded that digital instrumentation has been around for decades in process control. In spite of that history, a surprising number of instruments are still read manually and entered by hand into computer systems. This is changing. The use of centralized operations centers and remote experts raises the need for visibility across all process elements from remote tools and dashboards. The move to more complete digital instrumentation is making the compute and data infrastructure critical to operations.

Cloud Challenges

Almost everyone has drastically shrunk their corporate data center footprint and moved their compute functions to the cloud. This approach continues to be problematic for many operational environments that struggle to get reliable connectivity. Even with the improvement of today’s connectivity options, the risk of failure is still too high for critical operations. In addition to the reliability risk, the latency for certain operations is also too high for cloud services. These two factors make compute at the edge an important consideration.

Artificial Intelligence

In 2023, it seems like every conversation or publication must contain a reference to Artificial Intelligence (AI), and for good reason. AI can cut through the noise and focus our attention on the critical data points that affect meaningful business outcomes. In many cases the resulting algorithms are quite simple and can be deployed in lightweight container apps at the edge.

Container Apps at the Cisco Edge


AI is one of multiple use cases that benefit from container-based compute at the edge. Cisco’s container feature (IOx) directly addresses this trend today in energy and mining companies. Here are a few examples of edge software that address operating outcomes today. Each software solution has an instance running at the edge, in a container, on a Cisco router or switch.
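As a purely illustrative sketch of what hosting such a container involves, deploying an IOx app on a Catalyst switch running IOS-XE generally follows a pattern like the one below. The app name, VLAN, and package path are placeholders, and exact commands vary by platform and software release:

```
! Define the app and attach it to the network (configuration mode)
configure terminal
 app-hosting appid edge-app
  app-vnic AppGigabitEthernet trunk
   vlan 40 guest-interface 0
 end

! Install, activate, and start the container (exec mode)
app-hosting install appid edge-app package flash:edge-app.tar
app-hosting activate appid edge-app
app-hosting start appid edge-app

! Verify the running container
show app-hosting list
```

The key point is that the container runs on compute resources already built into the router or switch, so no separate edge server is required.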

    Cisco Cyber Vision

    What if your network’s routers and switches could give you visibility into the OT inventory on your network and identify any security anomalies that occur? Cisco’s Cyber Vision can provide this visibility from your Cisco infrastructure. The Cyber Vision agents that monitor OT traffic and describe it in metadata for central analysis live in containers on Cisco routers and switches. There is no need to invest in separate monitoring devices and an expensive backhaul network to reroute that traffic for analysis. It’s built in.

    Integrated Industrial Edge Compute
    Cyber Vision Architecture

    Industrial Network Solutions

    What if you could implement a small SCADA function in a router or switch that lives in a small cabinet or pumphouse? Industrial Network Solutions has integrated multiple container-based agents that can perform all needed functions from a container on a router or switch. This avoids the old model of putting expensive purpose-built boxes at every small site. It puts SCADA, with Ignition Edge, Node-RED, and others, into locations where dedicated SCADA devices were not feasible before.

    Cheetah Networks

    What if you could objectively measure the communication experience at your remote site? A small container app from Cheetah Networks gives you visibility into modem metrics and other local data points that were difficult and cumbersome to acquire and manage with other solutions. As you might have guessed, this container app can live in Cisco’s routers and switches without additional compute hardware.

    Cisco Edge Intelligence

    What if you need customized data management? Sometimes data acquisition requires a more flexible tool that can read data locally, normalize it, and then send it to a central server as required. Edge Intelligence from Cisco is one such tool; it’s highly programmable and comes with a wide variety of connectors and normalization capabilities. As with the previous examples, it lives in a container app on Cisco’s routers and switches without additional compute hardware.
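    The read-normalize-forward pattern is easy to picture. The sketch below shows only the normalization step, in plain Python; the raw record shape, field names, and psi-to-kPa conversion are illustrative assumptions, not Edge Intelligence APIs:

```python
def normalize(raw):
    """Map a raw field-device reading onto a canonical schema.

    The input keys ("dev", "ts", "press_psi", "temp_c") are a hypothetical
    device payload; the output schema is likewise illustrative.
    """
    return {
        "asset_id": raw["dev"],
        "timestamp": raw["ts"],
        # convert pounds per square inch to kilopascals
        "pressure_kpa": round(raw["press_psi"] * 6.894757, 2),
        "temperature_c": raw["temp_c"],
    }
```

    Running a step like this at the edge means every site reports in the same units and schema, no matter which vendor’s instrument produced the reading.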

    And More


    These are just a few of the apps that have found a home in the small container space on Cisco routers and switches. The sweet spot for this approach is a small site that can’t justify a dedicated compute platform or an app that is tightly associated with network traffic on the router or switch.

    The edge compute trend is coming like a freight train and needs to be assessed across every enterprise. Before investing in a dedicated edge compute platform for every possible location, take a look at the capabilities already built into your network infrastructure. You may be surprised by the capability that’s already there.

    Source: cisco.com

    Saturday, 2 December 2023

    Effortless API Management: Exploring Meraki’s New Dashboard Page for Developers

    A Dashboard Designed for Developers


    APIs serve as the bridges that enable different software systems to communicate, facilitating the flow of data and functionality. For developers, APIs are the foundation upon which they build, innovate, and extend the capabilities of their applications. It’s only fitting that the tools they use to manage these APIs should be just as robust and user-friendly.

    That’s where Meraki’s API & Webhook Management Page steps in. This dedicated interface is a testament to Meraki’s commitment to creating a seamless developer experience. It’s not just another feature; it’s a reflection of the understanding that developers need practical solutions to handle the complexities of APIs effectively.


    Simplifying API Key Management


    One of the features API developers will appreciate most is simplified API key management. With just a few clicks, developers can create and revoke the API keys they need. From this new page, you can easily manage the keys associated with the current Dashboard user account.
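    Once a key exists, it is sent with every API request. A minimal sketch using only the Python standard library; the bearer-token header and base URL follow Meraki’s documented v1 API, but treat the details as something to verify against the official docs:

```python
import json
import urllib.request

BASE_URL = "https://api.meraki.com/api/v1"

def meraki_request(api_key, path):
    """Build an authenticated GET request for the Meraki Dashboard API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={
            # Meraki accepts the Dashboard API key as a bearer token
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
    )

def list_organizations(api_key):
    """Fetch the organizations visible to this API key."""
    with urllib.request.urlopen(meraki_request(api_key, "/organizations"),
                                timeout=10) as resp:
        return json.load(resp)
```

    Revoking a key from the new page immediately invalidates requests like these, which is exactly why easy create/revoke matters for key hygiene.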


    Streamlining Webhook Setup


    Webhooks have become an integral part of modern application development, allowing systems to react to events in real-time. The new UI offers a separate section to manage your webhook receivers, allowing you to:

    • Create webhook receivers across your various networks
    • Assign payload templates to integrate with your webhook receiver
    • Create a custom template in the payload template editor and test your configured webhooks
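    Programmatically, creating a receiver maps to a single API call. The sketch below builds the request body in Python; the endpoint path follows Meraki’s public v1 API, while the template ID shown is a placeholder you would look up from your own Dashboard:

```python
import json
import urllib.request

def webhook_receiver_payload(name, url, shared_secret, template_id="wpt_00001"):
    """Request body for POST /networks/{networkId}/webhooks/httpServers.

    "wpt_00001" is used here as a placeholder template ID; look up real
    IDs from the payload templates endpoint of your own organization.
    """
    return {
        "name": name,
        "url": url,
        "sharedSecret": shared_secret,
        "payloadTemplate": {"payloadTemplateId": template_id},
    }

def create_webhook_receiver(api_key, network_id, payload):
    """Send the creation request to the Meraki Dashboard API."""
    req = urllib.request.Request(
        f"https://api.meraki.com/api/v1/networks/{network_id}/webhooks/httpServers",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

    The new UI does the same work behind a form, which makes it a convenient place to prototype a receiver before automating it.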


    External services often demand specific headers or body properties before they will accept a webhook. Templates serve as a means to incorporate and modify these webhook properties to suit a particular service’s requirements. Use the integrated template editor to craft and evaluate your personalized webhook integrations. Develop custom webhook templates to:

    • Establish custom headers to enable adaptable security options.
    • Experiment with different data types, like scenarios involving an access point going offline or a camera detecting motion, for testing purposes.


    Connecting applications and services via webhooks has never been easier, and developers can do it with confidence, knowing that the shared secret for webhook receivers is handled securely.
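    On the receiving side, that shared secret is what lets your service reject spoofed alerts. A minimal validation sketch in Python, assuming, as Meraki webhook payloads do, that the configured secret arrives as a sharedSecret field in the JSON body:

```python
import hmac
import json

def is_authentic(body_bytes, expected_secret):
    """Check a webhook body's sharedSecret against the configured value.

    Uses a constant-time comparison so the check doesn't leak timing
    information about the secret.
    """
    try:
        payload = json.loads(body_bytes)
    except ValueError:
        return False  # not valid JSON; reject outright
    received = payload.get("sharedSecret", "")
    return hmac.compare_digest(received, expected_secret)
```

    Dropping any request that fails this check is cheap insurance before your receiver acts on an alert.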

    Access to Essential Documentation and Community Resources


    Every developer understands the value of having comprehensive documentation and access to a supportive community. Meraki’s API & Webhook Management Dashboard Page goes a step further by providing quick links to essential documentation and community resources. This means that developers can quickly find the information they need to troubleshoot issues, explore new features, and collaborate with a like-minded community.

    What’s on the Horizon?


    I hope this blog post has given you a glimpse of the incredible features that Meraki’s API & Webhook Management Page brings to the table. But the innovation doesn’t stop here. Meraki’s commitment to an “API-first” approach means that new API endpoints for generating and revoking API keys will be available very soon, providing developers with even more control over their API integration.

    Additionally, Meraki places a strong emphasis on security, aligning with API key security best practices. The sharedSecret for webhook receivers will no longer be visible after setting, enhancing the overall security of your API connections.

    But we’re not stopping there. The roadmap ahead is filled with exciting updates and enhancements, promising to make the Meraki Dashboard an even more powerful tool for developers.

    Source: cisco.com

    Thursday, 30 November 2023

    Making Your First Terraform File Doesn’t Have to Be Scary


    For the past several years, I’ve tried to give at least one Terraform-centric session at Cisco Live. That’s because they’re fun and make for awesome demos. What’s a technical talk without a demo? But I also see huge crowds every time I talk about Terraform. While I wasn’t an economics major, I do know if demand is this large, we need a larger supply!

    That’s why I decided to step back and focus on the basics of Terraform and its operation. The configuration we apply won’t be anything complex, but it should explain the basic structures and requirements for Terraform to do its thing against a single piece of infrastructure, Cisco ACI. Don’t worry if you’re not an ACI expert; deep ACI knowledge isn’t required for what we’ll be configuring.

    The HCL File: What Terraform will configure


    A basic Terraform configuration file is written in Hashicorp Configuration Language (HCL). This domain-specific language (DSL) is similar in structure to JSON, but it adds components for things like control structures, large configuration blocks, and intuitive variable assignments (rather than simple key-value pairs).

    At the top of every Terraform HCL file, we must declare the providers we’ll need to gather from the Terraform registry. A provider supplies the linkage between the Terraform binary and the endpoint to be configured by defining what can be configured and what the API endpoints and the data payloads should look like. In our example, we’ll only need to gather the ACI provider, which is defined like this:

    terraform {

      required_providers {

        aci = {

    source = "CiscoDevNet/aci"

        }

      }

    }

    Once you declare the required providers, you have to tell Terraform how to connect to the ACI fabric, which we do through the provider-specific configuration block:

    provider "aci" {

    username = "admin"

    password = "C1sco12345"

    url      = "https://10.10.20.14"

    insecure = true

    }

    Notice the name we gave the ACI provider (aci) in the terraform configuration block matches the declaration for the provider configuration. We’re telling Terraform that the provider we named aci should use the following configuration to connect to the controller. Also, note the username, password, url, and insecure configuration options are nested within curly braces { }. This indicates to Terraform that all of this configuration should be grouped together, regardless of whitespace, indentation, or the use of tabs vs. spaces.

    Now that we have a connection method to the ACI controller, we can define the configuration we want to apply to our datacenter fabric. We do this using a resource configuration block. Within Terraform, we call something a resource when we want to change its configuration; it’s a data source when we only want to read in the configuration that already exists. The configuration block contains two arguments, the name of the tenant we’ll be creating and a description for that tenant.

    resource "aci_tenant" "demo_tenant" {

    name        = "TheU_Tenant"

    description = "Demo tenant for the U"

    }

    Once we write that configuration to a file, we can save it and begin the process to apply this configuration to our fabric using Terraform.

    The Terraform workflow: How Terraform applies configuration


    Terraform’s workflow to apply configuration is straightforward and stepwise. Once we’ve written the configuration, we can perform a terraform init, which will gather the providers declared in the HCL file from the Terraform registry, install them into the project folder, and ensure they are signed with the same PGP key that HashiCorp has on file (to ensure end-to-end security). The output will look similar to this:

    [I] theu-terraform » terraform init


    Initializing the backend...


    Initializing provider plugins...

    - Finding latest version of ciscodevnet/aci...

    - Installing ciscodevnet/aci v2.9.0...

    - Installed ciscodevnet/aci v2.9.0 (signed by a HashiCorp partner, key ID 433649E2C56309DE)


    Partner and community providers are signed by their developers.

    If you'd like to know more about provider signing, you can read about it here:

    https://www.terraform.io/docs/cli/plugins/signing.html

    Terraform has created a lock file .terraform.lock.hcl to record the provider

    selections it made above. Include this file in your version control repository

    so that Terraform can guarantee to make the same selections by default when

    you run "terraform init" in the future.

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see any changes required for your infrastructure. All Terraform commands should now work.

    If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

    Once the provider has been gathered, we can invoke terraform plan to see what changes will occur in the infrastructure prior to applying the config. I’m using the reservable ACI sandbox from Cisco DevNet for the backend infrastructure, but you can use the Always-On sandbox or any other ACI simulator or hardware instance. Just be sure to change the target username, password, and url in the HCL configuration file.

    Performing the plan action will output the changes that need to be made to the infrastructure, based on what Terraform currently knows about the infrastructure (which in this case is nothing, as Terraform has not applied any configuration yet). For our configuration, the following output will appear:

    [I] theu-terraform » terraform plan

    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

     + create

    Terraform will perform the following actions:


    # aci_tenant.demo_tenant will be created

    + resource "aci_tenant" "demo_tenant" {

    + annotation                    = "orchestrator:terraform"

    + description                   = "Demo tenant for the U"

    + id                            = (known after apply)

    + name                          = "TheU_Tenant"

    + name_alias                    = (known after apply)

    + relation_fv_rs_tenant_mon_pol = (known after apply)

    }


    Plan: 1 to add, 0 to change, 0 to destroy.

    ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

    Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if

    you run "terraform apply" now.

    We can see that the items with a plus symbol (+) next to them are to be created, and they align with our original configuration. Great! Now we can apply this configuration using the terraform apply command. After invoking the command, we’ll be asked whether we want to make this change, and we’ll respond with “yes.”

    [I] theu-terraform » terraform apply                                                      

    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the

    following symbols:

      + create


    Terraform will perform the following actions:


      # aci_tenant.demo_tenant will be created

      + resource "aci_tenant" "demo_tenant" {

          + annotation                    = "orchestrator:terraform"

          + description                   = "Demo tenant for the U"

          + id                            = (known after apply)

          + name                          = "TheU_Tenant"

          + name_alias                    = (known after apply)

          + relation_fv_rs_tenant_mon_pol = (known after apply)

        }


    Plan: 1 to add, 0 to change, 0 to destroy.


    Do you want to perform these actions?

      Terraform will perform the actions described above.

      Only 'yes' will be accepted to approve.


      Enter a value: yes


    aci_tenant.demo_tenant: Creating...

    aci_tenant.demo_tenant: Creation complete after 3s [id=uni/tn-TheU_Tenant]


    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

    The configuration has now been applied to the fabric!  If you’d like to verify, log in to the fabric and click on the Tenants tab. You should see the newly created tenant.

    Finally – if you’d like to delete the tenant the same way you created it, you don’t have to create any complex rollback configuration. Simply invoke terraform destroy from the command line. Terraform will verify the state that exists locally within your project aligns with what exists on the fabric; then it will indicate what will be removed. After a quick confirmation, you’ll see that the tenant is removed, and you can verify in the Tenants tab of the fabric.

    [I] theu-terraform » terraform destroy                                                    

    aci_tenant.demo_tenant: Refreshing state... [id=uni/tn-TheU_Tenant]

    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the

    following symbols:

      - destroy


    Terraform will perform the following actions:


      # aci_tenant.demo_tenant will be destroyed

      - resource "aci_tenant" "demo_tenant" {

          - annotation  = "orchestrator:terraform" -> null

          - description = "Demo tenant for the U" -> null

          - id          = "uni/tn-TheU_Tenant" -> null

          - name        = "TheU_Tenant" -> null

        }



    Plan: 0 to add, 0 to change, 1 to destroy.


    Do you really want to destroy all resources?

      Terraform will destroy all your managed infrastructure, as shown above.

      There is no undo. Only 'yes' will be accepted to confirm.


      Enter a value: yes


    aci_tenant.demo_tenant: Destroying... [id=uni/tn-TheU_Tenant]

    aci_tenant.demo_tenant: Destruction complete after 1s


    Destroy complete! Resources: 1 destroyed.

    Complete Infrastructure as Code lifecycle management with a single tool is pretty amazing, huh?

    A bonus tip


    One more tip regarding Terraform and HCL relates to the configuration discussion above. I described the use of curly braces to avoid the need to ensure whitespace is correct or tab width is uniform within the configuration file. This is generally a good thing, as we can focus on what we want to deploy rather than the minutiae of the config. However, it sometimes helps to format the configuration in a way that’s aligned and easier to read, even if it doesn’t affect the outcome of what is deployed.

    In these instances, you can invoke terraform fmt within your project folder, and it will automatically format all Terraform HCL files into aligned and readable text. You can try this yourself by adding a tab or multiple spaces before an argument, or around the = sign, within some of the HCL. Save the file, run the formatter, and then reopen the file to see the changes. Pretty neat, huh?

    Source: cisco.com