Friday, 17 January 2020

Three IoT trends to watch this month

The world of IoT continues to grow as our more than 70,000 customers take their deployments to the next level. Whether or not you are attending Cisco Live in Barcelona on January 27th – 30th, you will want to tune in. Cisco will be making a lot of announcements, and addressing these three IoT trends.

1. The network is the foundation for both IT and OT environments, but a multi-domain architecture is key


While the network has always been the backbone for IT, it has quickly become the foundation for operational technology (OT) environments as well. OT teams need data to help them improve customer experiences, enhance safety, increase efficiencies, and reduce costs. There is no better way to achieve these results than by mining data from key assets such as a machine on a factory floor, a fleet of service vehicles, or a remote pipeline. And this is where the importance of the network expands from IT into OT environments. To get the data that OT needs, assets must securely connect to a reliable network. And with the number of devices being connected, not just any network will do. Only Cisco provides a true multi-domain architecture that brings common visibility and management across all domains – including the OT domain – making IoT projects easier to scale. Look for how we are bringing bigger value to this network in the upcoming weeks.

2. Edge compute and getting the most value out of your data


Edge compute is creating a new set of use cases and business models by allowing data to be accessed and processed at the edge – without ever traversing the WAN. It allows organizations to deploy real-time applications anywhere – even on the side of the road, where every second counts to ensure pedestrian and driver safety. Edge is a critical part of our IoT strategy as we work to bring the power of the enterprise to edge environments. It is integrated with our network so applications are easier to manage and deploy. As 5G and other innovation accelerators enter the market, Cisco is ready with edge computing solutions wherever they are needed, even in harsh and remote environments.

As part of this, we will talk about the challenges around harnessing the data that will bring your business to the next level. Does your organization have a data deluge? Or a data drought? Do you know who has access to your data and who doesn’t? Getting the right data to the right person at the right time can be critical to saving lives, edging out the competition, or reducing downtime. The key to doing this is Cisco IoT solutions. We will discuss how Cisco can help organizations tackle the data challenge, including its collection, transformation, and delivery, so that you can make sense of it all.

3. Security at the edge is more critical than ever, and IT and OT need to work together for its success


As millions of devices come online in operational environments, the cybersecurity risks grow exponentially. So, as the network becomes more distributed to connect these industrial environments, security must become distributed too. In factories, for example, machine controllers are now smarter. They have their own software and CPUs, helping create more agile manufacturing environments. But the combination of their intelligence and their network connectivity is also making them more vulnerable to attacks. We will address how organizations can more easily secure these OT environments at scale.

We also touch on the importance of IT and OT working together. To implement security properly, a very diverse skillset is required – a skillset that only IT and OT together can provide. IT understands how to secure networks, while OT teams are experts at optimizing their processes. Bringing together knowledge of the network and security with knowledge of the business and its processes is critical for success. In the upcoming weeks, stay tuned to learn how organizations can do all of this successfully with Cisco.

Thursday, 16 January 2020

Disk Image Deception

Cisco’s Computer Security Incident Response Team (CSIRT) detected a large and ongoing malspam campaign leveraging the .IMG file extension to bypass automated malware analysis tools and infect machines with a variety of Remote Access Trojans. During our investigation, we observed multiple tactics, techniques, and procedures (TTPs) that defenders can monitor for in their environments. Our incident response and security monitoring team’s analysis of a suspicious phishing attack uncovered some helpful improvements in our detection capabilities and timing.

In this case, none of our intelligence sources had identified this particular campaign yet. Instead, we detected this attack with one of our more exploratory plays looking for evidence of persistence in the Windows Autoruns data. This play successfully detected an attack against a handful of endpoints that used email as the initial access vector and had evaded our defenses at the time. Less than a week after the incident, we received alerts from our retrospective plays for this same campaign once our integrated threat intelligence sources delivered the indicators of compromise (IOCs). This blog is a high-level write-up of how we adapted to a potentially successful attack campaign and of our tactical analysis to help prevent and detect future campaigns.

Incident Response Techniques and Strategy


The Cisco Computer Security Incident Response Team (CSIRT) monitors Cisco for threats and attacks against our systems, networks, and data. The team provides threat detection, incident response, and security investigations around the globe. Staying relevant as an IR team means continuously developing and adapting the best ways to defend the network, data, and infrastructure. We’re constantly experimenting with how to improve the efficiency of our data-centric playbook approach in the hope it will free up more time for threat hunting and for more in-depth analysis and investigations. Part of our approach is that as we discover new methods for detecting risky activity, we codify those methods and techniques into our incident response monitoring playbook so we can keep an eye on potential future attacks.

Although some malware campaigns can slip past the defenses with updated techniques, we preventatively block the well-known or historical indicators and leverage broad, exploratory analysis playbooks that focus more on how attackers operate and infiltrate. In other words, there is value in monitoring for basic atomic indicators of compromise like IP addresses, domain names, and file hashes, but to go further you really have to look broadly at more generic attack techniques. These playbooks, or plays, help us find out about new attack campaigns that are possibly targeted and potentially more serious. While some might label this activity “threat hunting”, this data exploration process allows us to discover, track, and potentially share new indicators that get exposed during deeper analysis.

Defense in depth demands that we utilize additional data sources in case attackers successfully evade one or more of our defenses, or in case they are able to obscure their malicious activities enough to avoid detection. Recently we discovered a malicious spam campaign that almost succeeded because of a missed early detection. In one of our exploratory plays, we take daily diffs of all the Microsoft Windows registry autorun key changes since the last boot. Known as “Autoruns”, this data source ultimately helped us discover an ongoing attack that was attempting to deliver a remote access trojan (RAT). Along with the more mundane Windows event logs, we pieced together the attack from the moment it arrived and made some interesting discoveries along the way — most notably how the malware seemingly slipped past our front-line filters. Not only did we uncover many technical details about the campaign, we also used it as an opportunity to refine our incident response detection techniques and some of our monitoring processes.
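
To make the Autoruns diff idea concrete, here is a minimal sketch of the kind of comparison such a play can run, assuming each day's Autoruns output is exported as a JSON array of records similar to the snippet shown later in this post (the file names and field names are illustrative, not our production pipeline):

import json

# Fields that identify a persistence entry; illustrative, not an official schema.
KEY_FIELDS = ("hostname", "entryLocation", "entry", "imagePath", "sha256")

def load_entries(path):
    """Load one day's Autoruns export (a JSON array of records)."""
    with open(path) as handle:
        return json.load(handle)

def entry_key(record):
    return tuple(record.get(field, "") for field in KEY_FIELDS)

def new_autoruns(yesterday_path, today_path):
    """Return records present today that were not present yesterday."""
    seen = {entry_key(record) for record in load_entries(yesterday_path)}
    return [record for record in load_entries(today_path) if entry_key(record) not in seen]

if __name__ == "__main__":
    for record in new_autoruns("autoruns_day1.json", "autoruns_day2.json"):
        print(record.get("hostname"), record.get("imagePath"), record.get("sha256"))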

IMG File Format Analysis


.IMG files are traditionally disk image files that store raw dumps of either a magnetic disk or an optical disc. Other disk image file formats include ISO and BIN. Previously, mounting disk image files on Windows required the user to install third-party software; however, Windows 8 and later automatically mount IMG files on open. Upon mounting, Windows File Explorer displays the data inside the .IMG file to the end user. Although disk image files are traditionally used to store raw binary data, or bit-by-bit copies of a disk, any data can be stored inside them. Because of this functionality newly added to the core Windows operating system, attackers are abusing disk image formats to “smuggle” data past antivirus engines, network perimeter defenses, and other auto-mitigation security tooling. Attackers have also used this capability to obscure malicious second-stage files hidden within a filesystem, using ISO and, to a lesser extent, DMG. Perhaps the IMG extension also fools victims into treating the attachment as an image instead of a binary Pandora’s box.

Know Where You’re Coming From


As phishing continues to grow in popularity as an attack vector, we have recently focused several of our email incident response plays on detecting malicious attachments and business email compromise techniques like header tampering or DNS typosquatting, and on preventative controls such as inline malware prevention and malicious URL rewriting.

Any security tool that has even temporarily outdated definitions of threats or IOCs will be unable to detect a very recent event or an event with a recent, and therefore unknown, indicator. To ensure that these missed detections are not overlooked, we take a retrospective look back to see if any newly observed indicators are present in previously delivered email. So when a malicious attachment is delivered to a mailbox and the email scanners and sandboxes do not catch it the first time, our retrospective plays look back to see if the updated indicators are triggered. Over time, sandboxes update their detection abilities, and previously “clean” files can change status. The goal is to detect this changing status; if we have any exposure, we reach out and remediate the host.
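
A simplified sketch of that retrospective idea, assuming delivered-attachment metadata is logged to a CSV and newly published indicators arrive as a flat file of hashes (both file layouts and all field names here are hypothetical):

import csv

def load_new_iocs(path):
    """One indicator (e.g., a SHA256 hash) per line in the intelligence feed."""
    with open(path) as handle:
        return {line.strip().lower() for line in handle if line.strip()}

def retrospective_matches(delivery_log_path, ioc_path):
    """Yield previously delivered messages whose attachment hash is now a known IOC."""
    iocs = load_new_iocs(ioc_path)
    with open(delivery_log_path, newline="") as handle:
        # Expected columns: recipient, attachment, sha256, delivered_at
        for row in csv.DictReader(handle):
            if row["sha256"].lower() in iocs:
                yield row

for hit in retrospective_matches("delivered_attachments.csv", "new_indicators.txt"):
    print("Re-alert:", hit["recipient"], hit["attachment"], hit["delivered_at"])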

This process flow shows our method for detecting and responding to updated verdicts from sandbox scanners. Throughout the process we collect logs to ensure we can match against hashes or any other indicators or metadata we gather:

Figure 1: Flow chart for Retrospective alerting

This process, in combination with several other threat hunting style plays, helped lead us to this particular campaign. The IMG file isn’t unique by any means, but it was rare and stood out to our analysts immediately when combined with a file name posing as a fake delivery invoice – one of the more tantalizing and effective types of phishing lures.

Incident Response and Analysis


We needed to pull apart as many of the malicious components as possible to understand how this campaign worked and how it might have slipped past our defenses temporarily. The process tree below shows how the executable file dropped from the original IMG attachment after mounting led to a Nanocore installation:

Figure 2: Visualization of the malicious process tree.

Autoruns


As part of our daily incident response playbook operations, we recently detected a suspicious Autoruns event on an endpoint. This log (Figure 3) indicated that an unsigned binary with multiple detections on the malware analysis site VirusTotal had established persistence using the ‘Run’ registry key. Any time the user logged in, the binary referenced in the “run key” would automatically execute – in this case the binary called itself “filename.exe” and was dropped in the typical “%USERPROFILE%\AppData\Roaming” directory:

{
    "enabled": "enabled",
    "entry": "startupname",
    "entryLocation": "HKCU\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
    "file_size": "491008",
    "hostname": "[REDACTED]",
    "imagePath": "c:\\users\\[REDACTED]\\appdata\\roaming\\filename.exe",
    "launchString": "C:\\Users\\[REDACTED]\\AppData\\Roaming\\filename.exe",
    "md5": "667D890D3C84585E0DFE61FF02F5E83D",
    "peTime": "5/13/2019 12:48 PM",
    "sha256": "42CCA17BC868ADB03668AADA7CF54B128E44A596E910CFF8C13083269AE61FF1",
    "signer": "",
    "vt_link": "https://www.virustotal.com/file/42cca17bc868adb03668aada7cf54b128e44a596e910cff8c13083269ae61ff1/analysis/1561620694/",
    "vt_ratio": "46/73",
    "sourcetype": "autoruns",
}

Figure 3: Snippet of the event showing an unknown file attempting to persist on the victim host

Many of the anti-virus engines on VirusTotal detected the binary as the NanoCore Remote Access Trojan (RAT), a well-known malware kit sold on underground markets that enables complete control of the infected computer: recording keystrokes, enabling the webcam, stealing files, and much more. Since this malware poses a huge risk and was able to achieve persistence without being blocked by our endpoint security, we prioritized investigating this alert and initiated an incident.
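
A minimal sketch of the kind of triage filter such a play can apply to Autoruns events like the one in Figure 3 (the thresholds and field names mirror that snippet but are illustrative, not our production logic):

def is_suspicious(event):
    """Flag unsigned binaries persisting via a Run key from a user profile with a high VirusTotal ratio."""
    unsigned = not event.get("signer")
    in_appdata = "\\appdata\\roaming\\" in event.get("imagePath", "").lower()
    detections, total = (int(part) for part in event.get("vt_ratio", "0/1").split("/"))
    widely_detected = total > 0 and detections / total > 0.5
    run_key = "currentversion\\run" in event.get("entryLocation", "").lower()
    return run_key and unsigned and in_appdata and widely_detected

# The event from Figure 3 (redacted fields omitted) trips every check:
event = {
    "entryLocation": "HKCU\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
    "imagePath": "c:\\users\\someone\\appdata\\roaming\\filename.exe",
    "signer": "",
    "vt_ratio": "46/73",
}
print(is_suspicious(event))  # True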

Once we identified this infected host using one of our exploratory Autoruns plays, the immediate concern was containing the threat to mitigate as much potential loss as possible. We downloaded a copy of the dropper malware from the infected host and performed additional analysis. Initially, we wanted to confirm whether other online sandbox services agreed with the findings on VirusTotal. Other services, including app.any.run, also detected Nanocore based on a file called run.dat being written to the %APPDATA%\Roaming\{GUID} folder, as shown in Figure 4:

Figure 4: app.any.run analysis showing Nanocore infection

The sandbox report also alerted us to an unusual outbound network connection from RegAsm.exe to 185.101.94.172 over port 8166.

Now that we were confident this was not a false positive, we needed to find the root cause of this infection to determine whether any other users were at risk of being victims of this campaign. To begin answering this question, we pulled the Windows Security Event Logs from the host using our asset management tool to gain a better understanding of what occurred on the host at the time of the incident. Immediately, a suspicious event that was occurring every second jumped out due to the unusual and unexpected activity of a file named “DHL_Label_Scan _ June 19 2019 at 2.21_06455210_PDF.exe” spawning the Windows Assembly Registration tool, RegAsm.exe.

Process Information:
 New Process ID:  0x4128
 New Process Name: C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe
 Token Elevation Type: %%1938
 Mandatory Label:  Mandatory Label\Medium Mandatory Level
 Creator Process ID: 0x2ba0
 Creator Process Name: \Device\CdRom0\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe
 Process Command Line: "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe"

Figure 5: New process spawned from a ‘CdRom0’ device (the fake .img) calling the Windows Assembly Registration tool

This event stands out for several reasons; a detection sketch based on these observations follows the list.

◉ The filename:

1. Attempts to social engineer the user into thinking they are executing a PDF by appending “_PDF”

2. “DHL_Label_Scan”: shipping services are commonly spoofed by adversaries in emails to spread malware.

◉ The file path:

1. \Device\CdRom0\ is a special directory associated with a CD-ROM that has been inserted into the disk drive.

2. A fake DHL label is a strange thing to have on a CD-ROM, and it is even stranger to insert it into a work machine and execute that file.

◉ The process relationship:

1. Adversaries abuse the Assembly Registration tool “RegAsm.exe” for bypassing process whitelisting and anti-malware protection.

2. MITRE tracks this common technique as T1121, noting: “Adversaries can use Regsvcs and Regasm to proxy execution of code through a trusted Windows utility. Both utilities may be used to bypass process whitelisting through use of attributes within the binary to specify code that should be run before registration or unregistration.”

3. We saw this technique in the app.any.run sandbox report.

◉ The frequency of the event:

1. The event was occurring every second, indicating some sort of command and control or heartbeat activity.
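
Taken together, these observations can be turned into a simple heuristic over Windows process-creation (4688) events. The sketch below assumes the events have already been parsed into dictionaries upstream; the field names and patterns are illustrative, not our production rules:

import re

IMAGE_DRIVE = re.compile(r"(?i)^\\device\\cdrom\d+\\")      # process image on a mounted ISO/IMG
FAKE_DOC = re.compile(r"(?i)(pdf|doc|xls|jpg)[ _]*\.exe$")   # "..._PDF.exe"-style double extensions

def flag_process_event(event):
    """Return the reasons a process-creation event matches the TTPs described above."""
    parent = event.get("creator_process_name", "")
    child = event.get("new_process_name", "")
    reasons = []
    if IMAGE_DRIVE.match(parent) or IMAGE_DRIVE.match(child):
        reasons.append("executable running from a mounted disk image")
    if FAKE_DOC.search(parent) or FAKE_DOC.search(child):
        reasons.append("document-themed executable name")
    if child.lower().endswith("\\regasm.exe") and IMAGE_DRIVE.match(parent):
        reasons.append("RegAsm.exe proxy execution (MITRE T1121) from a disk image")
    return reasons

# The event in Figure 5 would trigger all three checks:
example = {
    "creator_process_name": "\\Device\\CdRom0\\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe",
    "new_process_name": "C:\\Windows\\Microsoft.NET\\Framework\\v2.0.50727\\RegAsm.exe",
}
print(flag_process_event(example))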

Mount Up and Drop Out


At this point in the investigation, we had uncovered a previously unseen suspicious file, “DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe”, strangely located in the \Device\CdRom0\ directory, as well as the original “filename.exe” used to establish persistence.

The first event in this process chain shows explorer.exe spawning the malware from the D: drive.

Process Information:
New Process ID:  0x2ba0
New Process Name: \Device\CdRom0\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe
Token Elevation Type: %%1938
Mandatory Label:  Mandatory Label\Medium Mandatory Level
Creator Process ID: 0x28e8
Creator Process Name: C:\Windows\explorer.exe
Process Command Line: "D:\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe"

Figure 6: Additional processes spawned by the fake PDF

The following event is the same one that originally caught our attention, which shows the malware spawning RegAsm.exe (eventually revealed to be Nanocore) to establish communication with the command and control server:

Process Information:
New Process ID:  0x4128
New Process Name: C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe
Token Elevation Type: %%1938
Mandatory Label:  Mandatory Label\Medium Mandatory Level
Creator Process ID: 0x2ba0
Creator Process Name: \Device\CdRom0\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe
Process Command Line: "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe"

Figure 7: RegAsm reaching out to command and control servers

Finally, the malware spawns cmd.exe and deletes the original binary using the built-in choice command: 

Process Information:
New Process ID:  0x2900
New Process Name: C:\Windows\SysWOW64\cmd.exe
Token Elevation Type: %%1938
Mandatory Label:  Mandatory Label\Medium Mandatory Level
Creator Process ID: 0x2ba0
Creator Process Name: \Device\CdRom0\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe
Process Command Line: "C:\Windows\System32\cmd.exe" /C choice /C Y /N /D Y /T 3 & Del "D:\DHL_Label_Scan _  June 19 2019 at 2.21_06455210_PDF.exe"

Figure 8: Evidence of deleting the original dropper.

At this point in the investigation of the original dropper and the subsequent suspicious files, we still could not answer how the malware ended up on the user’s computer in the first place. However, with the original dropper’s filename to pivot on, a quick web search turned up a thread on Symantec.com from a user asking for assistance with the file in question. In that post, the user wrote that they recognized the filename from a malspam email they had received. Based on the Symantec thread and other clues, such as the use of the shipping service DHL in the filename, we now knew the delivery method was likely email.

Delivery Method Techniques


We used the following Splunk query to search our Email Security Appliance logs for the beginning of the filename we found executing RegAsm.exe in the Windows Event Logs.

index=esa earliest=-30d
[search index=esa "DHL*.img" earliest=-30d
| where isnotnull(cscoMID)
| fields + cscoMID,host
| format]
| transaction cscoMID,host
| eval wasdelivered=if(like(_raw, "%queued for delivery%"), "yes", "no")
| table esaTo, esaFrom, wasdelivered, esaSubject, esaAttachment, Size, cscoMID, esaICID, esaDCID, host

Figure 9: Splunk query looking for original DHL files.

As expected, the emails all came from the spoofed sender address noreply@dhl.com with some variation of the subject “Re: DHL Notification / DHL_AWB_0011179303/ ETD”. In total, CSIRT identified 459 emails from this campaign sent to our users. Of those 459 emails, 396 were successfully delivered and contained 18 different Nanocore samples.

For 396 malicious emails to make it past our well-tuned and automated email mitigation tools is no easy feat. While the lure the attacker used to social engineer their victims was common and unsophisticated, the technique they employed to evade defenses was successful – for a time.

Detecting the Techniques


During the lessons-learned phase after this campaign, CSIRT developed numerous incident response detection rules to alert on newly observed techniques discovered while analyzing this incident. The first and most obvious is detecting malicious disk image files successfully delivered to a user’s inbox. The false-positive rate for this specific type of attack is low in our environment, with a few exceptions here and there that are easily tuned out based on the sender. This play could be tuned to look only for disk image files with a small file size if such files are more prevalent in your environment.
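
A sketch of that play’s core logic, expressed as a filter over parsed mail-gateway events (the field names, threshold, and allow-list are hypothetical; the size check implements the tuning suggestion above):

DISK_IMAGE_EXTENSIONS = (".img", ".iso", ".bin", ".dmg")
MAX_SUSPICIOUS_BYTES = 5 * 1024 * 1024           # tune for your environment
ALLOWED_SENDERS = {"support-tools@example.com"}  # known-good senders, e.g., support engineers

def should_alert(message):
    """message: dict with 'sender', 'attachment_name', 'attachment_bytes', and 'delivered' keys."""
    name = message["attachment_name"].lower()
    return (
        message["delivered"]
        and name.endswith(DISK_IMAGE_EXTENSIONS)
        and message["attachment_bytes"] <= MAX_SUSPICIOUS_BYTES
        and message["sender"].lower() not in ALLOWED_SENDERS
    )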

Another valuable detection rule we developed after this incident monitors for suspicious usage (network connections) of the registry assembly executable on our endpoints, which is ultimately the process Nanocore injected itself into and used to facilitate C2 communication. It is also highly unlikely that the choice command would legitimately be used to create a self-destructing binary, so monitoring for execution of choice with the command-line arguments we saw in the Windows event above should be a high-fidelity alert.
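
For the self-delete behavior shown in Figure 8, a command-line pattern match is usually enough; here is a minimal, illustrative sketch (the example command line mirrors Figure 8 with a placeholder file name):

import re

# "choice" abused as a timer, immediately followed by deletion of the calling binary.
SELF_DELETE = re.compile(r"(?i)\bchoice\b[^&|]*/T\s+\d+[^&|]*[&|]+\s*del\b")

def is_self_delete(command_line):
    return bool(SELF_DELETE.search(command_line))

print(is_self_delete('cmd.exe /C choice /C Y /N /D Y /T 3 & Del "D:\\dropper.exe"'))  # True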

Some additional, universal takeaways from this incident:

1. Auto-mitigation tools should not be treated as a silver bullet – effective security monitoring, rapid incident response, and defense in depth/layers are more important.

2. Obvious solutions such as blocking extensions at email gateway are not always realistic in large, multifunction enterprises – .IMG files were legitimately being used by support engineers and could not be blocked.

3. Malware campaigns can slip right past defenders on occasion, so a wide playbook that focuses on how attackers operate and infiltrate (TTPs) is key for finding new and unknown malware campaigns in large enterprises (as opposed to relying exclusively on indicators of compromise).

Indicators of Compromise (IOCs)


2b6f19fac64c847258fe776a2ea6444cc469ac6a348e714fcab23cc6cb2c5b74

327c646431a644192aae8a0d0ebe75f7a2b98d7afa7a446afa97e2a004ca64b0

3718957d7f0da489935ce35b6587a6c93f25cff69d233381131b757778826da3

3873ef89a74a9c03ba363727b20429a45f29a525532d0ef9027fce2221f64f60

3a7c23a01a06c257b2f5b59647461ebf8f58209a598390c2910d20a9c5757c62

4eb2af63e121c22df7945258991168be4a70aa32669db173743701aab94383fb

5d14e5959c05589978680e46bffd586e10c1fcabc21ddd94c713520cd0037640

6a2af44e186531d07c53122d42280bc18929d059b98f0449c1a646d66a778ffb

80ab695da86e97861b294b72ba1ef2e8e2f322e7ec0d0834e71f92497515b63d

a34aa05710cf0afb111181c23468c2dcc3a2c2d6aa496c9dffe45dde11e2c4d1

abf41ea1909a39c644e5b480b176ef8a3c4a80e2ee8b447d4320e777384392cf

af5d9ca1ed166a8d378c5b5ed7e187035f374b4376bdd632c3a2ee156613fd29

afb87da69c9ad418ac29af27602a450a7eae63132443c7bc56ab17785dd3bbfd

d871704baad496b47b15da54e7766c0a468ac66337d99032908ad7d4732ecffb

da79495b8b75c9b122a1116494f68661ec45a1fdfb8fd39c000f1f691b39bc13

deb805ce329f17a48165328879b854674eb34abd704eeb575e643574f31d3e83

eaee0577806861c23bef8737e5ba2d315e9c6bfa38bf409dda9a2a13599615b4

fc0cf381e433cd578128be91dfd7567d2294a6d3ff4d2ce0e3f4046442b1f5f0

185.101.94.172:8166

Wednesday, 15 January 2020

How AppDynamics helps improve IT applications

Choosing which application enhancements will best serve the most users can be a difficult decision for any development team, and it certainly is for mine. It can also be difficult for teams to identify the right sources of issues in application performance. Now, many of those decisions are easier for my team because of the detailed information we get from the AppDynamics Application Performance Management solution.

We develop and maintain Cisco SalesConnect, our digital sales enablement automation platform that empowers our sellers and partners with sales collateral, training and customer insights needed to deliver exceptional customer experiences.

Available globally, SalesConnect has over 9 million hits per quarter, with 140,000+ unique users accessing nearly 40,000 content assets on over 100 microsites.  SalesConnect is popular among our target users: over 80 percent of Cisco salespeople use the platform along with 34 percent of Cisco customer experience and service employees, and more than 88,000 unique partner users worldwide.

Data to Improve a Sales Enablement Platform


We release a new version of SalesConnect every three weeks, giving us many opportunities to improve the platform’s functionality, performance, and user experience. In the past, it was hard to know which of these improvements should receive priority, especially for making the best use of our limited development resources and budget.

For example, if we develop a new feature, should we make it available for all browsers or just the most popular ones? We couldn’t easily answer this question because we didn’t have the right information. We would make educated guesses, but we never had detailed visibility into which browsers and devices were used to access SalesConnect.

We implemented the AppDynamics solution to gain this visibility as well as in-depth application monitoring for our platform. When planning development priorities, AppDynamics gives us detailed data on the usage levels of specific browsers and devices, which allows us to make informed decisions. The data also helps us ensure we are developing and testing code only for the browsers and devices preferred by our users. In a similar way, we can identify the needs of different countries or regions based on AppDynamics data about the geographic origins of page requests.

The AppDynamics data was a big help when my team needed to develop a new SalesConnect mobile beta app in a very short timeframe. We knew we wouldn’t have time to develop separate apps for both iOS and Android devices, but which one should we choose?

In just a few minutes, I was able to find the device usage data we needed in AppDynamics and it clearly showed that iOS devices are the choice for most of our users. This information simplified our beta development decision and allowed us to postpone the expense and effort of developing an Android app until feedback from the beta was analyzed.

When planning new releases, the knowledge we gain from AppDynamics data gives us more confidence that we are aligning our development resources on the capabilities that will deliver the most impact for our users.

Powerful Insights for Application Availability and Response


AppDynamics serves a powerful function by helping us maintain SalesConnect application uptime and responsiveness, reduce resolution time when problems occur, and avoid issues through proactive alerts. Specifically, the AppDynamics data helps us identify issues in the IT services used by the SalesConnect platform and in our integrations with other applications and databases.

In one case, AppDynamics data indicated that the source of a brief but recurring outage was in one IT service used by SalesConnect. Our IT team was able to go directly to the team responsible for that service and obtain a resolution within a few hours, not the days that might have been needed before.

This ability to rapidly diagnose and solve application problems has a tremendous payoff in time, effort, and cost savings for Cisco IT. When SalesConnect experiences problems, we no longer need to set up a “war room” and involve people from multiple teams to diagnose a cause that might be unrelated to their application or service. And because we can give users specific information about a problem and how we are working to resolve it, they have greater confidence about the platform’s reliability.

AppDynamics delivers continuous benefits to my team in maintaining the SalesConnect platform and planning new application capabilities. What types of data would be helpful to you for maintaining application availability or prioritizing application development?

Tuesday, 14 January 2020

Cisco Releases Terraform Support for ACI

Customers have embraced, or are on the path to embracing, the DevOps model to accelerate application deployment and achieve higher efficiency in operating their data centers as well as their public cloud deployments. This arises from the fact that infrastructure needs to change and respond faster than ever to business needs.

The business needs of customers can extend beyond having infrastructure respond faster; they may also require considerations around performance, cost, resiliency, and security. This has led customers to adopt multi-cloud architectures. One of the key requirements of multi-cloud architectures is network connectivity between application workloads running in different environments. This is where Cisco Application Centric Infrastructure (ACI) comes in.

Cisco ACI allows application requirements to define the network using a common policy-based operational model across the entire ACI-ready infrastructure. This architecture simplifies, automates, optimizes, and accelerates the entire application deployment life cycle across data center, WAN, access, and cloud.

The ability to interact with “infrastructure” in a programmable way has made it possible to treat infrastructure as code. The term Infrastructure-as-Code describes a comprehensive automation approach. This is where HashiCorp Terraform comes in.

HashiCorp Terraform is a provisioning tool for building, changing, and versioning infrastructure safely and efficiently. Terraform manages both existing, popular services and custom in-house solutions, offering over 100 providers. Terraform can manage low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, and more. All users have to do is describe, in code, the components and resources needed to run a single application or an entire datacenter.

With a vision to address some of the challenges listed above, especially in multi-cloud networking, and leveraging Terraform’s plugin-based extensibility, Cisco and HashiCorp have worked together to deliver the ACI Provider for Terraform.

This integrated solution supports more than 90 resources and data sources combined, which cover all aspects of bringing up and configuring the ACI infrastructure across on-prem, WAN, access, and cloud. The Terraform ACI Provider also helps customers optimize network compliance and operations and maintain a consistent state across the entire multi-cloud infrastructure. The combined solution also provides customers a path to faster adoption of multi-cloud, automation across their entire infrastructure, and support for other ecosystem tools in their environments.

One of the key barriers to entry for network teams getting started with automation is setting up the automation tool and defining the intent of the network through that tool. Terraform addresses these concerns by giving its users a simple workflow to install and get started with. Getting started with Terraform takes only a few steps.

With Terraform installed, let’s dive right into it and start creating some configuration intent on Cisco ACI.

If you don’t have an APIC, you can start by installing the cloud APIC (Application Policy Infrastructure Controller) on AWS and Azure or use Cisco DevNet’s always-on Sandbox for ACI.

Configuration


The set of files used to describe infrastructure in Terraform is known simply as a Terraform configuration. We’re going to write our first configuration now to create a Tenant, VRF, BD (Bridge Domain), Subnet, Application Profile, and EPG (Endpoint Group) on APIC.

The configuration is shown below. You can save the contents to a file named example.tf. Verify that there are no other *.tf files in your directory, since Terraform loads all of them.

Figure: example.tf configuration for the ACI provider (Tenant, VRF, Bridge Domain, Subnet, Application Profile, and EPG)
(Note: This is not a complete policy configuration on APIC)

Provider


The provider block configures the named provider, in our case “aci”. A provider is a plugin that Terraform uses to translate API interactions with the service; it is responsible for understanding those API interactions and for creating, managing, and exposing resources. Multiple provider blocks can exist if a Terraform configuration is composed of multiple providers, which is a common situation.

The Cisco ACI Terraform Provider works with both on-prem and cloud APIC. In addition, it supports both X.509 certificate-based and password-based authentication.

Resources


The resource block defines a resource that exists within the infrastructure.

The resource block has two strings before the opening of the block: the resource type and the resource name. In our example, the resource type is an ACI object such as a tenant (“aci_tenant”), and the resource name is “cisco_it_tenant”.

The Cisco ACI Provider supports more than 90 resources and data sources.

Monday, 13 January 2020

An Overview of Zero Trust Architecture, According to NIST

While ZTA is already present in many cybersecurity policies and programs that seek to restrict access to data and resources, NIST’s publication is intended to both “abstractly define” ZTA and provide more guidance on deployment models, use cases, and roadmaps to implementation.

What’s the problem they’re trying to solve? Agencies and enterprise networks have given authorized users broad access to resources, since they’ve traditionally focused on perimeter defenses. But that’s led to lateral movement within the network – one of the biggest security challenges for federal agencies.

Realistically, NIST recognizes that the migration to a ZTA is more of a journey than a complete replacement of an enterprise’s infrastructure. Most enterprises will likely continue to operate in a hybrid model – of both zero trust + legacy mode – for a while as they continue their IT modernization investments.

And despite the misleading name, they state that ZTA is not a single network architecture, but rather a set of guiding principles.

The overall design denotes:


◉ A shift away from wide network perimeters to a narrower focus on protecting individual or small groups of resources
◉ No implicit trust is granted to systems based on their physical or network location

While traditional methods block attacks coming from the internet, they may not be effective at detecting or blocking attacks originating from inside the network.

ZTA seeks to focus on the crux of the issue, which NIST defines as two main objectives:

1. Eliminate unauthorized access to data and services
2. Make the access control enforcement as granular as possible


Zero Trust Architecture Tenets


NIST lists out a few conceptual guidelines that the design and deployment of a ZTA should align with (summarized for brevity below):

1. All data and computing services are considered resources. For example, an enterprise might classify personally-owned devices as resources, if they’re allowed to access enterprise resources.

2. All communication is secure regardless of network location. This means access requests from within the network must meet the same security requirements as those from outside of it, and communication must be encrypted and authenticated.

3. Access to individual enterprise resources is granted on a per-connection basis. The trust of whatever is requesting access is evaluated before access is granted – authentication to one resource doesn’t automatically grant access to another resource.

4. Access to resources is determined by policy, including the state of user identity and the requesting system, and may include other behavioral attributes. NIST defines ‘user identity’ as a network account used to request access, plus any enterprise-assigned attributes to that account. A ‘requesting system’ refers to device characteristics (software versions, network location, etc.). ‘Behavioral attributes’ include user & device analytics, any behavior deviations from baselined patterns.

5. The enterprise ensures all owned and associated systems are in the most secure state possible, while monitoring systems to ensure they remain secure. Enterprises need to monitor the state of systems and apply patches or fixes as needed – any systems discovered to be vulnerable or non-enterprise owned may be denied access to enterprise resources.

6. User authentication is dynamic and strictly enforced before access is allowed. NIST refers to this as a ‘constant cycle of access’ of threat assessment and continuous authentication, requiring user provisioning and authorization (the use of MFA for access to enterprise resources), as well as continuous monitoring and re-authentication throughout user interaction.


Zero Trust Architecture Threats


What follows is a summary of some of the key potential ZTA threats listed in the publication:

Insider Threat

To reduce the risk of an insider threat, a ZTA can:

◉ Prevent a compromised account or system from accessing resources outside of its intended use
◉ Require MFA for network access to reduce the risk of access from a compromised account
◉ Prevent compromised accounts or systems from moving laterally through the network
◉ Use context to detect any access activity outside of the norm and block account or system access

To prevent the threat of unauthorized access, Duo provides MFA for every application, as part of the Cisco Zero Trust framework. An additional layer of identity verification can help mitigate attacker access using stolen passwords or brute-force attacks. That paired with Duo’s device insight and policies provides a solid foundation for zero trust for the workforce.

Learn more about Duo’s new federal editions tailored to align with:

◉ FedRAMP/FISMA security controls
◉ NIST’s Digital Identity Guidelines (NIST SP 800-63-3)
◉ FIPS 140-2 compliance

Network Visibility

In a ZTA, all traffic should be inspected, logged, and analyzed to identify and respond to network attacks against the enterprise. But some enterprise network traffic may be difficult to monitor because it comes from third-party systems or applications, or because it is encrypted and cannot be examined.

In this situation, NIST recommends collecting encrypted traffic metadata and analyzing it to detect malware or attackers on the network. It also references Cisco’s research on machine learning techniques for encrypted traffic (section 5.4, page 22):

“The enterprise can collect metadata about the encrypted traffic and use that to detect possible malware communicating on the network or an active attacker. Machine learning techniques [Anderson] can be used to analyze traffic that cannot be decrypted and examined. Employing this type of machine learning would allow the enterprise to categorize traffic as valid or possibly malicious and subject to remediation.”
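
As an illustration of the general approach NIST describes (and not a representation of Cisco ETA’s actual models), a toy classifier over hypothetical encrypted-flow metadata could look like this:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-flow metadata features, all derivable without decryption:
# [mean packet size (bytes), total bytes, flow duration (s), distinct TLS extensions, self-signed cert flag]
X = np.array([
    [1200, 5_000_000,  120.0, 10, 0],   # bulk download          -> benign
    [ 900,   250_000,   15.0,  9, 0],   # ordinary web browsing  -> benign
    [ 180,    40_000, 3600.0,  2, 1],   # low-and-slow beaconing -> malicious
    [ 220,    15_000, 1800.0,  3, 1],   # periodic C2 heartbeat  -> malicious
])
y = np.array([0, 0, 1, 1])              # labels from prior investigations or threat intelligence

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_flow = np.array([[200, 30_000, 2400.0, 2, 1]])
print(model.predict(new_flow))          # expect [1]: a candidate for remediation or deeper review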

Cisco Encrypted Traffic Analytics (ETA) allows you to detect and mitigate network threats in encrypted traffic and gain deeper insight without decryption. It also allows you to quickly contain infected devices and users while securing your network. Paired with Cisco Stealthwatch, you get real-time monitoring using machine learning and context-aware analysis.

Zero Trust Architecture: Continuous Monitoring


The publication also references having a strong Continuous Diagnostics and Mitigation (CDM) program as “key to the success of ZTA.”

This is a complete inventory of physical and virtual assets. In order to protect systems, agencies need insight into everything on their infrastructure:

◉ What’s connected? The devices, applications and services used; as well as the security posture, vulnerabilities and threats associated.

◉ Who’s using the network? The internal and external users, including any (non-person) entities acting autonomously, like service accounts that interact with resources.

◉ What is happening on the network? Insight into the traffic patterns, messages and communication between systems.

◉ How is data protected? Enterprise policies for how information is protected, both at rest and in transit.

Having visibility into the different areas of connectivity and access provides a baseline to start evaluating and responding to activity on and off the network.

Cisco Zero Trust


Asking the above discovery questions and finding a solution that can accurately and comprehensively answer them can be challenging, as it requires user, device, system and application telemetry that spans your entire IT environment – from the local corporate network to branches to the multi-cloud; encompassing all types of users from employees to vendors to contractors to remote workers, etc.

Get visibility into everything on your infrastructure, and get control over who can access what, on an ongoing basis. Cisco Zero Trust provides a comprehensive approach to securing all access across your applications and environment, from any user, device and location. It protects your workforce, workloads and workplace.

It is composed of the following three primary products:

◉ To protect the workforce, Duo Security ensures that only the right users and secure devices can access applications.

◉ To protect workloads, Tetration secures all connections within your apps, across multi-cloud.

◉ To protect the workplace, SD-Access secures all user and device connections across your network, including IoT.

This complete zero-trust security model allows you to mitigate, detect and respond to risks across your environment. Verifying trust before granting access across your applications, devices and networks can help protect against identity-based and other access security risks.

Sunday, 12 January 2020

Datacenter Security: How to Balance Business Agility with Great Protection

When IDC consults with enterprise customers or performs worldwide surveys, security is invariably an acute concern, regardless of geography, industry, and the identity of the respondent (executive, LoB, IT, DevOps, etc.). While the challenge of providing protection and security extends across all places in the network, the problem is especially vexing in the datacenter.

There’s good reason for that, of course. The parameters of the datacenter have been redrawn by the unrelenting imperative of digital transformation and the embrace of multicloud, which together have had substantive implications for workload protection and data security.

As workloads become distributed – residing in on-premises enterprise datacenters, in co-location facilities, in public clouds, and also in edge environments – networking and network-security challenges proliferate and become more distributed in nature. Not only are these workloads distributed, but they’re increasingly dynamic and portable, subject to migration and movement between on-premises datacenters and public clouds.

Data proliferates in lockstep with these increasingly distributed workloads. This data can inform and enhance the digital experiences and productivity of employees, contractors, business partners, and customers, all of whom regularly interact with applications residing across a distributed environment of datacenters. The value of datacenters is ever greater, but so are the risks of data breaches and thefts, perpetrated by malevolent parties that are increasingly sophisticated.

Given that cloud is not only a destination but also an operating model, the rise of cloud-native applications and DevOps practices has added further complications. As DevOps teams adopt continuous integration and continuous deployment (CI/CD) to keep up with the need for business speed, and as developers leverage containers and microservices for agility and simplicity, traditional security paradigms – predicated on sometimes rigid controls and restrictions – are under unprecedented pressure. For enterprises, the choice seems to be between the agility of cloud and cloud-native application environments on one side and the control and safety of traditional datacenter-security practices on the other.

Perhaps that isn’t true, though. There is a way to move forward that gives organizations both agility and effective security controls, without compromise on either front. Put another way, there needn’t be a permanent, unresolved tension between the need for business agility and the requirement for strong security that provides the controls organizations want while aligning more closely with business outcomes.

The first step toward this goal involves achieving visibility. If you can’t see threats, you can’t protect against them. This visibility must be both pervasive and real-time, capable of sensing and facilitating responses to anomalies and threats that span users, devices, applications, workloads, and processes (workflow). From a network standpoint, visibility must be available within datacenters – into north-south and east-west traffic flows – between them, and out to campus and branch sites as well as to clouds. The visibility should extend up the stack, too, all the way to application components and behavior, giving organizations views into potentially malicious activity such as data exfiltration and the horizontal spread of malware from server to server.

Once visibility is achieved, organizations can leverage the insights it provides to implement policy-based segmentation comprehensively and effectively, mitigating lateral propagation of attacks within and between datacenters and preventing bad actors from gaining access to high-value datacenter assets.

The foundations of visibility and policy-based segmentation, in turn, facilitate a holistic approach to threat protection, helping to establish an extensive network of capabilities and defenses that can quickly detect and respond to threats and vulnerabilities before they result in data loss or prohibitively costly business disruptions.

While it might seem that cloud-era business agility and effective security are irreconcilable interests, there is a path forward that merges the two in unqualified alignment.

Saturday, 11 January 2020

Enterprise Networking Business 2019 Year in Review

Toward the end of this busy and innovative year, Cisco leadership decided to combine several businesses under one leader, SVP/GM Scott Harrell, to create the Intent-Based Networking Group. So, what is the meaning behind the change of name? The new organization consists of engineering and product marketing teams from Enterprise Networking and Data Center, with a renewed focus on creating deep multi-domain integrations across wireless, wired, data center, cloud, and SD-WAN/edge computing.

The name change represents how we are focusing on solving customer challenges with complete intent-based networking solutions. As enterprises enhance the ways their workforce connects and collaborates, Cisco is there. As organizations move applications and data resources to multiple cloud platforms to improve flexibility and responsiveness of business processes, Cisco is there. When branch offices need to connect to SaaS applications over the internet, Cisco is there to secure the data, devices, and provide high quality of experience to the distributed workforce.

In this review of 2019 achievements, both technical and cultural, we will take a closer look at how our engineering teams’ accomplishments have benefited enterprises large and small, in every region in the world. Throughout this post, I’ll highlight products and solutions with links to past blog posts and external articles for deeper dives.

Solving Customer Digital Transformation Challenges


Everything we design, code, and manufacture is created to support our customers’ digital transformation journey with multi-domain connectivity, built-in security, and high-availability.

Expanding Wireless Connectivity with Wi-Fi 6


Top of mind for many organizations in 2019 was the arrival of Wi-Fi 6. Wireless connectivity is the preferred method of connecting devices to enterprise networks, applications in the cloud, and internet data sources. The next generation of faster, lower latency, and higher density wireless communications is already replacing the existing wireless LAN infrastructure and it is expected to be a high-priority, multi-year project for organizations of all sizes. To support this major transition, Cisco engineering created the Catalyst Access Points and Wireless LAN Controllers to exceed the Wi-Fi 6 standard, incorporating innovative features such as Flexible Radio Assignment, real-time analytics, integrated security, and intelligent capture. In addition, we introduced new Catalyst 9000 switches to unite the new faster and higher bandwidth wireless networks with the wired campus.

Many new enterprise endeavors are already relying on Cisco Wi-Fi 6 wireless technology to bring fast connections to high-density sites and to complex facilities, such as manufacturing, where older Wi-Fi versions struggled to work at all. There will be even more innovations ahead as we work to connect the proliferation of IoT devices using Wi-Fi 6 (with its power-saving capabilities to conserve IoT device battery life) and the new Catalyst IE3k Rugged Series Switches.

As telecommunications service providers expand their 5G footprints, Cisco is providing methods for integrating the two wireless networks to deliver seamless connectivity and take full advantage of network slicing to provide specialized services to enterprise applications governed by common security policies. Wi-Fi 6 was a big leap in 2019 and will be even more important as enterprise workforces continue to be more distributed and mobile, while the business applications people need to access are hosted in multiple cloud platforms.

Uniting Campus and Branch with Cloud Resources using SD-WAN


2019 was also the year that Cisco SD-WAN powered by Viptela became the go-to solution for uniting a distributed workforce in branch offices, retail stores, and partners’ systems with cloud and SaaS applications. We built in full-stack security to ensure that using direct internet connections at branch locations to connect to cloud applications doesn’t expose data and devices to external and internal security threats. With centralized cloud management, Cisco SD-WAN connects remote offices with zero-touch edge routers, traffic segmentation, and threat detection using the built-in Application-Aware Enterprise Firewall, an intrusion detection system, and URL filtering with Cisco Umbrella. As a result of these enhancements, Cisco SD-WAN was given a coveted CRN Product of the Year award.

Our next goal for SD-WAN last year was to ensure a high quality of experience (QoE) for cloud and SaaS applications being accessed by a distributed workforce. Working with cloud application providers, such as Microsoft and their Office 365 applications, we built Cloud OnRamps that automatically connect workers at branch offices with the nearest, or most efficient, point of presence for the desired application via the SD-WAN. Cisco Cloud OnRamps monitor and adjust traffic to ensure the best level of performance for the primary cloud application providers.

Taking the OnRamp concept one step further, we developed Cloud OnRamps for CoLocation for regional points of presence and IaaS centers. This advancement creates transport-independent connections to regional hubs to service multiple branches and business sites to provide high QoE for applications. The regional aspect of the colocation also addresses the need for some enterprises to keep certain types of personal data local, versus storing it in global clouds, while providing an SD-WAN fabric that is easy to manage from a central console.

Augmenting NetOps Skills with AI and Machine Reasoning


Just because networks grow in complexity doesn’t mean they have to be complicated to manage. But trying to make sense of the billions of data points generated by campus-sized networks of switches, routers, and access points can quickly overwhelm an IT team. Using machine learning, machine reasoning, and artificial intelligence algorithms to analyze the vast data lakes of telemetry to determine norms and anomalies, we developed Cisco AI Network Analytics to help IT navigate the torrents of network telemetry to zero-in on time-critical problems. Applying machine reasoning to the analysis of network anomalies leverages thousands of man-hours of Cisco troubleshooting knowledge to suggest the correct remedies for many challenging issues.

Empowering IT with an Architecture for Access Control


To simplify the complexity of campus to branch to cloud connectivity, we augmented Cisco SD-Access with additional intelligence to translate business intents into segmentation and security policies—a foundational aspect of intent-based networking. SD-Access shifts the workload from IT staff performing routine tasks of onboarding individual devices and managing network configurations, to building intelligence into the network. The network learns to manage itself by, for example, automatically onboarding specific device types with pre-determined security and access policies that follow people and devices across the wired and wireless fabrics, from ground to cloud.

We also improved the Cisco Identity Services Engine (ISE) to work with multiple Cisco DNA Centers. This enables regional Cisco DNA Centers to leverage a master instance of Cisco ISE so that SD-Access can apply access and segmentation policies across each region. With this capability, SD-Access ensures that security and access policies defined by corporate IT are implemented consistently across global networks, while enabling regional control over specific aspects of workforce and device rules.

Focusing on Innovations in Connectivity Solutions


At several 2019 events, Cisco had the opportunity to demonstrate OpenRoaming, an open method of enabling mobile devices to automatically and securely connect to Wi-Fi networks without entering IDs and passwords. We created the OpenRoaming Federation ecosystem with partners such as Apple, Intel, and Samsung. As the Federation grows with additional device and access providers, the general public will be able to seamlessly connect to authorized Wi-Fi networks in stores, public spaces, and offices without manually signing in to captive portals with IDs and passwords. OpenRoaming unites wireless connectivity from LTE, 5G, and Wi-Fi to provide continuous internet connectivity to the applications people depend on for collaboration, finance, shopping, and community. Last year, OpenRoaming was demonstrated in real-world environments such as Mobile World Congress in Barcelona, Cisco Live in San Diego, Cisco Impact in Las Vegas, and a public trial at the Canary Wharf Group business center in London.

Building on the premise of always-on connectivity for mobile devices with OpenRoaming, we released the Cisco DNA Spaces Cloud Location Platform to empower property managers to interact with guests’ devices to offer location-specific services, wayfinding, and customized experiences. For sites that already use Cisco access points, capabilities such as Operational Insights, Locate, and Detect are available through Cisco DNA Center and the DNA Spaces SDK for building custom location apps, with no need for additional hardware or software overlays. Physical spaces become digital spaces that improve customer service by measuring and understanding the habits and preferences of guests using wireless devices.

Worldwide Events Bring Cisco Customers and Engineers Together


Like most technology companies, Cisco often announces new solutions sets in conjunction with customer and partner events that provide an opportunity to receive immediate feedback from customers, industry analysts, and the technology press. This year we used events to unveil and demonstrate:

◉ OpenRoaming and DNA Spaces Cloud Platform at Cisco Live Barcelona
◉ Wi-Fi 6 Catalyst Access Points and Wireless Controllers at Cisco Live Melbourne
◉ Cisco AI Network Analytics at Cisco Live San Diego
◉ SD-WAN integration with MS Azure vWAN and Office 365 at Partner Summit
◉ SD-WAN integration with AWS Transit Gateway at AWS re:Invent

Being Inclusive and Innovative Makes Cisco the #1 Place to Work


Cisco stands committed to empowering business, society, and people to help develop a more Inclusive Future for all stakeholders. Our investments in Country Digital Acceleration (CDA) go hand in hand with our People, Culture, and Social Impact initiatives to solve some of the world’s most challenging problems.

Our innovation mindset in Enterprise Network engineering produces an average of 300 patents a year. To turbocharge our internal thinking, we host or participate in multiple events throughout the year. For example, our annual EN Hackathon combines team building with technical prowess and a healthy portion of fun, to generate original prototypes that could one day become products that solve customer challenges. The Pioneer Awards represent a similar take on innovation, but with a focus on solutions brought to market that are making a significant impact—the Cisco AP4800 with Location-based Intelligent Capture was this year’s best product, and the best productivity solution went to WARP (Workflow Architecture Renewal Program), which is key to keeping the IOS XE network operating system up-to-date. Engineers also attend external events—such as the Grace Hopper Celebration and Women of Impact—to broaden their thinking and make new professional connections.

One result of these internal and external celebrations of innovation is that Cisco was named #1 World’s Best Workplaces by Great Place to Work in 2019, capping off a year of employee engagement and Cisco’s Corporate Social Responsibility (CSR) in a wide variety of social endeavors around the world.

Enterprise Network Engineering is a significant driver of Cisco solutions. We take great pride in our innovations and progress in producing quality solutions for our worldwide customers. Now that we are an integral part of the larger Intent-Based Networking Group, I personally look forward to the amazing journey ahead in 2020.