Friday, 11 January 2019

Localization: 6 tips for success

Software localization is often an afterthought instead of being embedded into projects from the start. In 2017, we started a localization project for the software-fulfillment process in our Tokyo office. The regional team had approached us because partners and end customers were having a “broken” user experience, from commerce applications to the fulfillment process, because some applications were in Japanese and others in English.


In 2014, we had launched a corporate extended relationship management (xRM) project, which included localizing major partner and customer-facing applications. The project had a multi-language data foundation. However, constant updates to the software and newer projects were causing us to fall behind. In addition, we faced challenges in getting partners to align with our software subscription business strategy, mainly because of the lack of a localized user experience. For most projects, localization support was added long after the release of the application or new capability.

Since the Tokyo project, we’ve brought localization into the initial phase of software development. Here are the six guidelines we follow.

1. Define success


The time and resources spent on localization can be hard to justify in quantitative terms because outcomes can be subjective and difficult to evaluate. For the Tokyo project, we measured success based on:

◈ Establishing strategic self-sufficiency by removing the customized localization workarounds that various users had built.
◈ Providing a smooth user experience between commerce and fulfillment applications.
◈ Storing consistent user language preferences and sharing user language preferences across applications.
◈ Making it as easy to add a new language during software product planning as adding a new user story.
◈ Accelerating go-to-market.
◈ Giving us a competitive advantage in international markets.


2. Realize that localization is foundation work, not an add-on



Our original localization initiative tried to address the problem after new applications and features were released. Our analysis of the software-fulfillment process in Tokyo showed that this approach led to production stoppage, re-engineering, and increased resource requirements to identify customer impacts and application dependencies. Thinking about localization from the initial phase of software development avoids these risks. To achieve this, we re-engineered the architecture to be ready for internationalization. According to Wikipedia, internationalization (i18N) is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization (l10N) is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components.
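To make the distinction concrete, here is a minimal, hedged sketch of internationalization-ready code, using only the Python standard library and a toy message catalog (not Cisco's actual tooling): the application code reads message keys and locale-aware formats, so adding Japanese becomes a data change rather than an engineering change.

from datetime import date

# Toy message catalogs: adding a language is a data change (l10N),
# not an engineering change, because the code was designed for it
# up front (i18N). Real projects would use gettext, ICU message
# formats, or a translation-management-system export instead.
CATALOG = {
    "en": {"order_shipped": "Your order shipped on {date}."},
    "ja": {"order_shipped": "ご注文は{date}に発送されました。"},
}
DATE_FORMATS = {"en": "%B %d, %Y", "ja": "%Y年%m月%d日"}  # locale-specific date rendering

def render(key: str, lang: str, **values) -> str:
    """Look up a message in the user's preferred language, falling back to English."""
    catalog = CATALOG.get(lang, CATALOG["en"])
    template = catalog.get(key, CATALOG["en"][key])
    if "date" in values:
        values["date"] = values["date"].strftime(DATE_FORMATS.get(lang, DATE_FORMATS["en"]))
    return template.format(**values)

print(render("order_shipped", "ja", date=date(2019, 1, 11)))
print(render("order_shipped", "fr", date=date(2019, 1, 11)))  # unsupported locale falls back to English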

3. Know the stakeholders and key partners


Assessing localization requirements calls for an understanding of all stakeholder groups, which often have a wide variety of backgrounds. Regional teams drove the discussion, but the Cisco IT architecture framework team was also involved because they owned the application platforms. Stakeholders included Japan sales operations, Japan strategy & planning, the corporate business team, the corporate IT team, the global translation services team, and the architecture framework team. To achieve our localization goals within the target timeframe, we took the following steps:

◈ Aligned the business outcomes with the country’s go-to-market strategy
◈ Established a partnership with the architecture framework team to enable scalable internationalization
◈ Scheduled regular team meetings to enable dynamic collaboration across all functional teams responsible for software business adoption
◈ Implemented a phased approach to localization, starting with the highest-priority applications


4. Obtain buy-in from key stakeholders



The localization process is complex and requires commitments from business leaders as well as engineers. To obtain these commitments, we:

◈ Established dynamic teams consisting of members from across the globe, including the stakeholders who could participate in reviews and take the necessary actions.
◈ Educated upper management about the importance of regular dynamic-team sync-up meetings.
◈ Assigned dedicated teams to re-engineer the architecture to be internationalization (i18N) ready.
◈ Encouraged dynamic team collaboration by scheduling regular meetings at times that worked for team members in different time zones.
◈ Analyzed application dependencies during the project to ensure smooth project sign-off and avoid release-time surprises and rework.


5. Know the difference between machine translation and localization



Localization is more than simply translating existing web properties. More broadly, it’s adapting content and applications for regional or local consumption. This sometimes requires modifying the flow of the user interface, or changing the source language (English, for instance) and other site elements to match the user’s cultural expectations.

To ensure the quality of localization, we:

◈ Aligned the user interface to follow the same user preferences across different applications
◈ Localized every possible user interface flow
◈ Asked linguists to review application screens
◈ Used appropriate change management reviews across teams, so that no gaps arose when a change to the user interface in one language affected the core (English) user interface flow, and vice versa.


6. Speak directly to the customers in a language they understand



This guideline applies whether you’re building a website, customer application, or partner application. In a Harvard Business Review study, 72% of consumers said they’d be more likely to buy a product that’s in their own language and 56% said this was more important than price.

Progress to date

Globalization and internationalization are becoming standard for every Cisco IT project (Figure 1). When a project that has gone through localization is delivered, it can increase business opportunities among international customers. The data gathered through a localized user interface also gives clearer visibility for data analysis.

Figure 1

We are working to make localization a business priority in other regions. To support that effort, we are building a strategic and holistic operating model and framework for localization across products, IT platforms, technical support documents, and the channel programs portfolio.

Wednesday, 9 January 2019

Planning Your Cloud Communications Migration: Connectivity And Network Service Options

Layer 1 and 2 Access Connectivity Type


In the ISO (International Organization for Standardization) OSI model, Layers 1 and 2 are the physical and data link layers. The most common Layer 1 physical media are copper, fiber, and radio frequencies; this layer includes both the wiring and the switching equipment. Layer 2 data link protocols include Ethernet, PPP, and ATM. Thinking in terms of these layers helps to understand how quality internet services are constructed. As with a building, the integrity of the foundation is critical to the overall quality of the final structure. Trying to deliver high-quality internet services over poor-quality wiring is both challenging and not recommended.


In this next section, we will do a quick review of access types and their suitability for IP-based business communications. This is not intended as a technical overview of access types but instead provides guidance for IT and procurement managers for their migration planning process. Key access types we will discuss are DSL, T1 / PRI, Coax, Ethernet over Copper, and Fiber-based access.

DSL and DSL Variants:

DSL, or digital subscriber line, is a family of technologies designed to deliver internet services over telephone lines. DSL variants include “asymmetric” ADSL, “symmetric” SDSL, and higher-speed technologies such as “very-high-bit-rate” VDSL and “G.fast.” Although DSL technologies continue to advance and produce higher and higher theoretical speeds (even up to 1 Gb/s), these technologies are dogged by the quality problems of the underlying twisted-pair phone lines on which they are built. In addition, DSL performance is sensitive to the distance from the central office (CO). This combination of challenges brings variability to the performance planners can expect at specific target sites. For these reasons, DSL should largely be avoided in access connectivity planning and only be considered in limited circumstances. One circumstance in which to consider DSL is when its access capacity can be bonded with another access circuit through an advanced network service such as SD-WAN. We will discuss SD-WAN in the next section and will refer back to this scenario.

T1 (DS1) / T3 (DS3) Variants:

T1, or the T-carrier family of access connectivity services, is built on a four-wire transmission circuit. Originally developed at Bell Labs in the 1960s, this family of services is built on the aggregation of 64 Kb/s “DS0” channels. Typically delivered by telecom operators, offers in this class provide symmetrical internet service at speeds of 1.544 Mb/s (T1), 2.048 Mb/s (E1), and up to 45 Mb/s (T3/DS3). These services are highly reliable and served as the primary method of “last mile” internet connectivity for most businesses in the late 90s and 2000s. While the T-carrier family is largely considered a legacy access methodology, there could be instances where businesses might consider some higher-capacity variants, especially at fractional T3 or above where more modern managed Ethernet or fiber-based services are not available. Some CSPs that still sell T3 access circuits may offer them at substantial discounts.

Coax:

This class of internet services is typically delivered by cable MSOs and uses coaxial copper cable for the Layer 1 physical delivery of internet service. Most cable networks are actually now engineered as hybrid fiber-coaxial (HFC), with much of the regional internet traffic distributed over new fiber plant; coaxial connectivity provides the last-mile connections to homes and businesses. Coax-based internet services from most cable MSOs are offered at attractive prices and provide high bandwidth, with speeds starting in many cases at 100 Mb/s. Speeds are typically asymmetrical, with higher download than upload. For communications services planning, IT managers should focus on upload speeds. Most business packages start at 10 Mb/s, enough to support a large number of IP voice channels and a limited number of HD video channels. Many coax-based services from cable MSOs are delivered as a “shared service,” where all the users on a single street, in a neighborhood, or in a strip mall may share a single pool of bandwidth. While shared services should be a concern for IT planners, consider that cable MSOs use the DOCSIS protocol to reserve bandwidth for high-priority media traffic. DOCSIS prioritization is only offered on the cable MSOs’ own services and not for OTT-based services. For OTT services, IT planners should consider adding healthy buffers and/or overhead to their estimated bandwidth demands. Or, if they are engaging with a cable MSO for business internet, they should inquire about fiber and DIA-based solutions.
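As a rough illustration of that sizing point, here is a back-of-the-envelope sketch. The per-call bandwidth and headroom figures are assumptions (roughly what a G.711 call consumes with IP/UDP/RTP overhead), not MSO or Cisco figures, so substitute your own codec and overhead numbers.

UPLOAD_MBPS = 10        # typical entry-level business coax upload, per the text
G711_CALL_MBPS = 0.087  # ~87 kb/s per direction for G.711 including packet overhead (assumed)
HEADROOM = 0.25         # keep 25% of the uplink spare for data traffic and bursts (assumed)

usable_mbps = UPLOAD_MBPS * (1 - HEADROOM)
concurrent_calls = int(usable_mbps / G711_CALL_MBPS)
print(f"~{concurrent_calls} concurrent G.711 calls fit on a {UPLOAD_MBPS} Mb/s uplink")
# -> roughly 86 calls under these assumptions; each HD video stream consumes far more per session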

Carrier Ethernet (over copper):

Ethernet dominates enterprise and business networks. Ethernet has grown into this position with its ease of deployment, self-configuration, and excellent price per bit and price per port. The primary limitation of Ethernet, especially over copper, is transmission distance. For this reason, Ethernet is used for most in-building and small-campus links but not for access networks that may extend thousands of feet to several miles. Vendor innovation has extended the range of Ethernet services. These innovations have enabled CSPs to offer access Ethernet services with symmetrical speeds and high quality of service. Speeds are still dependent on the distance from the CO. In most cases, the speeds available are well understood by CSPs and can easily be provided and quoted based on the site address. Carrier Ethernet services are considered a state-of-the-art offer from CSPs and provide an excellent platform for cloud-based IP communications services. Service pricing will depend on geography and local competitive alternatives.

Fiber-Based Services:

Just as Ethernet over copper is an attractive option for business IP communications, Ethernet over fiber is even more attractive. Fiber provides far greater resistance to the signal degradation that occurs when some protocols are transmitted over copper. Requesting new fiber-based access for a site, however, is often prohibitively expensive. It is for this reason that most businesses should build their cloud migration plan around “what is available” from current providers at particular sites. As we’ve mentioned previously in this post, fiber-based connectivity is now available at more than 50% of business sites across the US. The main question facing planners is exactly what kind of premium the business needs to pay to secure fiber-based connectivity. In addition, IT planners should be prepared to layer service assurance and network services on top to provide QoS for voice and video connectivity. Though fiber offers an excellent transmission medium, it can still experience contention where fiber links are stressed with lots of demanding traffic such as HD video.

Network Services to Support Access Layer Service Quality


Above the physical connectivity described so far (fiber, copper, etc.), the QoS of packets and traffic flows is managed through a variety of standards and protocols. There are benefits and drawbacks to most management approaches. In most cases, businesses prefer to apply some type of bandwidth reservation for real-time media such as voice and video traffic. A basic summary of the traffic demands of various communications and data traffic types is provided below (Figure 5) and helps show how sensitive each type of traffic is.


Figure 5: Representation of Traffic QoS Demands by Jason D. Hintersteiner, CWNE #171

DOCSIS:

Cable MSOs use DOCSIS, the “data over cable service interface specification,” to deliver IP broadband service over their hybrid fiber-coaxial (HFC) infrastructure. Within the DOCSIS standards are methods for delivering a higher class of service for specific types of traffic, specifically the real-time protocol (RTP) used for IP-based voice and some video communications traffic. Priority service is achieved using the unsolicited grant service (UGS), which provides an immediate grant of bandwidth from the cable modem termination system (CMTS).

MPLS:

One of the most mature and proven methods for assuring call quality, MPLS, or “multiprotocol label switching,” provides a virtual tunnel across the CSP’s access connection to assure transmission quality across potential points of traffic contention. Considered one of the most trusted and preferred methods for assuring resilience, MPLS is relatively costly and may not be available across all geos. The price of MPLS has represented a barrier for some planners who might be looking at cloud migration strictly on a cost-comparison basis, where MPLS port charges eat away many of the savings from PBX maintenance and TDM access trunk charges.

While MPLS port charges have been coming down in recent years, and in some geographies at a compound rate of 10+% / year, reliance on MPLS alone for access connectivity assurance is proving challenging for both CSPs and IT planners. For the economics and flexibility needed to plan a multi-site and phased migration approach, IT planners should look to a mix of MPLS and other access methods.

SD-WAN:

Perhaps the most exciting development in access networking and WAN services has been the innovation around SD-WAN (SD = software-defined). In fact, Frost & Sullivan recently revealed that 94% of businesses have deployed, are deploying, or will deploy an SD-WAN service in the next two years. A key benefit of SD-WAN is that it uses multiple paths for traffic to traverse a network. SD-WAN can improve service resilience across existing access circuits (by running several parallel traffic streams) or can improve resilience by running parallel traffic streams across multiple access circuits.

In the case of overlaying multiple circuits, consider a scenario where a particular business’s site may only have IP access via DSL and a shared cable broadband service. While either service might not provide the needed resilience for the business’s traffic demands, an SD-WAN service across a combination of both the DSL and cable circuits could offer the needed performance to deploy high quality hosted communications. From an economics standpoint, the combination of DSL and cable broadband circuits plus SD-WAN services would be very cost-effective.

Note that some SD-WAN resilience measures can add overhead traffic to voice and video media. Overhead can come in the form of multiple media streams and from the increased size of some packet headers. IT managers should work with their SD-WAN providers and CSPs to correctly size access circuits with these overhead factors in mind.
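The sketch below illustrates that sizing exercise under stated assumptions: the header overhead, safety margin, and duplication factor are illustrative values, not vendor figures, so real deployments should use the numbers supplied by the SD-WAN provider and CSP.

def sized_access_mbps(media_mbps, header_overhead=0.10, safety_margin=0.20, duplicated=True):
    """Estimate the access bandwidth to request once SD-WAN overhead is included.

    header_overhead: assumed extra bytes per packet for tunnel encapsulation (~10%)
    safety_margin:   planning buffer on top of the inflated media load (~20%)
    duplicated:      True if packet duplication sends a full copy of the media
                     stream over each of two access circuits
    """
    per_circuit = media_mbps * (1 + header_overhead) * (1 + safety_margin)
    total = per_circuit * (2 if duplicated else 1)
    return per_circuit, total

# Example: 6 Mb/s of voice and video media duplicated across a DSL + cable pair
per_circuit, total = sized_access_mbps(6.0)
print(f"size each circuit for ~{per_circuit:.1f} Mb/s (~{total:.1f} Mb/s of combined capacity)")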

All in all, SD-WAN offers a lot of benefits and should certainly be seriously evaluated as a part of IP access services for cloud-based communications. Consider the 2018 report by Transparency Market Research. They forecast that the global SD-WAN market will expand at a 51.4% CAGR between 2017 and 2025 and be worth US $34.35 billion.

Sunday, 6 January 2019

Cisco Mobility Express and Cisco Umbrella – Security Simplified!

We’ve had a few busy months with our Cisco Mobility Express solution. Continuing that trend of innovation, I am excited to share another key enhancement to the Mobility Express solution: Cisco Umbrella integration with Mobility Express via the latest AireOS 8.8.111.0 release.

With today’s digital consumers, providing Wi-Fi in your business is a necessity rather than simply a luxury. On top of that, there is increasing complexity caused by the proliferation of smartphones, tablets, wearables, and IoT endpoints that are beyond IT’s direct control. According to the Cisco Visual Networking Index (VNI), 49% of global traffic in 2020 will be Wi-Fi based. With this explosive Wi-Fi growth in the network, providing a safe and secure connection is of paramount importance. Threats continue to grow in sophistication and volume, and they spread faster with every passing year.

So how do you secure your wireless network if you’re a small to medium-sized organization with a lean or nonexistent IT department? How will you keep pace with your competitors while successfully deploying, managing, and securing your network?

Enter Mobility Express and Umbrella.

Limited budget? No problem. IT team of one? That’s okay too. With these integrated solutions, it’s easier than ever to quickly deploy and secure an on-premises wireless network. Mobility Express offers industry-leading wireless LAN technology with a built-in virtual controller, and Umbrella provides the first line of defense against threats on the internet wherever users go. And you don’t have to sacrifice enterprise-class performance and reliability.

Umbrella is a cloud-delivered security platform that protects against threats like malware, ransomware, and phishing. With Umbrella, you gain visibility and enforcement at the DNS layer, so you can block requests to malicious domains and IPs before a connection is ever made. The Umbrella integration across the Cisco wireless LAN controller (WLC) portfolio – including Mobility Express, WLC 3504, 5520 and 8540 – provides comprehensive security coverage that is simple to deploy and manage.
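For readers new to DNS-layer enforcement, the toy sketch below illustrates the general idea only; it is not Umbrella's implementation or API, and the blocklist, resolver logic, and addresses are hypothetical placeholders.

from typing import Optional

BLOCKED_DOMAINS = {"malicious-example.test", "phishing-example.test"}  # hypothetical blocklist

def resolve(domain: str) -> Optional[str]:
    """Return an IP address for allowed domains, or None to block the request."""
    if domain.lower().rstrip(".") in BLOCKED_DOMAINS:
        # The client never receives a routable address, so no connection is made
        # and no payload is ever exchanged with the malicious host.
        return None
    # A real resolver would forward the query upstream here.
    return "203.0.113.10"  # placeholder answer from the documentation address range

print(resolve("malicious-example.test"))  # None -> blocked at the DNS layer
print(resolve("www.example.com"))         # 203.0.113.10 -> allowed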

Deploy and Protect in Minutes 


You can enable Umbrella policies per SSID in three intuitive steps from the Cisco Mobility Express WebUI itself. The ability to map granular policies on a per-SSID basis allows the network to evolve rapidly with your changing business needs. All of this added protection is enforced without any additional latency, so the end-user experience is not impacted.

Step 1: Enable Umbrella and enter the Umbrella API Token


Step 2: Create profile and register the profile with Umbrella


Step 3: Apply the profile to the WLAN


Licensing & ordering Umbrella with Mobility Express


With the AireOS 8.8.111.0 release, this feature is available to all customers, and no additional Mobility Express license is needed to enable it. However, customers who wish to use Umbrella with Mobility Express will need an Umbrella license and account.

With the amount of Mobility Express innovations coming from Cisco, make sure to bookmark this blog page so that you’re always up-to-date.

Friday, 4 January 2019

Hybrid Chat for Cisco Journey Solutions


Cisco Customer Care, now Cisco Customer Journey Solutions (CJS), is by definition the best architecture to support the current highest priority in large enterprises: Customer Experience sales innovation, the #1 priority for 71% of business leaders (2017 Global CX Benchmarking Report). CJS, very often considered a cost center in the past, is now seen by enterprises as a driver of revenue, able to increase customer loyalty, retention rate, and important financial metrics such as the Annual Renewal Rate (ARR).

Today, 65% of customers prefer chat over traditional voice calls to customer care (BT Global Services-Cisco-Davies Hickman Partners, 2017). To accommodate these changes in user habits, a modern CJS has to offer a selection of contact methods, called omnichannel, and at the same time let users move seamlessly between interaction channels while bringing the context along.

Conversational self service powered by artificial intelligence


Customers also expect near-instant response times and quick resolution of their needs, both key business metrics proven to drive customer retention and loyalty. One third of the time, it takes two or more interactions to resolve an issue, causing customer dissatisfaction, and 40% of those customers eventually leave to find a new provider (ICMI, 451 Research). This sets another mandatory requirement for a modern CJS: it has to offer conversational self-service solutions powered by artificial intelligence that are efficient, productive, and cost-effective.

The four major business needs addressed by the “Hybrid Chat, Artificial Intelligence solution for Cisco CCE/CCX/HCS”

The next picture describes the architecture of the solution developed by Bucher & Suter and Expertflow, Cisco ecosystem partners. The architecture consists of several building blocks that interact, dialogue, and orchestrate through open APIs to allow easy customization of the end-customer solution:

◈ DIGITAL TOOLS (any sort of present and future type of CHAT tools used by end users)
◈ ARTIFICIAL INTELLIGENCE services and vendors
◈ Cisco CJS architecture: CCX, CCE, PCCE, HCS and CJP
◈ A CONVERSATIONAL ENGINE developed by the ecosystem partner, acting as the broker and orchestrator between digital tools, CJS APIs, AI vendors, and NLP services, and integrating both the end-user and agent front ends.


Let’s see how it works, beginning with a description of its hybrid approach.

When implementing a chatbot in a digital CJS, you always need a hand-off strategy for the cases where the BOT isn’t confident enough to answer and needs a human agent. This means that in a standard solution a chat is always managed either by a BOT or by an agent, which often results in low productivity for the CJS, especially if the chatbot is not powered by AI.

The solution presented in this article features a different, innovative approach in which the agent, the BOT, and the user are always engaged in a Continuous Chat Conference, and the agent can monitor multiple chats and leverage the BOT during the entire conversation, thereby reducing workload and response time. After a hand-off to an agent, the BOT remains in the conversation and works as an agent assistant: upon every customer utterance, Hybrid Chat presents the agent with the most appropriate answers identified by the BOT.

A colored icon signals to the agent which chats demand intervention (RED), which conversations the BOT can run independently (GREEN), and which ones the BOT has multiple options for (including a “strike probability view”) but is not 100% sure about, so it is best for the agent to pick the right one or override it (YELLOW). The agent can let the BOT auto-answer with the highest-scoring answer, intervene and select one of the BOT’s suggestions, or even draft a new response to the customer.
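A minimal sketch of that traffic-light logic is shown below. The thresholds and data shapes are assumptions for illustration only, since the product's actual scoring and actions are configurable.

GREEN_THRESHOLD = 0.90   # BOT can auto-answer on its own (assumed value)
YELLOW_THRESHOLD = 0.60  # BOT proposes ranked answers; the agent picks or overrides (assumed value)

def classify_chat(bot_candidates):
    """bot_candidates: list of (answer, confidence) pairs returned by the BOT."""
    if not bot_candidates:
        return "RED", None                  # no usable suggestion: the agent must intervene
    best_answer, best_score = max(bot_candidates, key=lambda c: c[1])
    if best_score >= GREEN_THRESHOLD:
        return "GREEN", best_answer         # auto-answer with the highest-scoring reply
    if best_score >= YELLOW_THRESHOLD:
        return "YELLOW", bot_candidates     # show the ranked "strike probability view" to the agent
    return "RED", None

print(classify_chat([("You can reset your password from the account page.", 0.95)]))
print(classify_chat([("Option A", 0.70), ("Option B", 0.65)]))
print(classify_chat([]))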

A timer displayed with a colored circle around the chat icons indicates timeouts upon which certain configurable actions are taken.

The BOT uses a machine learning model powered by Google Dialogflow to answer chats, but the solution is also innovative because the messages tagged and validated by the agents can be used as new training data for the BOT, improving future recognition rates (natural language understanding) and answers (dialogue engine).
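A hedged sketch of that feedback loop follows: agent-validated exchanges are captured as candidate training examples. The file format and field names are assumptions for illustration; loading the examples into Dialogflow would happen separately through its intent-management tooling.

import csv
from datetime import datetime, timezone

def record_training_example(path, customer_utterance, intent_name, agent_validated_answer):
    """Append one agent-validated exchange as a candidate training example."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            customer_utterance,      # becomes a new training phrase for the intent (NLU)
            intent_name,             # intent the agent confirmed or corrected
            agent_validated_answer,  # response the agent actually sent (dialogue engine)
        ])

record_training_example("hybrid_chat_training.csv",
                        "my set-top box keeps rebooting",
                        "technical_support",
                        "Let's check the firmware version first.")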

The chatbot is constantly learning from person-to-person conversations (clients and agents), making the whole solution self-tuning on the job: the BOT’s performance continuously improves in a specific context, further reducing agent engagement and therefore raising productivity and lowering costs. The interplay between customers, agents, and the BOT also reduces response time, increasing the quality of service delivered and enabling higher customer satisfaction and loyalty.


Let’s now analyze the way this solution interacts and integrates with a Cisco CCX/CCE or HCS CJS.


The Conversational Engine takes into account the pace of each media (SMS is slower than FB Chat). Based on this analysis, it assigns multiple chats in parallel to agents, interacting with the Cisco CJS through open APIs (CTI and UQ API) and ensuring that each agent has the same work volume. If an agent is fully loaded, the Conversational Engine makes a new synchronous media routing request to the CJS to reserve the next full-time agent. Conversely, if a chat session requires a full-time collaboration session (escalation to audio and/or video and screen sharing), all other ongoing chats are returned to the general chat pool and distributed to other agents, and that agent is reserved for the full-time session.
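The sketch below captures that distribution logic under stated assumptions: the per-channel load weights, per-agent capacity, and data structures are illustrative, not the Conversational Engine's actual design.

from dataclasses import dataclass, field
from typing import List, Tuple

# Load that one chat places on an agent, by channel. SMS is slower-paced than
# Facebook chat, so it consumes less of an agent's attention (assumed weights).
CHAT_LOAD = {"sms": 1, "facebook": 2, "webchat": 2}
AGENT_CAPACITY = 6  # assumed maximum concurrent load per agent

@dataclass
class Agent:
    name: str
    chats: List[Tuple[str, str]] = field(default_factory=list)  # (chat_id, channel)

    def load(self) -> int:
        return sum(CHAT_LOAD[channel] for _, channel in self.chats)

def assign_chat(agents, chat_id, channel):
    """Give the chat to the least-loaded agent with spare capacity, or return None
    so the caller can make a new routing request to the CJS to reserve an agent."""
    fits = [a for a in agents if a.load() + CHAT_LOAD[channel] <= AGENT_CAPACITY]
    if not fits:
        return None
    agent = min(fits, key=lambda a: a.load())
    agent.chats.append((chat_id, channel))
    return agent

def escalate_to_fulltime(agents, agent, chat_id):
    """On escalation to audio/video, keep only that chat and redistribute the rest."""
    released = [c for c in agent.chats if c[0] != chat_id]
    agent.chats = [c for c in agent.chats if c[0] == chat_id]
    for cid, channel in released:
        assign_chat([a for a in agents if a is not agent], cid, channel)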

The solution presented in this article shows the potential of combining Cisco architectures with Google artificial intelligence to design custom solutions that target the modern business needs of large, medium, and small enterprises: customer experience, customer loyalty, customer retention, increased renewal revenue, and decreased costs.

Wednesday, 2 January 2019

Cognitive Intelligence: Empowering Security Analysts, Defeating Polymorphic Malware

In psychology, the term “cognition” refers to a human function that is involved in gaining knowledge and intelligence. It helps describe how people process information and how the treatment of this information may lead to various decisions and actions. Individuals use cognition every day. Examples as simple as the formation of concepts, reasoning through logic, making judgments, problem-solving, and achieving goals all fall under the purview of this term.

In cybersecurity, applying the principles of cognition helps us turn individual observed threat events into actionable alerts full of rich investigative detail. This process improves over time through continuous learning. The goal is to boost the discovery of novel or morphing threats and to streamline cybersecurity incident response. The work of security operations teams can be vastly optimized by delivering prioritized, actionable alerts with rich investigative context.

Enhancing Incident Response


Let’s take a moment to think of the tasks that a security team performs on a day-to-day basis:

◈ Looking through ever-increasing numbers of suspicious events coming from a myriad of security tools.
◈ Conducting initial assessments to determine whether each particular anomaly requires more investigation time or should be ignored.
◈ Triaging and assigning priorities.

All of these actions are based on the processes, technology, and knowledge of the particular security team. This initial decision-making process is itself crucial. If a mistake is made, a valid security event could be ignored, or too much time could be spent investigating what ends up being a false positive. These challenges, coupled with the limited resources that organizations typically have and the complexities of attack attribution, can be daunting.

That’s why security teams should embrace automation. At Cisco, we’re committed to helping organizations step up their game through the use of our Cognitive Intelligence. This technology correlates telemetry from various sources (Cisco and third-party web proxy logs, NetFlow telemetry, SHA256 hash values and file behaviors from AMP and Threat Grid) to produce accurate, context-rich threat knowledge specific to a particular organization. This data, combined with the Global Risk Map of domains on the Internet, allows organizations to confidently identify variants of memory-resident malware, polymorphic malware with diversified binaries, and in general any innovative malware that attempts to avoid detection by an in-line blocking engine.

As a result of automation like this, less time needs to be spent on detailed threat investigations to confirm the presence of a breach, identify the scope and begin triage. And that will in turn dramatically help mitigate the shortage of skilled security personnel by increasing the effectiveness of each analyst.

Example of a Confirmed Threat Campaign

In a sense, Cognitive Intelligence algorithms mimic the threat-hunting process for observed suspicious events. They identify combinations of features that are indicative of malware activity, much as an incident responder would, starting with relatively strong indicators from one dataset and pivoting through the other datasets at their disposal. A pivot may lead to more evidence, such as behavioral anomalies that reinforce the infection hypothesis. Alternatively, the breach hypothesis may fade away; the investigation can either be terminated very quickly or restarted when new data becomes available. These algorithms are similar to the incident response playbooks used by Cisco CSIRT and other incident response teams, but operate on a much larger scale.

What’s New in 2018: Probabilistic Threat Propagation


One example algorithm, which we call Probabilistic Threat Propagation (PTP), is designed to scale up the number of retrospectively convicted malware samples (the threat actor’s weapons), as well as the number of malicious domains (the threat actor’s infrastructure), across the Cisco AMP, Threat Grid, and Cognitive knowledge bases.

Probabilistic Threat Propagation in a Nutshell

The PTP algorithm monitors network communications from individual hashes to hosts on the Internet and constructs a graph based on the observed connections. The goal is to accurately identify polymorphic malware families and as-yet-unknown malicious domains, based on partial knowledge of already convicted hashes and domains. The key here is that malware authors often reuse the same command-and-control (C2) infrastructure, so the C2 domains often remain the same across polymorphic malware variants. At the same time, these domains are usually not accessed for benign purposes.

For example, if an unknown file connects to a confirmed malicious domain, there’s a certain probability that this sample is malicious. Likewise, if a malicious file establishes a connection to an unknown domain, there’s a probability for this domain to be harmful. To confirm such assumptions, Cisco leverages statistical data surrounding the domain to determine how frequently it’s accessed, by which files and so on.
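To make the propagation idea concrete, here is a toy sketch. It is not Cisco's actual algorithm; the damping constant, edges, and starting score are invented for illustration. A maliciousness score spreads back and forth across a bipartite graph of file hashes and the domains they contact, starting from one convicted sample.

import itertools

edges = [                # observed (file hash, contacted domain) pairs
    ("hash_A", "evil.example"),
    ("hash_B", "evil.example"),
    ("hash_B", "unknown.example"),
    ("hash_C", "unknown.example"),
    ("hash_C", "cdn.example"),
    ("hash_D", "cdn.example"),
]
score = {"hash_A": 1.0}  # one confirmed malicious sample; everything else starts unknown
DAMPING = 0.8            # confidence decays with each hop (assumed constant)

nodes = set(itertools.chain.from_iterable(edges))
for _ in range(6):       # a few propagation rounds are enough for this toy graph
    new_score = {}
    for node in nodes:
        neighbours = [h if d == node else d for h, d in edges if node in (h, d)]
        propagated = DAMPING * max((score.get(n, 0.0) for n in neighbours), default=0.0)
        new_score[node] = max(score.get(node, 0.0), propagated)
    score = new_score

for node, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{node:16s} {s:.2f}")
# hash_B and unknown.example end up scored well above cdn.example, mirroring how
# partial knowledge of one hash can retrospectively convict related hashes and domains.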

Graph built by Probabilistic Threat Propagation Algorithm

The capability that we have introduced helps security analysts track and detect new versions of malware, including polymorphic and memory-resident malware, given that the C2 infrastructure remains intact. Similarly, this method is capable of tracking migrations of an attacker’s C2 infrastructure, given knowledge of malicious binaries that belong to the same malicious family. Cognitive Intelligence leverages specific telemetry from a stack of security products (file hashes from AMP, file behaviors from Threat Grid, anomalous traffic statistics and threat campaigns from Cognitive), which allows Cisco to model threat actor behavior across both the endpoint and the network and better protect its customers.

The Probabilistic Threat Propagation algorithm also provides additional sensitivity to fileless malware (which leaves no file footprint on the system’s disk) and process injections. Such infections can be detected when a legitimate process or business application starts communicating with domains associated with C2 infrastructure that other malicious binaries have predominantly contacted.

The beauty of this capability is that it runs offline in the Cisco cloud infrastructure, and therefore does not require any additional computational resources from customers’ endpoints or infrastructure. It simply works to provide better protection and an increased count of retrospective detections for novel variants of known malware.

Measuring Results


This blog entry wouldn’t be complete if we didn’t speak about the initial results that just this single algorithm delivers. From a single malicious binary, the Probabilistic Threat Propagation algorithm is able to identify tens if not hundreds of additional binaries that are part of the same threat family and that also get convicted as part of this analysis. Similarly, with this new mechanism for tackling polymorphism, we are generally able to identify tens of additional infected hosts affected by a polymorphic variant of a particular threat. That is especially rewarding when it comes to measuring the positive impact on Cisco customers.

Scaling threat detection efficacy with Probabilistic Threat Propagation

Cisco AMP for Endpoints and other AMP-enabled integrations (AMP for Email Security, AMP for WSA, AMP for Networks, AMP for Umbrella) leverage AMP cloud intelligence to provide improved threat detection capabilities boosted by the PTP algorithm.

Sunday, 30 December 2018

A Hybrid Cloud Solution to Improve Service Provider Revenue

Media and telecom service providers serve millions of customers, and it is a challenge to monitor and ensure that customers have a satisfactory experience with their services. Service providers incur high operating costs through customer support and truck rolls. Reactive customer support often causes customer dissatisfaction, resulting in churn and revenue loss. A large volume and variety of data (network, CPE, billing, customer issues, etc.) is maintained across multiple systems but is underutilized for adding value to the business. Different business units work in silos, and the lack of an integrated customer profile leads to half-formed marketing efforts, an unsatisfactory customer experience, and lost business opportunities. Common roadblocks to business improvement include:

◈ Lack of consolidated data & accurate insights
◈ Extended cycle time to process data and delay in access to insights
◈ Dependency on legacy systems to process data

Barriers to business improvement

A container-based, hybrid cloud solution


We built a container-based hybrid cloud analytics solution that helps service providers understand their customers better. It provides a unified view of end customers and helps them improve their services and grow their business.

Inputs to gain customer insights

POC scope


Customer churn analysis and prediction

◈ Aggregate data from different data sources (billing, customer support, service usage, CPE telemetry, etc.), create an integrated view of customer data, and analyze churn
◈ Implement a simple churn prediction model using a hybrid cloud service

Tools and services used

◈ Cisco Container Platform for CI/CD and management of microservices
◈ GCP Pub/Sub for data aggregation
◈ GCP Datalab for data exploration
◈ GCP Dataflow for stream and batch processing of data
◈ GCP BigQuery for analysis and BigQuery ML for churn prediction

Solutions architecture

Solution diagram

Model training and serving with Google Cloud Platform:

Model Prediction Data Flow

Overview of the steps involved in developing the POC


1. Preliminary analysis is performed on data consolidated across all (US) regions, for example, customer sentiment analysis. Once this data is ready, with all the feature labeling and so on, the Cisco Container Platform (CCP) and Google Cloud Platform (GCP) are leveraged to gain meaningful insights from this data.

2. The service catalog is installed on the master node of the CCP cluster. It provisions and binds service instances using the registered service broker. A custom application leverages these service bindings and enables true hybrid cloud use cases.

3. On the CCP platform, the Pub/Sub application posts media and telecom customer data to GCP Pub/Sub (see the Pub/Sub publishing sketch after these steps).

4. Once data is published to GCP Pub/Sub topics by a periodic batch program, the published data objects are consumed by a Cloud Dataflow job.

5. Cloud Dataflow allows the user to create and run a job by choosing Google’s predefined Pub/Sub-to-BigQuery template, which implicitly initializes a pipeline to consume data from the topics and ingest it into the appropriate BigQuery dataset configured when the Dataflow job is created.

6. Once the predefined Dataflow template job starts, it consumes data objects from the input topics, which are ingested into the BigQuery table dynamically as a pipeline. This table data is then explored using Datalab, and the required data pre-processing steps (such as removing null values, scaling features, and finding correlations among features) are performed (see the Model Prediction Data Flow diagram above). The data is then returned to BigQuery for ML modeling.

7. The ML model built with BigQuery ML is used to predict customer churn for subsequently received data (see the BigQuery ML sketch after these steps).

8. The processed churn data is retrieved through the service broker into CCP and later consumed by the UI.
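Two hedged sketches follow for steps 3 and 7 above. Both use the public Google Cloud Python clients, but the project, topic, dataset, table, and column names are placeholders assumed for this POC rather than the deployment's real names. First, publishing one customer record to Pub/Sub:

import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "customer-telemetry")  # placeholder names

record = {
    "customer_id": "C-1001",
    "region": "US-West",
    "monthly_bill": 79.99,
    "support_tickets_90d": 3,
    "avg_downstream_mbps": 92.4,
}
future = publisher.publish(topic_path, json.dumps(record).encode("utf-8"))
print("Published message id:", future.result())

Second, once Dataflow has landed the records in BigQuery, a logistic-regression churn model can be trained with BigQuery ML and used to score new rows (again with assumed dataset, table, and column names):

from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

client.query("""
    CREATE OR REPLACE MODEL `poc_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT monthly_bill, support_tickets_90d, avg_downstream_mbps, churned
    FROM `poc_dataset.customer_features`
""").result()  # wait for training to complete

rows = client.query("""
    SELECT customer_id, predicted_churned, predicted_churned_probs
    FROM ML.PREDICT(MODEL `poc_dataset.churn_model`,
                    (SELECT * FROM `poc_dataset.customer_features_latest`))
""").result()

for row in rows:
    print(row.customer_id, row.predicted_churned)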

Dashboard

1. From the solution dashboard (see the sample screenshot below), service providers can view forecasted churn by region, service, and reason. Customer-reported issues and the services currently being used by customers can also be visualized.

2. The solution dashboard allows service providers to take quick action, for instance improving the wireless service or the 4K streaming service, thereby preventing customer churn.

Customer Insights Dashboard

Solution Demo Video

Friday, 28 December 2018

Transforming Enterprise Applications with 25G Ethernet SMF

Bandwidth Drivers for 25G


Bandwidth requirements in today’s enterprise networks are being driven by dramatic increases in video conferencing from systems such as Cisco’s Telepresence and by other real-time applications such as augmented reality, mixed reality, and virtual reality. These are taxing the limits of traditional 10G infrastructure. Whether it’s IEEE 802.11ax Wi-Fi access points or directly wired equipment with copper/fiber ports that require 1G/2.5G/5G/10G backhaul interfaces, new enterprise networks are being built with high-speed equipment that now requires 25G Ethernet interfaces.


Figure 1. Cisco Telepresence and new applications demanding high bandwidth.

Cisco’s new SFP-10/25G-LR-S transceiver provides Single Mode Fiber (SMF) interfacing for Cisco’s newest platforms with 25G interfaces, including the new Catalyst 9500/9400/9300/9200’s, other new switches, new routers, and new servers / NICs (Network Interface Cards).


Figure 2. Cisco’s SFP-10/25G-LR-S transceiver.

What is “LR”?


In SFP (Small Form-factor Pluggable) transceiver technology, “LR” stands for Long Reach, which traditionally refers to a reach of 10 km. The 25G SFP form factor, called SFP28 (28 Gb/s to account for encoding overhead), has been standardized, and the LR specifications are available in IEEE P802.3cc-2017, Amendment 11: Physical Layer and Management Parameters for Serial 25 Gb/s Ethernet Operation Over Single-Mode Fiber.

The 25G transceiver is similar to the 10G transceiver in that it uses simple NRZ (Non-Return-to-Zero) modulation, but it has a higher-bandwidth transmitter and receiver for 25G communication. It also includes a CDR (Clock Data Recovery) circuit to clean up the signals. The 25G transceiver also requires that the host ports support RS-FEC (Reed Solomon Forward Error Correction), which is not required for 10G.

Cisco’s newest 25G products, including the Catalyst Enterprise switches 9500/9400/9300/9200, have advanced ASICs that implement RS-FEC for 25G communication so that the transmission error rate can be improved from a BER (Bit Error Rate) of 5×10⁻⁵ to 1×10⁻¹². A BER of 1×10⁻¹² is traditionally considered to be “error free” and is associated with other Ethernet rates, where upper-layer protocols can deal with infrequent transmission errors.
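A quick worked calculation shows why that improvement matters. At a 25 Gb/s data rate (the actual line rate is slightly higher because of encoding, which is ignored in this estimate), the pre-FEC and post-FEC error rates translate into very different error counts:

LINE_RATE_BPS = 25e9  # 25 Gb/s data rate; encoding overhead ignored for this rough estimate

for label, ber in [("pre-FEC ", 5e-5), ("post-FEC", 1e-12)]:
    errors_per_second = LINE_RATE_BPS * ber
    print(f"{label}: BER {ber:g} -> {errors_per_second:.3g} bit errors per second")

# pre-FEC : BER 5e-05 -> 1.25e+06 bit errors per second
# post-FEC: BER 1e-12 -> 0.025 bit errors per second (about one error every 40 seconds),
# which is why 1×10⁻¹² is traditionally treated as "error free" at the link layer.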

Inter-building and Intra-building applications for SFP-10/25G-LR



Figure 3. Inter-building and Intra-building applications for 25G.

25G-LR SMF transceivers are now being used for both inter-building and intra-building campus applications to provide high speed connectivity.

Inter-building applications: In large campus environments, 25G is used to connect a building’s distribution switches to core switches in another campus building. Because of 25G-LR’s 10 km (~6.2 miles) reach, the transceiver provides an excellent low-cost solution for relatively large campus environments such as hospitals, medical offices, college campuses, and business parks. The core switch typically connects to the service provider’s metro/core network with 40/100G links, but those links may also use 25G LR technology.

Intra-building applications: In many situations SMF is used (or has been used) to connect wiring closet switches for distribution. In these applications, network builders and architects go beyond the limits of the traditional 300m over OM3 (or 400m over OM4) MMF (Multi Mode Fiber) by using SMF for large spans found in mega shopping malls, huge airports, and large manufacturing buildings. Now with Cisco’s SFP-10/25G-LR, networks can communicate at 25G without changing the SMF fiber infrastructure.

Migration from 10G to 25G


The new SFP-10/25G-LR transceiver has dual-rate capability that enables interoperability with 10G-LR SMF transceivers. This allows the network to be incrementally upgraded at either end of the fiber. For example, Figure 4 shows how a Catalyst distribution switch is replaced with a new switch equipped with an SFP-10/25G-LR but still communicates with the legacy 10G Catalyst wiring closet switch at 10G. Then, when the wiring closet switch is replaced with a new 25G Catalyst switch, it communicates with the distribution switch at 25G without changing the transceiver at the other end.


Figure 4. Migration from 10G to 25G.

Interoperability with 40G and 100G


In some circumstances, the distribution switch (or far-end switch) may only have QSFP interfaces. The new SFP-10/25G-LR can interoperate with Cisco’s QSFP-100G-PSM-S transceiver or with Cisco’s QSFP-4X10G-LR-S transceiver via fiber breakout cables or cassettes, thereby connecting QSFP ports with SFP ports. 25G mode requires the use of RS-FEC (forward error correction) on both hosts, which is available on Cisco 100G and 25G ports.


Figure 5. SFP-10/25G interoperates with 25G and 10G.