How to put the IBM built-in data scientist to work for you

In modern IT operations teams, one of the biggest challenges is monitoring an increasingly complex environment—across many different tools—with fewer people. On top of that, teams face more pressure to avoid outages. And due to the immediacy of social media, outages can become very public, very quickly, negatively affecting customer sentiment toward the company’s brand.
Some companies are choosing to employ data scientists to help them overcome challenges like these. These data scientists can use machine learning libraries to build custom solutions that monitor the environment for potential problems.
There’s a better option if you do not want to be in the business of building and maintaining custom tools: an automated data scientist. It’s a tool that can learn the normal behavior of your time-series data to help you avoid service-impacting outages. It can also unify your performance monitoring systems into a single pane of glass, discover mathematical relationships to help perform root cause analysis and consolidate multiple anomalies into one problem.
With IBM Operations Analytics, a cognitive data scientist is essentially built into the product. The cognitive data scientist automatically creates and maintains the most suitable data models for your monitoring data. It intercepts analytic output, tests it and only notifies your team of high-confidence anomalies. To help operations teams take action, the technology delivers insights that include forecasts, discovered relationships, correlations and anomaly history.
How does the built-in data scientist help IT operations?
First, the team doesn’t need to focus on how the insights were achieved (no new hires, no new skill sets, no statistical headaches). They can focus on what they do best: delivering great services, assisted by machine learning. Because the “data scientist” is in the code, actionable insights can be achieved in real time and at scale. When IT environments change, the IBM technology simply adapts and learns the “new normal,” avoiding the need to manually adjust data models and thresholds.
Perhaps the biggest bang for your buck is what IBM calls the “performance manager of managers.” Typically, centralized operations teams have between 20 and 40 performance managers, each requiring domain knowledge and configuration settings to create alerts. The IBM technology takes feeds from any performance manager and provides a single solution to dynamically set and maintain thresholds across your entire infrastructure and applications. And because the baselines can be highly seasonal, they are consistently more effective than traditional manual methods. The IBM technology can actually reduce noise while delivering increased efficiency.
The data scientist in practice: Banking
One real-world example comes from the banking industry. One IBM banking client is using IBM Operations Analytics technology to manage their online banking application. The solution helps them identify performance anomalies, which the bank’s operations team uses to take action.
Over a three-month period, the team reduced major incidents on the banking application by 85 percent, from 20 to three. Think about the value this team achieved through machine-assisted proactive operations:

85 percent fewer interruptions to the online banking service
85 percent fewer chances of revenue loss
85 percent less chance of brand-damaging feedback circulating on social media

Stay tuned for more IBM Operations Analytics insights
In this post I highlighted one of my favorite client value stories and explained how the unique IBM approach can help you achieve similar results without specialized skill sets.
In the next post, Ian Manning, lead developer for IBM Operations Analytics, will take us under the hood. He will explain how IBM differs from competitors and, most importantly, how scalable, proactive operations are enabled through actionable insights on performance data.
In the third post, Kristian Stewart, senior technical staff member for IBM Analytics and Event Management, will explain how our approach delivers effectiveness and efficiency gains, at massive scale, through actionable insights from event data.
Finally, to complete the series, Jim Carey, offering manager for Netcool and BSM products, will discuss how IBM is meeting the need to shift to DevOps. He’ll demonstrate strong new value for cognitive and agile operations.
Interested in learning more? Check out what’s possible for your business with IBM Operations Analytics.
Source: Thoughts on Cloud

How to build a conversational app using Cloud Machine Learning APIs, Part 1

By Chang Luo and Bob Liu, Software Engineers

For consumers, conversational apps (such as chatbots) are among the most visible examples of machine learning in action. For developers, building a conversational app is instructive for understanding the value that machine-learning APIs bring to the process of creating completely new user experiences.

In this two-part post, we’ll show you how to build an example “tour guide” app for Apple iOS that can see, listen, talk and translate via API.AI (a developer platform for creating conversational experiences) and Google Cloud Machine Learning APIs for Speech, Vision and Translate. You’ll also see how easy it is to support multiple languages on these platforms.
The two parts will focus on the following topics:

Part 1
Overview
Architecture
API.AI intents
API.AI contexts

Part 2
API.AI webhook with Cloud Functions
Cloud Vision API
Cloud Speech API
Cloud Translation API
Support multiple languages
This post is Part 1; Part 2 will be published in the coming weeks.

Architecture
Using API.AI
API.AI is a platform for building natural and rich conversational experiences. For our example, it will handle all core conversation flows in the tour guide app. (Note that API.AI provides great documentation and a sample app for its iOS SDK. SDKs for other platforms are also available, so you could easily extend this tour guide app to support Android.)

Create Agent
The first step is to create a “Tour Guide Agent.”

Create Intents
To engage users in a conversation, we first need to understand what users are saying to the agent. We do that with intents and entities. Intents map what your users say to what your conversational experience should do. Entities are used to extract parameter values from user queries.

Each intent contains a set of examples of user input and the desired automated response. To build one, predict what users will say to open the conversation, then enter those phrases in the “Add user expression” box. This list doesn’t need to be comprehensive: API.AI uses machine learning to train the agent to understand more variations of these examples, and you can train the agent on further variations later. For example, go to the Default Welcome Intent and add user expressions such as “how are you,” “hello” and “hi” to open the conversation.

After that, add some more text responses for the agent to reply with. You can then test the intent end to end, as in the sketch below.
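One quick way to exercise the agent outside the iOS app is to call API.AI’s v1 REST /query endpoint directly, which is what the platform SDKs wrap. Here is a minimal sketch, assuming Python with the requests library; the access token is a placeholder for the one on your agent’s settings page, and the session ID is arbitrary:

```python
import requests

API_AI_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN"  # placeholder: agent's client access token

def query_agent(text, session_id, lang="en", contexts=None):
    """Send one user utterance to the API.AI v1 /query endpoint."""
    payload = {"query": text, "lang": lang, "sessionId": session_id}
    if contexts:
        payload["contexts"] = contexts  # optionally seed contexts (see below)
    response = requests.post(
        "https://api.api.ai/v1/query",
        params={"v": "20150910"},  # API.AI protocol version
        headers={"Authorization": "Bearer " + API_AI_TOKEN},
        json=payload,
    )
    response.raise_for_status()
    return response.json()["result"]

result = query_agent("hi", session_id="tour-guide-demo")
print(result["metadata"]["intentName"])  # should name the Default Welcome Intent
print(result["fulfillment"]["speech"])   # one of the text responses you entered
```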
Next, it’s time to work on contexts.

Contexts
Contexts represent the current context of a user’s request. They’re helpful for differentiating phrases that may be vague or have different meanings depending on the user’s preferences or geographic location, the current page in an app or the topic of conversation. Let’s look at an example.

User: Where am I?
Bot: Please upload a nearby picture and I can help find out where you are.
[User uploads a picture of Golden Gate Bridge.]
Bot: You are near Golden Gate Bridge.
User: How much is the ticket?
Bot: Golden Gate Bridge is free to visit.
User: When does it close today?
Bot: Golden Gate Bridge is open 24 hours a day, 7 days a week.
User: How do I get there?
[Bot shows a map to Golden Gate Bridge.]

In the above conversation, when the user asks “How much is the ticket?”, “When does it close today?” or “How do I get there?”, the bot understands from the context that the questions are about Golden Gate Bridge.

The next thing to do is to weave intents and contexts together. For our example, each box in the diagram below is an intent and a context; the arrows indicate the relationships between them.

Output Contexts
Contexts are tied to user sessions (via the session ID that you pass in API calls). If a user expression is matched to an intent, the intent can set an output context to be shared by future expressions in the same session. You can also add a context when you send the user request to your API.AI agent, as the sketch below shows. In our example, the where intent sets the where output context so that the location intent can be matched later.
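For instance, reusing the hypothetical query_agent() helper sketched earlier, a context can be seeded explicitly on an outgoing request; the context name and parameter values here are illustrative:

```python
# Illustrative: seed the "location" context so that intents gated on it
# (such as ticket) are eligible to match in this session.
result = query_agent(
    "How much is the ticket?",
    session_id="tour-guide-demo",
    contexts=[{"name": "location",
               "parameters": {"location": "Golden Gate Bridge"}}],
)
print(result["fulfillment"]["speech"])
```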

Input Contexts
Input contexts limit intents to be matched only when certain contexts are set. In our example, the location intent’s input context is set to where, so the location intent is matched only when the where context is active.

Here are the steps to generate these intents and contexts:

First, create the where intent and add the where output context. This intent is the root of the context tree and has no input context.
Second, create the location intent. Add the where input context, then reset the where output context and add the location output context. In our tour guide app, the input context of location is where. When the location intent is detected, the where context needs to be reset so that subsequent conversation won’t trigger it again; this is done by setting the lifespan of the where output context to 0. By default, a context has a lifespan of 5 requests or 10 minutes.

Next, create the ticket intent. Add the location input context, and add the location output context so that the hours and map intents can continue to use the location context as their input context.

You can pass a parameter from the input context using the format #context.parameter; e.g., pass the location string from the inquiry-where-location intent to the inquiry-where-location-ticket intent as #inquiry-where-location.location.
Finally, create the hours and map intents in the same way as the ticket intent.

Next time
In Part 2, we’ll cover how to use webhook integrations in API.AI to pass information from a matched intent into a Cloud Functions web service and then get a result. Finally, we’ll cover how to integrate the Cloud Vision, Speech and Translation APIs, including support for the Chinese language.

You can download the source code from GitHub.
Source: Google Cloud Platform

Import Power BI Desktop files into Azure Analysis Services

Last week we released a preview of the Azure Analysis Services web designer. This new browser-based experience allows developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or for quick tasks such as adding a new measure to a development or production AAS model.

The Azure Analysis Services web designer now allows you to import data models from a Power BI Desktop file (PBIX) into Azure Analysis Services. Once imported to AAS, you will be able to use those models with all of the AAS features including table partitioning.

You can import your own PBIX file by following the steps below.

Before getting started, you need:

An Azure Analysis Services server at the Standard or Developer tier.
A Power BI Desktop (.pbix) file. New models created from Power BI Desktop files support Azure SQL Database, Azure SQL Data Warehouse, Oracle, and Teradata data sources.

Importing a Power BI Desktop file

1. In your server's Overview blade > Web designer, click Open.

 

2. In Web designer > Models, click + Add.

3. In New model, type a model name, and then select Power BI Desktop file.

4. Browse for the file you wish to import and then click Import.

At this point, the model inside your desktop file will be converted to an Azure Analysis Services model. You can then query the model directly in the web designer, or open it in Power BI Desktop as a live connection. Further edits to the model can be made in the Azure Analysis Services web designer or through Visual Studio.

Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure

Mirantis Doubles Down on NFV; Optimizing Mirantis Cloud Platform for Telcos

AT&T, Vodafone, Saudi Telecom, China Mobile rely on Mirantis to easily deploy and update NFV via DriveTrain

SUNNYVALE, Calif., July 27, 2017 (GLOBE NEWSWIRE) — Mirantis today announced a series of innovative NFV-focused updates to Mirantis Cloud Platform (MCP), optimized for easy deployment, operations and updates via DriveTrain.

“MCP now includes significant new enhancements for NFV, available for customers to consume via the DriveTrain toolchain,” said Boris Renski, Mirantis co-founder and CMO. “Leading communications companies are selecting Mirantis to enable their VNFs and unlock a ‘disaggregated’ NFV stack that’s tuned for high performance and based on open source standards and non-proprietary infrastructure hardware.”

Mirantis continues to add capabilities supporting NFV for telecom operators, cable providers and enterprises. These capabilities provide the VIM (including the SDN controller) and NFVi layers of the ETSI NFV reference architecture. Specifically, they include:

OVS-DPDK over bonded interfaces: Allows users to consume higher bandwidth over a single link-aggregated interface.
VLAN-aware VMs: Enables users to consume significantly fewer vNICs, where previously a separate vNIC was required for each VLAN. This dramatically reduces the networking complexity of the virtualized environment.
Per-VF QoS: Bandwidth capping at a per-virtual-function level permits fine-grained traffic shaping and prevents noisy-neighbor syndrome (a rough sketch of the corresponding API call follows below).
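Under the hood, this kind of per-VF capping surfaces through the standard Neutron QoS API. The following is a rough sketch only, using the upstream openstacksdk client rather than any MCP-specific tooling; the cloud name, network ID and rate values are placeholders:

```python
# Sketch: cap bandwidth on an SR-IOV virtual function via a Neutron QoS policy.
import openstack

conn = openstack.connect(cloud="mcp-cloud")  # placeholder clouds.yaml entry

# Create a QoS policy carrying a 500 Mbps bandwidth-limit rule.
policy = conn.network.create_qos_policy(name="vf-500mbps-cap")
conn.network.create_qos_bandwidth_limit_rule(
    policy, max_kbps=500000, max_burst_kbps=50000)

# Attach the policy to a "direct" port, which maps to an SR-IOV VF.
port = conn.network.create_port(
    network_id="TENANT_NET_ID",      # placeholder network
    binding_vnic_type="direct",      # request an SR-IOV virtual function
    qos_policy_id=policy.id,
)
```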

With MCP, Mirantis departs from the traditional software-centric method that revolves around licensing and support subscriptions. Instead, the company is pioneering an operations-centric approach, where open infrastructure is continuously delivered with an operations SLA through a managed service or by the customer themselves. This way, software updates no longer happen once every 6 to 12 months; instead, they are introduced in minor increments on a bi-weekly basis, with no downtime.

Announced in April, Mirantis Cloud Platform includes leading open source software such as OpenStack and Kubernetes, continuously delivered via the DriveTrain Continuous Integration / Continuous Delivery (CI/CD) pipeline and provided to customers in a unique build-operate-transfer delivery model that ensures successful hybrid cloud operations at scale.

Mirantis Cloud Platform is:

Open Cloud Software — provides a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis Cloud Platform to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN).
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain — sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/ CD pipeline. DriveTrain enables:

Increased Day 1 flexibility in customizing the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight — enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stacks through a unified set of software services and dashboards. StackLight:

Avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
Includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
Focuses on SLAs. The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

The build-operate-transfer model provides a turnkey experience, with Mirantis operating the cloud for customers for a period of at least six months with up to a four-nines SLA before handing operational responsibility off to the customer’s team, if desired. This delivery model ensures that not just the software, but also the customer’s team and processes, are aligned with DevOps best practices.

To learn more about Mirantis Cloud Platform, watch an overview video and sign up for a live demo at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Increasing NFV agility in Mirantis Cloud Platform

With the onset of the digital age, the need for agility has become paramount. The network connectivity offered by telecoms is no longer a premium service; instead, it has become an on-demand service with instantaneous setup and tear-down. The premium service is now the set of services offered on top of the network as applications. With flexibility essential, all of that purpose-built equipment, which was a strength in the past, has become the biggest hurdle to competing in the digital age, contributing to decreasing revenues and increasing costs.
So change was a necessity, and thus was born Network Functions Virtualization (NFV). NFV makes it possible to do most of what the telcos were doing using specialized hardware, but with disaggregated network functions made up of software that could be adapted for new situations on COTS hardware.
All of this requires fast-evolving infrastructure resource management, of course, and for that reason NFV has become virtually inseparable from the OpenStack cloud platform. Mirantis, along with partners and competitors, has been working steadily with early adopters to make sure that OpenStack has what it needs to be well suited for NFV deployments. In particular, for the past several years, Mirantis has worked within the telco community to support adoption of OpenStack as the primary Network Functions Virtualization Infrastructure (NFVi) and Virtualized Infrastructure Manager (VIM).
These early adopters of NFV and SDN for telco network transformation have spoken about the initial successes of this approach, as well as the areas that need to be addressed for NFV to reach the next level: massive adoption across the globe.
Some areas our customers and partners identified as crucial include:

CI/CD for the infrastructure layer: One of the biggest hurdles to NFV is that NFV and its ecosystem are continuously evolving, so operators need a proven path to seamlessly absorb new innovations into every component of NFV, including the infrastructure layer. To solve that problem, we need to build Infrastructure as Code to enable infrastructure lifecycle management (LCM).
Future-proof the infrastructure layer: It’s not enough to be able to manage the infrastructure; we also need to avoid forklift upgrades when major changes come along, such as the move to support container-based, cloud-native VNFs.
End-to-end automation, including VNF onboarding and monitoring: This is a key requirement for business agility, which is critical for lowering time to market and accelerating revenue. It enables optimal resource utilization and prevents stranded/stolen assets.
Strong open source communities: No single organization can afford to innovate at the speed essential for the transformation of extremely complex telco network infrastructures, so our customers recognize the need for strong community support for the components of the NFV architecture, especially the management and orchestration (MANO) and virtualization layers of NFVi within the ETSI NFV reference architecture.

Our customers include service providers all over the world, so these problems have been top of mind for us for some time, and we’ve been working to solve them.  For example:

Mirantis Cloud Platform (MCP) includes DriveTrain, which provides a platform for managing virtualized networks using infrastructure as code.
In addition to DriveTrain, MCP includes the ability to easily add Kubernetes and containers, making it an ideal future-proof platform for telcos. What’s more, DriveTrain makes it possible to add the “next big thing” in a manageable way.
Mirantis is actively contributing to both ONAP and OPNFV, and currently working on solutions for VNF onboarding and monitoring.
Mirantis is an open source company, and as such, all the components are built on open source tools, so service providers can lean on a global pool of resources for innovation in the infrastructure area.

In fact, the latest version of MCP focuses on the specific needs of NFV workloads, including their operationalization, or orchestration and automation within the context of a telco network. For example, MCP includes:

Capacity management of SR-IOV NICs through QoS controls. Bandwidth capping at a per-virtual-function level permits fine-grained traffic shaping and prevents noisy-neighbor syndrome.
Better reliability, higher bandwidth and improved load balancing with OVS-DPDK support on bonded NICs. This also enables operators to take advantage of existing assets; for example, you can utilize 10G NICs, when available, instead of investing in 40G NICs.
Improved performance for DPDK by pinning individual queues to cores with NUMA affinity.
The ability to run telco VNFs that require simultaneous connectivity to multiple networks, through support for VLAN-aware VMs (see the sketch below).
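VLAN-aware VMs are exposed through Neutron’s trunk extension. As an illustrative sketch using the upstream openstacksdk client (network IDs and the VLAN ID are placeholders, and MCP tooling may wrap this differently), one vNIC can carry several VLANs like so:

```python
# Sketch: let a single VNF vNIC reach multiple VLANs via a Neutron trunk.
import openstack

conn = openstack.connect(cloud="mcp-cloud")  # placeholder clouds.yaml entry

# Parent port: the one vNIC the VNF boots with.
parent = conn.network.create_port(network_id="PARENT_NET_ID")
trunk = conn.network.create_trunk(name="vnf-trunk", port_id=parent.id)

# Subport: one per additional VLAN the VNF needs to reach.
sub = conn.network.create_port(network_id="VLAN100_NET_ID")
conn.network.add_trunk_subports(trunk, [{
    "port_id": sub.id,
    "segmentation_type": "vlan",
    "segmentation_id": 100,
}])
# Boot the VNF with parent.id as its NIC; traffic the guest tags with
# VLAN 100 is delivered to the subport's network.
```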

One thing that we know for certain is that telcos and service providers can’t afford to ignore NFV. As addicted to their phones as many people are now, the “unlimited” network capability expected with the upcoming 5G standard has the potential to make connectivity seem like the fourth essential ingredient for human survival (after water, air and shelter). And as complex as 5G will be, NFV is critical for it to become a reality. 5G requires a dynamic hierarchical architecture; between that and the requirements for network slicing and cloud-based radio access networks (C-RAN), virtualization of the networking infrastructure is essential.
Accordingly, Mirantis has a rich roadmap that focuses heavily on NFV for 5G enablement over the next 6 to 18 months, and we can’t wait to share it with you.
For further information about NFV or any of the topics mentioned here, please contact Mirantis.
Source: Mirantis

Amping up your disaster recovery with Azure Site Recovery

If you are in the process of building or revising your business continuity plans, it’s worth taking a look at Azure Site Recovery (ASR). ASR is a disaster recovery service that allows you to fail over on-premises applications, running on Linux and Windows and virtualized with VMware or Hyper-V, to Azure in the event of an outage.

On today’s episode of Microsoft Mechanics, I’ll walk you through how Azure Site Recovery can help you to keep your applications available, including setting up replication for your on-premises applications to Azure and testing that the solution meets your compliance needs.

Getting started with Azure Site Recovery

As discussed on today’s demo bench, we’ve reduced the complexity traditionally involved in setting up disaster recovery. ASR is built into Azure: as long as you have an Azure subscription, you can get started today, and it’s free to use for the first 31 days.

Also, with the Azure Hybrid Use Benefit, you can apply existing Windows Server licenses toward this effort; you can learn more from Chris Van Wesep in his recent demo bench episode.

Three pivotal steps

There are three pivotal steps to get up and running. The first is preparing your local infrastructure: depending on which platform you are using, we point you to the Azure Site Recovery on-premises components needed to replicate your applications. In our example today, you’ll see the experience of replicating applications running on VMware ESX using vCenter, which directly connects Azure to your on-premises vCenter instance.

The second step is to replicate your applications, which is facilitated by a guided experience within the Azure portal. This includes selecting the target where your applications will land in Azure, your virtual machines, configuration properties and replication settings.

The last step is to create and store your recovery plan. This is also where you can customize your recovery and test failover without impacting production workloads or end users. Customization means you can sequence the failover of multi-tier applications running on multiple VMs, and you can use Azure Automation to automate common post-failover steps.

Of course, once everything is set up, you can test failover, as I demonstrate today.

As you move forward with your business continuity plan, you’ll also want to use Azure Backup to protect your data against corruption, accidental deletion and ransomware. Azure Backup is likewise fully integrated with Azure and protects data running on Linux and Windows and virtualized with VMware and Hyper-V. You can learn more here.

We hope that you find today’s overview helpful. Please let us know your thoughts and feel free to post your questions.
Source: Azure