How organizations are measuring cloud success

Cloud adoption has entered the mainstream.
Organizations are embracing cloud to drive enterprise-wide innovation that improves customer relationships while delivering operational efficiencies.
But how are they measuring the success of their cloud initiatives? What criteria are they using to decide which workloads should move to the cloud? How are they using return on investment (ROI)?  What other key performance indicators (KPIs) are they tracking?
Data collection for our latest IBM Institute for Business Value (IBV) study titled “Tailoring Hybrid Cloud” reveals how organizations are evaluating their progress throughout their cloud adoption journey.
Criteria for which workloads move to cloud
Since not every workload is suited to the cloud, organizations must decide which functions should be migrated to the cloud and which should stay on premises. We discovered four predominant metrics that organizations employ to determine which workloads should move to the cloud: cost, security and compliance requirements, timing/speed to market, and estimated ROI.

ROI calculations for evaluating cloud initiatives
In the organizations we surveyed, ROI is used extensively throughout the cloud adoption process. Most enterprises (80 percent) say that ROI is a key input to their decision-making process for future cloud initiatives. Nearly as many executives surveyed (77 percent) are confident in their ability to use ROI effectively to measure their cloud initiatives accurately. Nearly three-fourths (74 percent) of business leaders say they consistently and objectively compare their achieved ROI with the original, expected ROI for their enterprise cloud initiatives.

ROI calculations are important in evaluating cloud initiatives in other ways as well. More than half (60 percent) of enterprises use ROI as a process metric to help prioritize their portfolio of cloud initiatives. Close behind, 59 percent of organizations employ ROI as a results metric, measuring the impact of a cloud initiative after it has been implemented.

KPIs for measuring cloud adoption success
More than 50 percent of the executives surveyed rely heavily on financial metrics, principally cost and ROI, to determine the path to success for their cloud initiatives. We also asked survey respondents to name other key performance indicators they track when measuring the benefits of cloud adoption.
Once again, a financial metric is the most popular. Nearly half (47 percent) of organizations report that they track increases in revenue margin. Another financial metric, the rate of change in the reduction of total cost of ownership (TCO), comes in a close second, with 45 percent of executives emphasizing this key performance indicator.

For more findings on how hybrid cloud can answer an enterprise’s unique needs, including more recommendations for getting started, read “Tailoring hybrid cloud: Designing the right mix for innovation, efficiency and growth.”
Source: Thoughts on Cloud

doAzureParallel: Take advantage of Azure’s flexible compute directly from your R session

Users of the R language often require more compute capacity than their local machines can handle. However, scaling up their work to take advantage of cloud capacity can be complex and troublesome, and it often distracts R users from focusing on their algorithms.

We are excited to announce doAzureParallel, a lightweight R package built on top of Azure Batch that allows you to easily use Azure’s flexible compute resources right from your R session.

At its core, the doAzureParallel package is a parallel backend for the widely popular foreach package that lets you execute multiple processes across a cluster of Azure virtual machines. In just a few lines of code, the package helps you create and manage a cluster in Azure and register it as a parallel backend to be used with the foreach package.

With doAzureParallel, there’s no need to manually create, configure, and manage a cluster of individual virtual machines. Instead, this package makes running your jobs at scale no more complex than running your algorithms on your local machine. With Azure Batch’s autoscaling capabilities, you can also increase or decrease the size of your cluster to fit your workloads, helping you to save time and/or money.

doAzureParallel also uses the Azure Data Science Virtual Machine (DSVM), allowing Azure Batch to configure the appropriate environment quickly and easily.

There is no additional cost for these capabilities – you only pay for the Azure VMs you use.

doAzureParallel is ideal for running embarrassingly parallel work such as parametric sweeps or Monte Carlo simulations, making it a great fit for many financial modelling algorithms (back-testing, portfolio scenario modelling, etc.).
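
As a sketch of what a parametric sweep can look like with foreach, the snippet below runs one independent iteration per parameter value. It is illustrative only: runModel() is a hypothetical placeholder for your own model code, and it assumes a doAzureParallel backend has already been registered (as shown in the Getting Started section below); without a registered backend, %dopar% falls back to sequential execution with a warning.

library(foreach)

# Illustrative parametric sweep: one independent iteration per parameter value.
# runModel() is a hypothetical placeholder for your own model function.
param_grid <- seq(0.1, 1.0, by = 0.1)

sweep_results <- foreach(p = param_grid, .combine = 'rbind') %dopar% {
    data.frame(parameter = p, score = runModel(p))
}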

Installation / Pre-requisites

To use doAzureParallel, you need to have a Batch account and a Storage account set up in Azure. More information on setting up these Azure accounts is available in the project documentation.

You can install the package directly from GitHub; more information on installation instructions and dependencies is available in the project documentation as well.
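
For reference, a typical GitHub installation in R uses the devtools package; the repository path below is assumed, so check the project README for the authoritative steps and any additional dependencies.

# Typical GitHub install sketch (repository path assumed; see the README
# for the authoritative instructions and any additional dependencies)
install.packages("devtools")
devtools::install_github("Azure/doAzureParallel")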

Getting Started

Once you install the package, getting started is as simple as a few lines of code:

Load the package:

library(doAzureParallel)

Set up your parallel backend (which is your pool of virtual machines) with Azure:

# 1. Generate a pool configuration JSON file.
generateClusterConfig("pool_config.json")

# 2. Edit your pool configuration file.
# Enter your Batch account & Storage account information and configure your pool settings

# 3. Create your pool. This will create a new pool if your pool hasn’t already been provisioned.
pool <- makeCluster("pool_config.json")

# 4. Register the pool as your parallel backend
registerDoAzureParallel(pool)

# 5. Check that your parallel backend has been registered
getDoParWorkers()

Run your parallel foreach loop with the %dopar% keyword. The foreach function will return the results of your parallel code.

number_of_iterations <- 10
results <- foreach(i = 1:number_of_iterations) %dopar% {
    # This code is executed, in parallel, across your Azure pool.
    myAlgorithm(…)
}

When developing at scale, it is always recommended that you test and debug your code locally first. Switch between %dopar% and %do% to toggle between running in parallel on Azure and running in sequence on your local machine.

# run your code sequentially on your local machine
results <- foreach(i = 1:number_of_iterations) %do% { … }

# use the doAzureParallel backend to run your code in parallel across your Azure pool
results <- foreach(i = 1:number_of_iterations) %dopar% {…}

After you finish running your R code at scale, you may want to shut down your pool of VMs to make sure you are no longer being charged:

# shut down your pool
stopCluster(pool)

Monte Carlo Pricing Simulation Demo

The following demo shows a simplified approach to predicting a stock price after 5 years by simulating 5 million different outcomes for a single stock.

Let's imagine Contoso's stock price changes each day by a factor of 1.001 on average (a 0.1 percent daily gain), with a volatility of 0.01. Given a starting price of $100, we can use a Monte Carlo pricing simulation to figure out what Contoso's stock price will be after 5 years.

First, define the assumptions:

mean_change = 1.001
volatility = 0.01
opening_price = 100

Create a function to simulate the movement of the stock price for one possible outcome over 5 years by taking the cumulative product of daily changes drawn from a normal distribution, using the variables defined above.

simulateMovement <- function() {
    days <- 1825 # ~ 5 years
    movement <- rnorm(days, mean=mean_change, sd=volatility)
    path <- cumprod(c(opening_price, movement))
    return(path)
}

On our local machine, simulate 30 possible outcomes and graph the results:

simulations <- replicate(30, simulateMovement())
matplot(simulations, type='l') # plots all 30 simulations on a graph

To understand where Contoso's stock price will be in 5 years, we need to understand the distribution of the closing price for each simulation (as represented by the lines). But instead of looking at the distribution of just 30 possible outcomes, let's simulate 5 million outcomes to get a massive sample for the distribution.

Create a function to simulate the movement of the stock price for one possible outcome, but only return the closing price.

getClosingPrice <- function() {
    days <- 1825 # ~ 5 years
    movement <- rnorm(days, mean=mean_change, sd=volatility)
    path <- cumprod(c(opening_price, movement))
    closingPrice <- path[length(path)] # final element: the price after all simulated days
    return(closingPrice)
}

Using the foreach package and doAzureParallel, we can simulate 5 million outcomes in Azure. To parallelize this, let's run 50 iterations of 100,000 outcomes each:

closingPrices <- foreach(i = 1:50, .combine='c') %dopar% {
    replicate(100000, getClosingPrice())
}

After running the foreach loop against the doAzureParallel backend, you can look at your Azure Batch account in the Azure Portal to see your pool of VMs running the simulation.

As the nodes in the heat map change color, we can see them busily working on the pricing simulation.

When the simulation finishes, the package will automatically merge the results of each iteration and pull them down from the nodes so that they are ready to use in your R session.

Finally, we'll plot the results to get a sense of the distribution of closing prices over the 5 million possible outcomes.

# plot the 5 million closing prices in a histogram
hist(closingPrices)
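
As an optional sanity check before interpreting the plot, you can confirm the number of merged outcomes and summarize them with base R:

# Optional sanity check on the merged results returned by foreach
length(closingPrices)   # expect 50 * 100000 = 5,000,000 outcomes
summary(closingPrices)  # numeric summary of the simulated closing prices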

Based on the distribution above, Contoso's stock price will most likely move from the opening price of $100 to a closing price of roughly $500 after a 5-year period.
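
As a rough back-of-envelope check of that reading, the average (mean) outcome under these assumptions is opening_price * mean_change^1825, roughly $620; because the distribution of cumulative products is right-skewed, the most likely (modal) closing price in the histogram sits below that mean, which is consistent with a peak around $500.

# Back-of-envelope check: the mean closing price after 1825 independent daily
# changes is opening_price * mean_change^1825 (roughly 620). The histogram's
# peak lies below this mean because the distribution is right-skewed.
opening_price * mean_change ^ 1825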

 

We look forward to seeing how you use these capabilities and to hearing your feedback. Please contact us at razurebatch@microsoft.com with feedback, or feel free to contribute to our GitHub repository.

Additional information:

Download and get started with doAzureParallel
For questions related to using the doAzureParallel package, please see our docs, or feel free to reach out to razurebatch@microsoft.com
Please submit issues via GitHub

Additional Resources:

See Azure Batch, the underlying Azure service used by the doAzureParallel package
More general purpose HPC on Azure

Source: Azure

3 keys to unlocking unstoppable process transformation

In today’s fast-paced and competitive world, business depends on continual innovation. As customers’ expectations continue to grow, delivering a stellar customer experience is more important than ever before. Eliminating process inefficiencies and increasing productivity and innovation are critical to success.
To accomplish customer-driven innovation, businesses need a process transformation strategy that includes three key factors: automation, augmentation and rapid digital innovation.
Automation
Seek to eliminate repetitive tasks and inflexible processes by automating significant parts of the process. This drives efficiencies at every stage, which leads to cost savings, faster issue resolution and more opportunities for high-value customer interaction. A great example of automation in action: PNC Financial Services Group reduced the number of loan applications that the bank had to review manually by 80 to 90 percent.
Augmentation
Arming IT teams with cognitive capabilities empowers them to understand and anticipate customer needs early. As a result, they can make better decisions for the company. Augmentation is essentially about making people more effective. It goes to the heart of high-value customer interaction: stronger service, vendor management and knowledge work.
Rapid digital innovation
Processes that are redundant and cannot be adapted quickly to ever-changing business needs can hamper the flow of great ideas. Continual innovation is the lifeblood of growing and thriving organizations that focus on consistently delighting customers.
Accordingly, IBM process transformation capabilities aim to drive faster experimentation, favoring model-driven environments in the cloud. One example: Travis Perkins plc recognized that its customers' expectations were shifting alongside growing trends in technology. With a process transformation solution, Travis Perkins created a high-quality customer experience that not only delighted its customers but also improved its own data collection process.
Embrace the cognitive era
Cognitive technologies open new avenues to connect with customers at every point of interaction. You can apply actionable insights, drawn from data that traditional tools cannot reach, to business processes and decisions. Just a few examples include:

Digital self-service and engagement solutions that take input, learn from that input with human assistance, put the content into context and make relevant, evidence-based recommendations.
Faster information compilation for agent-assisted client interactions, employee onboarding and adding machine learning to human knowledge in the business moment.
Improved triaging of issues by engaging the right person in the organization for a more efficient workflow.

Please join me at the business process trends and directions session at InterConnect 2017. The session is titled Automating work across the enterprise for stellar customer experience and top-line results. We'll discuss more about infusing cognitive capabilities into your business operations. We will also introduce you to our latest process automation innovations. NHS Blood and Transplant will be on hand to share their own amazing success story with IBM Process Transformation. Hope to see you at InterConnect.
 
Source: Thoughts on Cloud