Azure Analysis Services adds Standard S0 pricing tier

Over the past several months, we have received positive feedback about the Azure Analysis Services preview. Many customers have moved their models to the cloud and have enjoyed the improved manageability of the platform-as-a-service offering. We have expanded regions where Azure Analysis Services is available, and we are working on several of the improvements asked for on the Azure Analysis Services feedback site.

One scenario customers have asked for is a way to support smaller workloads in the cloud. While you can run multiple databases on a single Standard S1 instance, you may want to start with something smaller based on your model size and query volume. We are introducing a new smaller pricing level, Standard S0, which has 40 QPUs and 10 GB of RAM for models.

This new size offers the same features and capabilities as the rest of the Standard tier. You can continue to scale up and down based on your expected load to achieve the best experience for your users. For example, if you need more RAM when processing data, you can scale up during processing and scale down afterwards. You can also scale up during business hours for better query performance, scale back down in off-hours, or even pause the server when needed for further cost savings. You can track QPU utilization in the Azure portal and through the Azure Monitoring APIs. Note that your model must fit in the available RAM, so it is a good idea to check your memory consumption in the Azure portal to make sure Standard S0 is appropriate for your workload.
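If you prefer to script these transitions, the sketch below shows one possible way to scale and pause a server through the Azure Resource Manager REST API from Python. The resource names, bearer token, and api-version here are placeholders and assumptions, not values from this post; check the Azure REST API reference for the current version before relying on it.

```python
# Minimal sketch (not an official sample) of scaling an Azure Analysis Services
# server between Standard tiers and pausing it via the Azure Resource Manager
# REST API, using the `requests` library.
import requests

SUBSCRIPTION = "<subscription-id>"    # placeholder
RESOURCE_GROUP = "<resource-group>"   # placeholder
SERVER = "<aas-server-name>"          # placeholder
TOKEN = "<azure-ad-bearer-token>"     # obtain via Azure AD; placeholder here
API_VERSION = "2017-08-01"            # assumption; confirm the current api-version

BASE = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.AnalysisServices/servers/{srv}"
).format(sub=SUBSCRIPTION, rg=RESOURCE_GROUP, srv=SERVER)

HEADERS = {"Authorization": "Bearer " + TOKEN, "Content-Type": "application/json"}


def scale_to(sku_name: str) -> None:
    """Scale the server to a new SKU, e.g. 'S1' for processing, 'S0' off-hours."""
    resp = requests.patch(
        BASE + "?api-version=" + API_VERSION,
        headers=HEADERS,
        json={"sku": {"name": sku_name, "tier": "Standard"}},
    )
    resp.raise_for_status()


def pause() -> None:
    """Suspend the server entirely for further cost savings."""
    resp = requests.post(BASE + "/suspend?api-version=" + API_VERSION, headers=HEADERS)
    resp.raise_for_status()


if __name__ == "__main__":
    scale_to("S1")   # scale up before a heavy processing window
    # ... run processing ...
    scale_to("S0")   # scale back down afterwards
    pause()          # or pause outside business hours
```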

Please try out the Standard S0 size and let us know how it works for you. You can share your experiences on the Azure Analysis Services MSDN forum. The forum is also a great place to get help from the community.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
 
Source: Azure

Application performance blues? Dear Cloudie can help

Interested in adopting application performance monitoring? Let’s take a look at how you can embrace automation in the style of one of my favorite things to read: a classic, therapeutic newspaper advice column.
Dear Cloudie,
It was a beautiful spring Sunday with great weather outside. But it was ruined once again by an alert. My customer was having issues with their web application. The ticket said the app was loading slowly but didn't elaborate further. As a network admin, I then had to pull Wireshark traces and TCP dumps and comb through those logs. I was still unsure of the root cause, so I had to guess that it was the network's fault and take the blame. I have gone on one too many Sunday hunts for a needle in a haystack. Please help.
— Another Network Admin Buried Deep in Haystacks
Dear Not-Just-Another Network Admin,
Those of us with network responsibilities often worry about application deployment and delivery. But many of us desperately lack architectural innovation and access to real-time telemetry.
For future incidents, I recommend you research application performance monitoring technologies. This will equip you well for when incidents occur in the future. And they will.
Here’s a simple, three-step methodology that will help you get started.
1. Architecture
Some may try to trick you into believing that you can achieve the results you seek with traditional approaches. But you need infrastructure that not only captures real-time telemetry but can also process millions of data points in real time without any performance impact.
Solutions built on software-defined principles separate the data plane from the control plane. This gives you flexibility. The data plane can just capture real-time application traffic telemetry and feed it to the off-path control plane. Your control plane can analyze these metrics and present the insights in a visual dashboard without impacting performance.
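As a purely illustrative sketch (the names and the queue-based hand-off are this example's own, not any particular product's design), the Python below shows the idea: the data plane does constant-time capture and enqueue on the request path, while a separate control-plane loop aggregates the telemetry off-path.

```python
# Illustrative data-plane / control-plane split: capture is cheap and inline,
# analysis happens in a separate consumer so it never slows request handling.
import queue
import threading
from collections import defaultdict

telemetry_q: "queue.Queue" = queue.Queue(maxsize=100_000)


def data_plane_tap(url: str, status: int, latency_ms: float) -> None:
    """Called inline while serving a request: O(1) work, never blocks on analysis."""
    try:
        telemetry_q.put_nowait({"url": url, "status": status, "latency_ms": latency_ms})
    except queue.Full:
        pass  # drop rather than slow the request path


def control_plane_loop() -> None:
    """Off-path consumer: aggregates metrics for dashboards and alerting."""
    stats = defaultdict(lambda: {"count": 0, "total_ms": 0.0, "errors": 0})
    while True:
        event = telemetry_q.get()
        s = stats[event["url"]]
        s["count"] += 1
        s["total_ms"] += event["latency_ms"]
        if event["status"] >= 500:
            s["errors"] += 1
        telemetry_q.task_done()


threading.Thread(target=control_plane_loop, daemon=True).start()

# The data plane records a request and returns immediately.
data_plane_tap("/checkout", 200, 87.3)
```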
2. Analytics
Of the various elements of application traffic that you can measure, you need to identify the relevant metrics. Then you can configure your tools to collect real-time telemetry from your application instances.
You will need insights into:

End-user performance
Page load times
Media and file accesses
URLs and URIs accessed
Response codes
Client analytics such as location, device types, operating system versions and browsers

Together, all of this can average millions of data points per second. Traditional computing models can neither scale to that volume nor process the resulting petabytes of data without performance degradation.
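To make the list above concrete, here is one hypothetical shape for a single telemetry record covering those metrics; the field names are illustrative, not any specific APM product's schema.

```python
# Hypothetical telemetry record combining end-user, URL, response, and client data.
from dataclasses import dataclass, asdict


@dataclass
class TelemetryRecord:
    timestamp: float          # epoch seconds
    url: str                  # URL/URI accessed
    page_load_ms: float       # end-user page load time
    response_code: int        # HTTP response code
    bytes_served: int         # size of media/file delivered
    client_location: str      # e.g. country or region code
    device_type: str          # e.g. "mobile", "desktop"
    os_version: str
    browser: str


record = TelemetryRecord(
    timestamp=1489968000.0,
    url="/products/42",
    page_load_ms=1834.0,
    response_code=200,
    bytes_served=524288,
    client_location="DE",
    device_type="mobile",
    os_version="iOS 10.2",
    browser="Safari",
)
print(asdict(record))  # at millions of records per second, this stream reaches petabyte scale
```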
3. Automate
I’m reminded of an IT joke: “Automate painful processes and now you do stupid things faster.”
We adopt cloud-native architectures to achieve flexibility, agility and continuous delivery. Automation plays a critical role in achieving these benefits. Based on the insights you get from real-time application analytics, your network team can automatically scale their resources to mirror traffic patterns. Application teams win too – they can automate application services, thereby shortening the development life cycle.
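As a deliberately simple illustration of that feedback loop (the per-instance capacity figure and the `set_instance_count` hook are stand-ins for whatever orchestration API your environment actually exposes), the sketch below derives an instance count from observed traffic.

```python
# Toy autoscaling policy: turn real-time traffic telemetry into an instance count.
import math


def desired_instances(requests_per_sec: float, per_instance_capacity: float = 500.0,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Derive an instance count from observed request rate, clamped to safe bounds."""
    needed = math.ceil(requests_per_sec / per_instance_capacity)
    return max(min_instances, min(max_instances, needed))


def set_instance_count(n: int) -> None:
    # Placeholder: call your cloud provider, load balancer, or orchestrator API here.
    print(f"scaling pool to {n} instances")


# Example: telemetry shows 4,200 req/s, so the pool scales to 9 instances.
set_instance_count(desired_instances(4200))
```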
With these three steps to get you and your team started, you will notice that your teams and your infrastructure solutions walk in sync.
And before you know it, sunny days ruined by alerts will be a thing of the past.
Yours,
Cloudie
PS: To learn more about APM, join us at IBM InterConnect, March 19–23, 2017. Or download the DevOps APM for Dummies ebook to learn how teams work together to continuously deliver secure, available application insights.
The post Application performance blues? Dear Cloudie can help appeared first on Cloud computing news.
Source: Thoughts on Cloud