Governance setting for cache refreshes from Azure Analysis Services

Built on the proven analytics engine in Microsoft SQL Server Analysis Services, Azure Analysis Services delivers enterprise-grade BI semantic modeling capabilities with the scale, flexibility, and management benefits of the cloud. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Azure Analysis Services helps you transform complex data into actionable insights. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc interactive analysis.

Data visualization and consumption tools over Azure Analysis Services (Azure AS) sometimes store data caches to enhance report interactivity for users. The Power BI service, for example, caches dashboard tile data and report data for the initial load of Live Connect reports. However, in enterprise BI deployments where semantic models are reused throughout an organization, a large number of dashboards and reports can end up sourcing data from a single Azure AS model. This can cause an excessive number of cache queries to be submitted to AS and, in extreme cases, can overload the server. This is especially relevant to Azure AS (as opposed to on-premises SQL Server Analysis Services) because models are often co-located in the same region as the Power BI capacity for faster query response times, so they may not benefit much from caching in the first place.

ClientCacheRefreshPolicy governance setting

The new ClientCacheRefreshPolicy property allows IT or the AS practitioner to override this behavior at the Azure AS server level and disable automatic cache refreshes. All Power BI Live Connect reports and dashboards will observe the setting irrespective of dataset-level settings or which Power BI workspace they reside in. You can set this property using SQL Server Management Studio (SSMS) in the Server Properties dialog box. See the Analysis Services server properties page for more information on how to use this property.

Source: Azure

Azure Notification Hubs and Google’s Firebase Cloud Messaging Migration

When Google announced its migration from Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), push services like Azure Notification Hubs had to adjust how they send notifications to Android devices to accommodate the change.

We updated our service backend, then published updates to our API and SDKs as needed. With our implementation, we made the decision to maintain compatibility with existing GCM notification schemas to minimize customer impact. This means that we currently send notifications to Android devices using FCM in FCM Legacy Mode. Ultimately, we want to add true support for FCM, including the new features and payload format. That is a longer-term change and the current migration is focused on maintaining compatibility with existing applications and SDKs. You can use either the GCM or FCM libraries in your app (along with our SDK) and we make sure the notification is sent correctly.
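
For illustration, here is a minimal Python sketch of sending a classic GCM-shaped payload through the Notification Hubs REST API; the namespace, hub, and key values are placeholders, and you should confirm the header names and api-version against the current REST API reference:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

import requests

# Placeholder values - substitute your own namespace, hub name, and access policy key.
NAMESPACE = "<namespace>"
HUB = "<hub-name>"
SAS_KEY_NAME = "DefaultFullSharedAccessSignature"
SAS_KEY = "<access-policy-key>"


def make_sas_token(uri, key_name, key, ttl_seconds=3600):
    """Build a Service Bus-style SAS token for the Notification Hubs endpoint."""
    expiry = int(time.time()) + ttl_seconds
    string_to_sign = urllib.parse.quote_plus(uri) + "\n" + str(expiry)
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        urllib.parse.quote_plus(uri), urllib.parse.quote_plus(signature), expiry, key_name
    )


uri = "https://{}.servicebus.windows.net/{}/messages/?api-version=2015-01".format(NAMESPACE, HUB)

headers = {
    "Authorization": make_sas_token(uri, SAS_KEY_NAME, SAS_KEY),
    "Content-Type": "application/json;charset=utf-8",
    # Keep sending the GCM-shaped payload; the service forwards it via FCM Legacy Mode.
    "ServiceBusNotification-Format": "gcm",
}

# Classic GCM "data" payload - unchanged by the FCM Legacy migration.
payload = {"data": {"message": "Hello from Azure Notification Hubs"}}

response = requests.post(uri, json=payload, headers=headers)
print(response.status_code)  # 201 means the notification was accepted
```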

Some customers recently received an email from Google warning about apps using a GCM endpoint for notifications. This was just a warning, and nothing is broken – your app’s Android notifications are still sent to Google, and Google still processes them. However, some customers who specified the GCM endpoint explicitly in their service configuration were still using the deprecated endpoint. We had already identified this gap and were working on a fix when Google sent the email.

We replaced that deprecated endpoint and the fix is deployed.

If your app uses the GCM library, go ahead and follow Google’s instructions to upgrade to the FCM library in your app. Our SDK is compatible with either, so you won’t have to change anything on our side (as long as you’re up to date with our SDK version).

Now, this isn’t how we want things to stay, so over the next year you’ll see API and SDK updates from us implementing full support for FCM (and likely deprecating GCM support). In the meantime, here are some answers to common questions we’ve heard from customers:

Q: What do I need to do to be compatible by the cutoff date (Google’s current cutoff date is May 29th and may change)?

A: Nothing. We will maintain compatibility with the existing GCM notification schema. Your GCM key will continue to work as normal, as will any GCM SDKs and libraries used by your application.
If/when you decide to upgrade to the FCM SDKs and libraries to take advantage of new features, your GCM key will still work. You may switch to using an FCM key if you wish, but make sure you add Firebase to your existing GCM project when creating the new Firebase project. This will guarantee backward compatibility for your customers running older versions of the app that still use GCM SDKs and libraries.

If you are creating a new FCM project and not attaching to the existing GCM project, once you update Notification Hubs with the new FCM secret you will lose the ability to push notifications to your current app installations, since the new FCM key has no link to the old GCM project.

Q: Why am I getting this email about old GCM endpoints being used? What do I have to do?

A: Nothing. We have been migrating to the new endpoints and will be finished soon, so no change is necessary. Nothing is broken; our one missed endpoint simply caused warning messages from Google.

Q: How can I transition to the new FCM SDKs and libraries without breaking existing users?

A: Upgrade at any time. Google has not yet announced any deprecation of existing GCM SDKs and libraries. To ensure you don't break push notifications to your existing users, make sure that when you create the new Firebase project you associate it with your existing GCM project. This will ensure new Firebase secrets work for users running older versions of your app with GCM SDKs and libraries, as well as for new users of your app with FCM SDKs and libraries.

Q: When can I use new FCM features and schemas for my notifications?

A: Once we publish updates to our API and SDKs. Stay tuned – we expect to have something for you in the coming months.

Learn more about Azure Notification Hubs and get started today.
Source: Azure

5 tips to get more out of Azure Stream Analytics Visual Studio Tools

Azure Stream Analytics is an on-demand, real-time analytics service for powering intelligent action. Azure Stream Analytics tools for Visual Studio make it easier for you to develop, manage, and test Stream Analytics jobs. This year we shipped two major updates, in January and March, introducing useful new features. In this blog we’ll walk through some of these capabilities to help you improve productivity.

Test partial scripts locally

In the latest March update we enhanced the local testing capability. Besides running the whole script, you can now select part of the script and run it locally against a local file or a live input stream. Click Run Locally or press F5/Ctrl+F5 to trigger the execution. Note that the selected portion of the larger script file must be a logically complete query to execute successfully.

Share inputs, outputs, and functions across multiple scripts

It is very common for multiple Stream Analytics queries to use the same inputs, outputs, or functions. Since these configurations and code are managed as files in Stream Analytics projects, you can define them once and then use them across multiple projects. Right-click on the project name or a folder node (inputs, outputs, functions, etc.) and then choose Add Existing Item to point to an input file you have already defined. You can keep the inputs, outputs, and functions in a standalone folder outside your Stream Analytics projects to make them easy to reference from various projects.

Duplicate a job to other regions

All Stream Analytics jobs running in the cloud are listed in Server Explorer under the Stream Analytics node. You can open Server Explorer from the View menu.

If you want to duplicate a job to another region, just right-click on the job name and export it to a local Stream Analytics project. Since credentials cannot be downloaded to the local environment, you must specify the correct credentials in the job's input and output files. After that, you are ready to submit the job to another region by clicking Submit to Azure in the script editor.

Local input schema auto-completion

If you have specified a local file for an input to your script, the IntelliSense feature will suggest input column names based on the actual schema of your data file.

Testing queries against SQL database as reference data

Azure Stream Analytics supports Azure SQL Database as an input source for reference data. When you add a reference input using SQL Database, two SQL files are generated as code-behind files under your input configuration file.

In Visual Studio 2017 or 2019, if you have SQL Server Data Tools installed, you can write the SQL query directly and test it by clicking Execute in the query editor. A wizard window will pop up to help you connect to the SQL database and show the query result in the window at the bottom.

Providing feedback and ideas

The Azure Stream Analytics team is committed to listening to your feedback. We welcome you to join the conversation and make your voice heard via our UserVoice. For tools feedback, you can also reach out to ASAToolsFeedback@microsoft.com.

Also, follow us @AzureStreaming to stay updated on the latest features.
Source: Azure

Dear Spark developers: Welcome to Azure Cognitive Services

This post was co-authored by Mark Hamilton, Sudarshan Raghunathan, Chris Hoder, and the MMLSpark contributors.

Integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™

Today at Spark AI Summit 2019, we're excited to introduce a new set of models in the SparkML ecosystem that make it easy to leverage Azure Cognitive Services at terabyte scales. With only a few lines of code, developers can embed Cognitive Services within their existing distributed machine learning pipelines in Spark ML. Additionally, these contributions allow Spark users to chain or pipeline services together with deep networks, gradient boosted trees, and any SparkML model, and apply these hybrid models in elastic and serverless distributed systems.

From image recognition and object detection to speech recognition, translation, and text-to-speech, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have already discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Azure Cognitive Services on Apache Spark™

Cognitive Services on Spark enables working with Azure’s intelligent services at massive scale through the Apache Spark™ distributed computing ecosystem. The Cognitive Services on Spark are compatible with any Spark 2.4 cluster, such as Azure Databricks, the Azure Distributed Data Engineering Toolkit (AZTK) on Azure Batch, Spark in SQL Server, and Spark clusters on Azure Kubernetes Service. Furthermore, we provide idiomatic bindings in PySpark, Scala, Java, and R (Beta).

Cognitive Services on Spark allows users to embed general-purpose, continuously improving intelligent models directly into their Apache Spark™ and SQL computations. This contribution aims to liberate developers from low-level networking details, so they can focus on creating intelligent, distributed applications. Each Cognitive Service is a SparkML transformer, so users can add services to existing SparkML pipelines. We also introduce a new type of API to the SparkML framework that allows users to parameterize models by either a single scalar or a column of a distributed Spark DataFrame. This API yields a succinct yet powerful fluent query language that offers fully distributed parameterization without clutter. For more information, check out our session.

Use Azure Cognitive Services on Spark in these 3 simple steps:

Create an Azure Cognitive Services Account
Install MMLSpark on your Spark Cluster
Try our example notebook
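
For example, here is a minimal PySpark sketch of step three, assuming MMLSpark is installed on the cluster and you have a Cognitive Services (Text Analytics) key; the TextSentiment transformer and its setters follow the MMLSpark cognitive module, but check the documentation for your installed version:

```python
from pyspark.sql import SparkSession
from mmlspark.cognitive import TextSentiment  # requires the MMLSpark package on the cluster

spark = SparkSession.builder.getOrCreate()

# A toy DataFrame standing in for terabyte-scale data.
df = spark.createDataFrame(
    [("I love Azure Cognitive Services on Spark!",), ("This query is far too slow.",)],
    ["text"],
)

# Each Cognitive Service is a SparkML transformer, so it drops into existing pipelines.
sentiment = (
    TextSentiment()
    .setSubscriptionKey("<text-analytics-key>")  # placeholder key
    .setLocation("eastus")                       # region of your Cognitive Services account
    .setTextCol("text")
    .setOutputCol("sentiment")
    .setErrorCol("error")
)

sentiment.transform(df).select("text", "sentiment").show(truncate=False)
```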

Low-latency, high-throughput workloads with the cognitive service containers

The Cognitive Services on Spark are compatible with services from any region of the globe; however, many scenarios require low or no connectivity and ultra-low latency. To tackle these with the Cognitive Services on Spark, we have recently released several Cognitive Services as Docker containers. These containers enable running Cognitive Services locally or directly on the worker nodes of your cluster for ultra-low-latency workloads. To make it easy to create Spark clusters with embedded Cognitive Services, we have created a Helm chart for deploying a Spark cluster onto the popular container orchestration platform Kubernetes. Simply point the Cognitive Services on Spark at your container’s URL to go local!
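
As a hedged sketch, re-pointing a transformer at a container is typically just a matter of swapping the endpoint; the setUrl call and the localhost URL below are illustrative and should be matched to your own container or Helm deployment:

```python
from mmlspark.cognitive import TextSentiment

# Target a Text Analytics container running next to the Spark workers instead of the
# cloud endpoint; the URL, port, and API path are illustrative - match your deployment.
local_sentiment = (
    TextSentiment()
    .setUrl("http://localhost:5000/text/analytics/v2.0/sentiment")
    .setSubscriptionKey("<key>")   # may not be required when targeting a local container
    .setTextCol("text")
    .setOutputCol("sentiment")
)
```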

Add any web service to Apache Spark™ with HTTP on Spark

The Cognitive Services are just one example of using networking to share software across ecosystems. The web is full of HTTP(S) web services that provide useful tools and serve as one of the standard patterns for making your code accessible in any language. Our goal is to allow Spark developers to tap into this richness from within their existing Spark pipelines.

To this end, we present HTTP on Spark, an integration between the entire HTTP communication protocol and Spark SQL. HTTP on Spark allows Spark users to leverage the parallel networking capabilities of their cluster to integrate any local, docker, or web service. At a high level, HTTP on Spark provides a simple and principled way to integrate any framework into the Spark ecosystem.

With HTTP on Spark, users can create and manipulate their requests and responses using SQL operations, maps, reduces, filters, and any tools from the Spark ecosystem. When combined with SparkML, users can chain services together and use Spark as a distributed micro-service orchestrator. HTTP on Spark provides asynchronous parallelism, batching, throttling, and exponential back-offs for failed requests so that you can focus on the core application logic.
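
The sketch below, adapted from the MMLSpark HTTP-on-Spark examples, shows the general shape of this API; the module path and the HTTPTransformer/http_udf names may differ between MMLSpark versions, and the World Bank endpoint is just an arbitrary public API used for illustration:

```python
from requests import Request
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from mmlspark.io.http import HTTPTransformer, http_udf  # names per the MMLSpark HTTP-on-Spark module

spark = SparkSession.builder.getOrCreate()

# Build one HTTP GET per row; any REST endpoint can be substituted here.
def country_request(code):
    return Request("GET", "http://api.worldbank.org/v2/country/{}?format=json".format(code))

countries = (
    spark.createDataFrame([("br",), ("us",)], ["code"])
    .withColumn("request", http_udf(country_request)(col("code")))
)

# The transformer fans requests out across the cluster with bounded concurrency.
client = (
    HTTPTransformer()
    .setConcurrency(5)
    .setInputCol("request")
    .setOutputCol("response")
)

client.transform(countries).select("code", "response").show(truncate=False)
```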

Real world examples

The Metropolitan Museum of Art

At Microsoft, we use HTTP on Spark to power a variety of projects and customers. Our latest project uses the Computer Vision APIs on Spark and Azure Search on Spark to create a searchable database of art for The Metropolitan Museum of Art (The MET). More specifically, we load The MET’s Open Access catalog of images and use the Computer Vision APIs to annotate these images with searchable descriptions in parallel. We also used CNTK on Spark and SparkML’s Locality Sensitive Hashing implementation to featurize these images and create a custom reverse image search engine. For more information on this work, check out our AI Lab or our GitHub.

The Snow Leopard Trust

We partnered with the Snow Leopard Trust to help track and understand the endangered snow leopard population using the Cognitive Services on Spark. We began by creating a fully labelled training dataset for leopard classification by pulling snow leopard images from Bing on Spark. We then used CNTK and TensorFlow on Spark to train a deep classification system. Finally, we interpreted our model using LIME on Spark to refine our leopard classifier into a leopard detector without drawing a single bounding box by hand! For more information, you can check out our blog post.

Conclusion

With only a few lines of code you can start integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™. The Spark bindings offer high throughput and run anywhere you run Spark. The Cognitive Services on Spark fully integrate with containers for high-performance, on-premises, or low-connectivity scenarios. Finally, we have provided a general framework for working with any web service on Spark. You can start leveraging the Cognitive Services for your project with our open-source initiative MMLSpark on Azure Databricks.

Learn more

Web

GitHub

Email: mmlspark-support@microsoft.com
Source: Azure

AI for Good: Developer challenge!

Do you have an idea that could improve lives and empower everyone through more accessible technology? Or perhaps you have an idea that would help create a sustainable balance between modern society and the environment? Even if it’s just the kernel of an idea, it’s a concept worth exploring with the AI for Good Idea Challenge!
Source: Azure

Customize your Azure best practice recommendations in Azure Advisor

Cloud optimization is critical to ensuring you get the most out of your Azure investment, especially in complex environments with many Azure subscriptions and resource groups. Azure Advisor helps you optimize your Azure resources for high availability, security, performance, and cost by providing free, personalized recommendations based on your Azure usage and configurations.

In addition to consolidating your Azure recommendations into a single place, Azure Advisor has a configuration feature that can help you focus exclusively on your most important resources, such as those in production, and save you remediation time. You can also configure thresholds for certain recommendations based on your business needs.

Save time by configuring Advisor to display recommendations only for resources that matter to you

You can configure Azure Advisor to provide recommendations exclusively for the subscriptions and resource groups you specify. By narrowing your Advisor recommendations down to the resources that matter most to you, you can save time optimizing your Azure workloads. To get you started, we’ve created a step-by-step guide on how to configure Advisor in the Azure portal (UI). To learn how to configure Advisor from the command line (CLI), see our documentation, “az advisor configuration.”

Please note that there’s a difference between Advisor configuration and the filtering options available in the Azure portal. Configuration is persistent and prevents recommendations from showing for the unselected scope. Filtering in the UI temporarily displays a subset of recommendations. Available UI filters include subscription, service, and active versus postponed recommendations.

Configuring thresholds for cost recommendations to find savings

You can also customize the CPU threshold for one of our most popular recommendations, “Right-size or shutdown underutilized virtual machines,” which analyzes your usage patterns and identifies virtual machines (VMs) with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of your VMs.

You can modify the average CPU utilization threshold Advisor uses for this recommendation to a higher or lower value so you can find more savings depending on your business needs.
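
As a hedged example, the same change can be scripted with the az CLI command referenced above; the --low-cpu-threshold flag and its allowed values come from the CLI reference, so verify them against your installed CLI version:

```python
import subprocess

SUBSCRIPTION = "<subscription-id>"  # placeholder; `az login` must already have been run


def az(*args):
    """Run an az CLI command and print its JSON output."""
    result = subprocess.run(["az", *args], capture_output=True, text=True, check=True)
    print(result.stdout)


# Inspect the current Advisor configuration for the subscription.
az("advisor", "configuration", "list", "--subscription", SUBSCRIPTION)

# Lower the average CPU threshold used by the
# "Right-size or shutdown underutilized virtual machines" recommendation to 5 percent.
az("advisor", "configuration", "update",
   "--subscription", SUBSCRIPTION,
   "--low-cpu-threshold", "5")
```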

Get started with Azure Advisor

Review your Azure Advisor recommendations and customize your Advisor configurations now. If you need help getting started, check our Advisor documentation. We always welcome feedback. Submit your ideas or email us with any questions or comments at advisorfeedback@microsoft.com.
Source: Azure

Migrating SAP applications to Azure: Introduction and our partnership with SAP

Just over 25 years ago, Bill Gates and Hasso Plattner met to form an alliance between Microsoft and SAP that has become one of our industry’s longest-lasting alliances. At the time their conversation focused on how Windows could be the leading operating system for SAP’s SAPGUI desktop client and, when released a few years later, how Windows NT could be a server operating system of choice for running SAP R/3. Not long after, in 1996, we started our own SAP project based on Windows NT/SQL Server, complementing an SAP alliance that has continued to evolve since then while meeting the needs of SAP customers of all sizes.

That said, with 90 percent of today’s Fortune 500 companies using Microsoft Azure and an estimated 80 percent of Fortune customers running SAP solutions, it makes sense that SAP running on Azure is a key joint initiative between Microsoft and SAP. At the SAPPHIRENOW conference in 2016, Microsoft CEO Satya Nadella and SAP CEO Bill McDermott were on stage talking about the significant progress of SAP and Azure, especially with the release of SAP HANA on Azure Large Instances. Most of our conversations with large-scale SAP customers at the time were about providing basic SAP on Azure information (i.e., kicking the tires). We’ve made continued progress since then as we released the M-series virtual machine sizes (up to 4 TB of memory) and SAP HANA Large Instances (up to 20 TB of memory), and then provided support for SAP Cloud Platform, SAP HANA Enterprise Cloud on Azure, and Active Directory single sign-on (SSO). Last year we announced our plans to release larger M-series sizes (up to 12 TB), and our conversations with customers have evolved beyond cursory information gathering into discussions about productive use of SAP on Azure.

Today more and more SAP customers are simply choosing Azure for running SAP as we continue to demonstrate successful SAP on Azure deployments and advance Azure as a mission-critical cloud platform with features such as Azure Site Recovery (ASR) and Availability Zones. Customer conversations happen at both executive and technical levels as we discuss not just the advantages of running SAP on Azure such as cost (e.g., shifting from CapEx to OpEx and utilizing Azure Reserved Instances), but also other key aspects such as scalability, flexibility, and security.

As an example of scalability, SAP customers can scale up their SAP environment during a month-end financial close when more computing capacity is typically needed, and then right-size immediately after month-end for typical operations during the rest of the month. From a flexibility and agility perspective, one of the more frequent topics of conversation we’ve had with customers’ SAP Basis teams has been one of their biggest pains: the on-premises experience of ordering and provisioning new hardware for their SAP landscape. This is typically a process that can take weeks, if not several months, depending on the size and type of customer, and all the while SAP application teams are chomping at the bit waiting to make progress during their phase of an SAP project. With SAP on Azure, agile provisioning is possible by leveraging new features like shared images and by integrating the provisioning process with automation tools like Terraform, Ansible, Puppet, and Chef. This leads to a faster and more dependable provisioning process.

SAP customers are also deploying initial SAP S/4HANA environments by leveraging the SAP Cloud Appliance Library (SAP CAL), which copies and deploys pre-built images into a customer’s Azure subscription. For example, deploying SAP Model Companies via SAP CAL has become popular during the blueprinting phase of S/4HANA projects; it helps application teams by providing a reference S/4HANA implementation from which to jumpstart their own custom implementation.

From a development perspective we’ve also offered more flexibility for SAP developers with solutions such as SAP Cloud Platform on Azure. SAP application developers can now use Azure to co-locate application development next to SAP ERP data and boost development productivity, while accessing SAP ERP data at low latencies for faster application performance. This can be done with Azure’s platform services such as Azure Event Hubs for event data processing and Azure Storage for unlimited, inexpensive storage. It’s also been impressive to see customers like Coats, the world’s oldest thread manufacturer, integrate other Azure services like Internet of Things (IoT) capabilities on their manufacturing floors with their SAP environments also running on Azure.

For security and compliance, Microsoft spends over $1B per year on security R&D – an investment typical customers cannot match. This has led to Azure having inherent security capabilities, such as Azure Security Center, and gives customers confidence that their cloud provider meets applicable government and industry compliance standards.

This brings us to an SAP on Azure technical blog series that we’ll release over the next three weeks, leading up to this year’s SAPPHIRENOW conference in Orlando. With more and more customers having chosen Azure as the cloud platform for running SAP, they want more detailed technical guidance. This is one of the reasons a new team was formed within our Azure Global Customer Advisory Team (AzureCAT) organization, called AzureCAT SAP Deployment Engineering. Our team is focused on working with the largest and most complex SAP customers running their SAP environments in Azure. Working with these customers enables us to provide more direct SAP-related customer feedback to our Azure engineering teams and further enhance our SAP on Azure technical roadmap, ensuring we provide the best features for SAP customers of all sizes.

Our first SAP on Azure technical blog post of this series is by my colleague, Will Bratton, who will step you through key technical design considerations for deploying and running SAP on Microsoft Azure. These important considerations include security, performance, scalability, availability, recoverability, operations, and efficiency.

Next week my colleague Marshal Whatley dives into the world of migrating SAP ERP and SAP S/4HANA to Azure, much as our own internal SAP implementation has moved to Azure and is moving to S/4HANA. The week after, my colleague Troy Shane will cover migration of SAP BW/4HANA, as well as a view on how best to deploy BW/4HANA in a scale-out architecture today and in the near future with the new Azure NetApp Files.

Finally, to all of our existing SAP on Azure customers, we thank you for betting your business on Azure and we look forward to continuing to meet your needs as a mission critical cloud platform for SAP. To prospective SAP customers looking at Azure, we look forward to answering all of your questions at SAPPHIRENOW and beyond.
Source: Azure

Best practices in migrating SAP applications to Azure – part 1

This is the first blog in a three-part blog post series on best practices for migrating SAP to Azure.

Designing a great SAP on Azure architecture

In this blog post we will touch upon the principles outlined in “Pillars of a great Azure architecture” as they pertain to building your SAP on Azure architecture in readiness for your migration.

A great SAP architecture on Azure starts with a solid foundation built on four pillars:

Security
Performance and scalability
Availability and recoverability
Efficiency and operations

Designing for security

Your SAP data is likely among the most valuable assets in your organization's technical footprint. Therefore, you need to focus on securing access to your SAP architecture through secure authentication, protecting your application and data from network vulnerabilities, and maintaining data integrity through encryption.

SAP on Azure is delivered in the Infrastructure-as-a-Service (IaaS) cloud model. This means security protections are built into the service by Microsoft at the physical datacenter, physical network, and physical host levels, as well as the hypervisor. For the layers above the hypervisor (e.g., the guest operating system for SAP), you need to undertake a careful evaluation of the services and technologies you select to ensure you are providing the proper security controls for your architecture.

In terms of authentication, you can take advantage of Azure Active Directory (Azure AD) to enable single-sign-on (SSO) to your S/4HANA Fiori Launchpad. Azure AD can also be integrated with the SAP Cloud Platform (SCP) to provide single-sign-on to your SCP services which can also be run on Azure.

Network Security Groups (NSGs) allow you to filter network traffic to and from resources in your virtual network. NSG rules can be defined to allow or deny access to your SAP services – for instance, allowing access to the SAP application ports from on-premises IP address ranges while denying public internet access.
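
As a hedged illustration of such rules, the following Python sketch drives the az CLI to allow the SAP dispatcher port (3200 for instance 00) from an on-premises range and deny inbound internet traffic; resource names and address prefixes are placeholders:

```python
import subprocess


def az(*args):
    """Thin wrapper around the az CLI (assumes `az login` has already been run)."""
    subprocess.run(["az", *args], check=True)


# Allow SAP dispatcher traffic (port 32<instance no>, here instance 00) from the
# on-premises address space only; resource names and prefixes are illustrative.
az("network", "nsg", "rule", "create",
   "--resource-group", "rg-sap-prod",
   "--nsg-name", "nsg-sap-app",
   "--name", "allow-sap-dispatcher-from-onprem",
   "--priority", "100",
   "--direction", "Inbound",
   "--access", "Allow",
   "--protocol", "Tcp",
   "--source-address-prefixes", "10.10.0.0/16",
   "--destination-port-ranges", "3200")

# Explicitly deny all other inbound traffic arriving from the public internet.
az("network", "nsg", "rule", "create",
   "--resource-group", "rg-sap-prod",
   "--nsg-name", "nsg-sap-app",
   "--name", "deny-inbound-from-internet",
   "--priority", "4000",
   "--direction", "Inbound",
   "--access", "Deny",
   "--protocol", "*",
   "--source-address-prefixes", "Internet",
   "--destination-port-ranges", "*")
```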

With regards to data integrity, Azure Disk Encryption helps you encrypt your SAP virtual machine disks where both the operating system and data volumes can be encrypted at rest in storage. Azure Disk Encryption is integrated with Azure Key Vault which controls and manages your encryption keys. Many of our SAP customers choose Azure Disk Encryption for their operating system disks and transparent DBMS data encryption for their SAP database files. This approach secures the integrity of the operating system and ensures database backups are also encrypted.
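
For illustration, enabling Azure Disk Encryption on an SAP VM with keys in Key Vault can look like the following az CLI call (resource names are placeholders; verify the parameters against the current CLI reference):

```python
import subprocess

# Enable Azure Disk Encryption on an SAP application server VM, with keys managed
# in Azure Key Vault; resource names are placeholders.
subprocess.run([
    "az", "vm", "encryption", "enable",
    "--resource-group", "rg-sap-prod",
    "--name", "vm-sap-app01",
    "--disk-encryption-keyvault", "kv-sap-prod",
    "--volume-type", "OS",   # OS disk here; DBMS data files use transparent data encryption
], check=True)
```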

To dig further into topics of interest in the security area, you can refer to our Azure Security documentation.

Designing for performance and scalability

Performance is a key driver for digitizing business processes, and having a performant SAP application is crucial for end users to work efficiently without frustration. Therefore, it is important to undertake a quality sizing exercise for your SAP deployment and to right-size your Azure components – compute, storage, and network.

SAP Note #1928533 details the SAPS values for the Azure virtual machines supported to run SAP applications, and within the links below you can find the network and storage throughput per Azure VM type:

Sizes for Windows Virtual Machines in Azure
Sizes for Linux Virtual Machines in Azure

The agility of Azure allows you to scale your SAP system with ease, for example, increasing the compute capacity of the database server or horizontally scaling through the addition of application servers when demand arises. This includes temporarily beefing up the infrastructure to accelerate your SAP migration throughput and reduce the downtime.

We recommend you leverage virtual machine accelerators for your SAP application and database layers. Enable Accelerated Networking on your virtual machines to accelerate network performance. In scenarios where you will run your SAP database on M-Series virtual machines, consider enabling the Write Accelerator durable write cache on your database log volumes to improve write I/O latency. Write Accelerator is mandatory for productive SAP HANA workloads to ensure a low write latency (sub ms) to the /hana/log volume.

Use Premium Storage Managed Disks for the SAP database server to benefit from high-performance, low-latency I/O. Be mindful that you may need to build a RAID-0 stripe to aggregate IOPS and throughput to meet your application needs. In the case of SAP HANA workloads, we cover storage best practices in our documentation, “SAP HANA infrastructure configurations and operations on Azure.”

ExpressRoute or VPN facilitates connectivity for on-premises SAP end users and application interfaces connecting to your SAP applications in Azure. For production SAP applications in Azure, we recommend ExpressRoute for a private, dedicated connection that offers reliability, faster speeds, lower latency, and tighter security. Be mindful of latency-sensitive interfaces between SAP and non-SAP applications; you may need to define migration “move groups” where groups of SAP and non-SAP applications land on Azure together.

Designing for availability and recoverability

Operational stability and business continuity are crucial for mission-critical, tier-1 SAP applications. Designing for availability ensures that SAP application uptime is protected in the event of localized software or hardware failures. For productive SAP applications, we recommend that the virtual machines which run the SAP single points of failure, such as the central services ((A)SCS) and the database, are deployed in Availability Sets or Availability Zones to protect against planned and unplanned maintenance events. This also applies to the SAP application servers, where a few smaller servers are recommended instead of one large application server. Operating system cluster technologies such as Windows Failover Clustering or Linux Pacemaker would be configured on the guest OS to ensure short failover times for the (A)SCS and DBMS. DBMS synchronous replication would be configured to ensure no loss of data.

Designing for recoverability means being able to recover from data loss, such as a logical error on the SAP database, from large-scale disasters, or from the loss of a complete Azure region. When designing for recoverability, it is necessary to understand the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of your SAP application. Azure regional pairs are recommended for disaster recovery; they offer isolation and availability to hedge against the risk of natural or man-made disasters impacting a single region.

On the DBMS layer, asynchronous replication can be used to replicate your production data from your primary region to your disaster recovery region. On the SAP application layer, Azure-to-Azure Site Recovery can be used as part of an efficient, cost-conscious disaster recovery solution.

It is essential to carefully consider both availability and recoverability within the design of the SAP deployment architecture. This will protect your business from financial losses resulting from downtime and data loss.

Designing for efficiency and operations

Your move to Azure also presents an opportunity to undertake an SAP system rationalization assessment. Do you need to move all SAP systems or can you decommission those which are no longer used? For example, Microsoft-IT decommissioned approximately 60 virtual machines as part of our SAP migration to Azure.

In terms of efficiency, focus on eliminating waste within your SAP on Azure deployment. Post go-live, review the sizing. Can you reduce the size of your virtual machine based on utilization? Can you drop disks which are not being used?  

De-allocating or “snoozing” virtual machines can bring you tremendous cost savings. For example, running your SAP sandbox systems 10 hours x 5 days instead of 24 hours x 7 days would reduce their cost by approximately 70 percent in a pay-as-you-go model. Where your SAP application needs to run 24 x 7, opt for Azure Reserved Instances to drive down your costs.
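
A quick back-of-the-envelope check of that figure, assuming compute is billed only for the hours a VM is running:

```python
# Sandbox running only during a 10-hour window on weekdays vs. around the clock.
snoozed_hours_per_week = 10 * 5    # 50 hours
always_on_hours_per_week = 24 * 7  # 168 hours

savings = 1 - snoozed_hours_per_week / always_on_hours_per_week
print(f"Approximate compute cost reduction: {savings:.0%}")  # ~70%
```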

Establishing infrastructure manually for each SAP deployment can be tedious and error prone, often costing hours or days if multiple SAP installations are required. Therefore, to improve efficiency it makes sense to automate your SAP infrastructure deployment and software installation as much as possible. Embrace the DevOps paradigm, using infrastructure-as-code to build new SAP environments as needed, such as in SAP project landscapes. Below are some links to give you a head start on automation.

Automating SAP deployments in Microsoft Azure using Terraform and Ansible
Accelerate your SAP on Azure HANA project with SUSE Microsoft Solution Templates

As you embark on your SAP to Azure journey, we recommend that you dive into our official documentation to deepen your understanding of using Azure for hosting and running your SAP workloads. 

Use our SAP Workload on Azure Planning and Deployment Checklist as a compass to navigate through the various phases of your SAP migration project. Our checklist will steer you in the right direction for a quality SAP deployment on Azure.

We also recommend that you explore our whitepaper, “Migration Methodologies for SAP on Azure,” where we dig into the various migration options to land your SAP estate on Azure. In scenarios where your SAP application has a giant database footprint, we also have you covered. For more information refer to the blog post, “Very Large Database Migration to Azure.”

The next blog in our series will focus on the migration to Suite-on-HANA and S/4HANA on Azure.
Source: Azure

Azure Marketplace new offers – Volume 35

We continue to expand the Azure Marketplace ecosystem. From March 1 to March 15, 2019, 68 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

(Basic) Apache NiFi 1.9 on Centos 7.6: This is a CentOS 7.6 virtual machine running an Apache NiFi 1.9 installation using default configurations. Once the VM is deployed and running, Apache NiFi can be accessed via a web browser.

Centos 6.8: This distribution of Linux is based on CentOS and is provided by Northbridge Secure Systems. NetConnect by Northbridge Secure Systems is an optimal solution to deliver Azure servers and applications to your device of choice.

Web applications

4insight.io: 4Subsea's digital service 4insight.io provides key decision support to personnel for oil and gas and offshore wind operations. Digital twins delivered on 4insight.io are designed to improve data quality and reduce operational costs and risk.

Additio App – Classroom management: Streamline formative assessment and engage with parents through this K-12 classroom management platform. Additio App is available in the following languages: English, Spanish, French, Portuguese, Italian, Catalan, and Galician.

Aether Engine: This distributed simulation engine will enable you to build and run dynamically scaling spatial simulations on Azure. Hadean will provide you access to a managed instance of Aether Engine and help you create a proof-of-concept simulation that runs on Azure.

Agility Metrics: Agility Metrics is a dashboard that allows you to measure deployment frequency, production failure rate, average recovery time, and other KPIs of the Azure DevOps managed development lifecycle. This application is available only in Spanish.

AlphaPoint Asset Digitization (APAD): AlphaPoint Asset Digitization enables institutions to tokenize illiquid assets and trade those assets on an exchange.

AlphaPoint Exchange (APEX): AlphaPoint Exchange is a full-stack digital asset trading platform. It delivers a ready-made UI/UX tool set; robust risk management with real-time error checking; and a secure, stable, white-label back-end solution that safeguards digital exchange data.

Auto Asystent: Leaware S.A.'s user-friendly SaaS platform, Auto Asystent, enhances the relationship between car dealers and their customers through efficient communication, appointment management, and more. This solution is available only in Polish.

buildwagon – Hololens Development Platform: Develop for the Microsoft HoloLens faster on this cloud-based platform. buildwagon allows you to write code in JavaScript and view the results on the same screen or directly on the HoloLens.

Canopy Manage – Virtual Asset Management: Canopy Manage collates business, IT, and IoT virtual and physical assets from disparate management systems and data sources into a single control portal.

Cloud Snapshot Manager: With Dell EMC's Cloud Snapshot Manager, customers can discover, orchestrate, and automate the protection of workloads across multiple clouds based on policies for seamless backup and disaster recovery.

Digital Asset Management – Managed Video Portal: This application offers a secure and centralized repository to manage videos. It offers capabilities for advanced embed, review, approval, publishing, and distribution. Deliver consistently high-quality video.

Formiik Engine: Formiik Engine optimizes business processes by facilitating the work of managers, credit officers, and supervisors. It's omnichannel and specializes in financial products. This solution is offered only in Spanish.

Fulcrum – Enabling Smart Construction Management: Fulcrum, LeapThought’s construction management system, enables consistent, streamlined, transparent, and compliant project delivery. Fulcrum offers a 360-degree capability for all project collaboration needs.

GLASIAOUS Trial Edition: Boost your global business with GLASIAOUS, a cutting-edge accounting app that covers seven languages and multiple accounting standards.

Grace Platform: The Grace platform supports the entire data science workflow and is built both for organizations in the beginning of their AI and machine learning journey and for organizations with an established data science team.

I/O Surg: Minimize costly errors in pre-op patient scheduling. I/O Surg, a front-end two-click search engine, quickly and accurately identifies the correct billing code and patient status for Medicare procedures.

Informatica Data Quality BYOL: With the Informatica Data Quality and Governance portfolio, you can increase business value by ensuring that all key initiatives and processes are fueled with relevant, timely, trustworthy data.

Instec Billing: Instec Billing comes with the same flexibility as Instec's policy management system. Self-configuration allows you to customize the system to fit your business, and highly automated workflows reflect a low-touch approach that maximizes efficiency.

Intelligent Store – Behavior Triggers: The Intelligent Store suite provides tools for efficient communication between online retailers and customers. The Behavior Triggers tool follows customers' interests, personalizing their experience. This app is available only in Portuguese.

Intelligent Store – Personal Shop: The Personal Shop tool enables automation of personalized digital interfaces based on customer behavior and semantic elements. This app is available only in Portuguese.

Intelligent Store – Semantic Search: The Semantic Search tool combines semantics with personalization, helping online retailers better understand the context and purchase time of each potential customer. This app is available only in Portuguese.

mapul: Mapul is a web application that allows you to create visual diagrams to capture your ideas and then share and present them online. Generate, visualize, and present your ideas in new ways.

MATLAB (BYOL): MATLAB is a programming platform designed for engineers and scientists. It combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly.

Mobile Coupon: Mobile Coupon encourages customer activity through coupons and push notifications. Its pre-built functions enable you to deploy your own branded application quickly and inexpensively. This application is available only in Japanese.

NiceLabel Label Cloud: Label Cloud is a cloud-based version of the NiceLabel Label Management System. It enables you to digitally transform your labeling to achieve lower costs, improved quality assurance, and a faster time-to-market.

Plastic SCM: Plastic SCM is a full version control stack that includes native GUIs, branching, and merge tools. It integrates with almost any issue tracker, code review, and continuous integration/continuous delivery tool, and it also incorporates build automation.

Precedence: The Precedence open-source ledger allows the non-blockchain specialist to easily put in place a transparent, immutable, and cryptographically verifiable transaction log that fully integrates with an existing database or file system.

Preservica: Preservica provides digital preservation for unstructured content that needs to be kept safe, secure, and readable long-term (10 years or more), or perhaps indefinitely. Preservica preserves readable and accessible versions of every file, tagging and migrating each one.

Prime: With Prime, offer a better experience for your banking customers. Issue and personalize cards within minutes. Improve efficiency by automating back-office operations and streamlining activities.

Proctorio | Learning Integrity Platform: Ensure the learning integrity of every assessment every time. Eliminate human error, bias, and much of the expense associated with remote proctoring, identity verification, and originality verification.

Product Cloud – Advanced Filters: The Product Cloud Suite provides tools to help online retailers organize their catalogs. The Advanced Filters tool extracts product characteristics, resulting in easier browsing for customers. This app is available only in Portuguese.

Product Cloud – Automatic Categorization: The Automatic Categorization tool sorts according to the category tree fixed by online retailers and provides benchmark suggestions, looking at customer patterns and market trends. This app is available only in Portuguese.

R3S _Process Manager: R3S Process Manager enables you to publish run archive files to a security-enhanced web server. You can use R3S Worker or a third-party grid computing system to perform the execution.

Retina – AI based Retail Analytics Suite: Retina is an AI-led analytics product that provides a single view of all customer transactions for retailers to drive actionable insights. Retina supplies the customer with personalized products, promotions, and services.

Seymour: Through automated processing and publishing of Excel and CSV data, Seymour produces great-looking charts and tables on your website. The charts and tables will update automatically in real time, and Seymour is fully responsive for mobile and other devices.

SIOS Billing Management Solution for EA-Azure: Facilitate the management of Azure usage charges at universities and government agencies in cooperation with the Microsoft Azure Enterprise Agreement portal. This solution is available only in Japanese.

Spinbackup for Office 365: Spinbackup provides you with an enterprise-ready backup and recovery solution for Office 365. It offers migration, reports, top-level encryption, automated daily backups, diversity in data storage locations, and more.

Sustainability Suite (Cloud Version): Cogneum's Sustainability Suite improves governance and mitigates financial and reputational risks associated with sustainability.

Switch Automation: Switch Automation's comprehensive smart building platform integrates with traditional building systems as well as Internet of Things (IoT) technologies to analyze, automate, and control assets in real time.

UpSafe Office 365 Backup: UpSafe Office 365 Backup helps you secure the critical data from your Software-as-a-Service application. Set it up and start your Office 365 backup in just a few clicks. When necessary, restore the files you need through granular or full recovery.

Vexor: Vexor's continuous integration service can run an unlimited number of parallel builds because it works in the cloud. Tests are executed in parallel in each build to make your testing faster. Vexor uses a pay-per-minute billing model.

Virtual Vaults Dataroom: Virtual Vaults delivers a professional virtual data room platform to support transactional projects within capital markets such as mergers and acquisitions and real estate.

Container solutions

Joomla! Helm Chart: Joomla! is an award-winning open-source CMS platform for building websites and applications. Deploying Bitnami applications as Helm charts is the easiest way to get started with our applications on Kubernetes.

Kubewatch Helm Chart: Kubewatch is a Kubernetes watcher that currently publishes notifications to Slack. Run it in your Kubernetes cluster, and you will get event notifications in a Slack channel.

MediaWiki Helm Chart: MediaWiki is the free and open-source wiki software that powers Wikipedia. Used by thousands of organizations, it is extremely powerful, scalable software and a feature-rich wiki implementation.

Memcached Helm Chart: Memcached is a high-performance distributed memory object caching system. It's generic in nature but intended for use in speeding up dynamic web applications by alleviating database load.

minideb Container Image: This is a minimalist Debian-based image built specifically to be used as a base image for containers.

Moodle Helm Chart: Moodle is an open-source online learning management system (LMS) widely used at universities, schools, and corporations worldwide. It’s modular and highly adaptable to any type of online learning.

NGINX Ingress Controller Helm Chart: NGINX Ingress Controller is an ingress controller that manages external access to HTTP services in a Kubernetes cluster using NGINX.

NGINX Open Source Helm Chart: NGINX Open Source is a popular web server that can also be used as a reverse proxy, load balancer, and http cache.

phpMyAdmin Helm Chart: phpMyAdmin is a free software tool written in PHP and intended to handle the administration of MySQL over the web. phpMyAdmin supports a wide range of operations on MySQL and MariaDB.

PrestaShop Helm Chart: PrestaShop is a powerful open-source e-commerce platform used by more than 250,000 online storefronts worldwide. It’s easily customizable, responsive, and includes powerful tools to drive online sales.

WildFly Helm Chart: WildFly is a lightweight open-source application server, formerly known as JBoss, that implements the latest enterprise Java standards.

Consulting services

Authentication and Secure Data: 1-day Workshop: After completing this workshop by Dynamics Edge, students will understand how to implement authentication in applications, implement secure data (SSL and TLS), and manage cryptographic keys in Azure Key Vault.

Azure PaaS: 3-Day Proof of Concept: This three-day engagement will allow your team to work with Tallan to educate your organization on what is possible in Microsoft Azure and to build out a proof of concept utilizing Azure Platform-as-a-Service.

Azure Readiness Assessment: 2 Weeks: With cloud migration assessment tools from Oakwood Systems Group Inc., you’ll have a complete inventory of servers with metadata for each, allowing you to build a cloud migration plan for your organization.

Blue Chip Migrator for Azure Adoption: Blue Chip Consulting can help you efficiently and strategically adopt Microsoft Azure and eliminate the guesswork associated with complex cloud migrations and modernization projects.

Creating and Deploying Apps-1 day Workshop: This workshop by Dynamics Edge will teach IT professionals how to build logic app solutions that integrate apps, data, systems, and services by automating tasks and business processes as workflows.

Deploy/Configure Infrastructure: 1-day Workshop: This workshop by Dynamics Edge will teach IT professionals how to manage Azure resources, including deployment and configuration of virtual machines, virtual networks, storage accounts, and Azure Active Directory.

Develop Azure Platform as Service: 1-day Workshop: Dynamics Edge's trainer-led workshop will help you create an Azure Container Service (ACS/AKS) cluster using Azure CLI and Azure Portal.

Develop for Azure Storage: 1-day Workshop: This workshop by Dynamics Edge will cover developing solutions using Azure Storage options: Azure Cosmos DB, Azure Storage tables, file storage, Blob storage, relational databases, and caching and content delivery network.

Developing for the Cloud: 1-day Workshop: Learn how to configure a message-based integration architecture, develop for asynchronous processing, create apps for auto scaling, and better understand Azure Cognitive Services solutions.

Implement Security in Azure Devt: 1-day Workshop: This trainer-led workshop by Dynamics Edge is part of a series of four courses to help you prepare for Microsoft’s Azure developer certification exam AZ-200: Develop Core Microsoft Azure Cloud Solutions.

QuickBooks Desktop on Azure: 5hr Assessment: Noobeh’s experienced consultants will perform an assessment of the requirements for your QuickBooks delivery on the Microsoft Azure platform, then develop a deployment plan.

Secure Identities: 1-day Workshop: This workshop by Dynamics Edge will teach IT professionals about keeping modern IT environments secure, focusing on role-based access control, multi-factor authentication, and privileged identity management.

Select Appropriate Azure Devt: 1-day Workshop: This is for developers who know how to code in at least one of the Azure-supported languages. It will cover Azure architecture, design and connectivity patterns, and choosing the right storage solution for your development.

Source: Azure