Azure Tips and Tricks – Become more productive with Azure

Today, we’re pleased to re-introduce a web resource called “Azure Tips and Tricks” that helps developers already using Azure learn something new within a couple of minutes. Since its inception in 2017, the collection has grown to more than 200 tips, along with videos, conference talks, and several eBooks. Featuring a new tip and video each week, it is designed to help you boost your productivity with Azure, and all tips are based on practical, real-world scenarios. The series spans the entire universe of the Azure platform, from App Services to containers and more!

 

Figure 1: The Azure Tips and Tricks homepage.

 

With the new site, we’ve included the much-needed ability to navigate between Azure services, so that you can quickly browse your favorite categories.

 

Figure 2: The new Azure Tips and Tricks navigation capabilities.

 

Search functionality helps you quickly find what you are looking for.

 

 

Figure 3: The new Azure Tips and Tricks search function.

 

The site is also open source on GitHub, so anyone can contribute, ask questions, and jump in wherever they want! While you are on the page, go ahead and star the repo to stay up to date.

 

Figure 4: The Azure Tips and Tricks GitHub repo.

 

What are you waiting for? Visit the site and star the repo so that you don’t miss future updates and can make the most of the Azure platform’s constantly evolving services and features. I’ll also be presenting a session on the series at Microsoft Build on Monday, May 6th from 2:30-2:50pm. I hope to meet you in person.

Thanks for reading and keep in mind that you can learn more about Azure by following our official blog and Twitter account. You can also reach the author of this post on Twitter.
Source: Azure

Announcing Azure Backup support to move Recovery Services vaults

A Recovery Services vault is an Azure Resource Manager resource for managing your backup and disaster recovery needs natively in the cloud. Today, we are pleased to announce the general availability of the move functionality for Recovery Services vaults. With this feature you can migrate a vault between subscriptions and resource groups in a few steps, with minimal downtime and without losing any of your old backups. You can move a Recovery Services vault and retain the recovery points of protected virtual machines (VMs), allowing you to restore to any point in time later.

Key benefits

Flexibility to move across subscriptions and resource groups: You can move the vault across resource groups and subscriptions. This is helpful in scenarios such as the expiry of an old subscription, a move from an Enterprise Agreement to a Cloud Solution Provider subscription, organizational or departmental changes, or the separation of QA and production environments.
No impact to old restore points and settings in the vault: After migration, all the settings, backup policies, and configurations in the vault are retained, including all backup and recovery points previously created inside the vault.
Restore support independent of the VM in the old subscription: You can restore from the retained backup history in the vault regardless of whether the VM is moved to the target subscription along with the vault.

For step-by-step guidance on moving your Recovery Services vaults, refer to the documentation, “Move a Recovery Services vault across Azure Subscriptions and Resource Groups.”
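Under the hood, a cross-resource-group or cross-subscription move is an Azure Resource Manager operation. The sketch below builds the request for ARM’s moveResources operation against a vault; the subscription IDs, resource names, and API version are placeholders, and the actual POST (shown as a comment) would need a valid bearer token.

```python
# Hypothetical identifiers -- replace with your own values.
SOURCE_SUB = "00000000-0000-0000-0000-000000000000"
SOURCE_RG = "old-rg"
TARGET_SUB = "11111111-1111-1111-1111-111111111111"
TARGET_RG = "new-rg"
VAULT_NAME = "my-vault"

# Full resource ID of the Recovery Services vault being moved.
vault_id = (
    f"/subscriptions/{SOURCE_SUB}/resourceGroups/{SOURCE_RG}"
    f"/providers/Microsoft.RecoveryServices/vaults/{VAULT_NAME}"
)

# moveResources is invoked on the *source* resource group.
move_url = (
    f"https://management.azure.com/subscriptions/{SOURCE_SUB}"
    f"/resourceGroups/{SOURCE_RG}/moveResources?api-version=2021-04-01"
)

# The body lists the resources to move and the target resource group.
move_body = {
    "resources": [vault_id],
    "targetResourceGroup": f"/subscriptions/{TARGET_SUB}/resourceGroups/{TARGET_RG}",
}

# With the requests library and a bearer token, the call would look like:
# requests.post(move_url, json=move_body,
#               headers={"Authorization": f"Bearer {token}"})
```

In practice the portal or the documented PowerShell/CLI steps issue this same operation for you; the sketch is only meant to show what a vault move amounts to at the ARM level.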

Related links and additional content

Learn more about moving a Recovery Services vault.
Learn more about Azure Backup.
Want more details? Check out Azure Backup documentation.
Need help? Reach out to Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.
 

Source: Azure

Get ready for Global Azure Bootcamp 2019

Early this Saturday morning—while it’s still Friday in many parts of the world—people will gather at several locations in New Zealand to kick off Global Azure Bootcamp 2019 to learn about Microsoft Azure and cloud computing.

Truly global

Throughout the day, thousands more will attend these free events to expand their knowledge about Azure using a variety of formats as chosen by each location. As Saturday makes its way around the globe, more people will gather in groups of 5 to 500 at hundreds of other locations to do the same until the final locations on the west coast of North America get underway. Chances are, there’s a location near you. Check out the map to find your local event.

Global Azure Bootcamp is a 100 percent community-driven event, organized by MVPs, regional directors, and user group leaders around the world who collaborate to deliver the largest one-day global Azure event.

Learn more about Global Azure Bootcamp from this recent episode of Azure Friday, in which Scott Hanselman spoke with Global Azure Bootcamp co-founder and Azure MVP, Magnus Mårtensson:

Get ready for Global Azure Bootcamp 2019

Science lab

This year, the organizers teamed up once again with the Instituto de Astrofisica de Canarias in Tenerife, Spain, in the Canary Islands. Those seeking a real-world challenge can take part in a science lab that systematically analyzes data from NASA’s new Transiting Exoplanet Survey Satellite (TESS), using a machine learning (ML) algorithm to search millions of stellar light curves for new planets.

TESS is an MIT-led NASA mission, an all-sky survey for transiting exoplanets. Transiting planets are those that pass in front of their star as seen from the telescope; to date, this is the most successful discovery technique for finding small exoplanets. – TESS

You can monitor the Science Lab Dashboard to track the progress of your deployment and see how it is performing compared with other community members around the world. For more information, see The IAC, re-elected to the Science Lab of Microsoft's Azure computer network.

Sponsors

Global Azure Bootcamp organizers work closely with Microsoft to ensure both bellies and brains are well fed with free lunches at many locations. Global Azure Bootcamp is also sponsored by CloudMonix, Serverless360, JetBrains, RevDeBug, Skill Me Up, Kemp, Progate, and Enzo Online.

See you Saturday!

Scott Hanselman, Partner Program Manager at Microsoft, noted, "If you've been looking for an excuse to learn about Azure, you should take advantage of this free opportunity to grab lunch while you learn about the cloud without having to travel far from home. Heck, you may even help discover a new exoplanet. That's not a bad way to spend a Saturday."

And if you can’t join us at one of the hundreds of events on Saturday, check out the Global Online Azure Bootcamp that will stream on YouTube throughout the day, and be sure to follow #GlobalAzure on Twitter.
Source: Azure

Connecting Global Azure Bootcampers with a cosmic chat app

Every year on the same day, the Global Azure Bootcamp brings together cloud enthusiasts who are eager to learn about Microsoft Azure. This year, the Bootcamp will be held on April 27, 2019 in more than 300 locations worldwide where local communities will welcome developers and IT professionals for a day of workshops, labs, and talks.

It will be a great experience for tens of thousands of Azure Bootcampers all around the world to simultaneously participate in this event. We decided to add a little “cosmic touch” to the Bootcamp by letting the attendees greet each other with a chat app that’s powered by Azure Cosmos DB. The chat app will be made available on April 27, 2019 by following a link displayed in each Bootcamp location!

A global database for a global event

If you think about it, this marriage just makes sense. It’s a planet-scale bootcamp, and what better technology than a globally distributed database to serve a global event?

From the beginning, we tackled this project with the goal of a great user experience. For a chat app, this means low latency in both the ingestion and delivery of messages. To achieve that, we deployed our web chat across several Azure regions worldwide and let Azure Traffic Manager route each user’s requests to the nearest region where our Cosmos database was deployed, bringing the data close to both the compute and the users being served. That alone yielded near real-time message delivery, as Azure Cosmos DB replicates new messages to each covered region.

To minimize ingestion latency, we also configured our Cosmos database in multi-master mode, which makes it possible to write data to any region the database is deployed to.
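The two settings above can be sketched as plain client configuration. This is an illustrative sketch, not our production code: the region names are hypothetical, and the keyword names should be checked against the Cosmos SDK version you use.

```python
# Regions our chat backend is deployed to, listed nearest-first
# for a given instance (hypothetical example values).
preferred_locations = ["West Europe", "North Europe", "East US"]

# Options a Cosmos client would be built with:
client_kwargs = {
    # Route requests to the first reachable region in this list.
    "preferred_locations": preferred_locations,
    # Multi-master: this client may write to any deployed region,
    # not only a single designated write region.
    "multiple_write_locations": True,
}

# With the azure-cosmos package installed, the client would be
# constructed roughly as follows (endpoint and key are placeholders):
# from azure.cosmos import CosmosClient
# client = CosmosClient("https://my-account.documents.azure.com:443/",
#                       credential=key, **client_kwargs)
```

With this in place, a message written in West Europe is accepted locally and then replicated by Cosmos DB to the other regions.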

The Azure Cosmos DB change feed as a publication system

You expect a chat app to display new messages in real time, so these messages have to be pushed to clients without a slow and costly polling mechanism. We used Azure Cosmos DB’s change feed to subscribe to new messages as they arrived and delivered them to each client through the Azure SignalR Service.

We also chose to consume the change feed locally in each region to further minimize delivery latency, as having a single point of dispatch would have effectively reduced the benefits we got from being globally distributed and multi-mastered.
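The per-region dispatch step can be reduced to a small fan-out function. Here `broadcast` is a hypothetical stand-in for the Azure SignalR Service send operation; in the real app this loop is driven by the Cosmos DB change feed (for example, a change feed processor or an Azure Functions Cosmos DB trigger) running in each region.

```python
def dispatch_changes(changed_docs, broadcast):
    """Forward each new chat message arriving on the change feed to clients.

    changed_docs: batch of documents read from the change feed.
    broadcast: callable that pushes one payload to all connected clients
               (stands in for the SignalR send operation).
    """
    for doc in changed_docs:
        # Relay only the fields clients need, not the whole Cosmos document.
        broadcast({
            "user": doc["user"],
            "text": doc["text"],
            "region": doc.get("region"),  # region the message was written in
        })
```

Because each region consumes its own change feed, a message written anywhere in the world is pushed to nearby clients without funneling through a single dispatch point.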

Here’s a high-level diagram showing the Azure services in play and how they interact:

Planet-scale, batteries included

So, what does it take to build such a distributed app? Nothing much really, as Azure Cosmos DB takes care of all the rocket science required! Since all the complexity involved with global data distribution is effectively hidden from the developer, what’s left are simple concepts that are easy to reason about:

You write messages to the region that’s closest to you.
You receive messages through the change feed, whatever region they were written from.

These concepts are straightforward to implement, and it only took around a day of development time to have the chat app up and running.

Keeping it friendly with automatic moderation

As our chat app was about to be made available for anyone to use, we also had to filter out any inappropriate messages that might spoil the friendly spirit of the Global Azure Bootcamp. To achieve that, we used the content validation service powering the Content Moderator APIs from Azure Cognitive Services to scan each incoming message. This turnkey solution allows us to detect potential profanity or the presence of personally identifiable information, and to instantly block the related messages to ensure a positive and safe experience for everyone.
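The gate itself is a small decision: block a message if the screening result flags profane terms or personal data. The helper below interprets a response shaped like the Content Moderator text screen result (`Terms` for matched profanity, `PII` for detected personal data); treat the exact field names as illustrative and verify them against the API reference.

```python
def should_block(screen_result):
    """Return True if a moderation screen result flags the message."""
    # "Terms" holds matched profane terms (a list), or None if clean.
    has_profanity = bool(screen_result.get("Terms"))
    # "PII" groups detections by kind; any non-empty kind blocks the message.
    pii = screen_result.get("PII") or {}
    has_pii = any(pii.get(kind) for kind in ("Email", "IPA", "Phone", "Address"))
    return has_profanity or has_pii

# A clean message passes; one with a detected email address is blocked:
assert not should_block({"Terms": None, "PII": None})
assert should_block({"Terms": None, "PII": {"Email": [{"Text": "a@b.com"}]}})
```

In the chat app, a message that fails this check is simply never written to the database, so it never reaches the change feed or other users.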

The database for real-time, global workloads

By ensuring the lowest latency in both the ingestion and delivery of messages, the combination of Azure Cosmos DB’s global footprint, multi-master capability, and change feed API let us achieve the best experience for anyone using the chat app, wherever they are on the planet.

Want to find out how Azure Cosmos DB’s unique characteristics can help you solve your next data challenges?

Head over to the Azure Cosmos DB website and learn more about the service.
Follow us on Twitter @AzureCosmosDB.
Ask your questions on StackOverflow with the tag [azure-cosmosdb].
Get started today using Azure Cosmos DB’s local emulator or try the service!

Source: Azure

Best practices in migrating SAP applications to Azure – part 2

This is the second blog in a three-part blog post series on best practices in migrating SAP to Azure.

Journey to SAP S/4HANA from SAP Business Suite

A common scenario where SAP customers can experience the speed and agility of the Azure platform is migrating from an SAP Business Suite running on-premises to SAP S/4HANA in the cloud. This is a two-step process: first, migrate from enterprise resource planning (ERP) on-premises to Suite on HANA in Azure; then convert from Suite on HANA to S/4HANA.

Using the cloud as the destination for such a migration project can save organizations millions of dollars in upfront infrastructure costs, and can shave roughly 12 to 16 weeks off the project schedule, as the most complex components of the infrastructure are already in place and can easily be provisioned when required. You can see where these time savings come from by looking at the time it takes to go through the request for proposal (RFP) process, to procure expensive servers with large amounts of memory, or to procure dedicated appliances with only a five-year lifespan such as storage arrays, network switches, backup appliances, and other data center enhancements. Not to mention the time and cost associated with an expertly trained team that can deploy S/4HANA based on tailored data center integration (TDI) standards as required by SAP. These are the same obstacles the consulting firm Alegri ran into during their migration to S/4HANA on Azure. To read more about their experience, refer to Alegri's customer story, “Modern corporate management without any burden.”

While planning for a migration, you want a solid project plan with all tasks, milestones, dependencies, and stakeholders identified. You will also want a defined governance model and a RACI chart showing which group owns which tasks. Projects of this size have many moving parts and dependencies between multiple internal and external teams, so discipline in governance is a must.

When getting started with the actual execution, you would typically identify all the SAP prerequisites specific to the versions currently in use. Larger SAP migration projects will typically have a number of parallel efforts going on at the same time, and you want to be sure the infrastructure design on Azure is fully compliant with SAP’s support matrix. To do so, start by reviewing the SAP documentation and deployment guides on Azure, Azure's SAP deployment checklist, and the product availability matrix (PAM). This is a great starting point because while you’re working on the initial landscape design, the Basis admins can start running the readiness check, looking at the impact of legacy custom code in the SAP Business Suite system, the S/4HANA sizing, and other S/4 prerequisites required before the upgrade can begin. Together, you’ll regroup on any new items that were uncovered before finalizing the design.

Next is the infrastructure deployment on Azure. The initial deployment is minimal: just enough infrastructure to support the initial sandbox and development tiers. Requirements at this stage include:

An Azure subscription
An established VPN or ExpressRoute connection
A deployed Azure network (VNets and Subnets) and security configurations (Network Security Groups)
The required number of virtual or bare metal servers and associated storage

The above requirements comprise the new target environment. It is important to note that ExpressRoute will be required to provide enough bandwidth for migrating medium to large databases. The Basis admins can then start installing the Suite on HANA database and the initial application servers. This gives the broader project team all the working area they need to run through as many migration runs as necessary to iron out any issues.
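To make the network item on the checklist concrete, here is an illustrative sketch of a minimal virtual network layout for the sandbox/development tier, expressed as an ARM-style definition. The names, region, and address ranges are hypothetical; a real deployment would use ARM templates, Terraform, or the azure-mgmt-network SDK.

```python
# Hypothetical VNet definition with separate subnets for the SAP
# application and database tiers (isolation via per-subnet NSGs).
vnet_params = {
    "location": "westeurope",
    "address_space": {"address_prefixes": ["10.10.0.0/16"]},
    "subnets": [
        {"name": "sap-app-subnet", "address_prefix": "10.10.1.0/24"},
        {"name": "sap-db-subnet", "address_prefix": "10.10.2.0/24"},
    ],
}

# With azure-mgmt-network, the deployment call would look roughly like:
# network_client.virtual_networks.begin_create_or_update(
#     "sap-rg", "sap-vnet", vnet_params)
```

Keeping the tiers in separate subnets lets you attach different network security group rules to each, which matches the security configuration called out in the requirements list.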

We get a lot of questions about the best migration method to use when migrating SAP from on-premises to Azure, which is understandable given the number of options, including backup/restore, export/import, SUM with DMO, and more. When migrating the SAP Business Suite to S/4HANA on Azure, it’s easiest to use SAP’s SUM with DMO option to handle the database migration and to install new application servers in Azure. This also provides an opportunity to put the new landscape in Azure on the latest supported operating system versions for optimal capabilities, performance, and support. My fellow Azure colleague Kiran Musunuru recently authored a white paper, “Migration Methodologies for SAP on Azure,” that outlines using the SUM with DMO option to migrate SAP Business Suite to S/4HANA, as well as the tools, options, individual steps, and overall process of the migration.

Here are some tips that I’ve learned through my migration experience to help you avoid any last-minute surprises.

Check the minimum required SAP versions and release levels across the landscape.
Check the PAM for Java components to see if they are required for your S/4HANA deployment.
Use the benchmark tool in DMO to get realistic migration times for the system. This helps with determining the best ExpressRoute sizing for the migration, and sets expectations for required downtimes.
Know the Unicode requirement to upgrade to S/4HANA.
Know how to remediate dual-stack systems.
Know that OS flavors and versions might differ from existing internal IT standards.
Plan for the Fiori deployment.
Be prepared to perform multiple migration mock runs for complex or very large landscapes.

Once the migration from the on-premises system to the Azure environment is complete, the conversion from SAP Business Suite on HANA to S/4HANA can begin. Before starting, review SAP’s conversion guide for the S/4HANA version you will be upgrading to. For example, here is the conversion guide for S/4HANA 1809.

Deploying a new S/4HANA landscape can be done entirely in Azure by the Basis team, eliminating the dependencies on the many internal IT teams that a deployment in a traditional on-premises environment would require. Most customers will likely start with a simple sandbox or new development system created as a system copy of the newly migrated Suite on HANA system. This provides a staging area to run through all the simplification checks, custom code migrations, and actual S/4 conversion mock runs. In parallel, the Basis team can start deploying the QAS and production tiers, including the new VMs and the configuration for Fiori. Once the necessary checks and conversions have been completed, the process is simply repeated for QAS and production.

This blog post is meant as a general overview of the process and I didn’t touch on any of the functional areas as I wanted to focus on the items that are connected to the Azure platform. We also have a number of scripting and automation options around deploying S/4HANA landscapes on Azure. For an overview you can visit the blog post, “Automating SAP deployments in Microsoft Azure using Terraform and Ansible” written by Tobias Niekamp from the Azure Compute engineering team.

Next week, my colleague Troy Shane will cover the migration of SAP BW/4HANA. His post will also cover how best to deploy SAP BW/4HANA in a scale-out architecture, both today and in the near future when our new Azure NetApp Files service becomes generally available for SAP HANA workloads.
Source: Azure

Governance setting for cache refreshes from Azure Analysis Services

Built on the proven analytics engine in Microsoft SQL Server Analysis Services, Azure Analysis Services delivers enterprise-grade BI semantic modeling capabilities with the scale, flexibility, and management benefits of the cloud. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Azure Analysis Services helps you transform complex data into actionable insights. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc interactive analysis.

Data visualization and consumption tools over Azure Analysis Services (Azure AS) sometimes store data caches to enhance report interactivity for users. The Power BI service, for example, caches dashboard tile data and report data for the initial load of Live Connect reports. However, enterprise BI deployments where semantic models are reused throughout organizations can result in a great number of dashboards and reports sourcing data from a single Azure AS model. This can cause an excessive number of cache queries to be submitted to AS and, in extreme cases, can overload the server. This is especially relevant to Azure AS (as opposed to on-premises SQL Server Analysis Services) because models are often co-located in the same region as the Power BI capacity for faster query response times, so they may not benefit much from caching in the first place.

ClientCacheRefreshPolicy governance setting

The new ClientCacheRefreshPolicy property allows IT or the AS practitioner to override this behavior at the Azure AS server level and disable automatic cache refreshes. All Power BI Live Connect reports and dashboards will observe the setting, irrespective of the dataset-level settings or which Power BI workspace they reside in. You can set this property using SQL Server Management Studio (SSMS) in the Server Properties dialog box. Please see the Analysis Services server properties page for more information on how to make use of this property.

Source: Azure

Azure Notification Hubs and Google’s Firebase Cloud Messaging Migration

When Google announced its migration from Google Cloud Messaging (GCM) to Firebase Cloud Messaging (FCM), push services like Azure Notification Hubs had to adjust how we send notifications to Android devices to accommodate the change.

We updated our service backend, then published updates to our API and SDKs as needed. With our implementation, we made the decision to maintain compatibility with existing GCM notification schemas to minimize customer impact. This means that we currently send notifications to Android devices using FCM in FCM Legacy Mode. Ultimately, we want to add true support for FCM, including the new features and payload format. That is a longer-term change and the current migration is focused on maintaining compatibility with existing applications and SDKs. You can use either the GCM or FCM libraries in your app (along with our SDK) and we make sure the notification is sent correctly.

Some customers recently received an email from Google warning about apps using a GCM endpoint for notifications. This was just a warning and nothing is broken – your app’s Android notifications are still being sent to Google, and Google still processes them. However, some customers who specified the GCM endpoint explicitly in their service configuration were still using the deprecated endpoint. We had already identified this gap and were working on a fix when Google sent the email.

We replaced that deprecated endpoint and the fix is deployed.

If your app uses the GCM library, go ahead and follow Google’s instructions to upgrade to the FCM library in your app. Our SDK is compatible with either, so you won’t have to update anything on our side (as long as you’re up to date with our SDK version).

Now, this isn’t how we want things to stay, so over the next year you’ll see API and SDK updates from us implementing full support for FCM (and likely deprecating GCM support). In the meantime, here are answers to some common questions we’ve heard from customers:

Q: What do I need to do to be compatible by the cutoff date (Google’s current cutoff date is May 29th and may change)?

A: Nothing. We will maintain compatibility with existing GCM notification schema. Your GCM key will continue to work as normal as will any GCM SDKs and libraries used by your application.
If/when you decide to upgrade to the FCM SDKs and libraries to take advantage of new features, your GCM key will still work. You may switch to using an FCM key if you wish, but ensure you add Firebase to your existing GCM project when creating the new Firebase project. This guarantees backward compatibility for customers running older versions of your app that still use GCM SDKs and libraries.

If you are creating a new FCM project and not attaching to the existing GCM project, once you update Notification Hubs with the new FCM secret you will lose the ability to push notifications to your current app installations, since the new FCM key has no link to the old GCM project.

Q: Why am I getting this email about old GCM endpoints being used? What do I have to do?

A: Nothing. We have been migrating to the new endpoints and will be finished soon, so no change is necessary. Nothing is broken; our one missed endpoint simply caused warning messages from Google.

Q: How can I transition to the new FCM SDKs and libraries without breaking existing users?

A: Upgrade at any time. Google has not yet announced any deprecation of existing GCM SDKs and libraries. To ensure you don't break push notifications to your existing users, make sure when you create the new Firebase project you are associating with your existing GCM project. This will ensure new Firebase secrets will work for users running the older versions of your app with GCM SDKs and libraries, as well as new users of your app with FCM SDKs and libraries.

Q: When can I use new FCM features and schemas for my notifications?

A: Once we publish an update to our API and SDKs. Stay tuned; we expect to have something for you in the coming months.

Learn more about Azure Notification Hubs and get started today.
Source: Azure