Gain business insights using Power BI reports for Azure Backup

Azure Backup announced support for alerting and monitoring in August 2016. Taking it a step further, we are excited to announce the preview of Azure Backup Reports using Power BI. Azure Backup Reports let you gauge backup health, view restore trends, and understand storage usage patterns across subscriptions and across vaults. More importantly, this feature gives you complete control to generate your own reports and build customizations using Power BI.

Key Benefits

This feature provides the following capabilities and gives customers complete control over building reports:

Cloud-based reports – You do not need to set up a reporting server, database, or any other infrastructure, since everything is completely managed in the cloud. All you need is a storage account and a Power BI subscription. The Power BI free tier supports reports for backup reporting data under 1 GB per user.
Cross-subscription and cross-vault reports – You can view data across subscriptions and vaults to get a big-picture view, track organization SLAs, and meet compliance requirements across departments.
Open data model – You can create your own reports and customize existing reports since the Azure Backup management data model is publicly available.
Data visualization – You can take advantage of Power BI's data visualization capabilities to perform business analytics and share rich data insights.
Access control – Power BI also provides the capability to create organizational content packs, which can be used to share selected reports inside the organization and restrict access to reports as needed.
Export to Event Hub and Log Analytics – Besides exporting the data to a storage account and connecting to it using the Azure Backup content pack in Power BI, you can also export reporting data to Event Hub and Log Analytics to use it in OMS and other tools for further analysis.

Analyzing data using Power BI

You can configure Azure Backup reports using a Recovery Services vault and import the Azure Backup content pack in just a few steps. Once done, use the Azure Backup dashboard in Power BI to create new reports and customize existing ones using filters such as vault, protected server, backup items, and time.

1. Storage

Storage reports track backup storage and protected instances over time. You can answer the following business questions using these reports:

Which protected servers had the most protected instances last month?
Which protected servers use the most backup storage and have the highest impact on billing?

2. Job Health

Job Health reports provide trends for backups and restores impacted by job failures, the causes of these failures, and statistics about failed and successful jobs. You can answer the following business questions using these reports:

Was the backup job failure rate higher than 10% last week?
What were the top causes of job failures yesterday?
Which backup items were impacted by these job failures?

In the image below, Backup has been selected in the Distribution of Failed Jobs donut chart; this updates all other visuals on this tab with data related to backup failures.

3. Backup Items

Backup Item reports provide details about backup schedule, data transferred, failure percentage, and last successful backup time. You can answer the following business questions using these reports:

Which backup items had the most data transferred last week?
Are these items backed up during the same time of day?
Which backup items had no successful backups yesterday?

4. Job Duration

Job Duration reports provide insights into backup and restore job duration. You can answer the following business questions using these reports:

Which virtual machines had the longest-running jobs last week?
Which folders take the longest time to restore?

5. Alerts

Alert reports show the distribution of active alerts, the backup items that generate these alerts, and trends in alert resolution time. You can answer the following business questions using these reports:

Which data sources generated the most critical alerts last week?
What is the trend in alert resolution over the last few months?
Has the critical alert count been reduced after the latest version update?

Related links and additional content

Get started with Azure Backup Reports.
View the Azure Backup data model to create custom reports.
New to Azure Backup? Sign up for a free Azure trial subscription.
Need help? Reach out to the Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

Azure Site Recovery now supports Ubuntu

Azure Site Recovery makes business continuity accessible for all your IT applications by letting you use Azure as your recovery site. This offers a solution where you only pay for the resources you consume, alleviating the need to spend on upfront capital investments for a recovery location or resources.

We recognize our customers' need for flexibility in the choice of platforms and application stacks they use. That is why Azure Site Recovery supports a wide variety of platforms and operating systems. We have now added support for another very popular Linux distribution: Azure Site Recovery now supports disaster recovery and migration to Azure for servers running Ubuntu on Azure virtual machines or in a VMware virtualized environment. Support currently covers applications on Ubuntu Server 14.04 LTS.

Let’s see how easy it is to achieve business continuity objectives for your Ubuntu workloads in the context of the fictional Bellows College.

A business continuity plan for Bellows College

Bellows College's Moodle learning management system (LMS) is configured in a standard two-tier deployment, with a web server and a MySQL database on VMware virtual machines running Ubuntu Server 14.04 LTS.

Last year, a faulty surge protector in their datacenter caused an outage of their learning management system. Bellows College's application and infrastructure administrators scrambled to bring the system back up on an alternate storage unit by restoring data from their database backup. This experience taught them a costly lesson and left them with the realization that periodic backups are not a replacement for a business continuity plan.

Realizing they needed a reliable business continuity plan, Bellows College’s CIO decided to use Azure Site Recovery. Going to Azure was an easy choice for them, as they were already planning on migrating some of their applications to Azure to consolidate their datacenter costs.

With a few simple steps, Bellows College set up Azure Site Recovery and protected their learning management system to Azure.

Bellows College built a recovery plan to sequence the order in which the various application tiers are brought up during a failover. For example, they specified that the database tier would be brought up before the web tier so that the web server could start serving requests immediately after failover. Within the recovery plan, Bellows College used Azure Automation runbooks to automate common post-failover steps, such as assigning an IP address to the failed-over web server. By using automation, they achieved a better RTO by avoiding the need to perform this step manually.

With their Moodle servers protected and their recovery plan set up, it was time to test the plan. They did this using the test failover feature of ASR, which let them test failing over their applications without impacting production workloads or end users.

The test failover brought the application up in a test network in Azure with all the latest changes, and let them connect to the application in the test environment and validate, within a few minutes, that it was working.

Being able to test the failover of the application to Azure without impacting production gave Bellows College the confidence that their business continuity plan gives them the necessary protection from unplanned events.

Having experienced how simple and cost-effective it is to use Azure Site Recovery to achieve business continuity, Bellows College is now planning to onboard some of their other supporting applications running on Ubuntu.

Azure Site Recovery is an all-encompassing service for your migration and disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure so that you have a disaster recovery plan that covers all of your organization's IT applications.

Check out the list of configurations supported with ASR and get started today.
Source: Azure

Azure Redis Cache Geo-Replication in Public Preview

We are pleased to announce a public preview of the geo-replication functionality in the Azure Redis service. Geo-replication gives you the ability to link Redis caches across different Azure regions to form a primary-replica relationship. It provides an ingredient necessary for any disaster recovery design in your application. Geo-replication preview is available now for all premium Azure Redis caches. At present, we support having one primary cache and one replica in a geo-redundant configuration.

Configuring Azure Redis Caches for Geo-Replication

To set up geo-replication, you need two Azure Redis premium cache instances belonging to one Azure subscription, one to be used as the primary and the other as the replica. You create the replica cache in exactly the same way you created the primary. The replica cache must be equal to or larger in size than the primary cache and, additionally, have a matching number of shards if the primary cache is clustered. While there is no requirement for the two caches to be located in different Azure regions, that is the most commonly expected case. If they reside in VNETs, the caches must be able to reach each other. You can refer to this GitHub document for the current list of requirements and restrictions for using geo-replication.

Once you have the Redis caches that you want to use for geo-replication, you can link them together in the Azure management portal. You start by selecting your primary cache. You then find the geo-replication option under the Settings for that cache. By default, when you choose geo-replication, it will show you any replica cache that has been linked to the primary cache. Since we are setting up geo-replication for the first time, no caches are associated yet.

You use “Add cache replication link” to create a uni-directional replicating relationship from the primary cache to the replica. Clicking on that button gives you a list of eligible caches that can be used for geo-replication, grouped by Azure regions. You can switch to a specific Azure region using either the world map or the Location list.

After you choose which cache to be used as the replica, you need to click on the “Link” button to finish the setup.

This will configure the primary cache to replicate data to the replica and disable all other “write” operations on the latter. It takes some time to set everything up and copy the existing data from the primary cache. When the whole process is completed, you will see the change in the Azure portal.

Failing over to the replica

The Azure Redis service does not support automatic failover across Azure regions in the first release. Geo-replication is used primarily in a disaster recovery scenario. In such an event, customers will want to bring up the entire application stack in a backup region in a coordinated manner rather than letting individual application components decide when to switch to their backups on their own. This is especially relevant to Redis. One of the key benefits of Redis is being a very low-latency store. If Redis used by an application fails over to a different Azure region but not the compute tier, the added roundtrip time will have a noticeable impact on performance. For this reason, we would like to avoid Redis failing over automatically due to transient availability issues.
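To make the latency argument concrete, here is a back-of-the-envelope sketch; the latency figures and per-request call count are illustrative assumptions, not measured values:

```java
public class FailoverLatencyImpact {
    public static void main(String[] args) {
        // Hypothetical latencies: illustrative assumptions only
        double sameRegionRttMs = 1.0;    // compute tier and Redis in one region
        double crossRegionRttMs = 60.0;  // compute tier talking to a remote replica
        int cacheCallsPerRequest = 50;   // assumed cache lookups per user request

        double before = cacheCallsPerRequest * sameRegionRttMs;
        double after = cacheCallsPerRequest * crossRegionRttMs;

        // With these assumed numbers, cache time per request grows by a
        // factor of 60, which is why failing over Redis alone (without the
        // compute tier) is usually undesirable.
        System.out.printf("before: %.0f ms, after: %.0f ms%n", before, after);
    }
}
```

Swap in your own measured round-trip times to estimate the impact for your deployment.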

Currently, to initiate the failover, you need to unlink the replica cache from the primary in the Azure portal and change the connection endpoint in the Redis client from the primary cache to the replica. You will be able to do this using the Azure management SDKs and command-line tools soon, so that you can script and automate the sequence if needed. When the two caches are disassociated, the replica becomes a regular read-write cache again and accepts requests directly from Redis clients.

Understanding additional costs

There is no extra charge for using the geo-replication functionality in the Azure Redis service. Having said that, there will be additional costs associated with sending network traffic between the two Azure regions should your primary and replica caches be located in different regions. You should be aware of this standard charge for network communication that Azure applies.

We hope that you will find Azure Redis geo-replication useful to your application. If you have any feedback about this new feature, please feel free to reach out to us at AzureCache@microsoft.com.
Source: Azure

Mesosphere DCOS, Azure, Docker, VMware and everything between – Security & Docker Engine Installation

In part 2 of this series, we will start to dive into DC/OS 1.9 installation on top of vSphere. Mesosphere offers a few ways to deploy a fully working cluster, and since I wanted to see how everything is really connected, I chose the advanced installation method. We will start with some Linux-related adjustments and the Docker engine deployment.

I really like how the DC/OS team organized their online installation guides. They are clean, fairly comprehensive, and easy to follow. You basically have three local installation methods for a “production scale” deployment – GUI, CLI, and the advanced method, which is the one I went with.

Although the installation guide is very good, it omits some details, mostly Linux-related ones. In this series, I will try to demystify those, so everything, as the Zohan would say, will be “silky smooth”.

Throughout the deployment process, you will notice that I have placed VMware snapshot checkpoints that warn you when I recommend taking a snapshot of all the VMs, so you will be able to maintain consistency.

Read more about all the details around DC/OS 1.9, Docker, and the security prerequisites on top of VMware vSphere in my personal blog.
Source: Azure

How we built it: Next Games global online gaming platform on Azure

Today on Microsoft Mechanics, we are joined by Kalle Hiitola, Chief Technology Officer and co-founder of the Finnish mobile gaming company Next Games, for the next episode of "How we built it."

Next Games has built a successful global connected gaming platform on Azure with users spanning 166 countries and growing.

Their popular Walking Dead No Man's Land game is aligned with the popular Walking Dead TV series. The game releases a rapid cadence of new chapters and characters to complement the release of each new episode.

Built for the Cloud

Their cloud-native approach comprises carefully crafted, loosely coupled core back-end services for their Driller™ gaming platform, giving them the flexibility to change every aspect of the mobile games running on their platform on the fly.

These run on a robust infrastructure of Azure services that handles everything from player compute and gaming content updates to gaming interactions such as in-app purchases, the storage of player states, metadata such as scores, gaming analytics to manage the in-game experience, and notifications.

Their Azure-based architecture includes load balancing to support simultaneous gaming sessions, and compute instances are designed to be stateless and to scale out based on demand.

Solutions for Gamer Fraud and Content Delivery

Two important areas of innovation include gamer fraud protection and dynamic content delivery.

As Kalle explains, the same logic runs on both the client and the server, enabling fast detection and prevention of fraud if the client tries to write back an invalid value based on the last known player state.

Dynamic content delivery is made possible by publishing video game trailers to Azure Media Services; new gaming content is published via their core gaming services, triggering notifications to players.

Join us for the Next Games AMA

To learn more about Next Games' online gaming platform on Azure, please check out today's episode.

The Next Games team, along with Kalle Hiitola, will also be joining our AMA session on the Microsoft Technical Community on July 11th at 9 AM PDT at http://aka.ms/how-we-built-it-NextGames.
Source: Azure

Microsoft Cognitive Services hack: Line Messenger

After the devastating earthquake in Japan in 2011, it became apparent that there was a need for a communication platform that would help better connect people in such situations. Out of this need, the Line Messenger platform was launched and has since evolved into a popular social platform with users spread across the globe. The platform also has a well-supported SDK and API, enabling developers to extend their creations to hundreds of millions of users.

We recently worked with the development team at Line in Tokyo, building out several bot-related scenarios that use Cognitive Services on the Line Messenger platform. The resulting hacks ranged from bots that detect what's on your plate, to bots that can recognize a user's face, and even a bot that helps people learn and practice new languages.


The group of around 20 developers had little experience with Azure going in, but within a few hours they were able to develop, host, and launch their Cognitive Services-enabled bots. If you're interested in trying it out, Line has released a Getting Started kit on their GitHub, and more info can be found on the LINE Engineering Blog.

Have a chat with one of their bots!
Source: Azure

Bright Cluster Manager now integrates seamlessly with Azure HPC capabilities

One of our key HPC partners, Bright Computing, has just announced the release of Bright Cluster Manager 8.0, the latest version of their flagship HPC cluster management solution. This new version gives customers the ability to extend an on-premises cluster into Azure for added capacity, or to easily build a cluster entirely in Azure.

Martijn de Vries, CTO at Bright Computing, commented about this release:

“We are pleased to offer this new integration to our customers and we are confident that the solution will be very popular with our user base. Cloud bursting from an on-premises cluster to Microsoft Azure offers companies an efficient, cost-effective, secure and flexible way to add additional resources to their HPC infrastructure. Bright's integration with Azure also gives our clients the ability to build an entire off-premises cluster for compute-intensive workloads in the Azure cloud platform.”

The cloud bursting scenario tends to be the one that our customers are interested in exploring first, and this release of Bright Cluster Manager makes it possible to do this effortlessly and within the cluster environment that Bright Computing customers are already familiar with. By bursting to the cloud, customers can continue to use their existing on-premises resources, while taking advantage of the flexibility and the elasticity of Azure to dynamically grow and shrink their HPC clusters.

Similarly, being able to stand up a cluster in the cloud that behaves and feels exactly like the one customers are used to having on premises is a perfect way to readily shift workloads to the cloud, deploy on-demand resources for a special project, or experiment with new technologies.

The cloud really does open up many new possibilities, not just of scale, but also of access to resources that might not always be available on premises. With Bright Cluster Manager and Azure, you can utilize the latest InfiniBand network technologies, or run your workload on the most current GPUs, all on demand and paying only for what you use.

If you want to learn more about HPC technologies, and in particular about this new release of Bright Cluster Manager, you can listen to this episode of The Azure Podcast, which features Martijn de Vries, the CTO of Bright Computing.

Additionally, you can visit the Bright Computing website to learn about their HPC solution for Azure, or to arrange a live demo.
Source: Azure

Java: Manage Azure Container Service, Cosmos DB, Active Directory Graph and more

We released 1.1 of the Azure Management Libraries for Java. This release adds support for:

Cosmos DB
Azure Container Service and Registry
Active Directory Graph

https://github.com/Azure/azure-sdk-for-java

Getting started

Add the following dependency fragment to your Maven POM file to use version 1.1 of the libraries:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure</artifactId>
    <version>1.1.0</version>
</dependency>

Create a Cosmos DB with DocumentDB API
You can create a Cosmos DB account by using a define() … create() method chain.

DocumentDBAccount documentDBAccount = azure.documentDBs().define("myDocumentDB")
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .withKind(DatabaseAccountKind.GLOBAL_DOCUMENT_DB)
    .withSessionConsistency()
    .withWriteReplication(Region.US_WEST)
    .withReadReplication(Region.US_CENTRAL)
    .create();

In addition, you can:

Create a Cosmos DB with the DocumentDB API and configure it for high availability
Create a Cosmos DB with the DocumentDB API and configure it with eventual consistency
Create a Cosmos DB with the DocumentDB API, configure it for high availability, and create a firewall to limit access to an approved set of IP addresses
Create a Cosmos DB with the MongoDB API and get the connection string
Create an Azure Container Registry
You can create an Azure Container Registry by using a define() … create() method chain.

Registry azureRegistry = azure.containerRegistries().define("acrdemo")
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .withNewStorageAccount(saName)
    .withRegistryNameAsAdminUser()
    .create();

You can get Azure Container Registry credentials by using listCredentials().

RegistryListCredentials acrCredentials = azureRegistry.listCredentials();
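These credentials can be used, for example, with docker login, or in the image pull secret referenced later in this article. In Docker registry config entries (and .dockercfg-style Kubernetes secrets), the auth field is simply the Base64 encoding of username:password. A minimal sketch, using hypothetical credential values rather than real ACR output:

```java
import java.util.Base64;

public class AcrDockerAuth {
    // Build the base64 "auth" value for a Docker registry config entry:
    // base64(username + ":" + password)
    static String dockerAuth(String username, String password) {
        return Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes());
    }

    public static void main(String[] args) {
        // Hypothetical values; in practice use the username and password
        // returned by listCredentials()
        System.out.println(dockerAuth("acrdemo", "examplePassword"));
    }
}
```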
Create an Azure Container Service with Kubernetes Orchestration
You can create an Azure Container Service by using a define() … create() method chain.

ContainerService azureContainerService = azure.containerServices().define("myK8S")
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .withKubernetesOrchestration()
    .withServicePrincipal(servicePrincipalClientId, servicePrincipalSecret)
    .withLinux()
    .withRootUsername(rootUserName)
    .withSshKey(sshKeys.getSshPublicKey())
    .withMasterNodeCount(ContainerServiceMasterProfileCount.MIN)
    .withMasterLeafDomainLabel("dns-myK8S")
    .defineAgentPool("agentpool")
        .withVMCount(1)
        .withVMSize(ContainerServiceVMSizeTypes.STANDARD_D1_V2)
        .withLeafDomainLabel("dns-ap-myK8S")
        .attach()
    .create();

You can instantiate a Kubernetes client using a community-developed Kubernetes client library.

KubernetesClient kubernetesClient = new DefaultKubernetesClient(config);
Deploy from Container Registry to Kubernetes in Container Service
You can deploy an image from Azure Container Registry to a Kubernetes cluster using the same community-developed Kubernetes client library and an image pull secret associated with the Container Registry.

ReplicationController rc = new ReplicationControllerBuilder()
    .withNewMetadata()
        .withName("acssample-rc")
        .withNamespace(acsNamespace)
        .addToLabels("acssample-nginx", "nginx")
    .endMetadata()
    .withNewSpec()
        .withReplicas(2)
        .withNewTemplate()
            .withNewMetadata()
                .addToLabels("acssample-nginx", "nginx")
            .endMetadata()
            .withNewSpec()
                .addNewImagePullSecret(acsSecretName)
                .addNewContainer()
                    .withName("acssample-pod-nginx")
                    .withImage("acrdemo.azurecr.io/samples/acssample-nginx")
                    .addNewPort()
                        .withContainerPort(80)
                    .endPort()
                .endContainer()
            .endSpec()
        .endTemplate()
    .endSpec()
    .build();

kubernetesClient.replicationControllers().inNamespace(acsNamespace).create(rc);

You can find the full sample code to deploy an image from Container Registry to Kubernetes in Container Service. Similarly, you can deploy an image from Azure Container Registry to Linux containers in App Service.
Create Service Principal with Subscription Access
You can create a service principal and assign it to a subscription with the contributor role by using a define() … create() method chain.

ServicePrincipal servicePrincipal = authenticated.servicePrincipals().define("spName")
    .withExistingApplication(activeDirectoryApplication)
    // define credentials
    .definePasswordCredential("ServicePrincipalAzureSample")
        .withPasswordValue("StrongPass!12")
        .attach()
    // define certificate credentials
    .defineCertificateCredential("spcert")
        .withAsymmetricX509Certificate()
        .withPublicKey(Files.readAllBytes(Paths.get(certificate.getCerPath())))
        .withDuration(Duration.standardDays(7))
        // export credentials to a file
        .withAuthFileToExport(new FileOutputStream(authFilePath))
        .withPrivateKeyFile(certificate.getPfxPath())
        .withPrivateKeyPassword(certPassword)
        .attach()
    .withNewRoleInSubscription(role, subscriptionId)
    .create();

Similarly, you can:

Manage service principals
Browse the graph (users, groups, and members) and manage roles
Manage passwords
Try it
You can get more samples from https://github.com/azure/azure-sdk-for-java#sample-code. Give it a try and let us know what you think (via e-mail or comments below).
You can find plenty of additional information about Java on Azure at http://azure.com/java.
Source: Azure