Now Supported in Cloud Foundry: Azure Blob Storage and Managed Disks

Cloud Foundry on Azure keeps getting better.

We now support the use of Azure Blob Storage and Managed Disks with Cloud Foundry.

These enhancements come on the heels of the launch of Pivotal Cloud Foundry on Azure and a series of Azure Service Broker releases. We continue to invest in deeper integration of Azure’s enterprise grade services with the open source Cloud Foundry platform.

Here’s how to get started with these new capabilities!

1. Use Azure Blob Storage for the Cloud Foundry Cloud Controller Blobstore

The Cloud Controller blobstore is a critical data store: buildpacks, droplets, packages, and resource pools are all hosted there. Operators can now use Azure Blob Storage for this component, gaining greater availability and scalability. Previously, an NFS server was required.

By default, the blobstore configuration uses the Fog Ruby gem. The Azure team worked with the Fog community to update the Fog Azure RM gem to support this new feature.

Check out the Cloud Foundry documentation for background and configuration instructions. The multi-node BOSH deployment template has been updated to use Azure Blob Storage by default, and the feature is also integrated with the upcoming Pivotal Cloud Foundry 1.10 release.
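Concretely, pointing the Cloud Controller at Azure Blob Storage is a manifest change. Here is a sketch of what the Fog connection settings might look like in a CF deployment manifest; the property paths and key names follow cf-release and fog-azure-rm conventions of the time, so treat them as assumptions and confirm against the documentation:

```yaml
properties:
  cc:
    buildpacks:
      blobstore_type: fog
      fog_connection: &fog_connection
        provider: AzureRM              # Fog Azure RM backend
        environment: AzureCloud
        azure_storage_account_name: YOUR_STORAGE_ACCOUNT    # placeholder
        azure_storage_access_key: YOUR_STORAGE_ACCESS_KEY   # placeholder
    droplets:
      blobstore_type: fog
      fog_connection: *fog_connection
    packages:
      blobstore_type: fog
      fog_connection: *fog_connection
    resource_pool:
      blobstore_type: fog
      fog_connection: *fog_connection
```

The YAML anchor (`&fog_connection`) simply avoids repeating the same credentials for each blobstore.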

2. Use Azure Managed Disks

The Azure CPI v21 now supports the Azure Managed Disks service in BOSH.

This simplifies VM/disk deployment and management. It also provides superior scalability, security and reliability.

Operators can create new deployments on managed disks, or migrate existing deployments to them. Just make a quick edit to the BOSH manifest file and you’re done! See the Using Managed Disks guidance for detailed steps.
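The manifest edit amounts to a single CPI property. A sketch, assuming the property name used by the bosh-azure-cpi-release at the time (verify against the Using Managed Disks guidance):

```yaml
properties:
  azure:
    environment: AzureCloud
    use_managed_disks: true   # new disks are created as Managed Disks
```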

This enhancement will be baked into the BOSH and Pivotal Cloud Foundry deployment templates soon. Look for those to be published in the coming months.

We’ve seen tremendous interest in Cloud Foundry running atop Azure. As a result, we are making additional investments. Engineers are working to bring more Azure database services to the Cloud Foundry runtime and service broker. And soon, you’ll be able to interact with logs and metrics from your Cloud Foundry apps using Azure OMS. Let us know if you have any suggestions by entering your ideas here.

 
Source: Azure

Disaster recovery for applications, not just virtual machines, using Azure Site Recovery

Let’s say your CIO stops by one day and asks you, “What if we are hit by an unforeseen disaster tomorrow? Do you have the confidence to run our critical applications on the recovery site, and guarantee that our users will be able to connect to their apps and conduct business as usual?” Note that your CIO is not asking about just recovering your servers or virtual machines; the question is always about recovering your applications successfully. So why do many disaster recovery offerings stop at booting up your servers, with no promise of actual end-to-end application recovery? What makes Azure Site Recovery different, and lets you as the business continuity owner sleep better?

To answer this, let’s first understand what an application constitutes. A typical enterprise application comprises multiple virtual machines spanning different application tiers:

These different application tiers mandate write-order fidelity for data correctness.
The application may require its virtual machines to boot up in a particular sequence for proper functioning.
A single tier will likely have two or more virtual machines for redundancy and load balancing.
The application may have different IP address requirements, either using DHCP or requiring static IP addresses.
A few virtual machines may require a public IP address or DNS routing for end-user internet access.
A few virtual machines may need specific ports to be open, or have security certificate bindings.
The application may rely on user authentication via an identity service like Active Directory.

To recover your applications in the event of a disaster, you need a solution that facilitates all of the above, gives you the flexibility to do more application-specific customizations post recovery, and does everything at an RPO and RTO that meets your business needs.
Using traditional backup solutions to achieve true application disaster recovery is extremely cumbersome, error prone, and not scalable. Even many replication-based products only recover individual virtual machines and cannot handle the complexity of bringing up a functioning enterprise application. Azure Site Recovery combines a unique cloud-first design with a simple user experience to offer a powerful solution that lets you recover entire applications in the event of a disaster.

How do we achieve this? With support for single and multi-tier application consistency and near-continuous replication, Azure Site Recovery ensures that no matter what application you are running, shrink-wrapped or homegrown, you are assured of a working application when a failover is issued. Many vendors will tell you that a crash-consistent disaster recovery solution is good enough, but is it really? With crash consistency, in most cases the operating system will boot. However, there is no guarantee that the application running in the virtual machines will work, because a crash-consistent recovery point does not ensure correctness of application data. As an example, if a transaction log has entries that are not present in the database, the database software needs to roll back until the data is consistent, significantly increasing your RPO in the process. This will cause a multi-tier application like SharePoint to have a very high RTO, and even after the long wait it is still uncertain that all features of the application will work properly. To avoid these problems, Azure Site Recovery supports not only application consistency for a single virtual machine (where the application boundary is the single virtual machine), but also application consistency across the multiple virtual machines that compose the application. Most real-world multi-tier applications have dependencies, e.g. the database tier should come up before the app and web tiers.
The heart and soul of the Azure Site Recovery application recovery promise is extensible recovery plans, which let you model entire applications and organize application-aware recovery workflows. Recovery plans comprise the following powerful constructs:

Parallelism and sequencing of virtual machine boot-up, to ensure the right recovery order of your n-tier application.
Integration with Azure Automation runbooks that automate necessary tasks both outside of and inside the recovered virtual machines.
The ability to perform manual actions to validate recovered application aspects that cannot be automated.

Your recovery plan is what you will use when you push the big red button: a single-click, stress-free, end-to-end application recovery with a low RTO. Another key challenge for many multi-tier applications is network configuration post recovery. With advanced network management options to provide static IP addresses, configure load balancers, or use Traffic Manager to achieve low RTOs, Azure Site Recovery ensures that user access to the application in the event of a failover is seamless.

A common myth around protecting your applications stems from the fact that many applications come with built-in replication technologies, hence the question: why do you need Azure Site Recovery? The simple answer: Replication != Disaster Recovery. Azure Site Recovery is Microsoft’s single disaster recovery product that offers you a choice of first- and third-party replication technologies, while providing a built-in replication solution for applications where there is no native replication construct, or where native replication does not meet your needs. As mentioned earlier, getting application data and virtual machines to the recovery site is only a piece of what it takes to bring up a working application.
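To make the Azure Automation integration concrete, a recovery plan can invoke a runbook and pass it context about the failover. A minimal sketch of such a runbook follows; the $RecoveryPlanContext parameter is what Azure Site Recovery hands to recovery plan runbooks, but treat the exact field names as assumptions and check the ASR runbook documentation:

```powershell
workflow Invoke-PostFailoverSteps
{
    param (
        # Context object Azure Site Recovery passes to runbooks in a recovery plan
        [Object]$RecoveryPlanContext
    )

    # Only act on a real failover, not a disaster recovery drill
    if ($RecoveryPlanContext.FailoverType -ne "Test")
    {
        # VmMap holds the recovered virtual machines, keyed by VM id (assumed shape)
        foreach ($vm in $RecoveryPlanContext.VmMap.PSObject.Properties)
        {
            # Hypothetical post-recovery step, e.g. updating DNS or opening a port
            Write-Output ("Recovered VM in resource group: " + $vm.Value.ResourceGroupName)
        }
    }
}
```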
Whether Azure Site Recovery replicates the data or you use the application’s built-in capability for this, Azure Site Recovery does the complex job of stitching the application together, including boot sequence, network configurations, etc., so that you can fail over with a single click. In addition, Azure Site Recovery allows you to perform test failovers (disaster recovery drills) without production downtime or replication impact, as well as failback to the original location. All these features work both with Azure Site Recovery replication and with application-level replication technologies. Here are a few examples of application-level replication technologies Azure Site Recovery integrates with:

Active Directory replication
SQL Server Always On Availability Groups
Exchange Database Availability Groups
Oracle Data Guard

So, you ask, what does this really mean? Azure Site Recovery provides you with powerful disaster recovery application orchestration whether you choose its built-in replication for all application tiers or mix and match native application-level replication technologies for specific tiers, e.g. Active Directory or SQL Server. Enterprises have various reasons to go with one replication choice or the other, e.g. the trade-off between no data loss and the cost and overhead of an active-active standby deployment. The next time you are asked why you need Azure Site Recovery when you already have, say, SQL Server Always On Availability Groups, make sure you clarify that having application data replicated is necessary but not sufficient for disaster recovery, and that Azure Site Recovery complements native application-level replication technologies to provide a full end-to-end disaster recovery solution. From our enterprise customers protecting hundreds of applications with Azure Site Recovery, we have learned what the most common deployment patterns and popular application topologies are.
So not only does Azure Site Recovery work with any application, Microsoft also tests and certifies popular first- and third-party application suites, a list that is constantly growing. As part of this effort to test and provide Azure Site Recovery solution guides for various applications, Microsoft provides a rich Azure Automation library with production-ready, application-specific and generic runbooks for the most common automation tasks enterprises need in their application recovery plans. Let’s close with a few examples:

An application like SharePoint typically has three tiers with multiple virtual machines that need to come up in the right sequence, and requires application consistency across the virtual machines for all features to work properly. Azure Site Recovery solves this by giving you recovery plans and multi-tier application consistency.
Opening a port, adding a public IP, or updating DNS on an application’s virtual machine, and having an availability set and load balancer for redundancy and load management, are common asks of all enterprise applications. Microsoft solves this by giving you a rich automation script library for use with recovery plans, and the ability to set up complex network configurations post recovery to reduce RTO, e.g. setting up Azure Traffic Manager.
Most applications need an Active Directory / DNS deployment and use some kind of database, e.g. SQL Server. Microsoft tests and certifies Azure Site Recovery solutions with Active Directory replication and SQL Server Always On Availability Groups.
Enterprises always have a number of proprietary business-critical applications. Azure Site Recovery protects these with built-in replication and lets you test your application’s performance and network configuration on the recovery site using the test failover capability, without production downtime or replication impact.
With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: to enable not just elite tier-1 applications to have a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization’s IT applications. You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.
Source: Azure

Power BI solution templates now support Azure Analysis Services

We’re pleased to announce support for Azure Analysis Services for Power BI solution templates. Effective today, we support Azure Analysis Services for the Campaign/Brand Management for Twitter, System Center Configuration Manager, Sales Management for Dynamics 365, and Sales Management for Salesforce solution templates.

Power BI solution templates simplify and accelerate building analytics solutions on popular applications, many of which you probably use today. They offer a quick, guided experience to create compelling analytics and visualizations on an extensible, scalable, and secure architecture that offers immediate value and that you can customize as you see fit. This means that instead of spending weeks or months getting going, you can get started immediately and spend your time on extending and customizing the result to meet your organization’s needs. Learn more about Power BI solution templates.

So what is Azure Analysis Services and why should I care?

Well – first some background. Azure AS (let’s call it AAS) is another incarnation of the same engine that runs in Power BI Desktop, the Power BI service, and in SQL Server Analysis Services. Azure Analysis Services was announced several months ago in an article on the Azure blog. (Read it – good stuff.)

Now, why should you care about AAS? The Power BI Service is an amazing thing – why would you want to store your data in AAS rather than leaving it in Power BI? Especially when there is an added cost to do so.

The simplest and most obvious answer is data volume. The maximum size of a Power BI Desktop file that can be published to the Power BI service is 1GB, after compression. For larger databases, you’ll need to bring in Azure Analysis Services.

This is a pretty simplistic way to look at things and AAS brings enterprise-ready capabilities that you might need long before you hit this data size limit. Size is a leading indicator, but here are some other things to consider.

Processing – How often do you want to refresh your reports? A report published to the Power BI service can be refreshed several times per day. If you want your model to be processed more frequently, or you need more control over how it is processed, then AAS is one of your options. Reports bound to AAS are as fresh as the data inside it.

Partitioning – A table can be divided into logical parts each of which can be processed independently of the other. We don’t exploit this capability directly with solution templates (at least not yet) but you can. Solution templates are designed to be extended. So, for example, if you anticipate marrying your own data with what we provide, this can be important.

Client tools – As hard as this is to say, Power BI might not meet all your needs. Other tools like Excel offer some capabilities that Power BI does not. You might even have to use a *cough* competitor product. Azure Analysis Services supports not only Excel, but too many other client tools to mention here.

Size – Yes, it’s simplistic but it matters. If your model gets large you may need to turn to AAS to give you the Azure resources you need to not only host the data you have, but give you the performance you need.

So please try it out. As always, we look forward to your reaction and feedback. You can comment on your community site or simply email us at PBISolnTemplates@microsoft.com.
Source: Azure

Join us March 15th for the first Azure Blockchain AMA

Join us Wednesday, March 15th at 9am PST/12pm EST for the first Azure Blockchain-hosted Ask Me Anything (AMA) on Microsoft Tech Community. We receive great feedback and input about blockchain through customer engagements and other channels, but we haven’t interacted broadly in real-time, until now. You’ll be able to connect directly with the Blockchain team, who will be on hand to answer your questions and listen to feedback.

Add the AMA to your calendar!

When:

Wednesday, March 15, 2017 from 09:00 am to 10:00 am Pacific Time

Where:

The Azure Blockchain Community

What’s an AMA session?

We’ll have folks from the Azure Blockchain engineering team available to answer any questions you have. You can ask us anything about our products, services, or even our team!

Why are we doing an AMA?

We want to connect directly with customers, hear your feedback, and answer your questions, such as:

What is Microsoft’s strategy around blockchain?
Why have blockchain on Azure?
What are the most common blockchain scenarios for industry X or vertical Y?
How do I submit blockchain feature requests?

Who will be at the AMA?

We’ll have PMs, developers, and technical thought leaders from the Azure Blockchain engineering team.

We’re looking forward to this AMA and to connecting with you directly.
Source: Azure

Azure Data Lake Analytics and Data Lake Store now available in Europe

Azure Data Lake Analytics and Azure Data Lake Store are now available in the North Europe region. For more information about pricing, please visit the Data Lake Analytics Pricing and Data Lake Store Pricing webpages. Data Lake Analytics is a cloud analytics service for developing and running massively parallel data transformation and processing programs in U-SQL, R, Python, and .NET over petabytes of data. Data Lake Store is a no-limit cloud data lake built so enterprises can unlock value from unstructured, semi-structured, and structured data. To learn more about these services, please visit the Data Lake Analytics and Data Lake Store webpages.
Source: Azure

Leverage the Azure CLI with these examples

Customer response to our CLI has been great since the preview release last September and the GA announcement in February. This has been a great opportunity for us to work with customers and learn what is working well and what is still needed. Some of the feedback we’ve received is that we need to provide more documentation and examples to fully leverage all the new features.

Based on this feedback, we’re delivering samples for Linux and Windows VMs, Web Apps, and Azure SQL Database, all of which can be found on this overview page. For Linux VMs, we’re highlighting best practices for creating, monitoring, and troubleshooting, with similar scripts for Windows VMs. For Azure SQL Database, we get you started with scripts for creating single and pooled databases. For Web Apps, we show you how to create a Web App with integration to your favorite deployment method (Git, GitHub, Docker, etc.) as well as configure, scale, connect to resources, and monitor your Web Apps, all from the command line. Here’s an example that creates an Azure Web App that is ready to deploy code from GitHub:
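A sketch of such a script, using placeholder resource names and a hypothetical sample repository (the commands follow the Azure CLI 2.0 samples of the time):

```shell
#!/bin/bash
# Placeholder values -- substitute your own
gitrepo="https://github.com/Azure-Samples/app-service-web-dotnet-get-started"
webappname="mywebapp$RANDOM"

# Create a resource group to hold everything
az group create --location westeurope --name myResourceGroup

# Create an App Service plan, then the Web App itself
az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
az webapp create --name $webappname --resource-group myResourceGroup --plan myPlan

# Wire the Web App to the GitHub repository for deployment
az webapp deployment source config --name $webappname --resource-group myResourceGroup \
    --repo-url $gitrepo --branch master --manual-integration
```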

All of the example scripts can be used in the CLI “as is”, and also as documentation to help you understand how to develop your own scripts.

You can also get started with the CLI using the rest of our updated docs and samples, including installing and updating the CLI, working with virtual machines, and creating a complete Linux environment including VMs, Scale Sets, storage, and networking. The Azure CLI is open source and on GitHub.

We’re continuing to provide updates based on your ongoing feedback, so please share any suggestions you may have. Reach out with suggestions or questions via StackOverflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com.
Source: Azure

Portal Preview of Azure Resource Policy

Since the first release of resource policies last April, we have received valuable feedback from customers and have used it to add new features. I’m pleased to announce the following new features for Azure Resource Policies:

Policy management in portal (preview)
Policy with parameters

Policy Management in Portal

Many customers requested the ability to manage policies through the Azure portal. Using the portal reduces the learning curve for creating policies and makes managing them easier. The feature is now available in the Azure preview portal.

Similar to working with Identity and Access Control, you can configure resource policies for subscriptions and resource groups from the settings menu. You can view what policies are assigned to the current subscriptions and resource groups, and add new policy assignments. For common policies, you can use the built-in policies and customize the values you need. For example, when creating a geo-compliance policy, the UI simply asks you for a list of permitted locations. You can provide the name and a description that are seen by users when they violate the policy.

 

Figure 1: View all policy assignments

 

Figure 2: Adding new policy assignment

Policy using Parameters

With API version 2016-12-01, you can add parameters to your policy template. The parameters enable you to customize the policy definition. The preceding example for the portal utilizes parameters in the policy. There are two benefits:

Reduce the number of policy definitions to manage. For example, you previously needed multiple policies to manage tags for different applications in different resource groups. Now, you can consolidate them into one policy definition with tag name as a parameter. You provide the value of the tag name when you assign the policy to the application.
Separate access control for policy definition and policy management. Previously, if you used resource groups as the scope for most of your policy assignments, all users who assigned a policy to a resource group also needed permission to create policy definitions. This permission was required because different assignments required different policy definitions. However, granting this permission created the risk that they could modify other policy definitions. By using parameters, users no longer need to create their own policy definitions.

{
"properties": {
"displayName": "Allowed virtual machine SKUs",
"policyType": "BuiltIn",
"description": "This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.",
"parameters": {
"listOfAllowedSKUs": {
"type": "Array",
"metadata": {
"description": "The list of SKUs that can be specified for virtual machines.",
"displayName": "Allowed SKUs",
"strongType": "VMSKUs"
}
}
},
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Compute/virtualMachines"
},
{
"not": {
"field": "Microsoft.Compute/virtualMachines/sku.name",
"in": "[parameters('listOfAllowedSKUs')]"
}
}
]
},
"then": { "effect": "Deny" }
}
},
"id": "/providers/Microsoft.Authorization/policyDefinitions/cccc23c7-8427-4f53-ad12-b6a63eb452b3",
"type": "Microsoft.Authorization/policyDefinitions",
"name": "cccc23c7-8427-4f53-ad12-b6a63eb452b3"
}

 

Since this policy is built-in, you can directly assign it without creating your policy definition JSON. To assign this policy using PowerShell, run the following commands:

$policydefinition = Get-AzureRmPolicyDefinition | Where-Object {$_.Properties.DisplayName -like "Allowed virtual machine SKUs"}
New-AzureRmPolicyAssignment -Name testassignment -Scope {scope} -PolicyDefinition $policydefinition -listOfAllowedSKUs "Standard_LRS", "Standard_GRS"

It’s that simple now!

Help us improve the experience

 

Please try the new features and provide feedback to us through User Voice. Let us know what policies you want to use and how we can improve the experience.
Source: Azure

Azure Analysis Services adds Standard S0 pricing tier

Over the past several months, we have received positive feedback about the Azure Analysis Services preview. Many customers have moved their models to the cloud and have enjoyed the improved manageability of the platform-as-a-service offering. We have expanded regions where Azure Analysis Services is available, and we are working on several of the improvements asked for on the Azure Analysis Services feedback site.

One scenario customers have asked for is a way to support smaller workloads in the cloud. While you can run multiple databases on a single Standard S1 instance, you may want to start with something smaller based on your model size and query volume. We are introducing a new smaller pricing level, Standard S0, which has 40 QPUs and 10 GB of RAM for models.

This new size offers the same features and capabilities as the rest of the Standard tier. You can continue to scale up and down based on your expected load to achieve the best experience for your users. For example, if you need more RAM when processing data, you can scale up during processing and scale down afterwards. You can also scale up during business hours for better query performance, scale back down in off-hours, or even pause when needed for further cost savings. You can track QPU utilization in the Azure portal and through the Azure Monitoring APIs. Note that your model must fit in the available RAM, so it is a good idea to check your memory consumption in the Azure portal to ensure Standard S0 is appropriate for your workload.
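These scale and pause operations can be scripted. A sketch using the AzureRM.AnalysisServices PowerShell cmdlets, with placeholder server and resource group names:

```powershell
# Scale an existing server up for processing, then back down to S0
Set-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "myRG" -Sku "S1"
# ... run processing here ...
Set-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "myRG" -Sku "S0"

# Pause the server in off-hours to stop compute charges, and resume later
Suspend-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "myRG"
Resume-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "myRG"
```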

Please try out the Standard S0 size and let us know how it works for you. You can share your experiences on the Azure Analysis Services MSDN forum. The forum is also a great place to get help from the community.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
 
Source: Azure

Increasing PolyBase Row width limitation in Azure SQL Data Warehouse

Azure SQL Data Warehouse (SQL DW) is a SQL-based, fully managed, petabyte-scale cloud solution for data warehousing. SQL DW is highly elastic: you can provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you’re using instead of being locked into predefined cluster configurations.

In the latest release of PolyBase in SQL DW, we have increased the row width limit from 32KB to 1MB. This allows you to ingest your wide columns directly from Windows Azure Storage Blob or Azure Data Lake Store into SQL DW.

When thinking about loading data into SQL DW via PolyBase, you need to take into consideration a couple key points regarding the data size of strings.

For character types (char, varchar, nchar, nvarchar), the 1MB data size is based on memory consumption of data in UTF-16 format. This means that each character is represented by 2 bytes.
When importing variable length columns ((n)varchar, varbinary), the loading tool pads the buffer to the width of the schema in the external table definition regardless of data type. This means that a varchar(8000) has 8000 bytes reserved regardless of the size of the data in the row.

To help improve performance, define your external table with minimal padding in the schema data types to maximize the amount of data transferred per internal buffer.
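To make the padding point concrete, here is a sketch of an external table definition that sizes string columns to the actual data rather than a blanket maximum; the table, data source, and file format names are hypothetical and assume those objects were created beforehand:

```sql
-- Hypothetical external table over delimited files in blob storage.
-- Sizing City as nvarchar(64) instead of nvarchar(4000) keeps the
-- per-row buffer small, so more rows fit in each internal buffer.
CREATE EXTERNAL TABLE ext.Customers
(
    CustomerId   int            NOT NULL,
    City         nvarchar(64)   NOT NULL,  -- minimal padding
    Notes        nvarchar(4000)            -- wide column, fine up to the 1MB row limit
)
WITH
(
    LOCATION = '/customers/',
    DATA_SOURCE = MyAzureStorage,     -- assumed external data source
    FILE_FORMAT = MyTextFileFormat    -- assumed external file format
);
```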

Additionally, it is a best practice to use a medium or a large resource class and to scale up to a larger DWU instance to take advantage of additional memory needed for importing data, especially into CCI tables. More information can be found at our documentation for Memory allocation by DWU and Resource Class.

Next Steps

Give loading with External Tables into SQL DW a try with our loading tutorial.

Learn More

What is Azure SQL Data Warehouse?

What is Azure Data Lake Store?

SQL Data Warehouse best practices

MSDN forum

Stack Overflow forum
Source: Azure

Announcing Azure SQL Database Premium RS, 4TB storage options, and enhanced portal experience

Today we are happy to announce the preview of the latest addition to our service tiers, Premium RS; an increase of the storage limit for Premium P11 and P15 to 4TB; and, along with these, a new, enhanced portal experience for selecting and managing service tiers and performance levels.

Adding more choices to our service tiers and increasing the available storage are crucial steps toward our long-term commitment to provide more flexibility, for compute as well as storage, across all performance tiers.

Premium RS

Premium RS is designed for IO-intensive workloads that need Premium performance but do not require the highest availability guarantees. This tier is ideal for workloads that can replay the data in case of a severe system error, such as analytical workloads where the database is not the system of record. In addition, Premium RS is great for non-production databases, such as development using in-memory technologies or pre-production performance testing. For more details refer to the documentation.

4TB storage option in Premium P11 and P15

You can now use up to 4TB of included storage with P11 and P15 Premium databases at no additional charge. Until we reach worldwide availability later in CY 2017, the 4TB option can be selected for databases located in the following regions: East US 2, West US, Canada East, and South East Asia (all starting March 9th), and West Europe, Japan East, Australia East, and Canada Central (available today). For more details refer to the documentation.
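Where the option is available, the larger limit can also be applied to an existing P11/P15 database with a quick T-SQL statement. A sketch, assuming a placeholder database name (check the documentation for whether T-SQL or the portal is required in your region):

```sql
-- Raise the max size of an existing Premium P11/P15 database to 4TB
ALTER DATABASE MyP11Database
MODIFY (MAXSIZE = 4096 GB);
```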

Enhanced pricing tier portal experience

We have simplified your pricing tier manageability experience for databases in the portal. The configuration of your database can now be done in three simple steps reflecting the additional options we are providing such as Premium RS and additional storage configurations:

Select the service tier which corresponds to your workload needs.
Select the performance limits (DTU) required by your database.
Select the maximum storage required by your database. This added option makes it simpler for you to manage the growth of your databases.

Next steps:

Review the pricing page for our new offers.
Create a new Premium RS database or elastic pool.
Create a P11 or P15 Premium database with 4TB of storage.

Source: Azure