ADAL .NET 3.14.1 released

ADAL.NET, delivered as a NuGet package named Microsoft.IdentityModel.Clients.ActiveDirectory, is an authentication library that enables developers to acquire tokens from Azure AD (Azure Active Directory) and ADFS, to be used to access Microsoft APIs or applications registered with Azure AD. ADAL.NET is available on several .NET platforms, including desktop, Universal Windows Platform, Xamarin Android, Xamarin iOS, Portable Class Libraries, and .NET Core. It supports a number of authentication scenarios, involving native applications (desktop or device) or confidential applications (such as web APIs). Authentication can leverage user credentials or application secrets.

What’s new – Support for client assertion certificates in .NET Core

Client credential authentication is used by a confidential client application such as a daemon or web service to access resources using its own identity, rather than the user’s identity. The application can use either a shared secret or a client certificate to authenticate itself. Learn more about service to service calls using client credentials.

Previously, client certificates were only available on .NET 4.5. In ADAL.NET 3.14 we now support the certificate-based scenario on .NET Core as well. The .NET Core daemon application sample shows how a .NET Core client console application can access an ASP.NET Core API protected with Azure AD.
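For illustration, here is a minimal sketch of the client-assertion-certificate flow with ADAL.NET; the authority, client ID, resource URI, and certificate path below are placeholder values, not from the sample.

```csharp
// Sketch only: authority, IDs, and certificate path are placeholder values.
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class DaemonClient
{
    static async Task<string> GetTokenAsync()
    {
        // Placeholders: substitute your tenant, app registration, and API.
        var authority = "https://login.microsoftonline.com/contoso.onmicrosoft.com";
        var clientId  = "<daemon-app-client-id>";
        var resource  = "https://contoso.onmicrosoft.com/protected-api";

        // Certificate whose public key is registered with the Azure AD app.
        var cert = new X509Certificate2("daemon-cert.pfx", "<pfx-password>");

        var authContext = new AuthenticationContext(authority);
        var certCred = new ClientAssertionCertificate(clientId, cert);

        // No user is involved: the daemon authenticates as itself.
        AuthenticationResult result =
            await authContext.AcquireTokenAsync(resource, certCred);
        return result.AccessToken;
    }
}
```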

Other changes

In this release, the “old” common (platform-independent) PCL library is now a .NET Standard 1.1 library, and several known issues have been resolved:

Fixed the issue where silently logging in with an expired refresh token could cause a null reference exception.
Fixed the issue in federated tenant scenarios (GitHub issue #401).
Ported the ADAL.PCL project to a .NET Standard 1.1 project.
Ported the ADAL.CoreCLR project to a .NET Standard 1.3 project.
Authenticode-signed the assemblies with a SHA-256 certificate.

In closing

As usual we’d love to hear your feedback:

Ask questions on Stack Overflow using the ADAL tag. We highly recommend you ask your questions on Stack Overflow first and browse existing issues to see if someone has asked your question before.
Use GitHub Issues on the ADAL.Net open source repository to report bugs or request features.
Use the User Voice page to provide recommendations and/or feedback.

Source: Azure

June 2017 Leaderboard of Database Systems contributors on MSDN

Congratulations to our June top-10 contributors!

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com.
Source: Azure

Public Preview of compatibility level 140 for Azure SQL databases

We are announcing the official public preview of compatibility level 140 in Azure SQL Database.

Compatibility level 140 enables the following query optimizer changes:

A trivial plan referencing Columnstore indexes will be discarded in favor of a plan that is eligible for batch mode execution.
The sp_execute_external_script UDX operator is eligible for batch mode execution.
Three adaptive query processing features are being introduced:

Batch mode memory grant feedback, which improves the performance of repeating queries that request too much or too little memory.
Batch mode adaptive join, which is a new query operator type that allows dynamic selection of the most optimal join algorithm based on runtime row counts.
Interleaved execution, which improves the performance of queries that reference multi-statement table valued functions by using the true row count of the function call for use during query optimization.

Please note that this list is not exhaustive.  Most optimizer hotfixes released after SQL Server 2016 RTM will be on by default in compatibility level 140.

The alignment of SQL Server versions to default compatibility levels is as follows:

100: in SQL Server 2008 and Azure SQL Database
110: in SQL Server 2012 and Azure SQL Database
120: in SQL Server 2014 and Azure SQL Database
130: in SQL Server 2016 and Azure SQL Database
140: in SQL Server 2017 and Azure SQL Database

To determine the current compatibility level of your database, execute the following Transact-SQL statement:

SELECT compatibility_level
FROM [sys].[databases]
WHERE [name] = 'Your Database Name';

Use of compatibility level 140 enables developers to benefit from query processor enhancements. To change the compatibility level of an existing database, execute ALTER DATABASE:

ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 140;

The recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article, Change the Database Compatibility Mode and Use the Query Store. Note that the article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for SQL Server and Azure SQL Database.
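Assuming Query Store is used as that article describes, the workflow can be sketched in T-SQL as follows; the database name and the query/plan IDs are placeholders:

```sql
-- 1. Turn on Query Store while still on the old compatibility level,
--    then run a representative workload to capture baseline plans.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;

-- 2. Move to the new compatibility level.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 140;

-- 3. Re-run the workload and compare plans in the Query Store reports.
--    If a query regressed, pin its previous plan (IDs are placeholders):
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 1;
```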

After SQL Server 2017 launches, the default Azure SQL Database compatibility level will change from 130 to 140 for newly created databases. Databases created before that time will not be affected and will maintain their current compatibility level. You can find more details at ALTER DATABASE Compatibility Level (Transact-SQL).
Source: Azure

Azure Site Recovery now supports large disk sizes in Azure

Following the recent general availability of large disk sizes in Azure, we are excited to announce that Azure Site Recovery (ASR) now supports the disaster recovery and migration of on-premises virtual machines and physical servers with disk sizes of up to 4095 GB to Azure.

Many on-premises virtual machines that are part of the Database tier and file servers use disks with sizes greater than 1 TB. Support for protecting these virtual machines with large disk sizes has consistently featured as a top ask from both our customers and partners. With this enhancement, ASR now provides you the ability to recover or migrate these workloads to Azure.

These large disk sizes are available on both standard and premium storage. In standard storage, two new disk sizes, S40 (2 TB) and S50 (4 TB), are available for managed and unmanaged disks. For workloads that consistently require high IOPS and throughput, two new disk sizes, P40 (2 TB) and P50 (4 TB), are available in premium storage, again for both managed and unmanaged disks. Depending upon your application requirements, you can choose to replicate your virtual machines to standard or premium storage with ASR. More details on the configuration, region availability, and pricing of large disks are available in this storage documentation.

To show you how Azure Site Recovery supports large disk sizes, I protected the Database tier VM of a SharePoint farm. You can see that this VM has data disks which are greater than 1 TB.

Prerequisite step for existing ASR users:

Before you start protecting virtual machines/physical servers with greater than 1 TB disks, you need to install the latest update on your existing on-premises ASR infrastructure. This is a mandatory step for existing ASR users.

For VMware environments/physical servers, install the latest update on the Configuration server, additional process servers, additional master target servers and agents.

For Hyper-V environments managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider update on the on-premises VMM server.

For Hyper-V environments not managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider on each node of the Hyper-V servers that are registered with Azure Site Recovery.

I would like to call out that support for disaster recovery of IaaS machines in Azure with large disk sizes is not currently available. This support will be made available soon.

Start using Azure Site Recovery today. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers. You can also use the ASR User Voice to let us know what features you want us to enable next.
Source: Azure

Enterprise Cloud Strategy, 2nd Edition

A roadmap to becoming a cloud-centric company

I’m delighted to announce the second edition of our free e-book, Enterprise Cloud Strategy, written by Barry Briggs and myself. Get your copy of the e-book. 

Much has changed in the two years since we published the first edition. Cloud computing has evolved from a technology that delivers efficiencies and cost savings, to a technology that also transforms the scope of IT and business operations with new opportunities.

The questions around the cloud have gone from “if” to “when” and “how.”

Learn how to shape the efficient enterprise cloud transformation

In the second edition, you’ll find best practices and guidance on how to get started, and which applications to consider first in your cloud migration. After the technical exercise of migrating applications, the journey starts for the rest of the organization. The cloud can and should begin transforming your business with greater scale, integration, and richer capabilities.

The book is based on real-world experiences from enterprise IT and seeks to answer a new question, “How can I use cloud computing to become a true partner to the business?” You’ll come away with an understanding of the three stages of cloud migration – experimentation, migration, and transformation – and how to plan and build strategies that involve all departments of the business.

With your organization’s data in the cloud, how do you integrate your migrated applications to take maximum advantage of new cloud services like big data analytics, machine learning, and Internet of Things? What new skills and new roles are needed? How do you appropriately involve various business units in the decision-making process?

“The move to the cloud has opened many opportunities. With it comes the need for best practices and guidance of how to adopt cloud platforms with enterprise-grade rigor and governance. This book fills this much-needed gap in a clear, concise, and practical way. It is an easy read, too.”

– Gavriella Schuster, Corporate Vice President, Channels & Programs, One Commercial Partner,  Microsoft

Enterprise Cloud Strategy, 2nd Edition is organized for cloud experts and novices alike, with chapters dedicated to understanding the different types of cloud, application models, and cloud journeys; all the way to planning and implementing a cloud transformation.

About the authors

Barry Briggs, an independent consultant, has a long history in software and enterprise computing. He served in several roles during his twelve-year career at Microsoft, most recently as chief enterprise architect on the Microsoft DX (Developer Experience) team. Previously Barry served as chief architect and CTO for Microsoft’s IT organization, where he created and led Microsoft IT’s cloud strategy team.

Eduardo Kassner is the Chief Technology and Innovation Officer for the Worldwide Channels & Programs Group at Microsoft Corporation. His team is responsible for defining the strategy and developing the programs that drive technical capacity, practice development, and profitability for the hundreds of thousands of Microsoft partners worldwide. He recently co-wrote the first edition of Enterprise Cloud Strategy, published by Microsoft Press, which has been downloaded more than 250,000 times from the Azure.com website.

Download the e-book today.
Source: Azure

Microsoft announces Project Olympus support for new Intel Xeon Scalable Processors

In March at the Open Compute Project (OCP) annual summit, we announced that Project Olympus, our next generation hyperscale cloud hardware design, attracted the latest in silicon innovation to address the exploding growth of cloud services. Project Olympus was based on a new hardware development model for community based open collaboration that we developed with OCP. Today, Microsoft is proud to announce support for the newest generation of Intel Xeon Scalable Processors within the Project Olympus ecosystem. 

Intel has been a premier platform partner for Project Olympus, and the Intel Xeon Scalable Processor will be a cornerstone of this new platform. Microsoft has also worked closely with Intel to engineer Intel Arria 10 FPGAs, which are deployed on every single Project Olympus server, to create a “Configurable Cloud” that can be flexibly provisioned and optimized to support a diverse set of applications and functions.

We designed Project Olympus with the ability to accommodate a variety of workloads from email to databases, online productivity, HPC, and even AI. Some of these workloads have extremely demanding requirements for compute, storage, and networking which require a base platform that can scale with demands of current and emerging workloads. Intel Xeon Scalable Processors enable such platform capabilities by providing the ability to scale resources as needed. Whether it’s high core counts and memory bandwidth for extreme multithreaded performance, IO scaling capabilities, or the new Intel AVX-512 instructions for HPC and AI workloads, Intel Xeon Scalable Processors, and Intel FPGAs provide a significant degree of flexibility and performance that allows us to meet the emerging demands of the cloud.

Project Olympus is Microsoft’s blueprint for future hardware development and collaboration.  We look forward to the continued collaboration with Intel in designing and building the highest performing, most flexible, and secure clouds possible.
Source: Azure

Azure Site Recovery support for Storage Spaces and Windows Server 2016

We recently announced the public preview of disaster recovery for Azure IaaS machines, which allows you to replicate applications between Azure regions as well as create networks, storage accounts, and availability sets. This capability reduces the complexity typically involved in setting up disaster recovery and helps you stay compliant by having a business continuity plan in place to keep applications available during a disaster.

Today we are announcing that Azure Site Recovery between Azure regions now supports Windows Server 2016 and Storage Spaces.

Windows Server 2016 has seen tremendous adoption both on private clouds and on Azure in the few months since it became generally available. Azure Site Recovery for Azure virtual machines now supports workloads running on the Windows Server 2016 Datacenter and Windows Server 2016 Datacenter – Server Core editions.

Storage Spaces is a technology in Windows Server that virtualizes storage by grouping disks into storage pools for performance, flexibility, and storage scaling. Storage Spaces is a commonly used configuration on Azure virtual machines to improve input/output performance by striping disks and to create logical disks larger than 4 TB. For example, this is a very common configuration in SQL workloads, where the need for higher performance and capacity is obvious. Popular Azure gallery templates like SQL Server Always On deploy machines using Storage Spaces; to meet this need, the latest release of Azure Site Recovery adds support for Storage Spaces, so you can have better availability and compliance for your workloads.

Check out our product information to start replicating your IaaS workloads between Azure regions today.

Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers or use the ASR User Voice to let us know what features you want.
Source: Azure

New government datacenter regions available in Arizona and Texas

I am pleased to announce that Azure Government is commercially live in two additional datacenter regions in Arizona and Texas for U.S. government customers and their partners following my announcement late last year. With these expansions, Azure Government has capacity in proximity to government customers and partners on the East Coast, West Coast and Central United States.

With a total of six U.S. datacenter regions, including two dedicated regions with DoD Impact Level 5 Provisional Authorization (PA), Azure Government continues to deliver the most customer choice in the U.S. for locating government workloads and sensitive data. As part of offering the broadest geographic availability, these regions are over 500 miles apart for geo-redundancy and we offer data replication across regions for business continuity.

Customers and partners are clear in their feedback that compliance with U.S. government standards and requirements is a priority. To address the compliance needs for U.S. government, we’ve engineered our datacenters and services to meet or exceed critical compliance requirements. In fact, Microsoft provides the broadest coverage for compliance and regulatory standards with support for FedRAMP High, CJIS, ITAR, DFARS, and DoD L4 and L5.

Specific to the future of cloud computing in government, we continue to announce Azure Government services that enable new ways for U.S. government customers and their partners to achieve their mission. Most recently, we’ve announced:

Power BI and HDInsight for data analysis and visualization
Azure Cosmos DB and hot/cool blob storage
Expanded Cognitive Services Preview and interesting uses for U.S. government
IoT services

Combining the most datacenter region choices with the most comprehensive compliance coverage and innovative services further extends customer confidence in Azure Government as customers deliver mission workloads with the cloud now and into the future.

Explore Azure Government here or contact us to learn more.
Source: Azure

New networking features in Azure scale sets

Today we are announcing a set of networking enhancements for Azure virtual machine scale sets. We are adding new ways to assign IP addresses, configure DNS, and assign network security.

Azure scale sets were built to provide a fast and easy way to deploy and manage a collection of virtual machines. The initial implementation of scale sets included a core set of network features most commonly associated with scalable compute clusters; for example, Azure Load Balancer and Application Gateway integration, support for load balancing and dynamic NAT pools routing to private IP addresses.

Since the initial release of scale sets in 2016, we've been working to support more advanced networking scenarios, and to attain network equivalency between scale set VMs, and standalone VMs in Availability Sets. Today's announcement opens up exciting new application scenarios for scale sets with more complex networking requirements, as well as allowing existing applications that were designed for standalone virtual machines to take advantage of scale set features such as easy dynamic scaling, autoscale and patching.

Here's a summary of the new features you can now use with scale sets, and where to find more information.

Public IPv4 addresses per VM

Previously you could only assign private IP addresses to scale set VMs. Typical scale set architectures would assign one or more public IP addresses to a load balancer, which would route incoming connections to the private scale set VM IP addresses, or assign a public IP address to a "jump box" VM in the same VNet which could connect directly to the VMs.

Though a private IP address per VM is an optimal configuration for many applications that deploy at scale, in some cases it is useful for VMs to support direct external connections and to connect to one another across regions. There are also cases where outbound network bandwidth requirements exceed what a load balancer provides.

Now you can configure a scale set to allocate a public IPv4 address to every VM. Examples of where this can be useful include:

Distributed databases where stateful nodes communicate with one another, potentially across regions. Scale sets provide the elasticity and easy deployment at scale. Public IP per VM provides maximum network interoperability. E.g. Couchbase.

VM Scale Sets make it possible for Couchbase users to scale their cluster up simply by moving a slider in the Azure Portal. VMSS also provide improved reliability and ease of management over previous approaches of managing VMs. The new Public IP per VM feature allows the configuration of cross-datacenter replication leveraging the high bandwidth, low latency Azure backbone. With this architecture, cross region communication is limited only by a node's bandwidth cap, which can be as high as many Gbps. As always, it’s been a pleasure working with the Microsoft team on testing preview versions of this feature. You can try the GA version yourself in Azure Marketplace or with the Azure CLI 2.0.

– Ben Lackey – Director of Partner Solutions at Couchbase

Applications where outbound bandwidth exceeds load balancer capabilities. Public IP per VM increases this bandwidth by allowing each VM to use its NIC for outbound network traffic.
Applications which need a direct connection from client to server. One example is gaming, where a game console makes direct connections to VMs doing game physics for massive shared reality environments.
Large-scale client simulations, for example stress testing a retail service by simulating a large number of independent clients.
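In an Azure Resource Manager template, the feature above is enabled by adding a publicIpAddressConfiguration to the scale set's IP configuration. A minimal sketch, where the configuration names and the subnetId variable are illustrative:

```json
"ipConfigurations": [
  {
    "name": "ipconfig1",
    "properties": {
      "subnet": { "id": "[variables('subnetId')]" },
      "publicIpAddressConfiguration": {
        "name": "instancePublicIp",
        "properties": { "idleTimeoutInMinutes": 15 }
      }
    }
  }
]
```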

Configurable DNS

Previously, scale sets relied on the DNS settings of the VNet and subnet they were created in. With configurable DNS, you can now configure the DNS settings for a scale set directly: you can specify which DNS servers the VMs in the scale set should reference, and a domain name label to apply to each VM.
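In the scale set's network profile, both settings appear as dnsSettings blocks; a sketch with illustrative names and addresses:

```json
"networkInterfaceConfigurations": [
  {
    "name": "nic1",
    "properties": {
      "dnsSettings": { "dnsServers": [ "10.0.0.5", "10.0.0.6" ] },
      "ipConfigurations": [
        {
          "name": "ipconfig1",
          "properties": {
            "publicIpAddressConfiguration": {
              "name": "instancePublicIp",
              "properties": {
                "dnsSettings": { "domainNameLabel": "myvmss" }
              }
            }
          }
        }
      ]
    }
  }
]
```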

Multiple IP addresses per NIC, multiple NICs per VM

Why stop at one public IP address per VM when you can have up to 400? The ability to define more than one IP address and NIC for a virtual machine is particularly useful for applications like Web Application Firewalls, which need to manage multiple networks and can optimize resources by being able to easily scale out VMs.

Now you can define up to 50 IP addresses per NIC, and up to 8 NICs per VM (depending on VM size) for all the VMs in your scale set.

Network Security Groups per scale set

A Network Security Group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks. NSGs enable you to tailor traffic rules to your security needs.

Previously you could assign an NSG to a subnet or to standalone virtual machine NICs, but not directly to a scale set. NSGs can now be applied directly to scale sets: network traffic rules can be enforced and controlled through NSGs securing your scale sets in Azure, allowing finer-grained control over your infrastructure.
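In the scale set model, the NSG is referenced from the network interface configuration; a sketch with an illustrative NSG name and subnetId variable:

```json
"networkInterfaceConfigurations": [
  {
    "name": "nic1",
    "properties": {
      "primary": true,
      "networkSecurityGroup": {
        "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'myScaleSetNsg')]"
      },
      "ipConfigurations": [
        {
          "name": "ipconfig1",
          "properties": { "subnet": { "id": "[variables('subnetId')]" } }
        }
      ]
    }
  }
]
```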

IPv6 Load Balancer support – public preview

As IPv4 addresses become scarcer, more applications are leveraging the 128-bit address space provided by IPv6. Now with the public preview of IPv6 load balancer support, you can configure Azure Load Balancers with public IPv6 addresses, which can route requests to VM scale set VMs.

Accelerated Networking

The Azure Accelerated Networking feature, which dramatically improves network performance by enabling single root I/O virtualization (SR-IOV) to a VM, is now available for virtual machine scale sets. This feature is generally available for Windows, and in public preview for Linux.
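In the scale set model, accelerated networking is a single flag on the NIC configuration; a sketch assuming a VM size and OS image that support SR-IOV:

```json
"networkInterfaceConfigurations": [
  {
    "name": "nic1",
    "properties": {
      "primary": true,
      "enableAcceleratedNetworking": true
    }
  }
]
```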

To find out more about these networking features for scale sets and how to use them, refer to Azure Virtual Machine Scale Sets Networking.
Source: Azure

Announcing StorSimple 8000 series in the new Azure portal!

I'm pleased to announce the general availability of StorSimple 8000 series management in the new Azure portal. Everything about the StorSimple physical device series experience in the new Azure portal is designed to be easy. Our 8000 series customers can now use the new Azure portal and Azure Resource Manager to unlock deep personalization, role-based access control, and a single portal to manage all their applications.

Get started

The new Azure portal supports devices running Update 3.0 or later. Using Azure Resource Manager, you can now create StorSimple Cloud Appliances (8010/8020). Azure Resource Manager (ARM) enables you to leverage your existing ARM-based VNETs and storage accounts. To learn how to manage your 8000 series devices in the portal, please refer to the product documentation.

Automate operations

To automate 8000 series device management, you can leverage the Azure Resource Manager SDK. Refer to the samples to create a volume, list backups, roll over the service encryption key, scan for updates, and generate a backup report.

Transition to the new Azure portal

In a single click, you can seamlessly transition from the classic portal to the new Azure portal. Once in the new Azure portal, you can explore all the ARM capabilities. To leverage the seamless transition experience, apply the latest update on your devices. Your existing StorSimple physical device series resources in the classic portal will be transitioned to the new Azure portal in the coming weeks. For more information, go to Transition to the new Azure portal.

We'll transition all customers to the new Azure portal by September 30, 2017. The complete transition process is quick, easy, and non-disruptive. We will reach out to you with more details. Stay tuned!

During the transition:

You can’t manage your device from the portal.
You’re protected as tiering and scheduled backups continue to occur.

After the transition:

You can no longer manage your devices from the classic portal.
All device managers under the selected subscription will be transitioned.
The existing Azure Service Management (ASM) based PowerShell cmdlets are not supported. Update your scripts to manage your devices through ARM.
All your service settings and device configuration are intact! This includes the volumes and backups created in the classic portal.

Your new home

The new Azure portal is easy to use. Find your StorSimple Device Manager by clicking More services in the left jumpbar. Go to Quick start to learn how to set up a device.

Go to Overview for a quick peek of your service summary.

Click Devices to see all the devices registered. Click a specific device to view the device summary. To monitor the device consumption and performance charts, click Usage, Performance, or Capacity.

Visit StorSimple MSDN forum to find answers, ask questions, and connect with the StorSimple community. Your feedback is important to us, so send all your feedback or any feature requests using the StorSimple User Voice. And don’t worry – if you need any assistance, Microsoft Support is there to help you along the way!
Source: Azure