Cloud innovations empowering IT for business transformation

Leading the Azure engineering organization over the past several years has been an incredible experience, and one that has taught me a great deal. My discussions with enterprise customers have given me an intimate understanding of the challenges IT teams (including our own) are facing today: How do I innovate with greater agility and faster time to market? How do I modernize our app portfolio? How do I maintain and optimize what I have? How do I manage and secure it all? Amidst all these challenges, however, lies a unique opportunity to align IT with business strategy. And the cloud is the enabling technology that makes this more possible than ever.

Azure is the cloud platform designed for enterprises, with the fundamental tenets of global, trusted and hybrid. With Azure infrastructure spanning 34 global regions, we provide twice as many regions as AWS to run your applications, as well as unique data sovereignty capabilities. With 47 compliance certifications and attestations, Azure is the most compliant hyperscale cloud on the planet. Our hybrid IT depth ensures your on-premises investments work consistently with Azure, because hybrid IT means true consistency across your entire environment, not just connectivity between your datacenter and the cloud. But what’s exciting and humbling is to see the tremendous value customers are getting by betting on Azure. From Fortune 500 organizations like Wal-Mart, BMW and EcoLab to startups like Soluto and Linukury, our customers save on costs, become more agile and can transform their businesses.

This week at Ignite, we’re unveiling many new Azure capabilities, and you’ll see a common theme across them – enabling IT with cloud infrastructure, security capabilities, holistic management, and world-class support for open source.

Infrastructure for IT innovation

In Azure’s global datacenters lies incredible compute capacity that you can tap into. We want to ensure you can run every workload – meaning all the performance and scale you need, regardless of what you are running. To that end, we are making several announcements today.

New compute offerings – storage optimized, fastest CPU, SAP HANA

We are introducing new Virtual Machine and compute offerings to address your unique application needs – the L-series, H-series and general availability of special-purpose large instances for SAP HANA. This expands upon the recently released N-series, available in preview and offering best-in-class GPU compute VMs.

L-series – L-series are storage-optimized VMs designed for applications that require low latency, high throughput, and large local disk storage, such as NoSQL databases (e.g. Cassandra, MongoDB, Cloudera and Redis) and data warehousing. Built on Intel Haswell processors (Intel® Xeon® processor E5 v3), the L-series supports up to 6 TB of local SSD and offers unmatched storage performance. We will be rolling out the L-series in the coming weeks.
H-series – Aligned with our commitment to bring the best-performing technology to market, the H-series sports the fastest CPUs in the public cloud as well as RDMA with InfiniBand, so you can run high performance computing (HPC) applications like computational fluid dynamics, automotive crash testing, and genome and molecular research. H-series VMs give customers on-demand HPC infrastructure that they can use for faster insight.
Large Instances for SAP HANA – Continuing our commitment to run the largest enterprise applications, I am delighted to announce the general availability of large instances specifically designed for SAP HANA workloads. These purpose-built hardware configurations can run the largest SAP HANA workloads in the public cloud. They accommodate SAP HANA OLTP scenarios of up to 3 TB and large scale-out OLAP deployments of up to 32 TB of RAM.
N-series – Earlier in August, we announced the preview release of our new GPU-powered N-series VM sizes. With both a visualization SKU and a compute-focused SKU, these VM sizes offer unparalleled performance for desktop graphical modeling/rendering and deep learning computational models.

More open source options

Azure is an open platform, deeply committed to leading open source support. Today, nearly one in three VMs deployed on Azure run Linux. The strong momentum for Linux and open source on Azure is driven by our ongoing innovation and demonstrated commitment. Customers like Johnson Controls and KPMG are using Azure for open source workloads and building modern application architectures, including containers and big data solutions. Two weeks ago, we released a preview of our microservices platform, Service Fabric on Linux, for creating highly scalable, cloud-native applications. Customers can now also provision Service Fabric clusters in Azure using Linux as the host OS and deploy Java applications in these clusters.

This week, we are expanding our open source support with further regional availability of Linux and open source solutions, including on-demand Red Hat Enterprise Linux (RHEL) in Azure Government. We are also adding RHEL support for SAP applications (NetWeaver and HANA). We are committed to great experiences for developers and system admins and to meeting customers where they are, from platform stacks to management tools.

New networking capabilities

Azure provides a rich set of networking features so you have the most performant network and a diverse set of options for your applications. To that end, we are introducing several new capabilities today.

IPv6 support – With the explosive growth of devices powered by the Internet of Things (IoT), compliance regulations, and the need to future-proof applications, IPv6 has become more important than ever. Today, we’re introducing IPv6 support for applications within virtual machines on Azure.
Azure DNS – We introduced Azure DNS to provide you the speed, reliability, and convenience of having your DNS services hosted close to your applications and cloud infrastructure. We are excited to make this networking service generally available today.
Accelerated Networking – As applications demand faster performance than ever before, we are previewing the ability for VMs to tap into incredible network performance (up to 25 Gbps). Powered by FPGAs, the network throughput and low latency offered by this capability are unparalleled in the public cloud today.
Web Application Firewall (WAF) – With the addition of WAF capabilities to the Application Gateway service, we’re significantly enhancing your ability to manage application security.
New Virtual Network capabilities like peering, active-active VPN gateways, and a new ultra-performance gateway for ExpressRoute significantly improve the way you define network topologies and connect different network environments.

Hybrid cloud enablement

Over 80% of enterprises today have a hybrid cloud strategy – this is the real world for organizations. With decades of experience partnering closely with enterprises, we understand the importance of true hybrid that provides a consistent and comprehensive approach for your entire IT estate. Today, we announced the next step in delivering the power of Azure in your datacenter with the release of Azure Stack Technical Preview 2, bringing more proven cloud innovation such as the Azure Marketplace and security capabilities like Key Vault directly to your datacenter.

We are also bringing cloud-first innovation to your on-premises datacenters and other clouds with the general availability of Service Fabric for Windows Server. With this release, microservice-based Service Fabric applications gain portability and flexibility. This standalone offering provides a runtime that can be installed on Windows Server on-premises or even in other clouds.

Many of our customers turn to Azure for backup and disaster recovery across the enterprise. In addition to Azure integration with offerings from our rich storage partner ecosystem, we now have the Azure StorSimple Virtual Array for remote and branch offices and the existing Azure StorSimple 8000 series hybrid cloud storage offerings for the datacenter. We are also providing data transformation services that make data backed up using StorSimple available in native Azure formats like blobs and disks. This makes it easy for customers to address business needs with services such as Media Services, Search, Analytics and even custom applications.

New Azure SQL Database enhancement

We are announcing the general availability of the Temporal Tables feature of Azure SQL Database. Temporal Tables are designed to improve your productivity when you develop applications. The feature lets you focus data analysis on a specific point in time and use a declarative cleanup policy to control retention of historical data. It also enables you to track the full history of data changes in Azure SQL Database, without custom coding.
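As a sketch of the point-in-time querying this enables, the helper below builds a T-SQL query using the FOR SYSTEM_TIME clause. The helper itself is hypothetical; only the FOR SYSTEM_TIME clause comes from the Temporal Tables feature.

```python
from datetime import datetime

def point_in_time_query(table, as_of):
    """Build a T-SQL query that reads a system-versioned temporal table
    as it looked at a given UTC instant (hypothetical helper)."""
    return (
        f"SELECT * FROM {table} "
        f"FOR SYSTEM_TIME AS OF '{as_of.strftime('%Y-%m-%dT%H:%M:%S')}'"
    )

# Ask for the Orders table as it looked at noon on a given day.
print(point_in_time_query("dbo.Orders", datetime(2016, 9, 26, 12, 0, 0)))
```

You would send the resulting statement to the database through your usual driver; the history table lookup happens entirely inside SQL Database.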

Securing infrastructure

Security is a primary adoption concern for customers embracing the cloud. We know that staying ahead of sophisticated and ever-evolving cyber threats is a challenging and ongoing process. We can firmly say that trust and security are cornerstones of the Azure platform. We have the largest compliance portfolio, and with it, organizations in even highly regulated industries like financial services can use Azure; nearly 85 percent of the world’s largest banks are Azure customers. We are investing heavily in building a cloud that you can trust, and today we are announcing key enhancements that further bolster the security of our platform.

Compliance portfolio expansion

We are further strengthening our robust compliance portfolio with these new additions.

ISO 22301 Certification – Azure is the only hyperscale cloud service provider to receive a formal certification for business continuity management, demonstrating comprehensive internal guidelines for preventing, responding to, and recovering from disruptive incidents.
EU-US Privacy Shield Framework – Microsoft is also the first cloud vendor to be certified under the new EU-US Privacy Shield Framework for the protection of personal data of EU citizens, the latest example of the company’s commitment to privacy.
IT-Grundschutz Workbook – Azure has also made available a new security and compliance workbook, IT-Grundschutz for Azure, for customers who are subject to the German Federal Office for Information Security (BSI) information protection standards.

Azure Security Center enhancements

Since its general availability earlier this year, we have continued to enhance Azure Security Center, which offers unmatched security monitoring and management for your cloud resources.

Using Security Center, customers benefit from ongoing security research, resulting in new analytics released today that are designed to detect insider threats, attempts to persist within a compromised system, and the use of compromised systems to mount additional attacks, such as DDoS and brute-force attacks.
Security Incidents, currently available in preview, have been enriched to correlate alerts from different sources, including alerts from connected partner solutions.
Threat attribute reports are now built-in to provide valuable information about attackers, which can be used to remediate threats more quickly.
Security Center also released support for integrated vulnerability assessment from partners like Qualys, along with security assessment of Web Apps and Storage accounts.

Azure Key Vault support for certificates

To better secure your cloud resources and data, Azure Key Vault now supports certificates, simplifying tasks associated with SSL/TLS certificates. The service helps customers enroll and automatically renew certificates from supported third-party Certificate Authorities, while providing audit trails within the same environment. Aligning with our approach of working with industry partners, the following Certificate Authorities are supported at GA: DigiCert, GlobalSign and WoSign.

General Availability of Encryption Services

With the general availability of the following encryption services, we help customers protect and safeguard their data and meet their organizational security and compliance requirements.

With the availability of Azure Disk Encryption for both Windows and Linux Standard VMs, customers can protect and safeguard their OS and data disks at rest using industry-standard encryption technology.
Two weeks ago, we announced the general availability of Storage Service Encryption for Azure Blob Storage. For accounts that have encryption enabled, data is encrypted with Microsoft-managed keys using the industry-leading 256-bit Advanced Encryption Standard (AES-256) algorithm.

Management from the cloud

Cloud intelligence and cloud scale give you new options when it comes to management. As you take advantage of the agility of Azure, you need to closely monitor and analyze the utilization and performance of your Azure resources. Today, we announced the preview of Azure Monitor, which provides better insights by enabling you to collect performance and utilization data, activity and diagnostics logs, and notifications from your Azure resources. With workloads on-premises, in Azure or spanning both, you need a unified view and the tools to drill down deep when required. Delivered from Azure, Operations Management Suite gives you a comprehensive hybrid cloud management platform. With Azure Monitor and Operations Management Suite Insight & Analytics, all data from your workloads and applications in Azure, on-premises, in AWS, and on VMware is now at your fingertips.

Empowering the IT cloud journey

While we’re committed to continuous innovation to deliver the world’s leading cloud platform, staying abreast of rapidly changing technologies can be difficult. Microsoft is therefore providing free resources to help IT professionals through their cloud career journey, from planning a career path to getting started with Azure to hands-on practice with the latest cloud technology.

Microsoft IT Pro Cloud Essentials program will help you get started with $300 Azure credits, a free support incident, free Pluralsight courses and certification discounts. Today we are expanding availability of the Microsoft IT Pro Cloud Essentials and IT Pro Career Center programs to 25 languages.
Microsoft IT Pro Career Center can help you navigate the skills needed to transition to a cloud role. 
Microsoft Tech Community provides a modern digital community where you can ask questions, exchange ideas, and build connections with Microsoft Most Valuable Professionals (MVPs), Microsoft engineers, and peers. Finally, to stay current with the latest Microsoft cloud technologies, subscribe to the Microsoft Mechanics YouTube channel for weekly IT-focused videos.

I strongly believe that these investments and innovations on Azure are contributing in a significant way to business transformation. I thank you for being a part of Azure’s incredible growth and am very interested in hearing your feedback on the new releases we have on Azure!
Source: Azure

Azure Service Fabric for Windows Server now GA

Enterprises today need to walk a fine line between innovation and delivering reliable services. Firms need to be able to rapidly create and run mission-critical enterprise applications that can capture new areas of growth, increase their exposure in the market, and meet changing customer needs. At the same time, system reliability is equally important, since application downtime has a real cost to a business’s reputation, finances, and customer loyalty. For example, customers expect an online banking or e-commerce site to be up and running any time of day across any browser, device, or app. A company that doesn’t meet these 24/7 availability expectations is at risk of losing customers to its competitors. Increasingly, businesses are turning to the cloud to develop and manage their applications at scale and with high availability.

Microsoft’s Azure Service Fabric, our microservices application platform for developing and managing cloud-scale applications, was released last year.

I am excited to announce today that Azure Service Fabric for Windows Server will be generally available for download at no cost. With today’s announcement, customers can now provision Service Fabric clusters in their own data centers or other cloud providers and run production workloads with the option to purchase support for ultimate confidence. One such customer is Owners.com, an online platform that gives consumers a convenient and cost-effective way to buy or sell a home.

"Our on-premise installation of Azure Service Fabric is a robust and highly scalable platform on which we’ve been able to build very complex software as a collection of easily manageable modules. This new paradigm of service development allows us to rapidly develop, test, and deploy (with zero downtime), all while meeting tight SLAs for our production environment." Marion Denny, Director of Engineering at Owners.com

We unveiled Service Fabric preview on Linux earlier this month, furthering our vision to enable developers to build Service Fabric applications on the OS of their choice and run them wherever they want. Battle-hardened internally at Microsoft for almost a decade, Service Fabric has been powering highly scalable services like Cortana, Intune, Azure SQL Database, Azure DocumentDB, and Azure’s infrastructure. We’ve seen tremendous response from our customers and great momentum since our recent GA at Build 2016.

Azure Service Fabric allows the creation of clusters on any machine running Windows Server or Linux, which means you can deploy and run Service Fabric applications in any environment that contains a set of interconnected computers, be it on-premises or with any cloud provider. Azure Service Fabric for Windows Server enables you to create clusters on Windows Server machines, with a particular focus on running Service Fabric in your own datacenters. This means you get benefits such as:

Using data center resources you already own and developing microservice architectures on-premises before moving to the cloud.
You can choose to create clusters on other cloud providers.
Service Fabric applications can be deployed to any cluster with minimal to no changes. This provides an added layer of reliability because you can move your applications to another deployment environment.
Developer knowledge of building Service Fabric applications and the operational experience of running and managing Service Fabric clusters carries over from one hosting environment to another.

We’re excited that with our continuous updates to Service Fabric, more businesses can take advantage of our innovations to develop and power their applications. Learn more about how to get started with Service Fabric.
Source: Azure

Announcing the public preview of Azure Monitor

Today we are excited to announce the public preview of Azure Monitor, a new service that makes built-in monitoring available to all Azure users. This preview release builds on some of the monitoring capabilities that already exist for Azure resources. With Azure Monitor, you can consume metrics and logs within the portal and via APIs to gain more visibility into the state and performance of your resources. Azure Monitor also lets you configure alert rules to get notified, or to take automated actions, on issues impacting your resources. It enables analytics, troubleshooting, and a unified dashboarding experience within the portal, in addition to a wide range of product integrations via APIs and data export options. In this blog post, we will take a quick tour of Azure Monitor and discuss some of the product integrations.

Quick access to all monitoring tasks

With Azure Monitor, you can explore and manage all your common monitoring tasks from a single place in the portal. To access Azure Monitor, click on the Monitor tab in the Azure portal. You can find Activity logs, metrics, diagnostics logs, and alert rules as well as quick links to the advanced monitoring and analytics tools. Azure Monitor provides these three types of data – Activity Log, Metrics, and Diagnostics Logs.

Activity Log

Operational issues are often caused by a change in the underlying resource. The Activity Log keeps track of all the operations performed on your Azure resources. You can use the Activity Log section in the portal to quickly search for and identify operations that may impact your application. Another valuable feature of the portal is the ability to pin Activity Log queries to your dashboard to keep tabs on the operations you are interested in. For example, you can pin a query that filters Error-level events and keep track of their count in the dashboard. You can also perform instant analytics on the Activity Log via Log Analytics, part of Microsoft Operations Management Suite (OMS).
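To illustrate the kind of filter behind such a pinned query, here is a minimal Python sketch. The event dictionaries are illustrative stand-ins, not the exact Activity Log schema, which carries many more fields.

```python
# Illustrative Activity Log entries; the dictionaries are simplified
# stand-ins for the real schema.
activity_log = [
    {"operationName": "Microsoft.Compute/virtualMachines/restart/action",
     "level": "Informational"},
    {"operationName": "Microsoft.Network/networkSecurityGroups/write",
     "level": "Error"},
    {"operationName": "Microsoft.Storage/storageAccounts/delete",
     "level": "Error"},
]

def error_events(entries):
    """Mirror of a pinned portal query: keep only Error-level operations."""
    return [e for e in entries if e["level"] == "Error"]

print(f"{len(error_events(activity_log))} error-level operations")
```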

Metrics

With the new Metrics tab, you can browse all the available metrics for any resource and plot them on charts. When you find a metric that you are interested in, creating an alert rule is just a single click away. Most Azure services now provide out-of-the-box, platform-level metrics at 1-minute granularity and 30-day data retention, without the need for any diagnostics setup. The list of supported resources and metrics is available here. These metrics can be accessed via the new REST API for direct integration with 3rd party monitoring tools.
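As a sketch of how a third-party tool might call that REST API, the snippet below assembles a metrics request URL for a VM. The api-version value and $filter syntax shown are assumptions to verify against the current REST reference; the subscription and resource names are placeholders.

```python
from urllib.parse import urlencode

ARM_ENDPOINT = "https://management.azure.com"

def metrics_url(resource_id, metric_name, api_version="2016-09-01"):
    """Assemble an ARM request URL for a resource's platform metrics.
    The api-version and $filter syntax are assumptions; check the
    current Azure Monitor REST reference."""
    query = urlencode({
        "api-version": api_version,
        "$filter": f"name.value eq '{metric_name}'",
    })
    return f"{ARM_ENDPOINT}{resource_id}/providers/microsoft.insights/metrics?{query}"

resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm"
)
print(metrics_url(resource_id, "Percentage CPU"))
```

The request itself would then be issued with a bearer token obtained from Azure Active Directory.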

Diagnostics logs

Many Azure services provide diagnostics logs, which contain rich information about operations and errors that are important for auditing as well as troubleshooting purposes. In the new Diagnostic logs tab, you can manage diagnostics configuration for your resources and select your preferred method of consuming this data.

Alerts & automated actions

Azure Monitor provides you the data to quickly troubleshoot issues. But you want to be proactive and fix issues before they impact your customers. With alert rules, you can get notified whenever a metric crosses a threshold. You can receive email notifications or kick off an Automation runbook script or webhook to fix the issue automatically. You can also define your own metrics using the custom metrics and events APIs to send data to the Azure Monitor pipeline and create alert rules on them. With the ability to create alert rules on platform, custom, and app-level metrics, you now have more control over your resources. You can learn more about alert rules here.
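A webhook receiver for such an alert might look like the following sketch. The payload shape is modeled loosely on a metric alert notification and should be treated as an assumption; the rule and resource names are made up.

```python
import json

# Illustrative alert payload; the exact webhook schema is an assumption,
# not the documented one.
payload = json.dumps({
    "status": "Activated",
    "context": {
        "name": "cpu-over-80",
        "resourceName": "demo-vm",
        "condition": {
            "metricName": "Percentage CPU",
            "threshold": "80",
            "metricValue": "91.2",
        },
    },
})

def handle_alert(body):
    """Sketch of a webhook receiver deciding what to do when an alert fires."""
    alert = json.loads(body)
    if alert["status"] != "Activated":
        return "resolved; no action"
    cond = alert["context"]["condition"]
    return (f"remediate {alert['context']['resourceName']}: "
            f"{cond['metricName']} at {cond['metricValue']} "
            f"(threshold {cond['threshold']})")

print(handle_alert(payload))
```

In practice the "remediate" branch would trigger whatever automation fixes the condition, such as scaling out or restarting a service.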

Single monitoring dashboard

Azure provides you a unique single dashboard experience to visualize all your platform telemetry, application telemetry, analytics charts and security monitoring. You can share these dashboards with others on your team or clone a dashboard to build new ones.

Extensibility

The portal is a convenient way to get started with Azure Monitor. However, if you have many Azure resources and want to automate the Azure Monitor setup, you may want to use a Resource Manager template, PowerShell, the CLI, or the REST API. Also, if you want to manage access permissions to your monitoring settings and data, look at the monitoring roles.

Product integrations

You may need to consume Azure Monitor data but want to analyze it in your favorite monitoring tool. This is where product integrations come into play – you can route Azure Monitor data to the tool of your choice in near real time. Azure Monitor enables you to easily stream metrics and diagnostic logs to OMS Log Analytics to perform custom log searches and advanced alerting on data across resources and subscriptions. Azure Monitor metrics and logs for Web Sites and VMs can be easily routed to Visual Studio Application Insights, unlocking deep application performance management within the Azure portal.

The product integrations go beyond what you see in the portal. Our partners bring additional monitoring experiences, which you may wish to take advantage of. We are excited to share that there is a growing list of partner services available on Azure to best serve your needs. Please visit the supported product integrations list and give us feedback.

To wrap up, Azure Monitor helps you bring together the monitoring data from all your Azure resources and combine it with the monitoring tool of your choice to get a holistic view of your application. Here is a snapshot of a sample dashboard that we use to monitor one of our applications running on Azure. We are excited to launch Azure Monitor and are looking forward to the dashboards that you build. Review the Azure Monitor documentation to get started, and please keep the feedback coming.

Source: Azure

Azure Networking announcements for Ignite 2016

This week we are announcing several new Azure networking services and features to provide customers greater performance, higher availability, better security and more operational insights. We will continue to innovate to make it even easier and more seamless for customers to run their services in the public cloud. For an overview of all our exciting Azure Ignite announcements please see Jason Zander’s blog post.

Higher Performance

Azure has been developing Microsoft’s cloud-scale data center infrastructure for over nine years. Early on, we realized that building network infrastructure for mega-scale data centers and ever-increasing data transfer rates required fundamental shifts in networking technology. We have been working across the industry to promote and develop cutting-edge networking solutions, including Microsoft-developed hardware solutions.

Today we are announcing breakthrough advancements across our entire global server fleet that will improve networking bandwidth performance by 33% to 50%. This is achieved by utilizing hardware technologies such as NVGRE offload, which harnesses the network processing capabilities of the hardware. Windows and Linux VMs will experience these performance improvements while returning valuable CPU cycles to the application. Our worldwide deployment will complete in 2016, and once it does, we will update our VM sizes table to reflect these new performance benefits.

To provide even more performance, we are very excited to announce the public preview of Accelerated Networking. Accelerated Networking provides up to 25 Gbps of throughput and reduces network latency by up to 10x. Applications will benefit from a new generation of hardware technologies, including SR-IOV, which allows VMs to communicate directly with the hardware NIC, completely bypassing the hypervisor’s virtual switch. Along with higher bandwidth and lower latency, applications will experience reduced jitter and improved packets-per-second (PPS) performance. With Accelerated Networking, Azure SQL DB In-Memory OLTP transaction performance improved 1.5x. Also with this preview, DS15v2 and D15v2 VM sizes provide up to 25 Gbps of network throughput. More details on regional availability and a link to sign up for the preview are available at Accelerated Networking for a virtual machine.
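As a rough illustration of what a latency measurement looks like (real benchmarks between VMs use purpose-built tools rather than a script like this), the self-contained sketch below times a single TCP round trip over loopback:

```python
import socket
import threading
import time

# Toy latency probe: time one TCP round trip over loopback. This only
# illustrates the mechanics of an RTT measurement, not a rigorous benchmark.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(64))   # echo the payload straight back

threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection(server.getsockname()) as client:
    start = time.perf_counter()
    client.sendall(b"ping")
    client.recv(64)
    rtt_us = (time.perf_counter() - start) * 1e6

print(f"loopback round trip: {rtt_us:.0f} microseconds")
```

Run between two VMs (replacing the loopback address with a peer's address and averaging many samples), the same pattern shows the latency difference Accelerated Networking delivers.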

In addition, Azure Storage users will benefit from substantially increased IOPS performance based on these advancements, combined with newly developed storage specific offloads. Hardware now efficiently performs data transfers up to the line rate of the NIC. The roll out for Storage will also complete in 2016.

We are announcing the general availability of Virtual Network Peering (VNet Peering). VNet Peering connects Virtual Networks (VNets) in the same region, enabling direct full mesh connectivity. VMs in the peered VNets communicate with each other as if they are part of the same VNet, thus benefiting from high bandwidth and low latency. Hub & Spoke topologies are supported with Transit Routing through gateways. The VNet without a gateway still has cross-premises connectivity via the gateway in the peered VNet. VNet Peering works across subscriptions allowing for simplified service management.

This allows consolidation of VPN gateways and network virtual appliances in the same region, simplifying management and reducing costs. User-Defined Routes (UDRs) and Network Security Groups (NSGs) provide fine-grained control between the peered VNets. VNet Peering also enables co-existence of “Classic” VNets and Azure Resource Manager VNets, allowing for incremental adoption of the Azure Resource Manager model.
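The arithmetic behind consolidating topologies is simple: directly connecting every pair of n VNets takes n(n-1)/2 peering links, while a hub-and-spoke design with one hub and n-1 spokes needs only n-1. A quick sketch:

```python
def full_mesh_links(n):
    """Peering links needed so every pair of n VNets is directly connected."""
    return n * (n - 1) // 2

def hub_spoke_links(n):
    """Peering links needed when n VNets form one hub plus n-1 spokes."""
    return n - 1

for n in (3, 5, 10):
    print(f"{n} VNets: full mesh {full_mesh_links(n)}, "
          f"hub-and-spoke {hub_spoke_links(n)}")
```

The gap widens quadratically, which is why hub-and-spoke with transit routing through a shared gateway keeps management tractable as the number of VNets grows.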

Many enterprise customers use ExpressRoute to connect their private networks to Microsoft. ExpressRoute is supported by a large ecosystem of global telecom providers, cloud exchanges and service providers in over 35 locations. Today, we are introducing the UltraPerformance Gateway SKU for ExpressRoute that supports up to 10 Gbps throughput. This is a 5x improvement over the existing ExpressRoute HighPerformance gateway with a 99.95% availability SLA. With the UltraPerformance Gateway, customers can deploy even more networking intensive services and workloads into their virtual networks.

Cloud applications with demanding networking and massive real time data access requirements will greatly benefit from these new performance enhancements. We are ready for your workload.

IPv6

Azure now supports native IPv6 network connectivity for applications and services hosted on Azure Virtual Machines. The demand for IPv6 has never been greater, with the explosive growth in mobile devices, billions of Internet of Things (IoT) devices entering the market, and new compliance regulations. IPv6 has been used by internal Microsoft services such as Office 365 for over three years. We are now offering this feature to all Azure customers. Native IPv6 connectivity to the virtual machine is available for both Windows and Linux VMs.
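At the application level, serving traffic over IPv6 is largely a matter of opening IPv6 sockets. This minimal sketch binds and connects over the IPv6 loopback address (::1); inside an Azure VM with native IPv6 you would bind to the VM's assigned IPv6 address instead.

```python
import socket
import threading

# Minimal IPv6 round trip over the IPv6 loopback address (::1).
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.bind(("::1", 0))          # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()[:2]

def greet_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(b"hello over IPv6")

threading.Thread(target=greet_once, daemon=True).start()

client = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
client.connect((host, port))
greeting = client.recv(64)
client.close()
print(greeting.decode())
```

Applications that resolve peers with `socket.getaddrinfo` and iterate over the returned address families work over IPv4 and IPv6 without further changes.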

Higher Availability

Our new active-active Virtual Private Network (VPN) gateway, available with the High-Performance VPN gateway SKU, is recommended for production workloads. Each VPN gateway has two active instances, so customers can now implement dual redundancy for cross-premises VPN connections, increasing the availability of their VPN connections to their Azure VNets. Achieving high availability requires a complete end-to-end perspective, including the customer’s on-premises VPN devices and the use of different service providers to connect to the active-active VPN gateway. All customers should consider adopting the new active-active VPN gateway.
 

Our customers need more degrees of freedom for their Azure Load Balancer configurations. Today, we are making several announcements to increase design flexibility, enable new scenarios, and allow efficient resource consolidation.

We are announcing general availability of multiple VIPs on internal load balancers and new port reuse options across public and internal load balancers. In the following week, we will be previewing two additional capabilities in specific regions: multiple IP addresses on a Network Interface Card (NIC), and the ability for every NIC on a VM to have a public IP address, either directly on the NIC or through the load balancer. Check the service update page for the availability of these capabilities.

Network Virtual Appliances (NVAs) can now offer more flexible configurations. A firewall appliance can expose an Internet-facing service on NIC 1 and an internal management service on NIC 2 using the same backend machines. In addition, an NVA can use a single NIC to host multiple services by assigning an individual private IP address per customer or service. Security can be further enhanced using NSG rules targeted at individual IP addresses.

Another use case is SQL AlwaysOn with multiple listeners, which is now available in preview. You can also host multiple availability groups on the same cluster and optimize the number of active replicas.

Azure DNS

We are announcing the GA release of Azure DNS. Customers can now host domains in Azure DNS and manage DNS records using the same credentials, APIs, tools, billing and support as other Azure services. Azure DNS also benefits from Azure Resource Manager’s enterprise-grade security features, enabling role-based access control and detailed audit logs. Azure DNS supports multiple record types, including A, AAAA, CNAME, MX, NS, PTR, SOA, SRV and TXT, and comes with a 99.99% availability SLA.

Azure DNS uses a global network of name servers, providing exceptionally high availability, even in the event of a multi-region failure or network partitioning.  DNS queries are answered by the closest available DNS server for the fastest possible query performance.

With Azure DNS, IT Pros can manage DNS zones and records using either the Azure Portal, or through scripting with Azure PowerShell or the cross-platform Azure CLI. Developers can use the Azure DNS REST API or SDK to automate DNS record provisioning as part of their application workflows. In both cases, fast DNS record provisioning avoids the need to wait for new DNS records to propagate to the name servers.
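As a sketch of the developer path, creating a record set via the REST API reduces to a PUT on an ARM resource URL with a small JSON body. The helper below is illustrative only; the api-version and exact payload shape should be checked against the Azure DNS REST reference:

```python
import json

def build_record_set_request(subscription_id, resource_group, zone, name,
                             ip_addresses, ttl=3600):
    """Build the URL and JSON body for a PUT on an Azure DNS 'A' record set."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Network/dnsZones/"
        f"{zone}/A/{name}?api-version=2016-04-01"  # version is an assumption
    )
    body = {
        "properties": {
            "TTL": ttl,
            "ARecords": [{"ipv4Address": ip} for ip in ip_addresses],
        }
    }
    return url, json.dumps(body)

url, body = build_record_set_request(
    "00000000-0000-0000-0000-000000000000", "my-rg",
    "contoso.com", "www", ["10.0.0.4"])
print(url)
```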

More Secure

Last year we introduced Application Gateway, an Application Delivery Controller (ADC) offering Layer 7 load balancing as a service. This complements Azure Traffic Manager (DNS load balancing) for load balancing across geographical regions and Azure Load Balancer for Layer 4 load balancing within a region (availability set). Over the past year we have enhanced Application Gateway to better address web application requirements. These capabilities include SSL termination, round-robin load distribution, cookie-based session affinity, URL path-based routing, the ability to host multiple web applications on the same load balancer, rich diagnostics with access and performance logs, WebSocket support, VM scale set support and user-configurable health probes.

In our continued effort to provide enhanced application security, Application Gateway now supports end-to-end SSL encryption and user-configurable SSL policies. Customers can secure end-to-end communication from user requests to the backend using SSL/TLS, while taking advantage of routing rules set on the Application Gateway. The user’s SSL request is terminated at the gateway, which applies the configured routing rules and then re-encrypts the request before sending it to the backend. User-configurable SSL policies allow the customer to selectively disable older SSL/TLS protocol versions, further strengthening the security profile of the applications behind the Application Gateway.
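The idea behind a user-configurable SSL policy, refusing connections that use older protocol versions, can be illustrated with Python's ssl module (a conceptual stand-in, not the Application Gateway configuration itself):

```python
import ssl

def strict_tls_context():
    """Create a client context that refuses anything older than TLS 1.2,
    analogous to disabling old protocol versions in an SSL policy."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

ctx = strict_tls_context()
print(ctx.minimum_version)
```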

To provide even more advanced security to protect web applications from common vulnerabilities like SQL injection or cross-site scripting attacks, we are announcing the public preview of Web Application Firewall (WAF) as part of the Application Gateway service.

Application Gateway WAF offers simplified manageability of application security and comes preconfigured with protection against the most prevalent web vulnerabilities, as identified in the Open Web Application Security Project (OWASP) top 10. Customers can run Application Gateway WAF in either protection or detection-only mode. Application Gateway WAF also provides real-time metrics and alert reporting to continuously monitor web applications against exploits. Security rule customization and integration with Azure Security Center will be available soon.
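The difference between the two modes can be sketched with a toy rule engine (the two patterns below are invented for illustration; the real WAF uses the OWASP core rule set, not these regexes):

```python
import re

# Toy signatures, not the OWASP core rule set.
SAMPLE_RULES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE),
    "xss": re.compile(r"<\s*script", re.IGNORECASE),
}

def inspect(request_body, mode="prevention"):
    """Return (allowed, matched_rules). In detection-only mode the request
    is logged but still allowed through; in prevention mode it is blocked."""
    matched = [name for name, rule in SAMPLE_RULES.items()
               if rule.search(request_body)]
    allowed = True if mode == "detection" else not matched
    return allowed, matched

print(inspect("id=1 OR 1=1", mode="prevention"))
print(inspect("id=1 OR 1=1", mode="detection"))
```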

Network Monitoring and Diagnostics

As the cloud matures, it is important to offer not only the same networking performance but also the same level of visibility and insight as on-premises solutions. We are committed to continually enhancing our monitoring and diagnostics capabilities, empowering you to more easily manage your networks.

In continuation of our earlier announced capabilities in monitoring and diagnostics –

Network Security Group events and counters
SLB resource exhaustion event and Probe health status
Application Gateway performance and access logs
Audit logs for all Networking resources

We are releasing a series of new capabilities.

Customers can view performance metrics for an Application Gateway in the Azure Portal. These metrics require no additional configuration. The current release supports continuous, aggregated throughput statistics, and additional metrics will be supported soon.

Customers can configure threshold-based alerts on metrics to proactively monitor the network. An alert can send an email notification or invoke a webhook that integrates with third-party messaging services.
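A threshold-based alert reduces to a simple check over recent metric samples. The sketch below shows the shape of such an evaluation; the notification payload is made up for illustration and is not the actual Azure alert schema:

```python
import json

def evaluate_alert(metric_name, values, threshold, window=5):
    """Fire when the average over the last `window` samples exceeds the
    threshold; return None otherwise."""
    recent = values[-window:]
    average = sum(recent) / len(recent)
    if average <= threshold:
        return None
    return json.dumps({  # body a webhook receiver (or email bridge) would get
        "status": "Activated",
        "metricName": metric_name,
        "threshold": threshold,
        "observedAverage": average,
    })

payload = evaluate_alert("Throughput", [80, 95, 120, 130, 140], threshold=100)
print(payload)
```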

ExpressRoute users can get operational insights into routing configurations and network peering statistics.

Improved diagnostics for Network Security Groups (NSGs) and routes enable you to better diagnose complex network connectivity problems. Effective Routes provides an aggregated view of the user-defined routes (UDRs), system routes and BGP routes that affect a VM’s network traffic flow.
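That aggregation can be pictured as a per-prefix merge in which user-defined routes take precedence over BGP routes, which take precedence over system routes. This is a simplified model; the precedence order here is an assumption to verify against the Azure routing documentation, and longest-prefix matching at packet time is a separate step not shown:

```python
PRECEDENCE = {"User": 0, "BGP": 1, "System": 2}  # lower value wins

def effective_routes(routes):
    """routes: list of dicts with 'prefix', 'nextHop', 'source'.
    Return one winning route per address prefix."""
    best = {}
    for route in routes:
        current = best.get(route["prefix"])
        if current is None or PRECEDENCE[route["source"]] < PRECEDENCE[current["source"]]:
            best[route["prefix"]] = route
    return sorted(best.values(), key=lambda r: r["prefix"])

routes = [
    {"prefix": "0.0.0.0/0", "nextHop": "Internet", "source": "System"},
    {"prefix": "0.0.0.0/0", "nextHop": "10.0.0.4", "source": "User"},   # UDR to an NVA
    {"prefix": "10.1.0.0/16", "nextHop": "OnPremGateway", "source": "BGP"},
]
for r in effective_routes(routes):
    print(r["prefix"], "->", r["nextHop"], f"({r['source']})")
```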

The ability to easily verify the correctness of network security settings is critical. The effective security rules view offers a comprehensive yet simplified and intuitive way to understand the security rules configured on a VM/NIC.

You can expect a lot more capabilities in the coming months.

Looking forward

The cloud is ever evolving, and customers are deploying more demanding and complex workloads. This evolution means our mission and commitment is not an endpoint but a journey. We hope you spend time exploring the new services, features and capabilities, and provide your valuable feedback as we continue to create, enhance and deploy new networking technologies to meet your needs.
Quelle: Azure

Announcing Azure DNS General Availability

Today, we are excited to announce the General Availability of Azure DNS. As a global service, it is available for use in all public Azure regions.

We announced the Public Preview of Azure DNS at the Ignite conference in May of last year. Since then, the service has been used by thousands of customers, whose valuable feedback has helped drive engineering improvements and mature the service.

With this announcement, Azure DNS can now be used for production workloads. It is supported via Azure Support, and is backed by a 99.99% availability SLA.

As with other Azure services, Azure DNS offers usage-based billing with no up-front or termination fees. Azure DNS pricing is based on the number of hosted DNS zones and the number of DNS queries received (in millions). General availability pricing applies from 1 Nov 2016; until then, the 50% preview pricing discount continues to apply.

Key Features and Benefits

Azure DNS enables you to host your DNS domains and manage your DNS records in Azure.

Hosting and managing your DNS in Azure provides the following benefits:

Reliability – Azure DNS has the scale and redundancy built-in to ensure high availability for your domains—and is backed by our 99.99% uptime SLA. As a global service, Azure DNS is resilient to multiple Azure region failures and network partitioning for both its control plane and DNS serving plane.
Performance – Our global network of name servers uses ‘anycast’ networking to ensure your DNS queries are always routed to the closest server for the fastest possible response.
Ease of use – Your DNS zones and records in Azure DNS can easily be managed via the Azure Portal, Azure PowerShell, or cross-platform Azure CLI. Application integration is supported via our SDK or REST API.
Security – Azure DNS benefits from the same authentication and authorization features as other Azure services, including the ability to configure multi-factor authentication and role-based access controls.
Convenience – Hosting your DNS in Azure enables you to manage your Azure applications and their DNS records in one place, using a single set of credentials, with a single bill and with end-to-end support.

Key features of Azure DNS include:

All common DNS record types—Azure DNS supports all of the DNS record types most commonly used in customer domains: A, AAAA, CNAME, MX, NS, PTR, SOA, SRV and TXT.  (SPF records are supported via the TXT record type, as per the DNS RFCs.)
Easy migration – Migrating your existing domain hosting to Azure DNS is quick and easy using our zone file import feature, which enables existing DNS zones to be imported into Azure DNS in a single command. This feature is available via Azure CLI on Windows, Mac and Linux.
Fast record propagation – When you create a new DNS record in Azure DNS, that record is available at our name servers in just a few seconds. You can verify name resolution and move on to your next task without having to wait.
Record-level access control – As you would expect, Azure Resource Manager’s role-based access controls can be applied to restrict which users and groups are able to manage each DNS domain. In addition, these permissions can be applied on individual DNS record sets. This is particularly useful for large enterprises, in which shared zones are common, enabling different teams to self-manage their own DNS records without having access to records owned by other teams.
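To give a feel for the translation a zone import performs, here is a toy parser for a minimal BIND-style fragment (real zone files have $TTL and $ORIGIN directives, multi-line records and more, all handled by the actual import feature):

```python
def parse_zone(text):
    """Parse 'name ttl class type value' lines into record dicts,
    ignoring comments and blank lines."""
    records = []
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # drop comments
        if not line:
            continue
        name, ttl, _cls, rtype, value = line.split(None, 4)
        records.append({"name": name, "ttl": int(ttl), "type": rtype, "value": value})
    return records

zone = """
www  3600 IN A   203.0.113.10          ; web front end
mail 3600 IN MX  10 mail.contoso.com.
"""
print(parse_zone(zone))
```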

Third-Party Support

Men & Mice is a leading supplier of DNS, DHCP and IPAM (DDI) software solutions for enterprise networks operating on diverse and distributed platforms. Deployed on top of existing network infrastructure, the Men & Mice Suite consolidates hybrid environments and offers comprehensive tools for managing large portfolios of DNS domains and IP address blocks, via a powerful user interface or via the robust Men & Mice APIs.

The Men & Mice team has been working with the Azure DNS team during the Azure DNS Preview to implement full support for Azure DNS in the Men & Mice Suite. This combination offers customers with large domain portfolios the flexibility and power of the Men & Mice Suite for large-scale DNS management together with the low cost, convenience, availability and performance of hosting their domains in Azure DNS.

Magnus E. Bjornsson, CEO of Men & Mice, says: “We are proud partners of Microsoft and embrace the opportunity to join forces with this leader in the field of IT. Our mutual collaboration enhances the value of the open and adaptable Men & Mice Suite to our customers. We look forward to continuing the development of third-party support products in close cooperation with Microsoft.”
Quelle: Azure

Deep Learning, Simulation and HPC Applications with Docker and Azure Batch

The Azure Big Compute team is happy to announce version 1.0.0 of the Batch Shipyard toolkit, which enables easy deployment of batch-style Dockerized workloads to Azure Batch compute pools. Azure Batch enables you to run parallel jobs in the cloud without having to manage the infrastructure. It’s ideal for parametric sweeps, Deep Learning training with NVIDIA GPUs, and simulations using MPI and InfiniBand.

Whether you need to run your containerized jobs on a single machine or hundreds or even thousands of machines, Batch Shipyard blends features of Azure Batch — handling complexities of large scale VM deployment and management, high throughput, highly available job scheduling, and auto-scaling to pay only for what you use — with the power of Docker containers for application packaging.  Batch Shipyard allows you to harness the deployment consistency and isolation for your batch-style and HPC containerized workloads, and run them at any scale without the need to develop directly to the Azure Batch SDK.

The initial release of Batch Shipyard has the following major features:

Automated Docker Host Engine installation tuned for Azure Batch compute nodes
Automated deployment of required Docker images to compute nodes
Accelerated Docker image deployment at scale to compute pools consisting of a large number of VMs via private peer-to-peer distribution of Docker images among the compute nodes
Automated Docker Private Registry instance creation on compute nodes with Docker images backed to Azure Storage if specified
Automatic shared data volume support for:

Azure File Docker Volume Driver installation and share setup for SMB/CIFS backed to Azure Storage if specified
GlusterFS distributed network file system installation and setup if specified

Seamless integration with Azure Batch job, task and file concepts along with full pass-through of the Azure Batch API to containers executed on compute nodes
Support for Azure Batch task dependencies allowing complex processing pipelines and graphs with Docker containers
Transparent support for GPU accelerated Docker applications on Azure N-Series VM instances (Preview)
Support for multi-instance tasks to accommodate Dockerized MPI and multi-node cluster applications on compute pools with automatic job cleanup
Transparent assistance for running Docker containers utilizing InfiniBand/RDMA for MPI on HPC low-latency Azure VM instances (i.e., STANDARD_A8 and STANDARD_A9)
Automatic setup of SSH tunneling to Docker Hosts on compute nodes if specified
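The task-dependency support in the list above amounts to scheduling a directed acyclic graph: a task runs only after all of its dependencies complete. A minimal sketch of computing one valid run order with Kahn's algorithm might look like this (illustrative only, not the Batch SDK):

```python
from collections import deque

def run_order(tasks):
    """tasks: dict mapping task id -> list of dependency task ids.
    Return one valid execution order, or raise on a cycle."""
    dependents = {t: [] for t in tasks}
    indegree = {t: len(deps) for t, deps in tasks.items()}
    for task, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(task)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle detected")
    return order

pipeline = {  # hypothetical Dockerized processing pipeline
    "preprocess": [],
    "train": ["preprocess"],
    "evaluate": ["train"],
    "report": ["evaluate", "preprocess"],
}
print(run_order(pipeline))
```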

We’ve also made available an initial set of recipes that enable scenarios such as Deep Learning, Computational Fluid Dynamics (CFD), Molecular Dynamics (MD) and Video Processing with Batch Shipyard. In fact, we are aiming to make Deep Learning on Azure Batch an easy, low friction experience. Once you have the toolkit installed and have Azure Batch and Azure Storage credentials, you can get CNTK, Caffe or TensorFlow running in an Azure Batch compute pool in under 15 minutes. Below is a screenshot of CNTK running on a GPU-enabled STANDARD_NC6 VM via Batch Shipyard with nvidia-smi:

We hope to continue to expand the repertoire of recipes available for Batch Shipyard in the future.

The Batch Shipyard toolkit can be found on GitHub. We welcome any feedback and contributions!
Quelle: Azure

Microsoft Ignite: Azure Stack technical sessions

Last week my colleague, Wale Martins, posted a great summary of all the Microsoft Ignite sessions focused on Microsoft Azure Stack. For all of you attending Ignite, this is a good guide to learn more about some of the sessions we plan to deliver and when they will occur.

If you are like me, you crave as many details as you can get about each session, especially the technical sessions. This blog post provides more details about what you can expect from some technical sessions on Thursday and Friday at Ignite.

BRK3115: Becoming a Microsoft Azure Stack infrastructure rockstar

Are you ready to learn how to become a cloud administrator and Microsoft Azure Stack infrastructure-managing rockstar? If so, come start your journey to rockstar status in this session.

My colleague, Thomas Roettinger, program manager on the Azure Stack team, and I will be hosting a session about how we view infrastructure management in Azure Stack and what capabilities are included in the Azure Stack Technical Preview 2. We will deep dive into several areas including:

Integrating Azure Stack with your datacenter: What points of integration are available, why you should integrate, and how
Hardware management: How will cloud administrators manage the hardware supporting Azure Stack?
Monitoring: How are concepts like health and alerting enabled?

This session is just the start of our journey together. Feel free to follow us @chasat and @troettinger for more updates on these topics and come visit us while we are in Atlanta!

BRK3327: Dive deep into Microsoft Azure Stack IaaS

Azure delivers Infrastructure as a Service (IaaS) at hyper-scale, with a massive global infrastructure behind it. So how will Azure Stack deliver an IaaS offering that looks, tastes and feels just like Azure?

Scott Napolitan and Mallikarjun Chadalapaka, program managers on the Microsoft Azure Stack team, show you how Azure Stack delivers an IaaS experience that is consistent with Azure yet uses infrastructure at a fraction of the scale so it fits into your datacenter.

In this session, they will talk about how we took robust, scalable technologies directly from Azure and combined them with new features in Windows Server 2016 built for cloud. You will walk out with a better understanding of how the infrastructure works, and what IaaS scenarios are enabled by it. The session will dive deep into:

Compute, storage and networking resource providers and how they interact with the underlying infrastructure
The infrastructure and technologies that enable Azure Stack to surface simple resource primitives that can be consumed by the same APIs used with Azure
What to expect in terms of IaaS scenarios and features enabled in TP2
How cloud administrators will surface resources to their tenants

They also plan on doing some demos to help drive the learnings home and show how seamless the experience can be.

BRK3112: Learn about the community of templates for Azure Stack

Azure Stack provides consistency with Azure which allows you to reuse artifacts across clouds. But, how do you create those artifacts? And why are they so important?

Marc van Eijk and Ricardo Mendes will help you understand ARM templates across Azure and Azure Stack. They will cover the basics on how to get started with Azure Resource Manager templates including:

What tools you can use
How to create and deploy ARM templates
How to troubleshoot deployments

In this session, they will look at the existing community templates, how you can reuse them for your own purposes and how you can contribute to community templates in addition to:

An introduction to GitHub
Repositories, forks, clones, branches, commits and pull requests
End-to-end example on how to make an update to the Azure-Quickstart Templates
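For orientation, a minimal ARM template has the following shape. This is a bare-bones sketch; the storage account resource and its apiVersion are illustrative, and real quickstart templates add variables and outputs:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```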

They will complete the session with a more advanced, production-ready, deployment scenario. Join us and learn how to get started with ARM templates for Azure Stack and Azure.

BRK3141: Discuss Microsoft DevOps on Azure Stack

Do you want to learn how to give your organization’s developers the flexibility of the cloud with the security of your own datacenter? If so, attend this session and learn about how Azure Stack integrates into a modern DevOps workflow.

My colleagues Anjay Ajodha, Matthew McGlynn, and Shri Natarajan will showcase how Azure Stack allows you to adapt the skills you use to deploy and maintain complex applications in the cloud to your on-premises infrastructure. They'll go over some common examples of continuous integration and deployment, using both Microsoft-based and Linux-based stacks, and demonstrate the value of having your infrastructure defined and versioned through code. They’ll cover a breadth of concepts including:

What is DevOps and how does it bring value to your organization?
How can your developers define an application’s infrastructure through Resource Manager templates and deploy it to Azure Stack?
How can rapid changes be made to an application’s infrastructure?
How can you bring DevOps to your own organization?

They also have some exciting demos planned!

BRK3148: Learn about hybrid applications with Azure and Azure Stack

Do you build and operate applications that use public cloud resources and resources in your datacenter? Are you doing cutting edge Hybrid App development? If yes, this session is for you!

Please join my colleague Ricardo Mendes, program manager on the Azure Stack team, to learn how you can use Azure Stack to build solutions comprised of resources in the public cloud and on-premises in a consistent way, leveraging your knowledge of Azure.

This session will cover a broad set of concepts, including:

The different types of hybrid cloud apps
Why hybrid solutions?
Challenges in building those types of apps
Tooling and resources to get you started faster
Tips and tricks

Next Stop Atlanta

All our presenters are excited to share their knowledge of Azure Stack and answer your questions both after their sessions as well as in our booth on the Expo Hall floor. We hope you are looking forward to learning and networking next week. See you in Atlanta!
Quelle: Azure

Microsoft Azure Storage samples – cross platform quick starts and more

Getting started with new technology can sometimes be complex and time consuming. Often it requires searching for the right getting started and operational guidance that include samples and posting questions on forums.

We at Azure Storage continue to strive to improve our end-user experiences to make it easy for you to discover and try out a sample in just 5 minutes. As part of this, we want to make our samples more easily discoverable, fully functional and community-friendly.

1. Discoverable: We now have a landing page listing all the Azure Storage samples, with per-language GitHub repos. You can download the zip project file or fork the sample repo you are interested in. Most of our Storage content pages are already updated, or will soon be updated, with relevant sample links so you can easily pick up a sample, compile it and experiment with it. Beyond choosing either the emulator or an Azure Storage account with your credentials, the code should just work.

2. Relevant: In the samples page (image on right), we initially focused on functional code samples for the most common Azure Storage usage scenarios for Blobs, Tables, Queues and Files, written in .NET, Java, Node.js, Python, C++, Ruby and PHP. These are available for you to use right away! Similarly, we have invested in functional samples for scripting and tooling options (PowerShell, Azure CLI).

Following this, we plan to invest in a few scenario samples, such as data movement solutions, image upload from a mobile device, designing for high availability, and client-side encryption working across OS platforms and languages, that light up the rich service and client library capabilities of Storage while showcasing patterns and best practices. Also, as we build new features, we will do our best to keep these samples up to date so you can see your favorite new features in action.

3. Open Source: Finally, the code is open source and readily usable from GitHub, making community contributions to the samples repository possible. Simply propose your change and we will review the design and code, then merge it in. You can help build new samples or keep existing samples up to date as the client libraries and the service evolve.

As always, we are interested in your feedback, so please let us know what you think by commenting on this post. As you start leveraging individual samples, please provide actionable feedback in the GitHub repo and/or the comments on the Azure Storage documentation pages.

Go ahead, navigate to the Storage samples page, get started with the samples and explore how easy it is to build cloud applications on Storage!
Quelle: Azure

Umbraco uses Azure SQL Database Elastic Pools for thousands of CMS tenants in the cloud

Umbraco is a popular open-source content-management system (CMS) that can run anything from small campaign or brochure sites to complex applications for Fortune 500 companies and global media websites.

Azure SQL Database powers Umbraco-as-a-Service (UaaS), a software-as-a-service (SaaS) solution that eliminates the need for on-premises deployments, provides built-in scaling, and removes management overhead by enabling developers to focus on product innovation rather than solution management. Umbraco is able to provide all those benefits by relying on the flexible platform-as-a-service (PaaS) model offered by Microsoft Azure: SQL Database Elastic Database Pools.

To learn more about Umbraco's journey and how you can take advantage of Elastic Database Pools, take a look at this newly published case study.
Quelle: Azure

Ladies and gentlemen, start your logging

This post was authored by Matías Quaranta, Azure MVP, Autocosmos.

In this blog post, I’m going to show you how I migrated from ELK to Azure Log Analytics and lowered our operation costs by more than ninety percent and reduced our maintenance time.

Background

The need for logging is probably as old as computers, and its importance has grown hand in hand with the complexity of distributed architectures.

It is common nowadays for applications and platforms to span multiple servers, service instances, languages and even technologies. Keeping the status and logs of every part becomes a challenge.

In Autocosmos we work entirely on Azure, our whole architecture runs on a myriad of different Azure services, ranging from the most common ones like Azure App Services, Azure Redis Cache and Azure SQL Database to Azure Search, Azure DocumentDB and Microsoft Cognitive Services. We are a technology team that focuses on creating and deploying the best products and platforms for the Automotive and Car enthusiasts in Latin America by leveraging Azure’s SaaS and PaaS offerings.

A few years ago when we decided to implement a centralized logging platform, there weren’t a lot of options that could manage our diverse log output and structure, so we ended up implementing an ELK (Elasticsearch-Logstash-Kibana) Stack running on Azure Virtual Machines. I have to admit, it worked quite well and we were able to absorb JSON logs through Logstash and visualize through Kibana.

Pain points

Like I mentioned, we are a focused technology team of developers and engineers, and even though the ELK Stack worked, we were forced to maintain the virtual Linux environments, watch Elasticsearch cluster health and handle a lot of tasks that we, as developers, really didn’t care for. We felt our logging architecture was becoming a time sink to maintain and wasn’t truly adding value to our products.

Adding to that, Azure VMs are an IaaS offering, which makes any kind of scaling and network balancing a tedious and complex task for a developer. And all this effort just for our logging architecture.
Who would have thought that a videogame would open the door for a solution to our problems?

Unexpected opportunities

After reading how Halo 5 managed its logging pipeline, we got in touch with the amazing team behind Azure Log Analytics and shared with them our current scenario.

For those not familiar with Azure Log Analytics, it’s part of Microsoft Operations Management Suite but has separate pricing (including a free tier), and it allows for the collection, storage and analysis of log data from multiple sources, including Windows and Linux environments (on-premises or cloud).

To our surprise, they were already working on an implementation that would allow for custom log ingestion that wasn’t exclusively coming from a declared agent or source, through an HTTP API and using JSON objects.

This was exactly the solution we needed! We were already using JSON for our ELK Stack, so it was as easy as redirecting our log flow to the Azure Log Analytics HTTP Data Collector API. Our whole architecture started directing its logs there, taking advantage of the freedom of HTTP and JSON and the fact that Azure Log Analytics parses and understands each JSON attribute separately. We could send our front-end logs, our DocumentDB logs and our Cognitive Services logs; we could even send logs from inside Azure Functions. Though they were different sources with different information, it just worked!

It became the backbone of all our operational insights.

Implementing JSON logging by HTTP

Every service and application that is part of our solution runs ASP.NET Core, so we created a .NET wrapper for the Azure Log Analytics HTTP Data Collector API in the form of a NuGet package, open to contributions in a public GitHub repository. This wrapper lets us send JSON payloads from any part of our architecture and even supports object serialization.
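Our wrapper is .NET, but the signing scheme the Data Collector API uses is small enough to sketch in a few lines of Python. The string-to-sign layout below follows the public API documentation; verify it against the current reference before relying on it, and note the workspace ID and key here are fake:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key_b64, body, date_rfc1123):
    """Compute the Authorization header for the Log Analytics
    HTTP Data Collector API."""
    string_to_sign = (
        f"POST\n{len(body.encode('utf-8'))}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(shared_key_b64),       # workspace key is base64
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

auth = build_signature(
    "11111111-1111-1111-1111-111111111111",     # fake workspace ID
    base64.b64encode(b"fake-key").decode(),     # fake workspace key
    '{"level":"error","service":"api"}',
    "Mon, 26 Sep 2016 12:00:00 GMT",
)
print(auth)
```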

We use logging for post-mortem analysis: when an internal call in our APIs or a WebJob function fails, we send the Exception directly and then use the search experience in Azure Log Analytics to find the root of the problem:

Performance can also be measured with logs; we track the DocumentDB Request Units used on each call (using the exposed headers) to measure how our hourly quota is consumed and to alert us when we are reaching the point where we need to scale:
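That accounting is straightforward to sketch: each DocumentDB response reports its request charge in the x-ms-request-charge header, and the charges are summed against a budget. The meter class and threshold below are our own invention for illustration:

```python
class RequestUnitMeter:
    """Accumulate per-call request charges and flag approaching limits."""

    def __init__(self, hourly_budget):
        self.hourly_budget = hourly_budget
        self.consumed = 0.0

    def record(self, response_headers):
        # DocumentDB exposes the call's cost in this response header.
        self.consumed += float(response_headers.get("x-ms-request-charge", 0))

    def near_limit(self, threshold=0.8):
        """True once consumption crosses the given fraction of the budget."""
        return self.consumed >= self.hourly_budget * threshold

meter = RequestUnitMeter(hourly_budget=1000)
meter.record({"x-ms-request-charge": "650.2"})
meter.record({"x-ms-request-charge": "180.0"})
print(meter.consumed, meter.near_limit())
```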

Search experience

Azure Log Analytics is not just log storage; it includes a powerful Search feature that lets you delve deep into your log data with a well-documented query syntax, and it shows each JSON attribute parsed and filterable:

 

You can even create custom Dashboards with widgets and tiles of your choice and build the visualizations best-suited for your scenario:

Finally, you can configure alerts based on searches and time frames, and tie them to webhooks, Azure Automation runbooks or email notifications.

Conclusion

Implementing Azure Log Analytics meant that we not only reduced our maintenance time, but also lowered our operation costs (no more VMs!) by more than ninety percent. All our time is now devoted to what matters most for our business: creating and building the best products and platforms for car enthusiasts in Latin America.
Quelle: Azure