Azure Service Bus Premium Messaging now available in UK

We’re pleased to announce that Azure Service Bus Premium Messaging is now available in the UK.

Service Bus Premium Messaging supports a broader array of mission-critical cloud apps, providing all the messaging features of Service Bus queues and topics with predictable, repeatable performance and improved availability. It is now generally available in the UK.

For more general information about Service Bus Premium Messaging, see this July 2016 blog post and this January 2017 article "Service Bus Premium and Standard messaging tiers".

We are excited about this addition, and invite customers using this Azure region to try Azure Service Bus Premium Messaging today!
Source: Azure

Azure Data Factory February new features update

Azure Data Factory allows you to bring data from a rich variety of locations, in diverse formats, into Azure for advanced analytics and predictive modeling on top of massive amounts of data. We have been listening to your feedback and strive to continuously introduce new features and fixes to support more data ingestion and transformation scenarios. Moving into the new year, we would like to start a monthly feature-summary blog series so our users can easily keep track of new features and use them right away.

Here is a complete list of the Azure Data Factory updates for February. We will go through them one by one in this blog post.

New Oracle driver bundled with Data Management Gateway with performance enhancements
Service Principal authentication support for Azure Data Lake Store
Automatic table schema creation when loading into SQL Data Warehouse
Zip compression/decompression support
Support extracting data from arrays in JSON files
Ability to explicitly specify cloud copy execution location
Support updating the new Azure Resource Manager Machine Learning web service

New Oracle driver bundled with Data Management Gateway with performance enhancements

Introduction: Previously, to connect to an Oracle data source through the Data Management Gateway, users had to install the Oracle provider separately, which caused a variety of issues. Now, with the Data Management Gateway version 2.7 update, a new Microsoft driver for Oracle is included, so no separate Oracle driver installation is required. The new bundled driver provides better load throughput, with some customers observing a 5x-8x performance increase. Refer to the Oracle connector documentation page for details.

Configuration: The Data Management Gateway periodically checks for updates. You can check its version from the Help page as shown below. If you are running a version lower than 2.7, you can get the update directly from the Download Center. With Data Management Gateway version 2.7, the new driver is used automatically in the Copy Wizard when Oracle is the source. Learn more about Oracle linked service properties.
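For JSON-authored pipelines, the driver is selected through the `driverType` property on the Oracle linked service. A minimal sketch (host, credential, and gateway values are placeholders):

```json
{
  "name": "OnPremisesOracleLinkedService",
  "properties": {
    "type": "OnPremisesOracle",
    "typeProperties": {
      "driverType": "Microsoft",
      "connectionString": "Host=<host>;Port=<port>;Sid=<sid>;User Id=<user>;Password=<password>;",
      "gatewayName": "<gateway name>"
    }
  }
}
```

Omitting `driverType` falls back to the separately installed Oracle-provided driver.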

Service Principal authentication support for Azure Data Lake Store

Introduction: In addition to the existing user-credential authentication, Azure Data Factory now supports Service Principal authentication for accessing Azure Data Lake Store. The token used in the user-credential mode can expire after anywhere from 12 hours to 90 days, so scheduled pipelines require periodically reauthorizing the token, either manually or programmatically. Learn more about token expiration when moving data from Azure Data Lake Store using Azure Data Factory. With Service Principal authentication, the key expiration threshold is much longer, so we suggest using this mechanism going forward, especially for scheduled pipelines. Learn more about Azure Data Lake Store and Service Principals.

Configuration: In the Copy Wizard, you will see a new Authentication type option with Service Principal as default, shown below. 
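For JSON authoring, a sketch of an Azure Data Lake Store linked service using Service Principal authentication (all account names, IDs, and keys are placeholders):

```json
{
  "name": "AzureDataLakeStoreLinkedService",
  "properties": {
    "type": "AzureDataLakeStore",
    "typeProperties": {
      "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
      "servicePrincipalId": "<application id>",
      "servicePrincipalKey": "<application key>",
      "tenant": "<tenant id>",
      "subscriptionId": "<subscription id>",
      "resourceGroupName": "<resource group name>"
    }
  }
}
```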

Automatic table schema creation when loading into SQL Data Warehouse

Introduction: When copying data from on-premises SQL Server or Azure SQL Database to Azure SQL Data Warehouse using the Copy Wizard, if the table does not exist in the destination SQL Data Warehouse, Azure Data Factory can now automatically create the destination table using the schema of the source.

Configuration: In the Copy Wizard's Table mapping page, you now have the option to map to existing sink tables or to create new ones using the source tables’ schema. Data types are converted where needed to resolve incompatibilities between the source and destination stores, and you are warned about potential incompatibility issues in the Schema mapping page, as shown in the second image below. Learn more about auto table creation.


Zip compression/decompression support

Introduction: The Azure Data Factory Copy Activity can now unzip/zip your files with ZipDeflate compression type in addition to the existing GZip, BZip2, and Deflate compression support. This applies to all file-based stores, including Azure Blob, Azure Data Lake Store, Amazon S3, FTP/s, File System, and HDFS.

Configuration: You can find the option in Copy Wizard pages as shown below. Learn more from the specifying compression section in each corresponding connector topic.
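In dataset JSON, compression is configured on the dataset's typeProperties. A sketch for a blob dataset that reads zipped input (names and paths are placeholders):

```json
{
  "name": "ZippedBlobInput",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "AzureStorageLinkedService",
    "typeProperties": {
      "folderPath": "inputdata/",
      "compression": { "type": "ZipDeflate" }
    },
    "external": true,
    "availability": { "frequency": "Day", "interval": 1 }
  }
}
```

When such a dataset is used as the copy source, the service decompresses the files before writing to the sink; specifying the compression on a sink dataset zips the output instead.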

Extracting data from arrays in JSON files

Introduction: The Copy Activity now supports parsing arrays in JSON files. This addresses the feedback that, previously, an entire array could only be converted to a string or skipped. You can now extract data from an array, or cross-apply objects in an array with data under the root object.

Configuration: The Copy Wizard provides the option to choose how a JSON array should be parsed, as shown below. In this example, the elements in the “orderlines” array are parsed into “prod” and “price” columns. For more details on configuration and examples, check the specifying-JSON-format section in each file-based data store topic.
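In JSON authoring, this corresponds to the `JsonFormat` properties `jsonNodeReference` (the array to iterate over) and `jsonPathDefinition` (the column-to-path mapping). A sketch for the “orderlines” example above (the root-level "orderId"/"$.number" pair is a hypothetical illustration of cross-applying a root field):

```json
"format": {
  "type": "JsonFormat",
  "jsonNodeReference": "$.orderlines",
  "jsonPathDefinition": {
    "orderId": "$.number",
    "prod": "prod",
    "price": "price"
  }
}
```

Paths beginning with `$.` are resolved against the root object, while relative paths are resolved against each element of the referenced array.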

Ability to explicitly specify cloud copy execution location

Introduction: When copying data between cloud data stores, Azure Data Factory, by default, detects the region of your sink data store and picks the geographically closest service to perform the copy. If the region is not detectable or the service that powers the Copy Activity doesn’t have a deployment available in that region, you can now explicitly set the Execution Location option to specify the region of service to be used to perform the copy. Learn more about the globally available data movement.

Note: Your data will go through that region over the wire during copy.

Configuration: The Copy Wizard will prompt you for the Execution Location option on the Summary page in the cases mentioned above.
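When authoring in JSON, the region can be pinned through the `executionLocation` property in the Copy Activity's typeProperties; a sketch (the source/sink types and region are placeholders):

```json
"typeProperties": {
  "source": { "type": "BlobSource" },
  "sink": { "type": "AzureDataLakeStoreSink" },
  "executionLocation": "West US"
}
```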

Support updating the new Azure Resource Manager Machine Learning web service

Introduction: You can use the Machine Learning Update Resource Activity to update an Azure Machine Learning scoring web service, as a way to operationalize model retraining for scoring accuracy. In addition to the classic web service, Azure Data Factory now supports the new Azure Resource Manager-based Azure Machine Learning scoring web service using a Service Principal.

Configuration: The Azure Machine Learning linked service JSON now supports Service Principal authentication, so you can access the new web service endpoint. Learn more in the documentation on updating Azure Resource Manager-based scoring web services.
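A sketch of such a linked service in JSON (all endpoints, keys, and IDs are placeholders):

```json
{
  "name": "AzureMLLinkedService",
  "properties": {
    "type": "AzureML",
    "typeProperties": {
      "mlEndpoint": "<batch scoring endpoint URL>",
      "apiKey": "<api key>",
      "updateResourceEndpoint": "<Azure Resource Manager update-resource endpoint URL>",
      "servicePrincipalId": "<application id>",
      "servicePrincipalKey": "<application key>",
      "tenant": "<tenant id>"
    }
  }
}
```

The Service Principal properties are only needed when the update-resource endpoint targets the new Azure Resource Manager-based web service.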


Those are the new features we introduced in February. Have more feedback or questions? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.
Source: Azure

Azure Stream Analytics Tools for Visual Studio

Have you had a chance to try out the public preview of the Azure Stream Analytics Tools for Visual Studio yet? If not, read through this blog post to get a sense of the Stream Analytics development experience in Visual Studio. These tools are designed to provide an integrated Azure Stream Analytics development workflow in Visual Studio, helping you quickly author query logic and easily test, debug, and diagnose your Stream Analytics jobs.

Using these tools, you get not only a best-in-class query-authoring experience, but also the power of IntelliSense (code completion), syntax highlighting, and error markers. You can now test queries on your local development machine with representative sample data to speed up development iterations. Seamless integration with the Azure Stream Analytics service lets you submit jobs, monitor live job metrics, and export existing jobs to projects with just a few clicks. Moreover, you can naturally leverage Visual Studio's source control integration to manage your job configurations and queries.

Use Visual Studio Project to manage a Stream Analytics job

To create a brand-new job, just create a new project from the built-in Stream Analytics template. The job input and output are stored in JSON files, and the job query is saved in the Script.asaql file. Double-click the input and output JSON files inside the project, and you will find that their settings UI is very similar to the portal UI.

Using Visual Studio tools to author Stream Analytics queries

Double-click the Script.asaql file to open the query editor; its IntelliSense (code completion), syntax highlighting, and error markers make your authoring experience very efficient. Also, if a local input file is specified and defined correctly, or if you have sampled data from the live input sources, the query editor will suggest the column names of the input source when you enter the input name.
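Stream Analytics queries are written in a SQL-like language. As a hypothetical illustration of what goes in Script.asaql (the input name and event fields here are assumptions, not from the original post), a query that averages a sensor reading per device over 30-second tumbling windows might look like:

```sql
SELECT
    System.Timestamp AS WindowEnd,
    deviceId,
    AVG(temperature) AS AvgTemperature
FROM
    SensorInput TIMESTAMP BY eventTime
GROUP BY
    deviceId,
    TumblingWindow(second, 30)
```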

Testing locally with sample data

When you finish authoring the query, you can quickly test it on your local machine by specifying local sample data as input. This speeds up your development iterations.

View live job metrics

After you validate the query test result, click on “Submit to Azure” to create a streaming job under your Azure subscription. Once the job is created you can start and monitor your job inside the job view.

View and update jobs in Server Explorer

You can browse all your Stream Analytics jobs under your subscriptions in Server Explorer, and expand a job node to view its status and metrics. If needed, you can directly update job queries and job configurations, or export an existing job to a project.

How do I get started?

You first need to install Visual Studio (2015, 2013 Update 4, or 2012) and the Microsoft Azure SDK for .NET version 2.7.1 or later using the Web Platform Installer. Then get the latest Stream Analytics tools from the Download Center.

More information can be found in the tutorial: Use Azure Stream Analytics Tools for Visual Studio.
Source: Azure

SONiC: The networking switch software that powers the Microsoft Global Cloud

Running one of the largest clouds in the world, Microsoft has gained a lot of insight into building and managing a global, high performance, highly available, and secure network. Experience has taught us that with hundreds of datacenters and tens of thousands of switches, we needed to:

Use best-of-breed switching hardware for the various tiers of the network.
Deploy new features without impacting end users.
Roll out updates securely and reliably across the fleet in hours instead of weeks.
Utilize cloud-scale deep telemetry and fully automated failure mitigation.
Enable our Software-Defined Networking software to easily control all hardware elements in the network using a unified structure to eliminate duplication and reduce failures.

To address these requirements, Microsoft pioneered Software for Open Networking in the Cloud (SONiC), a breakthrough for network switch operations and management. Microsoft open-sourced this innovation to the community, making it available on our SONiC GitHub Repository. SONiC is a uniquely extensible platform, with a large and growing ecosystem of hardware and software partners, that offers multiple switching platforms and various software components.

Switch Abstraction Interface (SAI) accelerates hardware innovation

SONiC is built on the Switch Abstraction Interface (SAI), which defines a standardized API. Network hardware vendors can use it to develop innovative hardware platforms that achieve great speeds while keeping the programming interface to the ASIC (application-specific integrated circuit) consistent. Microsoft open sourced SAI in 2015. This approach enables operators to take advantage of the rapid innovation in silicon, CPU, power, port density, optics, and speed, while preserving their investment in one unified software solution across multiple platforms.

Figure 1. SONiC: one investment to unblock hardware innovation

Modular design with containers accelerates software evolution

SONiC is the first solution to break monolithic switch software into multiple containerized components. SONiC enables fine-grained failure recovery and in-service upgrades with zero downtime. It does this in conjunction with Switch State Service (SWSS), a service that takes advantage of open source key-value pair stores to manage all switch state requirements and drives the switch toward its goal state. Instead of replacing the entire switch image for a bug fix, you can now upgrade the flawed container with the new code, including protocols such as Border Gateway Protocol (BGP), without data plane downtime. This capability is a key element in the serviceability and scalability of the SONiC platform.

Containerization also enables SONiC to be extremely extensible. At its core, SONiC is aimed at cloud networking scenarios, where simplicity and managing at scale are the highest priority. Operators can plug in new components, third-party, proprietary, or open sourced software, with minimum effort, and tailor SONiC to their specific scenarios.

Figure 2. SONiC: plug and play extensibility

Monitoring and diagnostic capabilities are also key for large-scale network management. Microsoft continuously innovates in areas such as early detection of failures, fault correlation, and automated recovery mechanisms that require no human intervention. These innovations, such as NetBouncer and Everflow, are all available in SONiC, and they represent the culmination of years of operations experience.

Rapidly growing ecosystem

SONiC and SAI have gained wide industry support over the last year. Most major network chip vendors are supporting SAI on their flagship ASICs:

Barefoot Networks: Tofino

Broadcom Limited: Trident and Tomahawk

Cavium: XPliant

Centec Networks: Goldengate

Mellanox Technologies: Spectrum

Marvell Technology Group Ltd.: Prestera

Nephos Inc: Taurus

The community is actively adding new extensions and advanced capabilities to SAI releases:

Broadcom, Marvell, Barefoot, and Microsoft are driving advanced monitoring and telemetry in SAI to enable deep visibility into the ASIC and powerful analytic capabilities.

Mellanox, Cavium, Dell, and Centec are contributing richer protocol support to SAI for large-scale network scenarios; for example, MPLS, an enhanced ACL model, a bridge model, L2/L3 multicast, segment routing, and 802.1BR.

Dell and Metaswitch are bringing failure resiliency and performance to SAI by adding L3 fast reroute and BFD proposals.

The pipeline model driven by Mellanox and Broadcom, and the multi-NPU support driven by Dell, broaden the range of infrastructure to which SAI and the network stack built on top of it can be applied.

At the Open Compute Project U.S. Summit 2017, we will demonstrate 100-gigabit switches from multiple switch hardware companies. SONiC is enabled on their latest and fastest SKUs. The platforms that support SONiC are:

Arista Networks: 7050 and 7060 series

Centec Networks: E580 and E582 series

Dell Inc: S6000 ON, S6100-ON and Z9100-ON series

Edge-core Networks: AS7512 series, Wedge-100b

Facebook: Wedge-100

Ingrasys Technology Inc.: S9100 series

Marvell Technology Group Ltd.: RD-BC3-4825G6CG-A4 and RD-ARM-48XG6CG-A4 series

Mellanox Technologies: SN2700 series

With SONiC, the cloud community has choices: they can cherry-pick best-of-breed solutions. Partners are joining the ecosystem to make it richer:

Arista is offering containerized EOS components, such as EOS BGP, to run on top of SONiC. The SONiC community now has easy access to Arista’s rich EOS software suite.

Canonical has enabled SONiC as a snap for Ubuntu. This allows MAAS to deploy SONiC to switches, as well as using SONiC switches in server deployment. Unified network and server deployment will significantly improve operator agility.

Docker enabled using Swarm to manage the SONiC containers. With its simple and declarative service model, Swarm can manage and update SONiC at scale.

Mellanox is using SONiC to unleash the hardware-based packet-generation capabilities in the Spectrum ASIC. This is a highly sought-after capability that will help with diagnosis and troubleshooting.

By working with the community and our partner ecosystem, we’re looking to revolutionize networking for today and into the future.

SONiC is fully open sourced on GitHub and is available to industrial collaborators, researchers, students, and innovators alike. With the SONiC containerized approach and software simulation tools, developers can experience the switch software used in Microsoft Azure, one of the world’s largest cloud platforms, and contribute components that will benefit millions of customers. SONiC will benefit the entire cloud community, and we’re very excited for the increasingly strong partner momentum behind the platform.

Source: Azure

Enabling cloud workloads through innovations in Silicon

Today, I’ll be talking to a global community of attendees at the 2017 Open Compute Project (OCP) U.S. Summit about the exciting disruptions we see in the processor ecosystem, and how Microsoft is embracing the innovation created by these disruptions for the future of our cloud infrastructure.

The demand for cloud services continues to grow at a dramatic pace, and as a result we are always innovating and looking out for technology disruptions that can help us scale more rapidly. We see one such disruption taking shape in the silicon manufacturing industry. The economic slowdown of Moore’s Law and the tremendous scale of the high-end smartphone market have expanded the number of processor suppliers, leading to a “Cambrian” explosion of server options.

We’re announcing that we are driving innovation with ARM server processors for use in our datacenters. We have been working closely with multiple ARM server suppliers, including Qualcomm and Cavium, to optimize their silicon for our use. We have been running evaluations side by side with our production workloads, and what we see is quite compelling. The high instructions-per-cycle (IPC) counts, high core and thread counts, connectivity options, and integration that we see across the ARM ecosystem are very exciting and continue to improve.

Also, due to the scale required for certain cloud services, i.e. the number of machines allocated to them, it becomes more economically feasible to optimize the hardware to the workload instead of the other way around, even if that means changing the Instruction Set Architecture (ISA).

When we looked at the variety of server options, ARM servers stood out for us for a number of reasons:

There is a healthy ecosystem with multiple ARM server vendors which ensures active development around technical capabilities such as cores and thread counts, caches, instructions, connectivity options, and accelerators.
There is an established developer and software ecosystem for ARM. We have seen ARM servers benefit from the high-end cell phone software stacks, and this established developer ecosystem has significantly helped Microsoft in porting its cloud software to ARM servers.
We feel that ARM is well positioned for future ISA enhancements because its opcode sets are orthogonal. For example, with out-of-order execution running out of steam and with research looking at novel data-flow architectures, we feel that ARM designs are much more amenable to handle those new technologies without disrupting their installed software base.

We have been working closely with multiple ARM suppliers, including Qualcomm and Cavium, on optimizing their hardware for our datacenter needs. One of the biggest hurdles to enable ARM servers is the software. Rather than trying to port every one of our many software components, we looked at where ARM servers are applicable and where they provide value to us. We found that they provide the most value for our cloud services, specifically our internal cloud applications such as search and indexing, storage, databases, big data and machine learning. These workloads all benefit from high-throughput computing. 

To enable these cloud services, we’ve ported a version of Windows Server, for our internal use only, to run on ARM architecture. We have ported language runtime systems and middleware components, and we have ported and evaluated applications, often running these workloads side-by-side with production workloads.

During the OCP US Summit, Qualcomm, Cavium, and Microsoft will be demonstrating the version of Windows Server ported for our internal use running on ARM-based servers.

The Qualcomm demonstration will run on the Qualcomm Centriq 2400 ARM server processor, their recently announced 10nm, 48-core server processor with Qualcomm’s most advanced interfaces for memory, network, and peripherals.

The demonstration with Cavium runs on their flagship 2nd generation 64-bit ThunderX2 ARMv8-A server processor SoCs for datacenter, cloud and high performance computing applications.

Cavium (in collaboration with leading server supplier Inventec) and Qualcomm have each developed an Open Compute-based motherboard compatible with Microsoft’s Project Olympus, allowing us to seamlessly deploy these new servers in our datacenters.

We feel ARM servers represent a real opportunity and some Microsoft cloud services already have future deployment plans on ARM servers.  We are working with ARM Limited on design specifications and server standard requirements and we are committed to collaborate with the community on open standards to advance ARM64 servers for cloud services applications.

You can read about our other announcements during the 2017 Open Compute Project Summit at this blog.
Source: Azure

Ecosystem momentum positions Microsoft’s Project Olympus as de facto open compute standard

Last November we introduced Microsoft’s Project Olympus – our next generation cloud hardware design and a new model for open source hardware development. Today, I’m excited to address the 2017 Open Compute Project (OCP) U.S. Summit to share how this first-of-its-kind open hardware development model has created a vibrant industry ecosystem for datacenter deployments across the globe in both cloud and enterprise.

Since opening our first datacenter in 1989, Microsoft has developed one of the world’s largest cloud infrastructures, with servers hosted in over 100 datacenters worldwide. When we joined OCP in 2014, we shared the same server and datacenter designs that power our own Azure hyper-scale cloud, so organizations of all sizes could take advantage of innovations that improve the performance, efficiency, power consumption, and cost of datacenters across the industry. As of today, 90% of the servers we procure are based on designs that we have contributed to OCP.

Over the past year, we collaborated with the OCP to introduce a new hardware development model under Project Olympus for community based open collaboration. By contributing cutting edge server hardware designs much earlier in the development cycle, Project Olympus has allowed the community to contribute to the ecosystem by downloading, modifying, and forking the hardware design just like open source software. This has enabled bootstrapping a diverse and broad ecosystem for Project Olympus, making it the de facto open source cloud hardware design for the next generation of scale computing.

Today, we’re pleased to report that Project Olympus has attracted the latest in silicon innovation to address the exploding growth of cloud services and computing power needed for advanced and emerging cloud workloads such as big data analytics, machine learning, and Artificial Intelligence (AI). This is the first OCP server design to offer a broad choice of microprocessor options fully compliant with the Universal Motherboard specification to address virtually any type of workload.

We have collaborated closely with Intel to enable their support of Project Olympus with the next generation Intel Xeon Processors, codename Skylake, and subsequent updates could include accelerators via Intel FPGA or Intel Nervana solutions.

AMD is bringing hardware innovation back into the server market and will be collaborating with Microsoft on Project Olympus support for their next generation “Naples” processor, enabling application demands of high performance datacenter workloads.

We have also been working on a long-term project with Qualcomm, Cavium, and others to advance the ARM64 cloud servers compatible with Project Olympus. Learn more about Enabling Cloud Workloads through innovations in Silicon.

In addition to multiple choices of microprocessors for the core computation aspects, there has also been tremendous momentum to develop the core building blocks in the Project Olympus ecosystem for supporting a wide variety of datacenter workloads.

Today, Microsoft is announcing with NVIDIA and Ingrasys a new industry standard design to accelerate Artificial Intelligence in the next generation cloud. The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.

Our work with NVIDIA and Ingrasys is just one of numerous stand-out examples of how the open source strategy of Project Olympus has been embraced by the OCP community. We are pleased by the broad support across the industry partners that are now part of the Project Olympus ecosystem.

This is a significant moment as we usher in a new era of open source hardware development with the OCP community.  We intend for Project Olympus to provide a blueprint for future hardware development and collaboration at cloud speed. You can learn more and view the specification for Microsoft’s Project Olympus at our OCP GitHub branch.
Source: Azure

Networking innovations that drive the cloud disruption

Whether your organization is a one-person shop or a global enterprise, the cloud makes it easier to do business with customers and partners around the world, and it’s disrupting traditional IT practices in the process. Cloud computing reduces costs and improves service quality. It empowers organizations to quickly respond to changing demands for new services and lets them focus on their core business rather than IT. Enterprises are moving on-premises servers, datacenters, and services to the cloud. Startup companies are building cloud-based businesses from the ground up. Both are offloading infrastructure concerns to cloud providers, and they’re getting nearly unlimited on-demand compute, storage, networking, and software-as-a-service capabilities from almost anywhere in the world.

Ideally, cloud services “are secure, compliant, and just work.” Although you may realize that there is a massive datacenter infrastructure behind them, you may not know that the quality and integrity of the service you get depends on robust and secure networks. No matter how good the underlying server infrastructure is, a slow or low-quality network connection at any point between you, or your customer, and the datacenter will degrade your experience.

At Microsoft, our goal is to offer cloud services that any customer, anywhere in the world, can securely use without worrying about capacity constraints or service quality. We want customers to be able to get to their resources from anywhere, at any scale, with no limitations, easily and securely. However, when we started developing cloud offerings, we quickly realized that connecting an enterprise-grade cloud infrastructure across the entire world would take new networking technologies and novel management strategies. Traditional networking approaches wouldn’t give us the speed, reliability, and security needed by customers. To meet these challenges, we’ve been innovating and heavily investing in network infrastructure.

Figure 1. The Microsoft global network

Software-Defined Networking innovations

Hardware takes time to rack, stack, and configure, but we wanted to let customers scale their services up and down with a click. Using the pioneering work of Microsoft Research in Software-Defined Networking, we built a scalable and flexible datacenter network. It uses a virtualized layer 3 overlay network that is independent of the physical network topology. In this design, multiple virtual networks run on the same physical network in the datacenter, just like multiple virtual machines run isolated from each other on the same physical server. Each customer has their own isolated virtual network. Customers get on-demand network services with the network defined and managed in software, and are not tied to specific hardware.

For our Azure datacenters, we use scalable software load balancing developed by Microsoft Research which pushes networking intelligence into software. We eliminated hardware load balancers and replaced them with Azure Load Balancer running on standard compute servers. Now customers provision a load balancer with just a click. Although this approach is widely accepted now, it was novel in the industry when we first introduced it.

Performance

Azure handles the most demanding networking workloads by providing each virtual machine with up to 25 Gbps bandwidth with very low latency within each region. To achieve world-class performance, we optimized the network from an end-to-end perspective. Servers running in our datacenters have special network cards (NICs) that offload network processing to the hardware. We’ve also developed novel network acceleration techniques using Field Programmable Gate Array (FPGA) technology incorporated into our SmartNIC project introduced at SIGCOMM 2015. These network optimizations free up the server CPU to handle application workloads. Customers get a great networking experience. Linux and Windows virtual machines will experience these performance improvements while returning valuable CPU cycles to the application. When our world-wide deployment completes in April, we’ll update our VM Sizes table so you can see the expected networking throughput performance numbers for our virtual machines.

Another area we tackled to improve performance was how we connect our regional datacenters. Worldwide, Microsoft has regions comprised of multiple campuses and each campus may have multiple datacenters. The sheer physical size and power consumption of the physical network gear needed to connect our datacenters within these campuses presented a design challenge. We took the learnings from designing and deploying in-datacenter flat, high bandwidth networks and applied them to inter-DC networks. We created a regional network high bandwidth interconnection architecture using networking optics that Microsoft co-developed. These optics will be available from third-party suppliers, thereby allowing other cloud providers to take advantage of our innovations in this area. 

Global backbone and edge: Connecting from any client, anywhere

We wanted to optimize the network experience as customers connect to our cloud services from anywhere in the world. We built a backbone network that spans the globe, even laying undersea cables to Europe and Asia. All our datacenters connect to this global network that supports Azure, Bing, Dynamics 365, Office 365, OneDrive, Skype, Xbox, and soon LinkedIn. It’s one of the largest backbone networks in the world.

Our backbone network also connects to the Microsoft edge network, which in turn connects our peers to the Internet. We peer with thousands of networks with more than 4,500 connections globally. Our goal is that latency will be dictated only by the physics of the speed of light, not by the lack of a networking path or lack of sufficient bandwidth in a geography. Since network latency is a function of physical distance, we strategically locate our edge nodes close to customers. We continue to grow our network, with more than 130 edge nodes around the world. To further reduce latency, we allow customers to cache content at the edge nodes. We’ve developed Traffic Manager, a network service that automatically routes customer traffic to the closest datacenter and acts as a global cloud load balancer. Customers define a routing policy, and we implement it. In addition to performance, policies can be defined for disaster recovery and round-robin load sharing.
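The Traffic Manager policies described above can be sketched in a few lines: the performance policy conceptually routes each client to the endpoint with the lowest observed latency, while round-robin cycles through endpoints. The endpoint names and latency figures below are hypothetical illustration values, not real measurements:

```python
# Minimal sketch of two Traffic Manager-style routing policies.
# Endpoint names and latencies are hypothetical illustration values.

def route_by_performance(latency_ms):
    """Performance policy: pick the endpoint with the lowest measured latency."""
    return min(latency_ms, key=latency_ms.get)

def route_round_robin(endpoints, request_counter):
    """Round-robin load sharing: cycle through endpoints in order."""
    return endpoints[request_counter % len(endpoints)]

measured = {"westeurope": 22, "eastus": 95, "southeastasia": 180}
print(route_by_performance(measured))  # -> westeurope (lowest latency)
```

The real service also supports priority-based routing for disaster recovery, where a secondary endpoint is used only when the primary is unhealthy.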

At selected edge locations, we also allow private network connectivity via a service called ExpressRoute. Customers can use their existing network carriers to bypass the Internet to reach our cloud services. Customers enter our network at select edge locations; from there, they reach any of our datacenters. For example, customers can get connectivity to a local ExpressRoute site in Dallas and access their virtual machines in Amsterdam, Busan, Dublin, Hong Kong, Osaka, Seoul, Singapore, Sydney, Tokyo, or any of our other datacenters, with the traffic safely staying on our global backbone network. We have 37 ExpressRoute sites with one near each Azure datacenter, as well as other strategic locations. Every time we announce a new Azure region, like we recently did in Korea, you can expect that ExpressRoute will also be there.

Microsoft is a global software and services company. Our rich heritage, combined with years of operational experience running a global cloud infrastructure, permeates our perspective and approach. We’ve built a cloud-scale network using automation, and we’re moving intelligence from hardware to software. In future posts over the next few weeks, we’ll dive deeper into Microsoft networking technologies, detailing our journey as we continue to pioneer and transform the computing landscape in this exciting era of cloud disruption. We’ll cover topics such as our approach to open source networking, a deeper inspection of our global WAN, details on network security, and insights into how we manage a global network that supports some of the biggest services in the world. We hope you’ll join us for this insider’s tour of Microsoft networking.
Quelle: Azure

Backup Azure VMs using Azure Backup templates

This post was co-authored by Nilay Shah, Engineer, Azure Backup Product Group.

Azure Backup provides a cloud-first solution to back up VMs running in Azure and on-premises. You can back up Azure Windows VMs with application consistency and Azure Linux VMs with file-system consistency, without shutting down the virtual machines, using enterprise-level policy management. It supports backup of encrypted virtual machines, VMs running on Premium Storage, and VMs on Managed Disks. You can restore a full VM or individual disks and use them to create a customized VM, or retrieve individual files from a VM backup using Instant File Recovery.

Azure Resource Manager templates provide a declarative way to provision resources, and can be deployed using the Azure portal or PowerShell. You can get started with backing up and protecting your VMs running in Azure using these templates. In this blog post, we will explore how to use templates to create a Recovery Services vault and a backup policy, and then use them to back up a set of VMs.

Create Recovery Services vault

A Recovery Services vault is an Azure resource used by Azure Backup and Azure Site Recovery to provide backup and disaster recovery capabilities for workloads running either on-premises or in Azure. To create a Recovery Services vault, we can use the Azure quick start template called Create Recovery Services vault.
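To make the shape of such a template concrete, here is a minimal sketch of a Resource Manager template body for a Recovery Services vault, built as a Python dict. The apiVersion and sku values are assumptions for illustration; consult the quick start template for the exact schema:

```python
import json

# Sketch of a minimal ARM template for a Recovery Services vault.
# The apiVersion and sku values are illustrative assumptions; verify
# them against the quick start template before deploying.
def vault_template(vault_name, location):
    return {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.RecoveryServices/vaults",
                "name": vault_name,
                "apiVersion": "2016-06-01",
                "location": location,
                "sku": {"name": "RS0", "tier": "Standard"},
                "properties": {},
            }
        ],
    }

print(json.dumps(vault_template("contoso-vault", "westeurope"), indent=2))
```

In practice you would parameterize the vault name and location rather than hard-code them, which is exactly what the quick start template does.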

Every Recovery Services vault comes with a default policy, which has a daily backup schedule and retains backup copies for 30 days. You can use this policy to back up VMs, or create a custom backup policy. If you want a custom policy, you can combine vault creation and policy creation in a single quick start template, based on your organization's requirement for a weekly backup schedule or a daily backup schedule.
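The default policy's daily schedule and 30-day retention can be expressed in a template roughly like the following sketch. The property names approximate the Microsoft.RecoveryServices/vaults/backupPolicies resource schema; treat the exact field names and apiVersion as assumptions to verify against the quick start template:

```python
# Sketch of a backup policy resource mirroring the default policy:
# one daily backup, retained for 30 days. Field names approximate the
# backupPolicies schema and are assumptions; verify before deploying.
def daily_policy(vault_name, policy_name,
                 run_time_utc="2017-03-01T02:00:00Z", retention_days=30):
    return {
        "type": "Microsoft.RecoveryServices/vaults/backupPolicies",
        "name": f"{vault_name}/{policy_name}",
        "apiVersion": "2016-06-01",
        "properties": {
            "backupManagementType": "AzureIaasVM",
            "schedulePolicy": {
                "schedulePolicyType": "SimpleSchedulePolicy",
                "scheduleRunFrequency": "Daily",
                "scheduleRunTimes": [run_time_utc],
            },
            "retentionPolicy": {
                "retentionPolicyType": "LongTermRetentionPolicy",
                "dailySchedule": {
                    "retentionTimes": [run_time_utc],
                    "retentionDuration": {
                        "count": retention_days,
                        "durationType": "Days",
                    },
                },
            },
        },
    }
```

Swapping the schedule frequency and adding a weekly retention schedule is how a weekly variant of this policy would differ.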

Configure Backup on VMs

A Recovery Services vault can store backups for multiple VMs belonging to different resource groups. You can configure classic as well as Resource Manager VMs to be backed up to a Recovery Services vault using the quick start template Backup Classic and Resource Manager VMs. Most enterprises deploy their application-specific VMs to a single resource group; you can back them up to a vault in the same resource group as the VMs, or in a different one, using the simple quick start template Backup VMs to Recovery Services vault. Please be sure to check out Azure Backup best practices to optimally assign VMs to a backup policy.

Once configured for backup, you can restore a backed-up VM or take an on-demand backup using the Azure portal or PowerShell.

Related links and additional content

Want more details? Check out Azure Backup documentation and Azure Template walkthrough
Browse through Azure Quickstart templates
Learn more about Azure Backup
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates

Quelle: Azure

Protect and recover Hyper-V machines to premium storage with Azure Site Recovery

We are excited to announce support for replicating Hyper-V virtual machines (whether or not they are managed by System Center VMM) to premium storage accounts in Azure.

We recommend that you replicate I/O-intensive enterprise workloads to Premium Storage, which provides high IOPS and high disk throughput per VM with extremely low latencies for read operations. At the time of a failover to Azure, workloads replicating to Premium Storage come up on Azure virtual machines running on premium storage and achieve high levels of performance, in terms of both throughput and latency.

To set up replication to premium storage, you will need:

A premium storage account: When you replicate your on-premises virtual machines/physical servers to premium storage, all the data residing on the protected machine’s disks is replicated to the premium storage account.
A standard storage account: After the initial phase of replicating disk data is complete, all changes to the on-premises disk data are tracked continuously and stored as replication logs in the standard storage account.

 

Below are a few considerations to keep in mind when using premium storage:

Replication to premium storage is supported for both Classic and Resource Manager storage accounts.
A copy frequency of 5 minutes or 15 minutes (configured as a setting in replication policies) is supported for premium storage. This is based on the limit of 100 snapshots per blob supported by premium storage.
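To see why the snapshot limit constrains copy frequency, here is some back-of-the-envelope arithmetic. It assumes, for illustration only, that one snapshot is consumed per copy interval; the actual bookkeeping Site Recovery performs may differ:

```python
# Illustrative arithmetic only: assuming one snapshot per copy interval,
# the 100-snapshots-per-blob limit of premium storage bounds how much
# replication history a single blob can hold.
def history_hours(copy_frequency_minutes, max_snapshots=100):
    return copy_frequency_minutes * max_snapshots / 60

print(history_hours(5))   # ~8.3 hours of history at 5-minute frequency
print(history_hours(15))  # 25.0 hours of history at 15-minute frequency
```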

 

Support Matrix for premium storage

Scenario                                                               | Replication to premium storage
-----------------------------------------------------------------------|-------------------------------
Hyper-V virtual machines (managed or not managed by System Center VMM) | Yes
VMware virtual machines / physical servers                             | Yes

 

To understand more about how premium storage works, including its performance and scalability targets, refer to the detailed Premium Storage documentation from the Azure Storage team.

Ready to start using ASR? Check out additional product information to start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want us to enable next.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.
Quelle: Azure

Azure Data Lake Tools for VSCode (Preview) – March Update

Continuing our journey to deliver Azure Data Lake Tools for VSCode with better cross-platform support, we aim to meet developers where they are on Mac, Linux, and Windows, and to provide a first-class, lightweight code-editor experience for U-SQL. We are pleased to announce our March release, which includes several important features.

Primary New Features

Assembly Registration – Our updated “ADL: Register Assembly” feature not only lets you quickly register an assembly, but also uses built-in intelligence to detect dependencies, automatically register them, and upload dependency files if needed.

ADLS Integration – Following the strong momentum of Azure Data Lake, we have enabled seamless ADLS integration so you can easily browse and preview your ADLS files, either through the command palette or the right-click menu.

To browse storage:

a) Use ADL: List Storage Path to navigate to your ADLS folders, files, and objects.

b) Right-click a path string and choose ADL: List Storage Path to automatically list all folders and files under that path.

To preview files:

a) Use ADL: Preview Storage File to preview your ADLS files.

b) Right-click a path string and choose ADL: Preview Storage File to automatically open a preview of the file.

Enhanced Language Service – To boost your productivity and solidify the U-SQL authoring experience, Go To Definition and Auto Format features have been added.

Open Sample Code – To improve the getting-started experience, ADL: Open Sample Script has been enhanced to facilitate your first use of the tool and help you get familiar with the U-SQL language.

How do I get started?

Please first install Visual Studio Code and download the prerequisite files, including JRE 1.8.x, Mono 4.2.x (for Linux and Mac), and .NET Core (for Linux and Mac). Then get the latest ADL Tools from the VSCode Extension repository or the VSCode Marketplace by searching for “Azure Data Lake Tool for VSCode”.

For more information about Azure Data Lake Tool for VSCode, please see:

More information on using Data Lake Tools for VSCode
For ADL Tools for VSCode User Instructions in Video: ADL for VSCode Video
For the getting started information on Data Lake Analytics: Tutorial: get started with Azure Data Lake Analytics
For the information on developing assemblies: Develop U-SQL assemblies for Azure Data Lake Analytics jobs

 

If you have questions, please don’t hesitate to contact the Azure Data Lake Dev Tooling Team at adldevtool@microsoft.com.
Quelle: Azure