AWS Database Migration Service Enables Individual Table Reload

AWS Database Migration Service (DMS) continues to make migrations easier by adding the ability to reload data for an individual table. When a database table is migrated as part of a larger migration task, you can choose to reload it while the migration is already running, without having to reload all the tables in the task. This adds a new level of flexibility so you can complete a migration more quickly than before. DMS can move data between heterogeneous database engines, giving you the freedom to migrate your databases unchanged to AWS or to replace expensive, restrictive commercial databases with cloud-enabled, cost-effective ones.
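The reload can also be triggered programmatically. The Python sketch below builds the request that boto3's `reload_tables` call accepts; the task ARN, schema, and table names are placeholder values, not real resources.

```python
# Sketch: building the request for the DMS ReloadTables API (boto3).
# All identifiers below are placeholder values.

def build_reload_request(task_arn, schema, table):
    """Return the parameter dict that dms.reload_tables() accepts."""
    return {
        "ReplicationTaskArn": task_arn,
        "TablesToReload": [
            {"SchemaName": schema, "TableName": table},
        ],
    }

params = build_reload_request(
    "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK",
    "sales",
    "orders",
)
# With boto3 installed and credentials configured, this would be:
#   boto3.client("dms").reload_tables(**params)
```

Only the named tables are reloaded; the rest of the task keeps replicating.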
Source: aws.amazon.com

Amazon Aurora Supports Cross-Account Encrypted Snapshot Sharing

Amazon Aurora now supports sharing encrypted snapshots between AWS accounts. This follows our recent announcement of encrypted database replication and snapshot copy across regions, and extends the Aurora security model to separate accounts that share encryption keys. The owner of the target account can copy the snapshot or restore a database instance from it.
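In API terms, sharing means adding the target account ID to the snapshot's restore attribute. The sketch below shows the parameters boto3's `modify_db_cluster_snapshot_attribute` expects; the snapshot name and account ID are placeholders. Note that for an encrypted snapshot the target account must also be granted access to the KMS key used to encrypt it, and encrypted snapshots cannot be shared publicly.

```python
# Sketch: parameters for rds.modify_db_cluster_snapshot_attribute (boto3).
# Snapshot identifier and account ID are placeholder values.

def build_share_request(snapshot_id, target_account_id):
    """Grant another account permission to copy/restore the snapshot."""
    return {
        "DBClusterSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",
        "ValuesToAdd": [target_account_id],
    }

share_params = build_share_request("my-encrypted-aurora-snap", "210987654321")
# With boto3:
#   boto3.client("rds").modify_db_cluster_snapshot_attribute(**share_params)
```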
Source: aws.amazon.com

Amazon Aurora Announces Encryption Support for Globally Distributed Database Deployments

We are pleased to announce new Amazon Aurora capabilities for global database deployment. Replication across AWS regions now supports encrypted databases, so you can scale read operations to a location close to your users, build a disaster recovery architecture that spans the globe, and easily migrate data from one region to another, all the while maintaining full encryption at rest and in transit.
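At the API level, an encrypted cross-region replica is created by pointing `create_db_cluster` in the destination region at the source cluster's ARN and supplying a KMS key that lives in the destination region (KMS keys are region-scoped). The sketch below shows the relevant boto3 parameters; all identifiers are placeholders.

```python
# Sketch: parameters for rds.create_db_cluster (boto3) to create an
# encrypted cross-region Aurora replica. All identifiers are placeholders.

def build_replica_request(cluster_id, source_arn, dest_kms_key_id):
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora",
        "StorageEncrypted": True,
        # ARN of the encrypted source cluster in the source region:
        "ReplicationSourceIdentifier": source_arn,
        # KMS key in the *destination* region:
        "KmsKeyId": dest_kms_key_id,
    }

replica_params = build_replica_request(
    "aurora-replica-eu",
    "arn:aws:rds:us-east-1:123456789012:cluster:aurora-primary",
    "arn:aws:kms:eu-west-1:123456789012:key/00000000-0000-0000-0000-000000000000",
)
```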
Source: aws.amazon.com

Helping PTG attendees and other developers get to the OpenStack Summit

Although the OpenStack design events have changed, developers and operators still have a critical perspective to bring to the OpenStack Summits. At the PTG, a common whisper heard in the hallways was, “I really want to be at the Summit, but my [boss/HR/approver] doesn’t understand why I should be there.” To help you out, we took our original “Dear Boss” letter and made a few edits for the PTG crowd. If you’re a contributor or developer who wasn’t able to attend the PTG, with a few edits, this letter can also work for you. (Not great with words? Foundation wordsmith Anne can help you out: anne at openstack.org)
 
Dear [Boss],
 
I would like to attend the OpenStack Summit in Boston, May 8-11, 2017. At the Pike Project Team Gathering (PTG) in Atlanta, I was able to learn more about the new development event model for OpenStack. In the past I attended the Summit to participate in the Design Summit, which encompassed the feedback, planning, design, and development work that goes into creating OpenStack releases. One challenge was that the Design Summit did not leave enough time for “head down” work within upstream project teams (some teams ended up traveling to team-specific mid-cycle sprints to compensate for that). At the Pike PTG, we were able to kick-start the Pike cycle development, working heads down for a full week. We made great progress on both single-project and OpenStack-wide goals, which will improve the software for all users, including our organization.
 
Originally, I, like many other devs, was under the impression that we no longer needed to attend the OpenStack Summit. However, after a week at the PTG, I see that I have a valuable role to play at the Summit’s “Forum” component. The Forum is where I can gather direct feedback and requirements from operators and users, and express my own and our organization’s opinions about OpenStack’s future direction. The Forum will let me engage with other groups facing similar challenges, project goals, and solutions.
 
While our original intent may have been to send me only to the PTG, I strongly urge us to reconsider. The Summit is still an integral part of the OpenStack design process, and I think my attendance is beneficial to both my professional development and our organization. Because of my participation in the PTG, I received a free pass to the Summit, which I must redeem by March 14.
 
Thank you for considering my request.
[Your Name]
Source: openstack.org

Networking innovations that drive the cloud disruption

Whether your organization is a one-person shop or a global enterprise, the cloud makes it easier to do business with customers and partners around the world, and it’s disrupting traditional IT practices in the process. Cloud computing reduces costs and improves service quality. It empowers organizations to respond quickly to changing demands for new services and lets them focus on their core business rather than IT. Enterprises are moving on-premises servers, datacenters, and services to the cloud. Startup companies are building cloud-based businesses from the ground up. Both are offloading infrastructure concerns to cloud providers, and they’re getting nearly unlimited on-demand compute, storage, networking, and software-as-a-service capabilities from almost anywhere in the world.

Ideally, cloud services “are secure, compliant, and just work.” Although you may realize that there is a massive datacenter infrastructure behind them, you may not know that the quality and integrity of the service you get depends on robust and secure networks. No matter how good the underlying server infrastructure is, a slow or low-quality network connection at any point between you, or your customer, and the datacenter will degrade your experience.

At Microsoft, our goal is to offer cloud services that any customer, anywhere in the world, can securely use without worrying about capacity constraints or service quality. We want customers to be able to get to their resources from anywhere, at any scale, with no limitations, easily and securely. However, when we started developing cloud offerings, we quickly realized that connecting an enterprise-grade cloud infrastructure across the entire world would take new networking technologies and novel management strategies. Traditional networking approaches wouldn’t give us the speed, reliability, and security needed by customers. To meet these challenges, we’ve been innovating and heavily investing in network infrastructure.

Figure 1. The Microsoft global network

Software-Defined Networking innovations

Hardware takes time to rack, stack, and configure, but we wanted to let customers scale their services up and down with a click. Using the pioneering work of Microsoft Research in Software-Defined Networking, we built a scalable and flexible datacenter network. It uses a virtualized layer 3 overlay network that is independent of the physical network topology. In this design, multiple virtual networks run on the same physical network in the datacenter, just like multiple virtual machines run isolated from each other on the same physical server. Each customer has their own isolated virtual network. Customers get on-demand network services with the network defined and managed in software, and are not tied to specific hardware.

For our Azure datacenters, we use scalable software load balancing developed by Microsoft Research which pushes networking intelligence into software. We eliminated hardware load balancers and replaced them with Azure Load Balancer running on standard compute servers. Now customers provision a load balancer with just a click. Although this approach is widely accepted now, it was novel in the industry when we first introduced it.

Performance

Azure handles the most demanding networking workloads by providing each virtual machine with up to 25 Gbps of bandwidth at very low latency within each region. To achieve world-class performance, we optimized the network from an end-to-end perspective. Servers running in our datacenters have specialized network interface cards (NICs) that offload network processing to hardware. We’ve also developed novel network acceleration techniques using Field Programmable Gate Array (FPGA) technology, incorporated into our SmartNIC project introduced at SIGCOMM 2015. These network optimizations free up the server CPU to handle application workloads, so customers get a great networking experience. Linux and Windows virtual machines will see these performance improvements while returning valuable CPU cycles to the application. When our worldwide deployment completes in April, we’ll update our VM Sizes table so you can see the expected networking throughput numbers for our virtual machines.

Another area we tackled to improve performance was how we connect our regional datacenters. Worldwide, Microsoft has regions comprised of multiple campuses, and each campus may have multiple datacenters. The sheer physical size and power consumption of the network gear needed to connect the datacenters within these campuses presented a design challenge. We took the lessons learned from designing and deploying flat, high-bandwidth in-datacenter networks and applied them to inter-datacenter networks, creating a regional high-bandwidth interconnection architecture using networking optics that Microsoft co-developed. These optics will be available from third-party suppliers, allowing other cloud providers to take advantage of our innovations in this area.

Global backbone and edge: Connecting from any client, anywhere

We wanted to optimize the network experience as customers connect to our cloud services from anywhere in the world. We built a backbone network that spans the globe, even laying undersea cables to Europe and Asia. All our datacenters connect to this global network that supports Azure, Bing, Dynamics 365, Office 365, OneDrive, Skype, Xbox, and soon LinkedIn. It’s one of the largest backbone networks in the world.

Our backbone network also connects to the Microsoft edge network, which in turn connects our peers to the Internet. We peer with thousands of networks with more than 4,500 connections globally. Our goal is that latency will be dictated only by the physics of the speed of light, not by the lack of a networking path or lack of sufficient bandwidth in a geography. Since network latency is a function of physical distance, we strategically locate our edge nodes close to customers. We continue to grow our network, with more than 130 edge nodes around the world. To further reduce latency, we allow customers to cache content at the edge nodes. We’ve developed Traffic Manager, a network service that automatically routes customer traffic to the closest datacenter and acts as a global cloud load balancer. Customers define a routing policy, and we implement it. In addition to performance, policies can be defined for disaster recovery and round-robin load sharing.
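As a concrete illustration of routing policies, a Traffic Manager profile declares a routing method that determines how endpoints are selected. The Python dict below sketches the shape of such a profile as it might appear in a Resource Manager template; the profile name and DNS label are hypothetical placeholders.

```python
# Sketch: the shape of a Traffic Manager profile resource as declared
# in a Resource Manager template. Names and DNS labels are placeholders.

def traffic_manager_profile(name, dns_label, routing_method):
    # Routing methods include "Performance" (closest endpoint),
    # "Priority" (disaster-recovery failover), and "Weighted"
    # (round-robin-style load sharing).
    assert routing_method in {"Performance", "Priority", "Weighted"}
    return {
        "type": "Microsoft.Network/trafficManagerProfiles",
        "name": name,
        "location": "global",
        "properties": {
            "trafficRoutingMethod": routing_method,
            "dnsConfig": {"relativeName": dns_label, "ttl": 30},
            "monitorConfig": {"protocol": "HTTPS", "port": 443, "path": "/"},
        },
    }

profile = traffic_manager_profile("my-tm-profile", "myapp", "Performance")
```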

At selected edge locations, we also allow private network connectivity via a service called ExpressRoute. Customers can use their existing network carriers to bypass the Internet to reach our cloud services. Customers enter our network at select edge locations; from there, they reach any of our datacenters. For example, customers can get connectivity to a local ExpressRoute site in Dallas and access their virtual machines in Amsterdam, Busan, Dublin, Hong Kong, Osaka, Seoul, Singapore, Sydney, Tokyo, or any of our other datacenters, with the traffic safely staying on our global backbone network. We have 37 ExpressRoute sites with one near each Azure datacenter, as well as other strategic locations. Every time we announce a new Azure region, like we recently did in Korea, you can expect that ExpressRoute will also be there.

Microsoft is a global software and services company. Our rich heritage, combined with years of operational experience running a global cloud infrastructure, permeates our perspective and approach. We’ve built a cloud-scale network using automation, and we’re moving intelligence from hardware to software. In future posts over the next few weeks, we’ll dive deeper into Microsoft networking technologies, detailing our journey as we continue to pioneer and transform the computing landscape in this exciting era of cloud disruption. We’ll cover topics such as our approach to open source networking, a deeper inspection of our global WAN, details on network security, and insights into how we manage a global network that supports some of the biggest services in the world. We hope you’ll join us for this insider’s tour of Microsoft networking.
Source: Azure

Backup Azure VMs using Azure Backup templates

This post was co-authored by Nilay Shah, Engineer, Azure Backup Product Group.

Azure Backup provides a cloud-first solution to back up VMs running in Azure and on-premises. You can back up Azure Windows VMs with application consistency and Azure Linux VMs with file-system consistency, without needing to shut down the virtual machines, using enterprise-level policy management. It supports backup of encrypted virtual machines, VMs running on Premium Storage, and VMs on Managed Disks. You can restore a full VM or individual disks and use them to create a customized VM, or retrieve individual files from a VM backup using Instant File Recovery.

Azure Templates provide a way to provision resources using declarative templates. These templates can be deployed using the Azure portal or PowerShell. You can get started with backing up and protecting your VMs running in Azure using these templates. In this blog post, we will explore how to create a Recovery Services vault and a backup policy, and then use them to back up a set of VMs.

Create Recovery Services vault

A Recovery Services vault is an Azure resource used by Azure Backup and Azure Site Recovery to provide backup and disaster recovery capabilities for workloads running either on-premises or in Azure. To create a Recovery Services vault, you can use the Azure quickstart template Create Recovery Services vault.
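For illustration, the core of such a template is a single Microsoft.RecoveryServices/vaults resource. The Python dict below sketches the essential shape of that template; the vault name is parameterized, and the schema URL and apiVersion shown are illustrative rather than a definitive copy of the quickstart template.

```python
# Sketch: essential shape of a Resource Manager template that creates a
# Recovery Services vault. The apiVersion and schema are illustrative.

vault_template = {
    "$schema": "https://schema.management.azure.com/schemas/"
               "2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"vaultName": {"type": "string"}},
    "resources": [
        {
            "type": "Microsoft.RecoveryServices/vaults",
            "apiVersion": "2016-06-01",
            "name": "[parameters('vaultName')]",
            "location": "[resourceGroup().location]",
            "sku": {"name": "RS0", "tier": "Standard"},
            "properties": {},
        }
    ],
}
```

A template of this shape can be deployed from the portal or with the Resource Manager PowerShell cmdlets.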

Every Recovery Services vault comes with a default policy, which has a daily backup schedule and retains backup copies for 30 days. You can use this policy to back up VMs or create a custom backup policy. If you want a custom policy, you can combine vault creation and policy creation in a single quickstart template, based on your organization's requirement for a weekly or daily backup schedule.

Configure Backup on VMs

A Recovery Services vault stores backups for multiple VMs belonging to different resource groups. You can configure classic as well as Resource Manager VMs to be backed up in a Recovery Services vault using the quickstart template Backup Classic and Resource Manager VMs. Most enterprises deploy their application-specific VMs to a single resource group; you can back those VMs up to a vault in the same resource group as the VMs, or to a different one, using the simple quickstart template Backup VMs to Recovery Services vault. Please be sure to check out Azure Backup best practices to optimally assign VMs to a backup policy.
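Under the hood, a template of this kind attaches each VM to a backup policy by declaring a protectedItems resource in the vault. The sketch below shows the general shape of that resource for one Resource Manager VM; the container/item naming convention is modeled on the quickstart templates, and every identifier here is an illustrative placeholder.

```python
# Sketch: shape of the protectedItems resource a quickstart template
# declares to enable backup for one Resource Manager VM. The naming
# convention and all identifiers are illustrative placeholders.

def protected_item(vault, resource_group, vm_name, policy_id, vm_id):
    container = "iaasvmcontainer;iaasvmcontainerv2;%s;%s" % (resource_group, vm_name)
    item = "vm;iaasvmcontainerv2;%s;%s" % (resource_group, vm_name)
    return {
        "type": ("Microsoft.RecoveryServices/vaults/backupFabrics/"
                 "protectionContainers/protectedItems"),
        "apiVersion": "2016-06-01",
        "name": "%s/Azure/%s/%s" % (vault, container, item),
        "properties": {
            "protectedItemType": "Microsoft.Compute/virtualMachines",
            "policyId": policy_id,        # backup policy in the vault
            "sourceResourceId": vm_id,    # resource ID of the VM
        },
    }

protected = protected_item(
    "myVault", "myRG", "myVM",
    "/subscriptions/.../vaults/myVault/backupPolicies/DefaultPolicy",
    "/subscriptions/.../virtualMachines/myVM",
)
```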

Once configured for backup, you can restore a VM or take an on-demand backup of protected VMs using the portal or PowerShell.

Related links and additional content

Want more details? Check out Azure Backup documentation and Azure Template walkthrough
Browse through Azure Quickstart templates
Learn more about Azure Backup
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates

Source: Azure