Cloud-first, mobile-first: Microsoft moves to a fully wireless network

Supporting a network in transition: Q&A blog series with David Lef

In a series of blog posts, this being the third, David Lef, Principal Network Architect at Microsoft IT, chats with us about supporting a network as it transitions from a traditional infrastructure to a fully wireless platform. Microsoft IT is responsible for supporting 900 locations and 220,000 users around the world. David is helping to define the evolution of the network topology to a cloud-based model in Azure that supports changing customer demands and modern application designs.

David Lef explains the planning and processes behind migrating to a wireless networking environment, including primary drivers, planning considerations, and challenges.

Q: Can you explain your role and the environment you support?

A: My role at Microsoft is Principal Network Architect with Microsoft IT. My team supports almost 900 sites around the world and the networking components that connect those sites, which are used by a combination of over 220,000 Microsoft employees and vendors that work on our behalf. Our network supports over 2,500 individual applications and business processes. We are responsible for providing wired, wireless, and remote network access for the organization and for implementing network security across our network, including our network edges. We make sure that the nuts and bolts of network functionality work as they should: IP addressing, name resolution, traffic management, switching, routing, and so on.

Q: What is driving the shift toward a wireless environment?

A: For Microsoft, it’s two main things: first, our employees want flexibility in the way they get their work done. Our users don’t simply use a workstation at a desk to do their jobs anymore. They’re using their phone, their tablet, their laptop, and their desktop computer, if they have one. It’s evolved into a devices ecosystem rather than a single productivity device, and most of those devices support wireless. In fact, most of them support only wireless. The second motivator is simple cost effectiveness. It’s cheaper and simpler to set up and install a wireless environment than it is to do the same with wired infrastructure. It also makes upgrades and additions to the networking environment easier and cheaper. With wireless, there are no switch stacks to add and no cables to run.

Q: How did you begin planning for this?

A: When Microsoft started accepting and supporting mobile devices connecting to the corporate network, it was clear that the way our network was accessed was going to change. We initially planned to provide wireless support to the physical locations that needed it the most as a support for our wired infrastructure. However, traffic and use analysis showed that the wireless network was very quickly becoming our main network infrastructure, from a user’s perspective. We knew that wireless needed to be there to support mobile devices, and we knew we had to plan for the wireless network to support most of our end-user connectivity, eventually. We looked at the device profiles across our information worker roles to assess what was necessary, and we built out a network to meet that demand and make sure that it scales well with future growth.

We had, and still have, a lot of wired infrastructure that simply isn’t being used to its potential. At many of our information worker sites, wired port utilization is less than 10 percent. If you average it out across all of our user sites, it’s closer to 30 percent, but when you do the math, it still ends up being a lot of investment in network infrastructure that simply isn’t necessary. Over 75 percent of our sites are targeted for wireless-first, and we’ve been going through the process of removing dependencies on the wired network infrastructure from a user perspective. In some cases, that means putting wireless network adapters into desktop computers that don’t natively support wireless, and simply making sure wireless connectivity is enabled and configured on those devices that do support it. The more complete we can make the transition to wireless in terms of number of devices, the sooner we can retire the existing wired infrastructure and realize the cost savings from it. We estimate that our wireless-first strategy will result in a reduction in network equipment of more than 50 percent.

Q: What are your key considerations in this project?

A: It’s driven primarily from the high-level goal of cloud first, mobile first. Wireless networking simply complements both of these strategies; it’s a logical and necessary part of the larger puzzle. We are a business, of course, so cost and capital cost savings are important. Migration to wireless as our primary network infrastructure means long-term cost avoidance, less equipment to buy, and decreased maintenance requirements.

We also want the transition to be as non-intrusive as possible to our users. We’re going on-site to make sure they’re ready for the transition to wireless. This might mean helping users install or configure wireless adapters and showing them how to perform tasks, such as installing an operating system, differently. We also want to educate them about using the network and get them comfortable with being their own first level of support and solving basic issues they might encounter.

Q: What have been or will be the biggest challenges in making this work?

A: We’ve run into some challenges in a few different areas. Different devices and their drivers have their peculiarities and issues, whether that’s with a new wireless adapter we’re putting into an existing computer or access and authentication mechanisms for devices that use older wireless network hardware. We also have a lot of wireless access points around the globe, so standardization of those access points has been a challenge. With the advent of bring your own device (BYOD) and the emergence of the “Internet of Things” (IoT), many more wirelessly networked devices are showing up in our environment, and bandwidth is always a concern. A big part of managing this trend is realizing that not all IoT devices need to be included in our corporate network—only those that will benefit from the functionality that the corporate network enables. We’re providing the highest level of wireless bandwidth that we can, as far as supporting devices and meeting transmission standards, but we’re still closely monitoring bandwidth availability to ensure that we’re eliminating any unnecessary bottlenecks.

We’ve also had to address some changes in processes and conceptions. In some cases, older technology that’s in use doesn’t work with wireless, so we have to show users how to do tasks differently, or give them an alternative method.

Q: Is the technology available today to make this successful?

A: Yes, and we’re in the process of rolling out 802.11ac, which gives us more capabilities and bandwidth across our wireless infrastructure. We’ve also committed to having 802.11ac fully implemented before we begin any mandated removal of our existing wired infrastructure. We want to ensure that our wireless network can provide our users a satisfactory level of reliability and performance before we start removing the old way of connecting.

We’re continually rolling out upgrades and changes to our infrastructure to implement 802.11ac, but it also means making sure that existing equipment that our users employ isn’t being removed from the network inadvertently. Whether we provide an 802.11ac-compatible solution or simply replace the device itself, we’re very conscious of reducing the negative impact of the change on our users.

Q: What is the roadmap for pilot and implementation of this project?

A: It’s already in place and underway. The pilot project has been closed, and 660 sites are targeted for wireless infrastructure updates and conversion in the next 24 months. The other 200 or so will retain wired functionality—these are datacenters, engineering centers, or locations where our customers or users might still require wired connectivity. In the grand scheme of things, we’ll be cutting over 90 percent of our end-user network infrastructure. Wired ports will still be available where they are needed, but our footprint and the resources needed to support it will be massively reduced.

Learn more

Other blog posts in this series:

Supporting network architecture that enables modern work styles
Engineering the move to cloud-based services

Learn how Microsoft IT is evolving its network architecture.
Source: Azure

Azure SQL Database Threat Detection, your built-in security expert

Azure SQL Database Threat Detection has been in preview for a few months now. We’ve onboarded many customers and received some great feedback. We would like to share a few customer experiences that demonstrate how Azure SQL Database Threat Detection helped address their concerns about potential threats to their database.

What is Azure SQL Database Threat Detection?

Azure SQL Database Threat Detection is a new security intelligence feature built into the Azure SQL Database service. Working around the clock to learn, profile and detect anomalous database activities, Azure SQL Database Threat Detection identifies potential threats to the database.

Security officers or other designated administrators can get an immediate notification about suspicious database activities as they occur. Each notification provides details of the suspicious activity and recommends how to further investigate and mitigate the threat.

Currently, Azure SQL Database Threat Detection detects potential vulnerabilities and SQL injection attacks, as well as anomalous database access patterns. The following customer feedback attests to how Azure SQL Database Threat Detection warned them about these threats as they occurred and helped them improve their database security.

Case 1: Attempted database access by former employee

Borja Gómez, architect and development lead at YesEnglish

“Azure SQL Database Threat Detection is a useful feature that allows us to detect and respond to anomalous database activities, which were not visible to us beforehand. As part of my role designing and building Azure-based solutions for global companies in the Information and Communication Technology field, we always turn on Auditing and Threat Detection, which are built-in and operate independently of our code. A few months later, we received an email alert that "Anomalous database activities from unfamiliar IP (location) was detected." The threat came from a former employee trying to access one of our customer’s databases, which contained sensitive data, using old credentials. The alert allowed us to detect this threat as it occurred, and we were able to remediate it immediately by locking down the firewall rules and changing credentials, thereby preventing any damage. Such is the simplicity and power of Azure.”

Case 2: Preventing SQL injection attacks

Richard Priest, architectural software engineer at Feilden Clegg Bradley Studios and head of the collective at Missing Widget

“Thanks to Azure SQL Database Threat Detection, we were able to detect and fix code vulnerabilities to SQL injection attacks and prevent potential threats to our database. I was extremely impressed how simple it was to enable the threat detection policy using the Azure portal, which required no modifications to our SQL client applications. A while after enabling Azure SQL Database Threat Detection, we received an email notification about ‘An application error that may indicate a vulnerability to SQL injection attacks.’ The notification provided details of the suspicious activity and recommended concrete actions to further investigate and remediate the threat. The alert helped me to track down the source of my error and pointed me to the Microsoft documentation that thoroughly explained how to fix my code. As the head of IT, I now guide my team to turn on Azure SQL Database Auditing and Threat Detection on all our projects, because it gives us another layer of protection and is like having a free security expert on our team.”
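The vulnerability class in this story is easy to reproduce. The sketch below uses Python’s built-in sqlite3 module as a stand-in for Azure SQL Database (the table and values are made up for illustration) to contrast string-concatenated SQL, the injection-prone pattern that Threat Detection flags, with a parameterized query:

```python
import sqlite3

# In-memory database standing in for an Azure SQL database (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the query text.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"                 # classic injection payload
print(find_user_unsafe(payload))         # leaks every row
print(find_user_safe(payload))           # returns nothing
```

The same parameterization principle applies to any SQL client library: the driver, not string formatting, should deliver user input to the database.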

Case 3: Anomalous access from home to production database

Manrique Logan, architect and technical lead at ASEBA

“Azure SQL Database Threat Detection is an incredible feature, super simple to use, empowering our small engineering team to protect our company data without the need to be security experts. Our non-profit company provides user-friendly tools for mental health professionals, storing health and sales data in the cloud. As such, we need to be HIPAA and PCI compliant, and Azure SQL Database Auditing and Threat Detection help us achieve this. These features are available out of the box, and simple to enable too, taking only a few minutes to configure. We saw the real value from these not long after enabling Azure SQL Database Threat Detection, when we received an email notification that ‘Access from an unfamiliar IP address (location) was detected.’ The alert was triggered as a result of my unusual access to our production database from home. Knowing that Microsoft is using its vast security expertise to protect my data gives me incredible peace of mind and allows us to focus our security budget on other issues. Furthermore, knowing the fact that every database activity is being monitored has increased security awareness among our engineers. Azure SQL Database Threat Detection is now an important part of our incident response plan. I love that Azure SQL Database offers such powerful and easy-to-use security features.”

Turning on Azure SQL Database Threat Detection

Azure SQL Database Threat Detection is incredibly easy to enable. You simply navigate to the Auditing and Threat Detection configuration blade for your database in the Azure management portal. There you switch on Auditing and Threat Detection, and configure at least one email address for receiving alerts.

Click the following links to:

Learn more about Azure SQL Database Threat Detection.
Learn more about Azure SQL Database.

We’ll be glad to get feedback on how this feature is serving your security requirements. Please feel free to share your comments below.
Source: Azure

Portal support for Azure Search blob and table indexers now in preview

When building a search-enabled application, data can come from many places and take many forms, so making it easy to ingest a variety of data sources is extremely important. Bringing amazing search to your data just got a little easier. Today we’re excited to announce preview support for Azure blob and Azure table data sources in the Portal. Make your Microsoft Office, HTML, PDF, and other documents searchable with just a few clicks in the Import Data wizard.

We’ve provided simple user interfaces to pick accounts and containers from within your subscription. Perhaps you want to index blobs containing an Outlook email archive, or create a resume search application to streamline your hiring process.

After selecting your data, we’ll detect your metadata fields and suggest an index. The blob indexer has the ability to crack open your documents and extract all text into the content field as well.
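Under the covers, the wizard drives the existing indexer REST API. As a rough sketch (the service name, data source name, container, and connection string below are placeholders, and the API version reflects the documentation of the period), creating a blob data source looks something like this:

```json
POST https://[service].search.windows.net/datasources?api-version=2016-09-01
{
  "name": "blob-datasource",
  "type": "azureblob",
  "credentials": { "connectionString": "DefaultEndpointsProtocol=https;AccountName=..." },
  "container": { "name": "documents" }
}
```

An indexer created against this data source then populates the suggested index, including the extracted text in the content field.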

We hope that these new data sources will enable some truly awesome experiences! For more information about indexers, see our articles on indexing table and blob storage through the API. To get started with Azure Search in the Portal check out this article. If you have questions, please feel free to reach out in the comments below, or leave your feedback on UserVoice.
Source: Azure

Accelerate your insights with Application Insights Performance Counters

You can now monitor performance counters for Azure Web Apps using Visual Studio Application Insights. Until recently, Performance Counters such as CPU and network usage weren’t available on the Azure portal when monitoring your app. This is because Azure Web Apps don’t run on their own machines. Our new Aggregate Metrics package adds this telemetry, so you can now monitor your app’s use of resources as workload varies.

To get started, simply install the Aggregate Metrics prerelease NuGet package in your app. This package is now in the SDK Labs feed. You should then be able to view performance charts in Application Insights Metrics Explorer in the Azure portal.

Azure Web Apps let you host any web app, website, or API in the Azure cloud, all while providing a plethora of capabilities such as Visual Studio integration, ease of deployment, and agility. In a nutshell, Azure Web Apps are simple to deploy and run.

However, there are limitations when it comes to monitoring your Azure Web App’s performance while it runs in the cloud. Web pages and apps run in a sandbox environment, separating them from other apps running on the same machine. For developers, this sandbox is ultimately a tradeoff for the low monetary cost of using Azure Web Apps. The sandbox also prevents your app from accessing Performance Counters and using Performance Monitor. On your desktop, Performance Monitor provides a comprehensive selection of metrics with easy-to-interpret visualizations.

Until now, few metrics were available for gaining insight into the performance of your Azure Web App, and those that were available came nowhere near the comprehensiveness of Performance Monitor.

The Application Insights team saw the need for more complete feedback on web app performance, and we’re proud to announce that we’ve added a solution to Application Insights’ SDK Labs that collects Performance Counters and visualizes them on the Azure portal through Application Insights. The Aggregate Metrics solution contains several different Performance Counters, such as memory in use and the percentage of processor time being used. Currently, Performance Counters provide historical data, with Live Metrics Stream implementation planned for future development.

Custom Performance Counters

We have also implemented custom performance counters, such as thread and handle count, to provide even more detailed insights into app performance. These counters are specific and can be added independently to a project based on your app’s needs. At the moment there are few custom counters, as their availability is limited by what the Azure Web Apps team provides. To get started with custom counters, you only need to adjust the ApplicationInsights.config file, just as you would for other performance counters.
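As a sketch of what that adjustment looks like (the module type and counter paths below are illustrative; check the SDK Labs documentation for the exact names your SDK version and the Web Apps sandbox support), a custom counter is declared under the performance collector module in ApplicationInsights.config:

```xml
<TelemetryModules>
  <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
    <Counters>
      <!-- Illustrative custom counters; names must match what the sandbox exposes. -->
      <Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\Thread Count" ReportAs="Thread Count" />
      <Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\Handle Count" ReportAs="Handle Count" />
    </Counters>
  </Add>
</TelemetryModules>
```

Once the app restarts with this configuration, the added counters appear alongside the standard ones in Metrics Explorer.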

Microsoft Intern Program

Performance Counters have been an ongoing project developed by two of Application Insights’ summer interns: myself, Mackenzie Frackleton, and Mateo Torres Ruiz. Click through to our GitHub profiles to see our latest developments.

Summary

Performance Counters are now available to provide metrics for Azure Web App telemetry. We always want to hear your feedback, so please visit the Application Insights SDK Labs repository for issues or feature requests. The Application Insights team as a whole is committed to providing quality tools for developers. Any additional feedback or new feature recommendations are welcome.
Source: Azure

Microsoft expands and renews international certifications in seven countries

Microsoft invests heavily not only in creating the most advanced functionality and highest-quality services possible, but also in ensuring security, compliance, privacy, and transparency for our cloud services customers. Products like Azure Security Center and the Microsoft Transparency Hub, and activities such as our ongoing legal effort to protect privacy rights across the globe, show a holistic approach to trust and security that no other cloud service provider can match.

We continue to maintain the largest portfolio of cloud certifications. In the first half of 2016, we achieved four new international certifications as well as renewed and expanded other certifications in seven countries. Here is a quick recap of our international compliance activities:

New certifications

Japan: We achieved Cloud Security Mark Gold Level accreditation and announced our alignment to the My Number Act on protecting personal data in Japan. Cloud Security Mark by Japan Information Security Audit Association (JASA) is the standard required by the government for cloud procurement.
Spain: Microsoft is the first global cloud service provider that achieved the Spain Esquema Nacional de Seguridad certification, which reiterates the effectiveness of our security controls implemented to protect customer data.
United Kingdom: We are also the first public cloud to gain the Federation Against Copyright Theft (FACT) certification. This accreditation proves our compliance with established media-industry security best practices, including the Content Delivery and Security Association’s (CDSA) Content Protection and Security (CPS) Standard and the Motion Picture Association of America’s application and cloud security guidelines.

Expanded certifications

China: Microsoft Azure operated by 21Vianet upgraded our Multi-Layer Protection Scheme (MLPS) classification from level 2 to level 3 and also added a new service to our Trusted Cloud Services certification.
Canada: We announced the alignment of our approach to protect customer data with recommendations from the Canadian privacy commission on related privacy laws. Based on the shared responsibility principle, customers that want to use cloud services should also go through self-assessment to ensure proper planning and adherence to the laws.
New Zealand: Our responses to New Zealand’s Cloud Computing Information Security and Privacy Considerations have been updated for new services in scope for the new question set.
Singapore: Microsoft’s Multi-Tier Cloud Security Singapore Standard:584 (MTCS SS:584-2015) certification has been upgraded to the 2015 version at Tier 3 with expanded scope of services. We also published a whitepaper in the context of Singapore compliance for Azure to help our customers address questions from MTCS and PDPA.
United Kingdom: Our UK G-Cloud has been expanded to cover all services that are in-scope for ISO 27001:2013 and updated to address the latest version of cloud security principles at OFFICIAL level.

As a potential or continuing customer, you can rest assured of our commitment to compliance and security, grounded in our dedication to customer regulatory requirements. This dedication is evident in our industry-leading certification count and its international breadth. We are achieving compliance for our customers so they can leverage our cloud services to grow their missions and businesses while knowing their regulatory needs are being met.

For access to any of the certifications mentioned above or any other compliance certifications achieved by Microsoft Azure, visit our Service Trust Portal or Microsoft Trust Center.
Source: Azure

Alerting and monitoring for Azure Backup

We are excited to announce the preview release of alerting and monitoring for Azure Backup, which is currently the top-voted idea on the Azure Backup UserVoice. In a continuation of the simplified experience using the new Recovery Services vault, customers can now monitor cloud backups for their on-premises servers and Azure IaaS virtual machines in a single dashboard. In addition, they can also configure email notifications for all backup alerts.

Enroll your subscription for the preview release:

Step 1: Log in to your Azure account from Windows PowerShell. Learn more on how to install Azure PowerShell.
PS C:\> Login-AzureRmAccount
Step 2: Select the subscription that you want to register for the preview.
PS C:\> Get-AzureRmSubscription -SubscriptionName "Subscription Name" | Select-AzureRmSubscription
Step 3: Register this subscription for the alerting preview.
PS C:\> Register-AzureRmProviderFeature -FeatureName MABAlertingFeature -ProviderNamespace Microsoft.RecoveryServices

Introducing Recovery Services Vault

Introducing Alerting & Monitoring

If you are an existing Azure Backup customer using a Recovery Services vault, update to the latest Azure Backup agent to use this feature. If you configured email notifications before enrolling, turn off email notifications, enroll the subscription, and then configure notifications again.

Related links and additional content:

If you are new to Azure Backup, start configuring the backup on Azure portal
Want more details? Check out Azure Backup documentation
Need help? Reach out to the Azure Backup forum for support.

Source: Azure

Announcing HTTP/2 support for all customers with Azure CDN from Akamai

We are pleased to announce that HTTP/2 is now available for all customers with Azure CDN from Akamai. This feature is on by default; all existing and new Akamai standard profiles (enabled from the Azure portal) benefit from it at no additional cost.

HTTP/2 is designed to improve webpage loading speed and optimize user experience. All major web browsers already support HTTP/2 today. Though the protocol is designed to work with both HTTP and HTTPS, most browsers support HTTP/2 only over TLS.

Key HTTP/2 benefits include:

Multiplexing and concurrency: Allow multiple requests sent on the same TCP connection
Header compression: Reduce header size for faster transfer time
Stream prioritization and dependencies: Prioritize resources to transfer important data first 
Server push (not supported currently): Allow server to "push" responses proactively into client caches

Next steps

We’ll work on HTTP/2 support for Azure CDN from Verizon in the next few months.

Read also

Azure CDN HTTP/2 doc
HTTP/2 spec
HTTP/2 FAQ

Is there a feature you’d like to see in Azure CDN? Give us feedback.
Source: Azure

Microsoft Azure Stack: Upcoming Technical Preview and other updates

This post was authored by the Microsoft Azure Stack Team.

Over the course of the last several weeks, we have continued to get a lot of great feedback and questions about Azure Stack. We are seeing some common questions and comments, so we thought it would be a good time to address them and continue our dialogue.

First, a lot of customers have asked us about the next technical preview of Azure Stack. We’ve got good news – we’ve started rolling out Azure Stack Technical Preview 2 (TP2) to some early adopter customers this week. This begins the process of rolling it out more broadly, and we expect to release TP2 publicly later this year.

We have also heard questions around the integrated systems hardware strategy, with concerns around flexibility, cost, and size. So, we asked Vijay Tewari to sit down with us and provide some insights behind our vision and rationale for integrated systems in this short video. The video provides insight into their top engineering design goals, the value of having software tightly connected with specific hardware, and how they ultimately think about lifecycle management of this system. One point worth reemphasizing is the prioritization of integrated systems from Dell, HPE, and Lenovo, as a starting point. As we have done in other cases in the past, we will continue to broaden our support for a diverse hardware ecosystem that allows customer choice and configuration from a certified catalog of solutions.

Finally, we know customers want to protect their investment in the turnkey Cloud Platform System (CPS) from Dell, HPE, and Nutanix, as well as more general deployments of Windows Azure Pack (WAP). We are working on side-by-side integration between CPS/WAP and Azure Stack, which will allow users to seamlessly manage Virtual Machine Manager (VMM) resources, created in WAP, from within the Azure Stack portal. In this way customers can use these Azure-consistent cloud solutions now, and leverage those resources in Azure Stack deployments in the future.

Let’s keep up the discussion!
Source: Azure

Engineering the move to cloud-based services

Supporting a network in transition: Q&A blog post series with David Lef

In a series of blog posts, this being the second, David Lef, principal network architect at Microsoft IT, chats with us about supporting a network as it transitions from a traditional infrastructure to a fully wireless platform. Microsoft IT is responsible for supporting 900 locations and 220,000 users around the world. David is helping to define the evolution of the network topology to a cloud-based model in Azure that supports changing customer demands and modern application designs.

David Lef explains the major factors that affect migration of IT-supported services and environments to cloud-based services, focusing on network-related practicalities and processes.

Q: Can you explain your role and the environment you support?

A: My role at Microsoft is principal network architect with Microsoft IT. My team supports almost 900 sites around the world and the networking components that connect those sites, which are used by a combination of over 220,000 Microsoft employees and vendors that work on our behalf. Our network supports over 2,500 individual applications and business processes. We are responsible for providing wired, wireless, and remote network access for the organization, implementing network security across our network (including our network edges), and connectivity to Microsoft Azure in the cloud. We support a large Azure tenancy using a single Azure Active Directory tenancy that syncs with our internal Windows Server Active Directory forests. We have several connections from our on-premises datacenters to Azure using ExpressRoute. Our Azure tenancy supports a huge breadth of Azure resources, some of which are public-facing and some that are hosted as apps and services internal to Microsoft, but hosted on the Azure platform.

Q: What are the biggest networking challenges in migrating on-premises services to cloud-based services in Azure?

A: First of all, it’s a fundamental change in traffic patterns. It used to be that we hosted most of our network traffic within our corporate network and datacenters, and selectively allowed access from the Internet into our network for apps and services that our employees needed to access while they were outside of the corporate network. From the aspect of traffic going in and out of our corporate network, we had our users accessing what you might call traditional Internet content, as well as users connecting to the corporate network using a virtual private network (VPN). Now, we are moving toward hosting the bulk of our on-premises datacenter infrastructure within Azure and choosing how we want to allow access to it.

Secondly, we’ve had network edge traffic increase a lot. Our bandwidth at the edge is over 500 percent of what it was just a couple of years ago. The on-premises datacenter is no longer the hub of traffic for us, and the cloud is the default app and infrastructure location for new projects at Microsoft. Our traffic pattern now revolves primarily around traffic to Azure datacenters. This, of course, has brought the demand for more robust and higher bandwidth edge connections—the resources that users formerly accessed within the corporate network are now being hosted in Azure, and those users expect the same level of responsiveness from their apps and services that they’ve been accustomed to.

We’re continuously moving apps and services from on-premises datacenters to Azure, so the connectivity requirements between Azure and our on-premises datacenters are changing as that migration continues. In addition, the pipeline between Azure and our datacenters is shrinking as more of our infrastructure moves to Azure. Our migration teams are moving as much as possible to software as a service (SaaS) and platform as a service (PaaS) in Azure and, in situations where SaaS or PaaS doesn’t offer an immediate or beneficial solution, simply lifting the infrastructure components out of on-premises datacenters and into Azure infrastructure as a service (IaaS) virtual machines and virtual networks.

A significant part of the migration for these apps and services is analysis for redesign in the cloud. Wherever possible, our engineering teams are redesigning and re-architecting for the cloud. Internet-based traffic can have a higher latency than what Microsoft experiences within its corporate network infrastructure, so designing for that and educating users on the changes they should expect is important.

Q: How do you ensure adequate service levels in an Azure-based cloud delivery model?

A: The network component has a big impact on service levels, but it really does start with service design for our Azure-based resources. Connectivity to Azure is, for all intents and purposes, Internet connectivity, so anything hosted in Azure is designed as an Internet-based solution wherever possible. Along with accommodating the higher latency I've already mentioned, the redesign process also includes adding retry logic for when a connection experiences any type of outage, caching and prefetching data, and compressing data across client connections.
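The retry pattern described here is a common one for Internet-facing services. As a minimal sketch (not Microsoft's actual implementation), a client might wrap flaky calls in exponential backoff with jitter so that transient outages don't surface as failures, and so that many clients don't all retry in lockstep:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky zero-argument callable with exponential backoff.

    The operation is retried on any exception until it succeeds or
    max_attempts is exhausted, at which point the last error is raised.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff (0.5s, 1s, 2s, ...) plus random jitter
            # so clients spread out their retries after an outage.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay)
            time.sleep(delay)
```

In practice this would be combined with the caching and compression techniques mentioned above, and the retry would usually be limited to errors that are actually transient (timeouts, connection resets) rather than every exception.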

After service design, we're doing as much as we can on the network side to ensure robust connectivity. We're using ExpressRoute extensively for our large locations, and making sure that we locate our hop onto ExpressRoute as close as physically possible to the resources that will use that connection, whether that's servers or users. That means using network service providers that have co-location facilities close to our physical locations. We don't rely on traditional hub-and-spoke networking architectures for our locations, and we try to avoid moving unnecessary traffic across our network backbones. We've found that, except in cases where the provider infrastructure is very immature or limited, the quicker you can drop someone onto the Internet, the better off they will be.

We monitor our environment pretty thoroughly. We're designing the modern apps that run on Azure SaaS and PaaS to use the built-in instrumentation those platforms provide. We're leveraging the built-in synthetic transactions in those services and building our own, using System Center products and Operations Management Suite in Azure. This gives us a comprehensive view of our infrastructure, both centralized and decentralized. We treat our cloud services hosted in Azure as a product in which we're the provider and all of Microsoft is the customer.
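A synthetic transaction is simply a scripted probe that exercises a service the way a real user would and reports health and latency. The sketch below is a hypothetical, simplified version of the idea (the real System Center and Operations Management Suite probes are far richer); the request itself is injected as a callable so the health logic stays testable:

```python
import time

def synthetic_probe(fetch, latency_budget=2.0):
    """Run one synthetic transaction and report service health.

    `fetch` is a zero-argument callable that performs the request and
    returns an HTTP status code. The service counts as healthy when it
    returns 200 within the latency budget (in seconds).
    """
    start = time.monotonic()
    try:
        status = fetch()
    except Exception:
        # Any failure to complete the transaction counts as unhealthy.
        return {"healthy": False, "status": None, "latency": None}
    latency = time.monotonic() - start
    return {
        "healthy": status == 200 and latency <= latency_budget,
        "status": status,
        "latency": latency,
    }
```

A monitoring agent would run probes like this on a schedule from multiple locations and feed the results into a dashboard or alerting pipeline.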

Q: How does the challenge differ by geographic locations, and has that changed since the migration to cloud-based services?

A: Anytime we talk about geography, service placement is a huge consideration. We look at where our clients are for any given service, where the app-to-app dependencies lie, and plan accordingly. In most cases, we have at least one Azure datacenter within 1,000 kilometers of our clients, so we use that in our business continuity and disaster recovery planning. Azure's built-in geo-redundancy and resiliency components also help in those respects.

From a pure networking perspective, we try to place our Layer 3 management as close to the Azure datacenter as possible. That gives us the greatest control over traffic to Azure, and the best insight into what’s happening with that traffic.

Q: How do you encourage user adoption and buy-in when migrating to cloud-based services?

A: Our Azure teams provide a lot of guidance around the entire Azure experience. From a user experience perspective, we do the best we can to set accurate expectations for apps and services that are migrated to Azure. In many cases, the general user experience is improved for apps on Azure, so this isn't as much about softening the blow as it is showing users how having their app hosted on Azure changes the way the app is accessed and experienced. We make sure that users are aware of the ways that making an app available in the cloud can expose new functionality or new ways to use the app. We focus on providing a user experience that enables mobile access from multiple device platforms. The key idea here is access from anywhere, on anything, at any time. An excellent example of this is the re-architecting of our licensing platform for the cloud, which was written about in a case study.

For the general migration to Azure, Microsoft IT has allotted people and capital to facilitate a smooth transition whenever a migration takes place. These resources contribute to the technical migration itself, training, and making sure that business processes are running as well or better than when the app or service was hosted on-premises.

Q: How have the IT teams changed to support this new delivery model?

A: The biggest change most people expect is this mass exodus or culling of traditional IT functions, but that’s not really the way it’s worked for us. We still have a network infrastructure to support throughout our physical locations, and datacenters don’t disappear overnight. Whether there are ten servers or 10,000 servers in a datacenter, disaster recovery and business continuity processes still need to happen and we need IT support for that. That being said, the requirement for on-premises infrastructure support does change. A lot of our high-level support teams are transitioning to different projects, sometimes in the Azure space. It’s given a lot of Microsoft employees the chance to improve their skill sets and shift their focus to development and innovation instead of maintenance and management.

With Azure, IT responsibilities become more compartmentalized, where we have IT staff that are focused on providing first-level support in their area of expertise, and it works without requiring a lot of people to have end-to-end knowledge of the environment or solution. Our Azure network experts provide their service and know their product and environment, and our Azure app experts do the same in their area, without needing to know specifically what's happening with the network. The high-level knowledge is there across teams, of course, but resources and solutions become much more plug-and-play. This means that we're more agile and able to respond to demand or start new projects more efficiently. Our teams don't need to wait for physical servers to be built out or networking hardware to be installed; they simply request what they need, and Azure generates the resources.

Learn more

Other blog posts in this series:

Supporting network architecture that enables modern work styles

Learn how Microsoft IT is evolving its network architecture.
Source: Azure

Visual Studio Team Services digest: August 2016

This month we're kicking off the Visual Studio Team Services monthly digest on the Azure blog. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure.

With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS. This post series will provide the latest updates and news for Visual Studio Team Services and will be a great way for Azure users to keep up-to-date with new features being released every three weeks.

SSH Support for Git Repos is now available

You can now connect to any Team Services Git repo using an SSH key. This is particularly helpful if you develop on Linux or Mac.

Try paid Team Services extensions

You can now try Team Services paid extensions free for 30 days. This allows you to test an extension easily and see if it's the right fit for your use case.

Team Services plugins for IntelliJ and Android Studio 1.0 released

After several months in preview, we are excited to announce the release of the official 1.0 version of the Team Services plugin for IntelliJ and Android Studio.

A new build task to queue Jenkins jobs from Team Services

Team Services sprint 102 introduces a new build task, Jenkins Queue Job. Now your Team Services/TFS builds can integrate with Jenkins to queue and monitor Jenkins jobs. The Jenkins Queue Job task is cross-platform and does not require any additional build agent dependencies.
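Under the hood, Jenkins queues builds through its remote-trigger HTTP endpoints: a POST to `/job/<name>/build`, or `/job/<name>/buildWithParameters` for parameterized jobs. The helper below is a hypothetical illustration of how those trigger URLs are formed, not part of the Team Services task itself; `jenkins.example.com` is a placeholder server:

```python
import urllib.parse

def jenkins_build_url(base_url, job_name, params=None):
    """Construct the Jenkins remote-trigger URL for a job.

    Jenkins queues a build on a POST to /job/<name>/build, or to
    /job/<name>/buildWithParameters when build parameters are supplied.
    """
    endpoint = "buildWithParameters" if params else "build"
    url = "{}/job/{}/{}".format(
        base_url.rstrip("/"), urllib.parse.quote(job_name), endpoint)
    if params:
        # Pass build parameters as a query string, e.g. ?branch=main
        url += "?" + urllib.parse.urlencode(params)
    return url
```

A client would POST to the returned URL with an API token for authentication; the Jenkins Queue Job task wraps this interaction and additionally monitors the queued job's progress.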

The Team Services plugin for Visual Studio Code now supports connecting to Team Foundation Server Update 2 or later

The Team Services plugin for Visual Studio Code allows you to manage your pull requests for your Team Services and Team Foundation Server Git repositories, as well as monitor builds and work items for your team project. With just a glance at the status bar, you can see the number of active pull requests assigned to you and check the status of the latest build for your repository.

Inside Team Services: Kanban boards with Patrick Desjardins

Each month, we will bring you the insider's view into Visual Studio Team Services: how the product is developed, how we dogfood it and use it every day, who the people behind it are, and tips and tricks on becoming a power user.

This month, we interview Patrick Desjardins, a software developer on the team that develops the Agile tooling at the Microsoft Redmond campus.

Source: Azure