Introducing maintenance control for platform updates

Today we are announcing the preview of a maintenance control feature for Azure Virtual Machines that gives customers with highly sensitive workloads more control over platform maintenance. Using this feature, customers can control all impactful host updates, including rebootless updates, for up to 35 days.

Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Almost all updates have zero impact on your Azure virtual machines (VMs). When updates do have an effect, Azure chooses the least impactful method:

If the update does not require a reboot, the VM is briefly paused while the host is updated, or it's live migrated to an already updated host. These rebootless maintenance operations are applied fault domain by fault domain, and progress is stopped if any warning health signals are received.
In the extremely rare case where maintenance requires a reboot, you are notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you.

Typically, rebootless updates do not impact the overall customer experience. However, certain very sensitive workloads may require full control of all maintenance activities. This new feature is designed for customers who deploy this type of workload.

Who is this for?

The ability to control the maintenance window is particularly useful when you deploy workloads that are extremely sensitive to interruptions on an Azure Dedicated Host or an Isolated VM, where the underlying physical server runs a single customer’s workload. This feature is not supported for VMs deployed on hosts shared with other customers.

The typical customer for this feature needs the latest updates in place, but their business also requires that at least some of their cloud resources be updated with zero impact, on their own schedule.

Customers such as financial services providers, gaming companies, and media streaming services using Azure Dedicated Hosts or Isolated VMs will benefit by being able to manage necessary updates without any impact on their most critical Azure resources.

How does it work?

You can enable the maintenance control feature for platform updates by adding a custom maintenance configuration to a resource (either an Azure Dedicated Host or an Isolated VM). When the Azure updater sees this custom configuration, it will skip all non-zero-impact updates, including rebootless updates. For as long as the maintenance configuration is applied to the resource, it will be your responsibility to determine when to initiate updates for that resource. You can check for pending updates on the resource and apply updates within the 35-day window. When you initiate an update on the resource, Azure applies all pending host updates. A new 35-day window starts after another update becomes pending on the resource. If you choose not to apply the updates within the 35-day window, Azure will automatically apply all pending updates for you, to ensure that your resources remain secure and get other fixes and features.
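To make this concrete, here is a minimal Python sketch of creating a maintenance configuration with the Host scope and assigning it to an Isolated VM. It assumes the preview azure-mgmt-maintenance and azure-identity packages; the subscription, resource group, and VM names are placeholders, and exact operation and property names may differ as the preview evolves.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.maintenance import MaintenanceManagementClient

    # Placeholders: supply your own subscription, resource group, and VM name.
    SUB_ID, RG, VM = "<subscription-id>", "my-rg", "my-isolated-vm"

    client = MaintenanceManagementClient(DefaultAzureCredential(), SUB_ID)

    # 1. Create a maintenance configuration scoped to host (platform) updates.
    config = client.maintenance_configurations.create_or_update(
        RG,
        "my-maintenance-config",
        {"location": "eastus2", "maintenance_scope": "Host"},
    )

    # 2. Assign the configuration to the VM. While the assignment is in place,
    #    Azure skips impactful platform updates and leaves the timing to you.
    client.configuration_assignments.create_or_update(
        RG,
        "Microsoft.Compute",
        "virtualMachines",
        VM,
        "my-assignment",
        {"maintenance_configuration_id": config.id},
    )

Checking for and applying pending updates on the assigned resource is shown in the scheduling sketch under "Things to consider" below.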

Things to consider

You can automate platform updates for your maintenance window by calling “apply pending update” commands through your automation scripts. This can be batched with your application maintenance. You can also make use of Azure Functions and schedule updates at regular intervals.
Maintenance configurations are supported across subscriptions and resource groups, so you can manage all maintenance configurations in one place and use them anywhere they're needed.
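To illustrate the scheduled approach mentioned above, here is a hedged sketch of an Azure Functions timer trigger (Python) that checks for and applies pending platform updates during a recurring maintenance window. The recurrence lives in the function's timer binding configuration; the resource names are placeholders, and the azure-mgmt-maintenance calls follow the same assumptions as the earlier sketch.

    import logging

    import azure.functions as func
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.maintenance import MaintenanceManagementClient

    # Placeholders: supply your own subscription, resource group, and VM name.
    SUB_ID, RG, VM = "<subscription-id>", "my-rg", "my-isolated-vm"

    def main(mytimer: func.TimerRequest) -> None:
        # The recurrence (for example, a weekly window) is defined by the timer
        # binding's CRON schedule in function.json, not in this code.
        client = MaintenanceManagementClient(DefaultAzureCredential(), SUB_ID)

        pending = list(client.updates.list(RG, "Microsoft.Compute", "virtualMachines", VM))
        if not pending:
            logging.info("No pending platform updates.")
            return

        logging.info("Applying %d pending platform update(s).", len(pending))
        client.apply_updates.create_or_update(RG, "Microsoft.Compute", "virtualMachines", VM)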

Getting started

The maintenance control feature for platform updates is available in preview now. You can get started by using the CLI, PowerShell, REST APIs, or the .NET SDK. Azure portal support will follow.

For more information, please refer to the documentation: Maintenance for virtual machines in Azure.

FAQ

Q: Are there cases where I can’t control certain updates? 

A: In case of a high-severity security issue that may endanger the Azure platform or our customers, Azure may need to override customer control of the maintenance window and push the change. This is rare and would happen only in extreme cases, as a last resort to protect you from critical security issues.

Q: If I don’t self-update within 35 days, what action will Azure take?

A: If you don’t apply a platform update within 35 days, Azure will apply the pending updates fault domain by fault domain. This is done to maintain security and performance, and to fix any defects.

Q: Is this feature supported in all regions?

A: Maintenance control is supported in all public cloud regions. Government cloud regions are not currently supported, but this support will come later.
Source: Azure

Networking enables the new world of Edge and 5G Computing

At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.

Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let’s take a look at the key scenarios that are emerging in line with 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We achieve this scale with the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall cost of running data centers.

For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and the associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low-latency offerings to enable scenarios such as smart buildings, factories, and agriculture. The “smart” prefix highlights that there is a compute-intensive workload, typically running machine learning or artificial intelligence logic, requiring compute resources to execute in near real-time. Ultimately, latency, the time from when data is generated until it is analyzed and a meaningful result is available, becomes critical for these smart scenarios. Latency has become the new currency, and to reduce latency we need to move the required computing resources closer to the sensors, data origins, or users.

Multi-access Edge Compute: The intersection of compute and networking

The Internet of Things (IoT) creates incredible opportunities, but it also presents real challenges. Local connectivity in the enterprise has historically been limited to Ethernet and Wi-Fi. Over the past two decades, Wi-Fi has become the de facto standard for wireless networks, not necessarily because it is the best solution, but because of its entrenchment in the consumer ecosystem and the lack of alternatives. Our customers from around the world tell us that deploying Wi-Fi to service their IoT devices requires compromises on coverage, bandwidth, security, manageability, reliability, and interoperability/roaming. For example, autonomous robots require better bandwidth, coverage, and reliability to operate safely within a factory. Airports generally have decent Wi-Fi coverage inside the terminals, but on the tarmac coverage often drops significantly, making it insufficient to power the smart airport.

Next-gen private cellular connectivity greatly improves bandwidth, coverage, reliability, and manageability. Through the combination of local compute resources and private mobile connectivity (private LTE), we can enable many new scenarios. For instance, in the smart factory example used earlier, customers are now able to run their robotic control logic locally, highly available and independent of connectivity to the public cloud. MEC helps ensure that operations and any associated critical first-stage data processing remain up, and production can continue uninterrupted.

With its promise of near-infinite compute and storage, the cloud is ideal for large data-intensive and computational tasks, such as machine learning jobs for predictive maintenance analytics. At this year’s Ignite conference, we shared our thoughts and experience, along with a technology preview of MEC with Azure. The technology preview brings private mobile network capabilities to Azure Stack Edge, an on-premises compute platform managed from Azure. In practical terms, MEC allows the robots to be controlled locally, even if the factory suffers a network outage.

From an edge computing perspective, we have containers running across Azure Stack Edge and Azure. A key aspect is that the same programming paradigm can be used for Azure and the edge-based MEC platform. Code can be developed and tested in the cloud, then seamlessly deployed at the edge. Developers can take advantage of the vast array of DevOps tools and solutions available in Azure and apply them to exciting new edge scenarios. The MEC technology preview focuses on a simplified experience for cross-premises deployment and operation of managed compute and Virtual Network Functions, with integration to existing Azure services.

Network Edge Compute

Whereas Multi-access Edge Compute (MEC) is intended to be deployed at the customer’s premises, Network Edge Compute (NEC) is the network carrier equivalent, placing the edge computing platform within the carrier’s network. Last week we announced the initial deployment of our NEC platform in AT&T’s Dallas facility. Instead of needing to access applications and games running in the public cloud, software providers can bring their solutions physically closer to their end users. At AT&T’s Business Summit, working with Taqtile, we gave an augmented reality demonstration showing how to perform maintenance on aircraft landing gear.

The HoloLens user sees the real landing gear alongside a virtual manual, with specific parts of the landing gear virtually highlighted. The mixing of real-world and virtual objects displayed via HoloLens is what is often referred to as augmented reality (AR) or mixed reality (MR).

Edge Computing Scenarios

We have been showcasing multiple MEC and NEC use-cases over these past few weeks. For more details please refer to our Microsoft Ignite MEC and 5G session.

Mixed Reality (MR)

Mixed reality use cases such as remote assistance can revolutionize several industrial automation scenarios. Lower latencies and higher bandwidth, coupled with local compute, enable new remote rendering scenarios that reduce battery consumption in handsets and MR devices.

Retail e-fulfillment

Attabotics provides a robotic warehousing and fulfillment system for the retail and supply chain industries. Attabotics employs robots (Attabots) for storage and retrieval of goods from a grid of bins. A typical storage structure has about 100,000 bins and is serviced by between 60 and 80 Attabots. Azure Sphere powers the robots themselves. Communications using Wi-Fi or traditional 900 MHz spectrum do not meet the scale, performance, and reliability requirements.
  
The Nexus robot control system, used for command and control of the warehousing system, is built natively on Azure and uses Azure IoT Central for telemetry. With a Private LTE (CBRS) radio from our partners Sierra Wireless and Ruckus Wireless and packet core partner Metaswitch, we enabled the Attabots to communicate over a private LTE network. The reduced latency improved reliability and made the warehousing solution more efficient. The entire warehousing solution, including the private LTE network used for the warehouse, runs on a single Azure Stack Edge.

Gaming

Multi-player online gaming is one of the canonical scenarios for low-latency edge computing. Game Cloud Studios has developed a game based on Azure PlayFab, called Tap and Field. The game backend and controls run on Azure, while the game server instances reside and run on the NEC platform. Lower latencies result in better gaming experiences for nearby players at e-sports events, arcades, arenas, and similar venues.

Public Safety

The proliferation of drone use is disrupting many industries, from security and privacy to the delivery of goods. Air traffic control operations are on the cusp of one of the most significant disruptive events in the field, going from monitoring only dozens of aircraft today to thousands tomorrow. This necessitates a sophisticated near real-time tracking system. Vorpal VigilAir has built a solution in which drone and operator tracking is done using a distributed sensor network, powered by a real-time tracking application running on the NEC.

Data-driven digital agriculture solutions

Azure FarmBeats is an Azure solution that aggregates agriculture datasets across providers and generates actionable insights by fusing those datasets and building artificial intelligence (AI) or machine learning (ML) models on top of them. Gathering datasets from sensors distributed across the farm requires a reliable private network, and generating insights requires a robust edge computing platform that can operate in disconnected mode in remote locations, where connectivity to the cloud is often sparse. Our solution, based on Azure Stack Edge along with a managed private LTE network, offers a reliable and scalable connectivity fabric along with the right compute resources close to the farm.

MEC, NEC, and Azure: Bringing compute everywhere

MEC enables a low-latency connected Azure platform in your location, NEC provides a similar platform in a network carrier’s central office, and Azure provides a vast array of cloud services and controls.

At Microsoft, we fundamentally believe in providing options for all customers. Because it is impractical to deploy Azure datacenters in every major metropolitan city throughout the world, our new edge computing platforms provide a solution for specific low-latency application requirements that cannot be satisfied in the cloud. Software developers can use the same programming and deployment models for containerized applications using MEC where private mobile connectivity is required, deploying to NEC where apps are optimally located outside the customer’s premises, or directly in Azure. Many applications will look to take advantage of combined compute resources across the edge and public cloud.

We are building a new extended platform and continue to work with the growing ecosystem of mobile connectivity and edge computing partners. We are excited to enable a new wave of innovation unleashed by the convergence of 5G, private mobile connectivity, IoT and containerized software environments, powered by new and distributed programming models. The next phase of computing has begun.
Source: Azure

Building Xbox game streaming with Site Reliability best practices

Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.

This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (Project xCloud). In order to deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked: improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we’ll review some of the key learnings the team collected along the way. To explore the full story, see the journey of the xREO team.

Consistent gameplay requirements and the need to collaborate

A consistent experience is crucial to a successful game streaming session. A game streamed from the cloud has to feel like it is running on a nearby console. This means creating a globally distributed cloud solution that runs in many data centers, close to end users. Azure’s global infrastructure makes this possible, but operating a system that spans so many Azure regions is a serious challenge.

The Xbox developers who started architecting and building this technology understood that they could not just build the system and “throw it over the wall” to operations. Both teams had to come together and collaborate through the entire application lifecycle so the system could be designed from the start with consideration for how it would be operated in a production environment.

Architecting a cloud solution with operations in mind

In many large organizations, it is common to see development and operations teams working in silos. Developers don’t always consider operations when planning and building a system, while operations teams are not empowered to touch code even though they deploy it and operate it in production. With an SRE approach, system reliability is baked into the entire application lifecycle, and the team that operates the system in production is a valued contributor in the planning phase. Involving the xREO team in the design phase created a collaborative environment in which the teams made joint technology choices and architected a system that could operate at the scale required.

Leveraging containers to clearly define ownership

One of the first technological decisions the development and xREO teams made together was to implement a microservices architecture using container technologies. This allowed the development teams to containerize the .NET Core microservices they would own, decoupling them from the cloud infrastructure that ran the containers and that the xREO team would own.

Another technological decision both teams made early on was to use Kubernetes as the underlying container orchestration platform. This allowed the xREO team to leverage Azure Kubernetes Service (AKS), a managed Kubernetes cloud platform that simplifies the deployment of Kubernetes clusters, removing much of the operational complexity the team would otherwise face running multiple clusters across several Azure regions. These joint choices made ownership clear: the developers are responsible for everything inside the containers, and the xREO team is responsible for the AKS clusters and the other Azure services that make up the cloud infrastructure hosting those containers. Each team owns the deployment, monitoring, and operation of its respective piece in production.

This kind of approach creates clear accountability and allows for easier incident management in production, something that can be very challenging in a monolithic architecture where infrastructure and application logic have code dependencies and are hard to untangle when things go sideways.

Scaling through infrastructure automation

Another best practice the xREO team invested in was infrastructure automation. Deploying multiple cloud services manually in each Azure region was not scalable and would take too much time. Following a practice known as “infrastructure as code” (IaC), the team used Azure Resource Manager templates to create declarative definitions of cloud environments that allow deployment to multiple Azure regions with minimal effort.

With infrastructure managed as code, it can also be deployed using continuous integration and continuous delivery (CI/CD) to bring further automation to the process of deploying new Azure resources to existing data centers, updating infrastructure definitions, or bringing new Azure regions online when needed. Both IaC and CI/CD allowed the team to remain lean, avoid repetitive mundane work, and remove most of the risk of human error that comes with manual steps. Instead of spending time on manual work and checklists, the team can focus on further improving the platform and its resilience.
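As a hedged illustration of this pattern (not the xREO team's actual pipeline), the sketch below uses the azure-mgmt-resource Python package to deploy the same Resource Manager template to several regions in a loop; the template file, parameter names, and region list are placeholders.

    import json

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    SUB_ID = "<subscription-id>"                          # placeholder
    REGIONS = ["eastus", "westeurope", "southeastasia"]   # illustrative list

    client = ResourceManagementClient(DefaultAzureCredential(), SUB_ID)

    # A hypothetical ARM template describing one regional deployment.
    with open("regional-stack.json") as f:
        template = json.load(f)

    for region in REGIONS:
        rg = f"streaming-{region}-rg"
        client.resource_groups.create_or_update(rg, {"location": region})

        # The same declarative definition is deployed everywhere; only the
        # location parameter changes, so new regions come online with one loop.
        poller = client.deployments.begin_create_or_update(
            rg,
            f"deploy-{region}",
            {
                "properties": {
                    "mode": "Incremental",
                    "template": template,
                    "parameters": {"location": {"value": region}},
                }
            },
        )
        poller.result()  # wait for the deployment to finish before moving on

The same loop body can run from a CI/CD pipeline, so an infrastructure change reviewed once is rolled out to every region the same way.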

Site Reliability Engineering in action 

The journey of the xREO team started with a need to bring the best customer experience to gamers. This is a great example that shows how teams who want to delight customers with new experiences through cutting edge innovation must evolve the way they design, build, and operate software. Shifting their approach to operations and collaborating more closely with the development teams was the true transformation the xREO team has undergone.

With this new mindset in place, the team is now well positioned to continue building resilience and further scaling the system and, in doing so, deliver the promise of cloud game streaming to every gamer.

Resources

The full story of the xREO team
Additional stories: The DevOps journey at Microsoft
Microsoft Game Stack

Source: Azure

Announcing the preview of Azure Spot Virtual Machines

We’re announcing the preview of Azure Spot Virtual Machines. Azure Spot Virtual Machines provide access to unused Azure compute capacity at deep discounts. Spot pricing is available on single Virtual Machines in addition to Virtual Machine Scale Sets (VMSS). This enables you to deploy a broader variety of workloads on Azure while enjoying access to discounted pricing. Spot Virtual Machines offer the same characteristics as pay-as-you-go Virtual Machines, with differences in pricing and evictions. Spot Virtual Machines can be evicted at any time if Azure needs the capacity back.

The workloads that are ideally suited to run on Spot Virtual Machines include, but are not necessarily limited to, the following:

•    Batch jobs.
•    Workloads that can sustain and/or recover from interruptions.
•    Development and test.
•    Stateless applications that can use Spot Virtual Machines to scale out, opportunistically saving cost.
•    Short-lived jobs which can easily be run again if the Virtual Machine is evicted.

The preview of Spot Virtual Machines will replace the preview of Azure low-priority Virtual Machines on scale sets. Eligible low-priority Virtual Machines will be automatically transitioned to Spot Virtual Machines. Please refer to the FAQ for additional information.

Pricing

Unlike low-priority Virtual Machines, prices for Spot Virtual Machines will vary based on the capacity for a size or SKU in an Azure region. Spot pricing can give you insights into the availability of, and demand for, a given Azure Virtual Machine series and specific size in a region. Prices will change slowly to provide stability, allowing you to better manage your budget. In the Azure portal, you will have access to the current Azure Spot Virtual Machine prices so you can easily determine which region or Virtual Machine size best fits your needs. Spot prices are capped at pay-as-you-go prices.
 

Deployment

Spot Virtual Machines are easy to deploy and manage. Deploying a Spot Virtual Machine is similar to configuring and deploying a regular Virtual Machine. For example, in the Azure portal, you can simply select Azure Spot Instance to deploy a Spot Virtual Machine. You can also define your maximum price for your Spot Virtual Machines. Here are two options: 

You can choose to deploy your Spot Virtual Machines without capping the price. Azure will charge you the Spot Virtual Machine price at any given time, giving you peace of mind that your Virtual Machines will not be evicted for price reasons.
 
Alternatively, you can provide a specific maximum price to stay within your budget. Azure will not charge you above the maximum price you set and will evict the Virtual Machine if the Spot price rises above your defined maximum price.
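To make these options concrete, here is a hedged Python sketch (azure-mgmt-compute, newer SDK versions) that creates a single Spot VM with no price cap by setting the max price to -1; setting a positive dollar amount instead caps the price, as in the second option. The names, image, credentials, and network interface ID are placeholders, and field names may vary by SDK version.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUB_ID, RG = "<subscription-id>", "my-rg"   # placeholders
    NIC_ID = "<resource-id-of-existing-network-interface>"  # placeholder

    compute = ComputeManagementClient(DefaultAzureCredential(), SUB_ID)

    poller = compute.virtual_machines.begin_create_or_update(
        RG,
        "my-spot-vm",
        {
            "location": "eastus",
            # Spot-specific settings: deallocate on eviction, and a max price of
            # -1 ("charge the current Spot price, never evict for price reasons").
            # Replace -1 with a dollar amount to cap the price instead.
            "priority": "Spot",
            "eviction_policy": "Deallocate",
            "billing_profile": {"max_price": -1},
            "hardware_profile": {"vm_size": "Standard_D2s_v3"},
            "storage_profile": {
                "image_reference": {
                    "publisher": "Canonical",
                    "offer": "UbuntuServer",
                    "sku": "18.04-LTS",
                    "version": "latest",
                }
            },
            "os_profile": {
                "computer_name": "my-spot-vm",
                "admin_username": "azureuser",
                "admin_password": "<strong-password>",   # placeholder
            },
            "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
        },
    )
    vm = poller.result()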
 

There are a few other options available to lower costs:

If your workload does not require a specific Virtual Machine series and size, then you can find other Virtual Machines in the same region that may be cheaper.
If your workload is not dependent on a specific region, then you can find a different Azure region to reduce your cost.

Quota

As part of this announcement, to give you better flexibility, Azure is also rolling out a quota for Spot Virtual Machines that is separate from your pay-as-you-go Virtual Machine quota. The quota for Spot Virtual Machines and Spot VMSS instances is a single quota for all Virtual Machine sizes in a specific Azure region. This approach will give you easy access to a broader set of Virtual Machines.
 

Handling Evictions

Azure will try to keep your Spot Virtual Machine running and minimize evictions, but your workload should be prepared to handle evictions, as runtime for Azure Spot Virtual Machines and VMSS instances is not guaranteed. You can optionally get a 30-second eviction notice by subscribing to scheduled events. Virtual Machines can be evicted for the following reasons:

The Spot price has gone above the max price you defined for the Virtual Machine. Azure Spot Virtual Machines are evicted when the Spot price for the Virtual Machine you have chosen goes above the price you defined at the time of deployment. You can try to redeploy your Virtual Machine with a higher maximum price.
Azure needs to reclaim capacity.

In both scenarios, you can try to redeploy the Virtual Machine in the same region or availability zone.
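As a hedged sketch of how a workload running inside the VM might watch for that eviction notice, the following polls the Azure Instance Metadata Service scheduled events endpoint and reacts to a Preempt event; the checkpointing logic is a placeholder for whatever your application needs, and the API version shown is illustrative.

    import time

    import requests  # third-party HTTP client, used here for brevity

    # Scheduled events endpoint of the Azure Instance Metadata Service; it is
    # reachable only from inside the VM and requires the Metadata header.
    EVENTS_URL = "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"
    HEADERS = {"Metadata": "true"}

    def drain_and_checkpoint() -> None:
        """Placeholder: persist state and stop taking new work before eviction."""
        print("Checkpointing work before eviction...")

    while True:
        doc = requests.get(EVENTS_URL, headers=HEADERS).json()
        for event in doc.get("Events", []):
            # A Preempt event signals that this Spot VM is about to be evicted,
            # with roughly 30 seconds of notice.
            if event.get("EventType") == "Preempt":
                drain_and_checkpoint()
                # Acknowledge the event so the platform can proceed immediately.
                requests.post(
                    EVENTS_URL,
                    headers=HEADERS,
                    json={"StartRequests": [{"EventId": event["EventId"]}]},
                )
        time.sleep(10)  # poll often enough to catch the short notice window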

Best practices

Here are some effective ways to best utilize Azure Spot Virtual Machines:

For long-running operations, try to create checkpoints so that you can restart your workload from a previous known checkpoint to handle evictions and save time.
In scale-out scenarios, to save costs, you can have two VMSS, one with regular Virtual Machines and the other with Spot Virtual Machines. You can put both behind the same load balancer to opportunistically scale out.
Listen for eviction notifications in the Virtual Machine to get notified when your Virtual Machine is about to be evicted.
If you are willing to pay up to pay-as-you-go prices, set the eviction type to “Capacity eviction only”; in the API, provide “-1” as the max price, and Azure will never charge you more than the current Spot Virtual Machine price.
To handle evictions, build retry logic to redeploy Virtual Machines. If you do not require a specific Virtual Machine series and size, try deploying a different size that matches your workload needs.
When deploying VMSS, select max spread in the portal’s management tab, or set FD==1 in the API, to find capacity in a zone or region.

Learn more

Spot Virtual Machine details
Spot Virtual Machine pricing: Windows and Linux
Create Spot Virtual Machines in Portal
Create Spot Virtual Machines in Azure CLI
Create Spot Virtual Machines in Azure PowerShell
Create Spot Virtual Machines in Azure Resource Manager templates
Create Spot VMSS in Azure Resource Manager templates
Planned Azure Batch support for Spot Virtual Machines 

Source: Azure

Microsoft has validated the Lenovo ThinkSystem SE350 edge server for Azure Stack HCI

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure's hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?

Microsoft and Lenovo have teamed up to validate the Lenovo ThinkSystem SE350 for Microsoft's Azure Stack HCI program. The ThinkSystem SE350 was designed and built with the unique requirements of edge servers in mind. It is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options, and can be easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 solution has a focus on smart connectivity, business security, and manageability for harsh environments. To see all Lenovo servers validated for Azure Stack HCI, visit the Azure Stack HCI catalog.

Lenovo ThinkSystem SE350:

The ThinkSystem SE350 is the latest workhorse for the edge. Designed and built with the unique requirements of edge servers in mind, it is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options, and is easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 is a rugged, compact-sized edge solution with a focus on smart connectivity, business security, and manageability for harsh environments.

The ThinkSystem SE350 is an Intel® Xeon® D processor-based server with a 1U-height, half-width, short-depth case that can go anywhere. Mount it on a wall, stack it on a shelf, or install it in a rack. This rugged edge server can operate anywhere from 0-55°C and delivers full performance in high-dust and high-vibration environments.

Information availability is another challenging issue for users at the edge, who require insight into their operations at all times to ensure they are making the right decisions. The ThinkSystem SE350 is designed to provide several connectivity options, with wired connections as well as secure Wi-Fi and LTE wireless connectivity. This purpose-built compact server is reliable for a wide variety of edge and IoT workloads.

Microsoft Azure Stack HCI:

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine (VM) performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network micro-segmentation, and native encryption.

You can take advantage of cloud and on-premises working together by connecting your hyperconverged infrastructure platform to the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services:

Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure, with advanced analytics powered by artificial intelligence.

Cloud Witness, to use Azure as the lightweight tie-breaker for cluster quorum.

Azure Backup for offsite data protection and to protect against ransomware.

Azure Update Management for update assessment and update deployments for Windows Virtual Machines running in Azure and on-premises.

Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site VPN.

Azure File Sync to sync your file server with the cloud.

Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.

By deploying the Microsoft + Lenovo HCI solution, you can quickly address your branch office and edge needs with high performance and resiliency, while protecting your business assets with the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.
Source: Azure

Extended filesystem programming capabilities in Azure Data Lake Storage

Since the general availability of Azure Data Lake Storage Gen2 in February 2019, customers have been getting insights at cloud scale faster than ever before. Integration with analytics engines is critical for their analytics workloads, and equally important is the ability to programmatically ingest, manage, and analyze data. This ability is critical for key areas of enterprise data lakes such as data ingestion, event-driven big data platforms, machine learning, and advanced analytics. Programmatic access is possible today using Azure Data Lake Storage Gen2 REST APIs or Blob REST APIs. In addition, customers can enable continuous integration and continuous delivery (CI/CD) pipelines using Blob PowerShell and CLI capabilities via multi-protocol access. As part of the journey to enable our developer ecosystem, our goal is to make customer application development easier than ever before.

We are excited to announce the public preview of the .NET SDK, Python SDK, Java SDK, PowerShell, and CLI for filesystem operations on Azure Data Lake Storage Gen2. Customers who are used to the familiar filesystem programming model can now implement this model using the .NET, Python, and Java SDKs. Customers can also now incorporate these filesystem operations into their CI/CD pipelines using PowerShell and the CLI, thereby enriching CI/CD pipeline automation for big data workloads on Azure Data Lake Storage Gen2. As part of this preview, the SDKs, PowerShell, and CLI include support for CRUD operations on filesystems, directories, files, and permissions through filesystem semantics for Azure Data Lake Storage Gen2.
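As a hedged illustration of the filesystem programming model in the Python SDK (the azure-storage-file-datalake preview package), the sketch below creates a filesystem and directory, uploads a file, and sets POSIX-style permissions. The account name, key, and paths are placeholders, and method names may shift slightly while the SDK is in preview.

    from azure.storage.filedatalake import DataLakeServiceClient

    # Placeholders: supply your own storage account name and account key.
    service = DataLakeServiceClient(
        account_url="https://<account-name>.dfs.core.windows.net",
        credential="<account-key>",
    )

    # Filesystem and directory CRUD through filesystem semantics.
    filesystem = service.create_file_system(file_system="raw-data")
    directory = filesystem.create_directory("sensors/2019/12")

    # Create a file, append content, then flush to commit the data.
    file_client = directory.create_file("readings.csv")
    data = b"device_id,temperature\n42,21.3\n"
    file_client.append_data(data, offset=0, length=len(data))
    file_client.flush_data(len(data))

    # POSIX-style permissions: owner rwx, group r-x, others no access.
    directory.set_access_control(permissions="rwxr-x---")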

Detailed reference documentation for all these filesystem semantics is provided in the links below. These links will also help you get started and provide feedback.

 .NET SDK
 Python SDK
 Java SDK
 PowerShell
 Azure CLI

This public preview is available globally in all regions. Your participation and feedback are critical to help us enrich your development experience. Join us in our journey.
Source: Azure

Achieve operational excellence in the cloud with Azure Advisor

Many customers have questions when it comes to managing cloud operations. How can I implement real-time cloud governance at scale? What’s the best way to monitor my cloud workloads? How can I get help when I need it?

Azure offers a great deal of guidance when it comes to optimizing your cloud operations. At the organizational level, the Microsoft Cloud Adoption Framework for Azure can help you design and implement your approach to management and governance in the cloud. At the cloud resource level, Azure Advisor provides personalized recommendations to help you optimize your Azure workloads for a variety of objectives—including cost savings, security, performance, and availability—based on your usage and configurations.

Recently, Advisor introduced a new recommendation category—operational excellence—to help you follow best practices for process and workflow efficiency, resource manageability, and deployment.

Introducing a new Azure Advisor recommendation category: operational excellence

Azure Advisor now offers a new category of recommendations—operational excellence—to help you optimize your cloud process and workflow efficiency, resource manageability, and deployment practices. You can get these recommendations from Advisor in the operational excellence tab of the Advisor dashboard. They’re also available via Advisor’s CLI and API.
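As a hedged example of the API route, the following Python sketch uses the azure-mgmt-advisor package to list recommendations for a subscription and keep only the new category; the category string and field names are assumptions based on the existing shape of the Advisor API.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.advisor import AdvisorManagementClient

    SUB_ID = "<subscription-id>"  # placeholder

    client = AdvisorManagementClient(DefaultAzureCredential(), SUB_ID)

    # List every Advisor recommendation for the subscription, then keep only
    # the operational excellence category.
    for rec in client.recommendations.list():
        if rec.category == "OperationalExcellence":
            print(f"{rec.impact:>8}  {rec.short_description.problem}")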

The operational excellence category is launching with nine recommendations, and more are on the way. Examples include creating Azure Service Health alerts to be notified when Azure service issues affect you; repairing invalid log alert rules; and following best practices using Azure Policy, such as tag management, geo-compliance requirements, and specifying permitted virtual machine (VM) SKUs for deployment. Together, these recommendations will help you optimize your cloud operations practices.

New operational excellence recommendations

Here’s a quick round-up of the new operational excellence recommendations in Advisor at launch:

Create Azure Service Health alerts to be notified when Azure service issues affect you.
Design your storage accounts to prevent hitting the maximum subscription limit.
Ensure you have access to Azure cloud experts when you need it.
Repair invalid log alert rules.
Follow best practices using Azure Policy, including tag management, geo-compliance requirements, and VM audits for managed disks.

For more detailed information on Advisor’s operational excellence recommendations, refer to our documentation. Be sure to check back regularly, as we’re constantly adding new recommendations.

Review your operational excellence recommendations today

Visit Advisor in the Azure portal here to start optimizing your cloud workloads for operational excellence. For more in-depth guidance, visit our documentation. Let us know if you have a suggestion for Advisor by submitting an idea here.
Source: Azure

Enabling collaborative bot development across your organization for any user

This post was co-authored by Omar Aftab, Partner Director of Program Management, Power Virtual Agents.

Conversational artificial intelligence (AI) is enabling organizations to improve their business in areas like customer service and employee engagement by automating some of the most commonly requested services, which frees up employees to take on more value-adding activities. While the benefits of conversational AI are well established, determining who in an organization should build these solutions is not always clear.

As is true of many applications, conversational AI solutions (or bots) can be built using software-as-a-service (SaaS) or platform-as-a-service (PaaS) offerings. Consequently, organizations are often forced to choose between empowering business users who are closest to the business problems and empowering developers with coding experience to have full control over how these solutions are built, without many options to bridge the gap and allow for collaboration between the two. However, with the integration of Bot Framework Skills into Microsoft Power Virtual Agents (a graphical interface offering for business users creating bots, now generally available), Microsoft uniquely empowers both business users and developers to collaborate seamlessly in building conversational AI solutions.

In the bot building journey, bot builders across the organization should not work in silos. If a business user is building a bot but wants to add a nuanced scenario, they should be able to collaborate with a developer who can customize the bot further. Similarly, developers building a bot can use bots that have been built by business users as skills.

Microsoft offers an end-to-end, no-cliffs bot building experience with Power Virtual Agents and Bot Framework.

Power Virtual Agents provides a no-code experience for bot development – ideal for business users and domain experts to easily build a bot, without having to worry about the technical aspects of bot development.
Bot Framework is an open-source SDK and tools purpose-built for bot development – ideal for developers who want to build a bot using a code experience and want full control of technical aspects of bot development, including language model ownership, and visual design. Additionally, Azure Bot Service allows developers to host and deploy their bots to popular channels like Teams and other messaging platforms where users will interact with the bot.
Bot Framework Skills offer a no-cliffs bot building experience – no matter your starting point. With Bot Framework Skills, Power Virtual Agents users have a no-cliffs bot building experience because they can collaborate with Bot Framework developers to extend their bots with custom capabilities. Equally important, Bot Framework developers can extend their bot as a skill and allow subject matter experts to update bot conversations.

For example, suppose an organization is creating a travel bot using Power Virtual Agents. The business users build out the dialogs with a UI-based experience that allows the bot to handle customers’ intents, such as Check miles and rewards, Check flight status, Update account information, and Book a flight.

However, what if someone in the organization has already built a Book a Flight skill with custom language models using Bot Framework and Language Understanding service as illustrated below?

In this scenario, business experts can collaborate with the developer who has built this flight booking skill by selecting it as an action in Power Virtual Agents.

As conversational AI adoption continues to grow, we believe it is important for organizations to take an interdisciplinary team approach to bot development. For this reason, Microsoft offers an end-to-end, no-cliffs bot building experience that empowers business subject matter experts and developers alike to collaborate.

Get started with Power Virtual Agents.
Get started with Bot Framework.
Learn more about how to extend your Power Virtual Agents bot with Bot Framework Skills.

Source: Azure

Faster and cheaper: SQL on Azure continues to outshine AWS

Over a million on-premises SQL Server databases have moved to Azure, representing a massive shift in where customers are collecting, storing, and analyzing their data.

Modernizing your databases provides the opportunity to transform your data architecture. SQL Server on Azure Virtual Machines allows you to maintain control over your database and operating system while still benefiting from cloud flexibility and scale. For some, this represents a step in the journey to a fully-managed database, while others choose this deployment option for compatibility with on-premises workloads such as SQL Server Reporting Services.

Whatever the reason, migrating SQL workloads to Azure Virtual Machines is a popular option. Azure customers benefit from our unique built-in security and manageability capabilities, which automate tasks like patching and backups. In addition to providing these unparalleled innovations, it is important to provide customers with the best price-performance possible. Once again, SQL Server on Azure Virtual Machines comes out on top.

SQL Server on Azure leads in price-performance

GigaOm, an independent research firm, recently published a study comparing throughput performance between SQL Server on Azure Virtual Machines and SQL Server on AWS EC2. Azure emerged as the clear leader across both Windows and Linux for mission-critical workloads, up to 3.4 times faster and up to 87 percent less expensive than AWS EC2.1

The images above are performance and price-performance comparisons from the GigaOm report. The performance metric is throughput (transactions per second, tps); higher is better. The price-performance metric is three-year pricing divided by throughput (transactions per second, tps); lower is better.

With Azure Ultra Disk, GigaOm was able to achieve 80,000 input/output operations per second (IOPS) on a single disk, maxing out the virtual machine’s throughput limit and well exceeding the capabilities of AWS provisioned IOPS.2

A key reason why Azure price-performance is superior to AWS is Azure BlobCache, which provides free reads. Given that most online transaction processing (OLTP) workloads today come with a ten-to-one read-to-write ratio, this provides customers with significant savings.

Unmatched innovation from the team that brought SQL Server to the world

With a proven track record of over 25 years, the engineering team behind SQL Server continues to drive security and innovation to meet our customers’ changing needs. Whether executing on-premises, in the cloud, or on the edge, the result is the most comprehensive, consistent, and secure solution for your data.

Azure SQL Virtual Machines offer unique built-in security and manageability, including automatic security patching, automated high availability, and database recovery to a specific point in time. Azure’s unique security capabilities include advanced data security for SQL Server on Azure Virtual Machines, which enables both vulnerability assessments and advanced threat protection. Customers self-installing SQL Server on virtual machines in the cloud can now register with our resource provider to enable this same functionality.

Get started with SQL in Azure today

Migrate from SQL Server on-premises to SQL Server 2019 in Azure Virtual Machines today. Get started with preconfigured Azure SQL Virtual Machine images on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, and Windows in minutes. Take advantage of the Azure Hybrid Benefit to reuse your existing on-premises Windows server and SQL Server licenses in Azure for significant savings.

When you add it up, SQL databases are simply best on Azure. Learn more about why SQL Server is best on Azure, and use the $200 in Azure credits with a free account3 or Azure Dev or Test credits4 for additional cost savings.

 

1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 4x P30 1TB Storage Pool data (Read-Only Cache) + 1x P20 0.5TB log (No Cache) and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 4TB gp2 data + 1x 1TB gp2 log. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of October 2019. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs.  Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

2Claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price-performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 1x Ultra 1.5TB with 650MB per sec throughput and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 1.5TB io1 provisioned log + data. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in north Europe for SQL Server on Azure Virtual Machines and Ireland for AWS EC2 as of October 2019. Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

3Additional information about $200 Azure free account available at https://azure.microsoft.com/en-us/free/.

4Dev or Test Azure credits and pricing available for paid Visual Studio subscribers only.
Source: Azure