Azure Sphere’s customized Linux-based OS

Security and resource constraints are often at odds with each other. While some security measures involve making code smaller by removing attack surfaces, others require adding new features, which consume precious flash and RAM. How did Microsoft manage to create a secure Linux-based OS that runs on the Azure Sphere MCU?

The Azure Sphere OS begins with a long-term support (LTS) Linux kernel. The Azure Sphere development team then customizes the kernel, adding security features and trimming resource utilization to fit within the limited resources available on an Azure Sphere chip. In addition, applications, including basic OS services, run isolated for security. Each application must opt in to use the peripherals or network resources it requires. The result is an OS purpose-built for Internet of Things (IoT) and security, which creates a trustworthy platform for IoT experiences.
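That opt-in model is visible in each application's manifest, where every peripheral and network endpoint must be declared up front. As a hedged sketch, a declaration in the Azure Sphere app_manifest.json format might look roughly like this (all values are placeholders):

```json
{
  "Name": "ExampleApp",
  "ComponentId": "00000000-0000-0000-0000-000000000000",
  "EntryPoint": "/bin/app",
  "Capabilities": {
    "Gpio": [ 8, 9 ],
    "AllowedConnections": [ "example.azure-devices.net" ]
  }
}
```

Anything not listed under Capabilities is simply unavailable to the application, which is how the OS keeps the attack surface per app small.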

At the 2018 Linux Security Summit, Ryan Fairfax, an Azure Sphere engineering lead, presented a deep dive into the Azure Sphere OS and the process of fitting Linux security in 4 MiB of RAM. In this talk, Ryan covers the security components of the system, including a custom Linux Security Module, modifications and extensions to existing kernel components, and user space components that form the security backbone of the OS. He also discusses the challenges of taking modern security techniques and fitting them in resource-constrained devices. I hope that you enjoy this presentation!

Watch the video to learn more about the development of Azure Sphere’s secure, Linux-based OS. You can also look forward to Ryan’s upcoming talk on Using Yocto to Build an IoT OS Targeting a Crossover SoC at the Embedded Linux Conference in San Diego on August 22.

Visit our website for documentation and more information on how to get started with Azure Sphere.

 

Source: Azure

Azure Security Center single click remediation and Azure Firewall JIT support

This blog post was co-authored by Rotem Lurie, Program Manager, Azure Security Center.​

Azure Security Center provides you with a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using secure score in Azure. Security Center helps you identify and perform the hardening tasks recommended as security best practices and implement them across your machines, data services, and apps. This includes managing and enforcing your security policies and making sure your Azure Virtual Machines, non-Azure servers, and Azure PaaS services are compliant.

Today, we are announcing two new capabilities: the preview of remediating recommendations across multiple resources in a single click using secure score, and the general availability (GA) of just-in-time (JIT) virtual machine (VM) access for Azure Firewall. Now you can secure your Azure Firewall protected environments with JIT, in addition to your network security group (NSG) protected environments.

Single click remediation for bulk resources in preview

With so many services offering security benefits, it's often hard to know what steps to take first to secure and harden your workload. Secure score in Azure reviews your security recommendations and prioritizes them for you, so you know which recommendations to perform first. This helps you find the most serious security vulnerabilities so you can prioritize investigation. Secure score is a tool that helps you assess your workload security posture.

To simplify remediation of security misconfigurations and help you quickly improve your secure score, we are introducing a new capability that lets you remediate a recommendation across multiple resources in a single click.

This operation lets you select the resources you want to apply the remediation to and launches a remediation action that configures the setting on your behalf. Single click remediation is available today for preview customers as part of the Security Center recommendations blade.

Look for the 1-click fix label next to a recommendation, then select the recommendation:

Once you choose the resources you want to remediate and select Remediate, the remediation takes place and the resources move to the Healthy resources tab. Remediation actions are logged in the activity log to provide additional details in case of a failure.

Remediation is available for the following recommendations in preview:

Web Apps, Function Apps, and API Apps should only be accessible over HTTPS
Remote debugging should be turned off for Function Apps, Web Apps, and API Apps
CORS should not allow every resource to access your Function Apps, Web Apps, or API Apps
Secure transfer to storage accounts should be enabled
Transparent data encryption for Azure SQL Database should be enabled
Monitoring agent should be installed on your virtual machines
Diagnostic logs in Azure Key Vault and Azure Service Bus should be enabled
Diagnostic logs in Service Bus should be enabled
Vulnerability assessment should be enabled on your SQL servers
Advanced data security should be enabled on your SQL servers
Vulnerability assessment should be enabled on your SQL managed instances
Advanced data security should be enabled on your SQL managed instances

Single click remediation is part of Azure Security Center’s free tier.

Just-in-time virtual machine access for Azure Firewall is generally available

Today we are announcing the general availability of just-in-time virtual machine access for Azure Firewall. You can now secure your Azure Firewall protected environments with JIT, in addition to your NSG protected environments.

JIT VM access reduces your VMs' exposure to volumetric network attacks by providing controlled access to VMs only when needed, using your NSG and Azure Firewall rules.

When you enable JIT for your VMs, you create a policy that determines the ports to be protected, how long the ports are to remain open, and approved IP addresses from where these ports can be accessed. This policy helps you stay in control of what users can do when they request access.
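Conceptually, such a policy pairs each protected port with a maximum access duration and the allowed source addresses. A hedged sketch of that shape (field names modeled on the Security Center JIT policy API; the subscription path and values are placeholders):

```json
{
  "virtualMachines": [
    {
      "id": "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM",
      "ports": [
        {
          "number": 3389,
          "protocol": "TCP",
          "allowedSourceAddressPrefix": "10.0.0.0/24",
          "maxRequestAccessDuration": "PT3H"
        }
      ]
    }
  ]
}
```

Here port 3389 (RDP) may only be opened for up to three hours per approved request, and only to callers from the 10.0.0.0/24 range.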

Requests are logged in the activity log, so you can easily monitor and audit access. The JIT blade also helps you quickly identify existing virtual machines that have JIT enabled and virtual machines where JIT is recommended.

Azure Security Center displays your recently approved requests. The Configured VMs tab reflects the last user, the time, and the open ports for the previous approved JIT requests. When a user creates a JIT request for a VM protected by Azure Firewall, Security Center provides the user with the proper connection details to your virtual machine, translated directly from your Azure Firewall destination network address translation (DNAT).

This feature is available in the Standard pricing tier of Security Center, which you can try for free for the first 60 days.

To learn more about these features in Security Center, visit “Remediate recommendations in Azure Security Center,” just-in-time VM access documentation, and Azure Firewall documentation. To learn more about Azure Security Center, please visit the Azure Security Center home page.
Source: Azure

Announcing the general availability of Python support in Azure Functions

Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions 2.0 runtime. These can be published as code or Docker containers to a Linux-based serverless hosting platform in Azure. This stack powers the solution innovations of our early adopters, with customers such as General Electric Aviation and TCF Bank already using Azure Functions written in Python for their serverless production workloads. Our thanks to them for their continued partnership!

In the words of David Havera, blockchain Chief Technology Officer of the GE Aviation Digital Group, "GE Aviation Digital Group's hope is to have a common language that can be used for backend Data Engineering to front end Analytics and Machine Learning. Microsoft have been instrumental in supporting this vision by bringing Python support in Azure Functions from preview to life, enabling a real world data science and Blockchain implementation in our TRUEngine project."

Throughout the Python preview for Azure Functions we gathered feedback from the community to build easier authoring experiences, introduce an idiomatic programming model, and create a more performant and robust hosting platform on Linux. This post is a one-stop summary for everything you need to know about Python support in Azure Functions and includes resources to help you get started using the tools of your choice.

Bring your Python workloads to Azure Functions

Many Python workloads align very nicely with the serverless model, allowing you to focus on your unique business logic while letting Azure take care of how your code is run. We’ve been delighted by the interest from the Python community and by the productive solutions built using Python on Functions.

Workloads and design patterns

While this is by no means an exhaustive list, here are some examples of workloads and design patterns that translate well to Azure Functions written in Python.

Simplified data science pipelines

Python is a great language for data science and machine learning (ML). You can leverage the Python support in Azure Functions to provide serverless hosting for your intelligent applications. Consider a few ideas:

Use Azure Functions to deploy a trained ML model along with a scoring script to create an inferencing application.

Leverage triggers and data bindings to ingest, move, prepare, transform, and process data using Functions.
Use Functions to introduce event-driven triggers to re-training and model update pipelines when new datasets become available.
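The first idea above, wrapping a trained model behind a scoring function, can be sketched in a few lines of plain Python. This is a hypothetical illustration only: the stand-in model below replaces what would normally be a pickled model loaded once at import time, and `score` stands in for the body of an HTTP-triggered function:

```python
def fake_model(features):
    """Stand-in for a trained model's predict(); a real app would load
    a serialized model once at module import time instead."""
    return sum(features)

def score(payload: dict) -> dict:
    """Hypothetical scoring entry point for an inferencing function.

    Takes a JSON-like payload, extracts the feature vector, and returns
    the prediction in a JSON-serializable dict.
    """
    features = payload["features"]
    return {"prediction": fake_model(features)}
```

Loading the model at import time (rather than per request) matters in a serverless host, since the same worker instance is reused across invocations.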

Automated resource management

As an increasing number of assets and workloads move to the cloud, there's a clear need to provide more powerful ways to manage, govern, and automate the corresponding cloud resources. Such automation scenarios require custom logic that can be easily expressed using Python. Here are some common scenarios:

Process Azure Monitor alerts generated by Azure services.
React to Azure events captured by Azure Event Grid and apply operational requirements on resources.

Leverage Azure Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a Python function.
Perform scheduled operational tasks on virtual machines, SQL Server, web apps, and other Azure resources.

Powerful programming model

To power accelerated Python development, Azure Functions provides a productive programming model based on event triggers and data bindings. The programming model is supported by a world-class end-to-end developer experience that spans from building and debugging locally to deploying and monitoring in the cloud.

The programming model is designed to provide a seamless experience for Python developers so you can quickly start writing functions using code constructs that you're already familiar with, or import existing .py scripts and modules to build the function. For example, you can implement your functions as asynchronous coroutines using the async def keyword or send monitoring traces to the host using the standard logging module. Additional dependencies to pip install can be configured using the requirements.txt file.
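As a rough illustration of that shape, here is a plain-Python sketch of an async entry point. Note the hedge: in a real Azure Functions app, the entry point lives in `__init__.py` and receives SDK types such as `azure.functions.HttpRequest`; the dict used here is a stand-in so the sketch stays self-contained:

```python
import asyncio
import logging

# Stand-in for an HTTP-triggered entry point. In a real function app this
# would take an azure.functions.HttpRequest instead of a plain dict.
async def main(req: dict) -> str:
    # Traces sent through the standard logging module surface in the
    # Functions host logs (and in Application Insights when configured).
    logging.info("Processing request for %s", req.get("name"))
    name = req.get("name", "world")
    return f"Hello, {name}!"
```

Calling `asyncio.run(main({"name": "Azure"}))` returns `"Hello, Azure!"`; inside the Functions host, the runtime awaits the coroutine for you.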

With the event-driven programming model in Functions, based on triggers and bindings, you can easily configure the events that will trigger the function execution and any data sources the function needs to orchestrate with. This model helps increase productivity when developing apps that interact with multiple data sources by reducing the amount of boilerplate code, SDKs, and dependencies that you need to manage and support. Once configured, you can quickly retrieve data from the bindings or write back using the method attributes of your entry-point function. The Python SDK for Azure Functions provides a rich API layer for binding to HTTP requests, timer events, and other Azure services, such as Azure Storage, Azure Cosmos DB, Service Bus, Event Hubs, or Event Grid, so you can use productivity enhancements like autocomplete and Intellisense when writing your code. By leveraging the Azure Functions extensibility model, you can also bring your own bindings to use with your function, so you can also connect to other streams of data like Kafka or SignalR.
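For instance, a queue-triggered function that writes a result blob declares both its trigger and its output binding declaratively in function.json. This is a typical sketch; the queue name, blob path, and connection setting names are placeholders:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-items",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputblob",
      "type": "blob",
      "direction": "out",
      "path": "processed/{id}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The function body then simply receives `msg` and writes to `outputblob` as parameters, with no storage SDK plumbing of its own.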

Easier development

As a Python developer, you can use your preferred tools to develop your functions. The Azure Functions Core Tools will enable you to get started using trigger-based templates, run locally to test against real-time events coming from the actual cloud sources, and publish directly to Azure, while automatically invoking a server-side dependency build on deployment. The Core Tools can be used in conjunction with the IDE or text editor of your choice for an enhanced authoring experience.

You can also choose to take advantage of the Azure Functions extension for Visual Studio Code for a tightly integrated editing experience to help you create a new app, add functions, and deploy, all within a matter of minutes. The one-click debugging experience enables you to test your functions locally, set breakpoints in your code, and evaluate the call stack, simply with the press of F5. Combine this with the Python extension for Visual Studio Code, and you have an enhanced Python development experience with auto-complete, Intellisense, linting, and debugging.

For a complete continuous delivery experience, you can now leverage the integration with Azure Pipelines, one of the services in Azure DevOps, via an Azure Functions-optimized task to build the dependencies for your app and publish them to the cloud. The pipeline can be configured using an Azure DevOps template or through the Azure CLI.
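One plausible shape for such a pipeline is sketched below. This is an assumption-laden illustration, not a verified template: the service connection and app name are placeholders, and it is modeled on the Azure Functions deployment task for Azure Pipelines:

```yaml
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.6'
  # Build dependencies into the folder the Linux Functions host expects
  - script: pip install -r requirements.txt --target=".python_packages/lib/site-packages"
    displayName: 'Install dependencies'
  - task: AzureFunctionApp@1
    inputs:
      azureSubscription: '<service-connection>'   # placeholder
      appType: 'functionAppLinux'
      appName: '<your-app-name>'                  # placeholder
```

The key design point is that dependencies are built on a Linux agent matching the hosting environment, rather than on a developer's local machine.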

Advanced observability and monitoring through Azure Application Insights is also available for functions written in Python, so you can monitor your apps using the live metrics stream, collect data, query execution logs, and view the distributed traces across a variety of services in Azure.

Host your Python apps with Azure Functions

Host your Python apps with the Azure Functions Consumption plan or the Azure Functions Premium plan on Linux.

The Consumption plan is now generally available for Linux-based hosting and ready for production workloads. This serverless plan provides event-driven dynamic scale and you are charged for compute resources only when your functions are running. Our Linux plan also now has support for managed identities, allowing your app to seamlessly work with Azure resources such as Azure Key Vault, without requiring additional secrets.

The Consumption plan for Linux hosting also includes a preview of integrated remote builds to simplify dependency management. This new capability is available as an option when publishing via the Azure Functions Core Tools and enables you to build in the cloud on the same environment used to host your apps as opposed to configuring your local build environment in alignment with Azure Functions hosting.

Workloads that require advanced features such as more powerful hardware, the ability to keep instances warm indefinitely, and virtual network connectivity can benefit from the Premium plan with Linux-based hosting now available in preview.

With the Premium plan for Linux hosting you can choose between bringing only your app code or bringing a custom Docker image to encapsulate all your dependencies, including the Azure Functions runtime as described in the documentation “Create a function on Linux using a custom image.” Both options benefit from avoiding cold start and from scaling dynamically based on events.

Next steps

Here are a few resources you can leverage to start building your Python apps in Azure Functions today:

Build your first Azure Functions in Python using the command line tools or Visual Studio Code.
Learn more about the programming model using the developer guide.
Explore the Serverless Library samples to find a suitable example for your data science, automation, or web workload.
Sign up for an Azure free account, if you don’t have one yet.

On the Azure Functions team, we are committed to providing a seamless and productive serverless experience for developing and hosting Python applications. With so much being released now and coming soon, we’d love to hear your feedback and learn more about your scenarios. You can reach the team on Twitter and on GitHub. We actively monitor StackOverflow and UserVoice as well, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Source: Azure

Azure Archive Storage expanded capabilities: faster, simpler, better

Since launching Azure Archive Storage, we have seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data, such as application backups, healthcare records, and autonomous driving recordings, that might previously have been deleted can instead be stored in Azure Storage's Archive tier in an offline state, then rehydrated to an online tier when needed. Earlier this month, we made Azure Archive Storage even more affordable by reducing prices by up to 50 percent in some regions, as part of our commitment to provide the most cost-effective data storage offering.

We’ve gathered your feedback regarding Azure Archive Storage, and today, we’re happy to share three archive improvements in public preview that make our service even better.

1. Priority retrieval from Azure Archive

To read data stored in Azure Archive Storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and takes a matter of hours to complete. Today we’re sharing the public preview release of priority retrieval from archive allowing for much faster offline data access. Priority retrieval allows you to flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high priority action. By paying a little bit more for the priority rehydration operation, your archive retrieval request is placed in front of other requests and your offline data is expected to be returned in less than one hour.

Priority retrieval is recommended to be used for emergency requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals which complete in less than 15 hours. But on rare occasions, a retrieval time of an hour or less is required. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see Blob Storage Rehydration.

The archive retrieval options now provided under the optional parameter are:

Standard rehydrate-priority is the new name for what Archive has provided over the past two years and is the default option for archive SetBlobTier and CopyBlob requests, with retrievals taking up to 15 hours.
High rehydrate-priority fulfills the need for urgent data access from archive, with retrievals for blobs under 10 GB typically taking less than one hour.
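In REST terms, the priority is expressed as a header on the Set Blob Tier call. A hedged sketch of such a request (the account, container, and blob names are placeholders, and authentication headers are omitted):

```http
PUT https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=tier HTTP/1.1
x-ms-version: 2019-02-02
x-ms-access-tier: Hot
x-ms-rehydrate-priority: High
```

Omitting x-ms-rehydrate-priority gives the default Standard behavior described above.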

Regional priority retrieval demand at the time of request can affect the speed at which your data rehydration is completed. In most scenarios, a high rehydrate-priority request may return your Archive data in under one hour. In the rare scenario where archive receives an exceptionally large amount of concurrent high rehydrate-priority requests, your request will still be prioritized over standard rehydrate-priority but may take one to five hours to return your archive data. In the extremely rare case that any high rehydrate-priority requests take over five hours to return archive blobs under a few GB, you will not be charged the priority retrieval rates.

2. Upload blob direct to access tier of choice (hot, cool, or archive)

Blob-level tiering for general-purpose v2 and blob storage accounts allows you to easily store blobs in the hot, cool, or archive access tiers all within the same container. Previously when you uploaded an object to your container, it would inherit the access tier of your account and the blob’s access tier would show as hot (inferred) or cool (inferred) depending on your account configuration settings. As data usage patterns change, you would change the access tier of the blob manually with the SetBlobTier API or automate the process with blob lifecycle management rules.

Today we’re sharing the public preview release of Upload Blob Direct to Access tier, which allows you to upload your blob using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This new capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. For more information, please see Blob Storage Access Tiers.

3. CopyBlob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. This holds especially true for data in Archive that needs to be read but still kept in Archive. The public preview release of CopyBlob enhanced capabilities builds upon our existing CopyBlob API with added support for the archive access tier, priority retrieval from archive, and direct to access tier of choice.

The CopyBlob API now supports the archive access tier, allowing you to copy data into and out of the archive access tier within the same storage account. With our access tier of choice enhancement, you can set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you can also specify x-ms-rehydrate-priority to control how quickly the copy is created in the destination hot or cool tier. Please see Blob Storage Rehydration and the following table for information on the new CopyBlob access tier capabilities.

 

| | Hot tier source | Cool tier source | Archive tier source |
|---|---|---|---|
| Hot tier destination | Supported | Supported | Supported within the same account; pending rehydrate |
| Cool tier destination | Supported | Supported | Supported within the same account; pending rehydrate |
| Archive tier destination | Supported | Supported | Unsupported |
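A hedged sketch of a CopyBlob request covering the archive-source-to-hot-destination case (placeholder names, auth omitted):

```http
PUT https://myaccount.blob.core.windows.net/mycontainer/copy-of-blob HTTP/1.1
x-ms-version: 2019-02-02
x-ms-copy-source: https://myaccount.blob.core.windows.net/mycontainer/archived-blob
x-ms-access-tier: Hot
x-ms-rehydrate-priority: High
```

The original blob stays untouched in the archive tier while the copy rehydrates into the hot tier, matching the "work on a temporary copy" scenario described above.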

Getting Started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and CopyBlob enhancements) are supported by the most recent releases of the Azure Portal, .NET Client Library, Java Client Library, and Python Client Library. As always, you can also directly use the Storage Services REST API (version 2019-02-02 and greater). In general, we always recommend using the latest version regardless of whether you are using these new features.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and are looking forward to hearing your feedback about these features through email at ArchiveFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.

Thanks, from the entire Azure Storage Team!
Source: Azure

Announcing the general availability of Azure Ultra Disk Storage

Today, we are announcing the general availability (GA) of Microsoft Azure Ultra Disk Storage, a new Managed Disks offering that delivers unprecedented, highly scalable performance with sub-millisecond latency for the most demanding Azure Virtual Machines and container workloads. With Ultra Disk Storage, customers can now lift and shift mission-critical enterprise applications to the cloud, including applications like SAP HANA, top-tier SQL databases such as SQL Server, Oracle DB, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB and Cassandra. With the introduction of Ultra Disk Storage, Azure now offers four types of persistent disks: Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives our customers a comprehensive set of disk offerings for every workload.

Ultra Disk Storage is designed to provide customers with extreme flexibility when choosing the right performance characteristics for their workloads. Customers can now have granular control on the size, IOPS, and bandwidth of Ultra Disk Storage to meet their specific performance requirements. Organizations can achieve the maximum I/O limit of a virtual machine (VM) with Ultra Disk Storage without having to stripe multiple disks. Check out the blog post “Azure Ultra Disk Storage: Microsoft's service for your most I/O demanding workloads” from Azure’s Chief Technology Officer, Mark Russinovich, for a deep under-the-hood view.

Since we launched the preview for Ultra Disk Storage last September, our customers have used this capability on Azure on a wide range of workloads and have achieved new levels of performance and scale on the public cloud to maximize their virtual machine performance.

Below are some quotes from customers in our preview program:

“Ultra Disk Storage enabled SEGA to seamlessly migrate from our on-premise datacenter to Azure and take advantage of flexible performance controls.”

– Takaya Segawa, General Manager/Creative Officer, SEGA

“Ultra Disk Storage allows us to achieve incredible write performance for our most demanding PostgreSQL database workloads – giving us the ability to scale our applications in Azure.” 

– Andrew Tindula, Senior IT Manager, Online Trading Academy

Ultra Disk Storage performance characteristics

Ultra Disk Storage offers sizes ranging from 4 GiB up to 64 TiB with granular increments. In addition, it is possible to dynamically configure and scale the IOPS and bandwidth on the disk independent of capacity.

Customers can now maximize disk performance by leveraging:

Up to 300 IOPS per GiB, to a maximum of 160K IOPS per disk
Up to a maximum of 2000 MBps per disk
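The per-GiB scaling and the per-disk cap interact, so the provisionable ceiling depends on disk size. A small sketch of the arithmetic implied by the limits quoted above:

```python
# Ultra Disk provisioning limits as stated above:
# up to 300 IOPS per GiB, capped at 160,000 IOPS and 2,000 MBps per disk.
MAX_IOPS_PER_GIB = 300
MAX_IOPS_PER_DISK = 160_000
MAX_MBPS_PER_DISK = 2_000

def max_provisionable_iops(size_gib: int) -> int:
    """IOPS ceiling for a disk of the given size: per-GiB scaling
    until the per-disk cap takes over."""
    return min(MAX_IOPS_PER_GIB * size_gib, MAX_IOPS_PER_DISK)
```

For example, a 4 GiB disk tops out at 1,200 IOPS, while any disk of roughly 534 GiB or larger can reach the full 160,000 IOPS cap.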

Pricing and availability

Ultra Disk is now available in East US 2, North Europe, and Southeast Asia. Please refer to the FAQ for latest supported regions. For pricing details for Ultra Disk, please refer to the pricing page. The general availability price takes effect from October 1, 2019 unless otherwise noted. Customers in preview will automatically transition to GA pricing on this date. No additional action is required by customers in preview.

Get started with Azure Ultra Disk Storage

You can request onboarding to Azure Ultra Disk Storage by submitting an online request or by reaching out to your Microsoft representative.
Source: Azure

Azure Ultra Disk Storage: Microsoft's service for your most I/O demanding workloads

Today, Tad Brockway, Corporate Vice President, Microsoft Azure, announced the general availability of Azure Ultra Disk Storage, an Azure Managed Disks offering that provides massive throughput with sub-millisecond latency for your most I/O demanding workloads. With the introduction of Ultra Disk Storage, Azure includes four types of persistent disk: Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives you price and performance options tailored to meet the requirements of every workload. Ultra Disk Storage delivers consistent performance and low latency for I/O intensive workloads like SAP HANA, OLTP databases, NoSQL, and other transaction-heavy workloads. Further, you can reach maximum virtual machine (VM) I/O limits with a single Ultra disk, without having to stripe multiple disks.

Durability of data is essential to business-critical enterprise workloads. To ensure we keep our durability promise, we built Ultra Disk Storage on our existing locally redundant storage (LRS) technology, which stores three copies of data within the same availability zone. Any application that writes to storage will receive an acknowledgement only after it has been durably replicated to our LRS system.

Below is a clip from a presentation I delivered at Microsoft Ignite demonstrating the leading performance of Ultra Disk Storage:

Microsoft Ignite 2018: Azure Ultra Disk Storage demo

Below are some quotes from customers in our preview program:

“With Ultra Disk Storage, we achieved consistent sub-millisecond latency at high IOPS and throughput levels on a wide range of disk sizes. Ultra Disk Storage also allows us to fine tune performance characteristics based on the workload.”

– Amit Patolia, Storage Engineer, DEVON ENERGY

“Ultra Disk Storage provides powerful configuration options that can leverage the full throughput of a VM SKU. The ability to control IOPS and MBps is remarkable.”

– Edward Pantaleone, IT Administrator, Tricore HCM

Inside Ultra Disk Storage

Ultra Disk Storage is our next generation distributed block storage service that provides disk semantics for Azure IaaS VMs and containers. We designed Ultra Disk Storage with the goal of providing consistent performance at high IOPS without compromising our durability promise. Hence, every write operation replicates to the storage in three different racks (fault domains) before being acknowledged to the client. Compared to Azure Premium Storage, Ultra Disk Storage provides its extreme performance without relying on Azure Blob storage cache, our on-server SSD-based cache, and hence it only supports un-cached reads and writes. We also introduced a new simplified client on the compute host that we call virtual disk client (VDC). VDC has full knowledge of virtual disk metadata mappings to disks in the Ultra Disk Storage cluster backing them. That enables the client to talk directly to storage servers, bypassing load balancers and front-end servers used for initial disk connections. This simplified approach minimizes the layers that a read or write operation traverses, reducing latency and delivering performance comparable to enterprise flash disk arrays.

Below is a figure comparing the different layers an operation traverses when issued on an Ultra disk compared to a Premium SSD disk. The operation flows from the client to Hyper-V to the corresponding driver. For an operation done on a Premium SSD disk, the operation flows from the Azure Blob storage cache driver to the load balancers, front end servers, partition servers, and then down to the stream layer servers, as documented in this paper. For an operation done on an Ultra disk, the operation flows directly from the virtual disk client to the corresponding storage servers.

Comparison between the IO flow for Ultra Disk Storage versus Premium SSD Storage

One key benefit of Ultra Disk Storage is that you can dynamically tune disk performance without detaching your disk or restarting your virtual machines. Thus, you can scale performance along with your workload. When you adjust either IOPS or throughput, the new performance settings take effect in less than an hour.
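For example, with the Azure CLI the provisioned targets can be adjusted in place. This is an illustrative command only; the disk and resource group names are placeholders, and the new settings take effect within an hour as described above:

```shell
az disk update \
  --resource-group myResourceGroup \
  --name myUltraDisk \
  --disk-iops-read-write 80000 \
  --disk-mbps-read-write 1200
```

The disk stays attached and the VM keeps running while the performance targets change.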

Azure implements two levels of throttles that can cap disk performance: a "leaky bucket" VM-level throttle that is specific to each VM size, described in the documentation, and a new time-based throttle applied at the disk level. This new disk-level throttle, a key benefit of Ultra Disk Storage, provides more realistic behavior of a disk for a given IOPS and throughput. Hitting a leaky bucket throttle can cause erratic performance, while the new time-based throttle provides consistent performance even at the throttle limit. To take advantage of this smoother performance, set your disk throttles slightly below your VM throttle. We will publish another blog post in the future describing our new throttle system in more detail.
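To make the "leaky bucket" idea concrete, here is a generic token-bucket rate limiter sketch. This illustrates the general technique only, not Azure's actual throttle implementation: requests drain tokens, tokens refill at a steady rate, and a burst up to the bucket's capacity is allowed before requests are rejected:

```python
class LeakyBucket:
    """Generic token-bucket limiter: `rate` tokens/second refill,
    bursts up to `capacity` requests are allowed."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full, so an initial burst passes
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last check,
        # never exceeding the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The erratic behavior mentioned above comes from exactly this burst-then-starve pattern: a burst drains the bucket, after which requests fail until enough time passes for tokens to refill, whereas a time-based throttle spaces operations evenly at the limit.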

Available regions

Currently, Ultra Disk Storage is available in the following regions:

East US 2
North Europe
Southeast Asia

We will expand the service to more regions soon. Please refer to the FAQ for the latest on supported regions.

Virtual machine sizes

Ultra Disk Storage is supported on DSv3 and ESv3 virtual machine types. Additional virtual machine types will be supported soon. Refer to the FAQ for the latest on supported VM sizes.

Get started today

You can request onboarding to Azure Ultra Disk Storage by submitting an online request or by reaching out to your Microsoft representative. For general availability limitations refer to the documentation.
Source: Azure

Better together, synergistic results from digital transformation

Intelligent manufacturing transformation can bring great changes, such as connecting the sales organization with field services. Moving to the cloud also provides benefits such as an intelligent supply chain and innovations enabled by connected products. Digital transformation is therefore the goal of many organizations, because it can yield a competitive advantage.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Leverage through Azure services

One company, PTC, is well-known for ThingWorx, a market-leading, end-to-end Industrial Internet of Things (IIoT) solution platform, built for industrial environments. PTC has moved its platform to Azure, and in doing so, leverages the resources and technical advantages of Microsoft. Together, the two create a synergy that can help any manufacturer make a successful move to the digital world.

Why things matter

The ThingWorx by PTC platform includes a number of components that can kickstart any effort to digitally transform a manufacturing floor. Two of its most notable components are:

ThingWorx Analytics
ThingWorx Industrial Connectivity

By implementing the platform, developers can create comprehensive, feature-rich IoT solutions and deliver faster time to insights, which is critical to the success of industrial implementations. Because the platform is customized for industrial environments and all aspects of manufacturing, as outlined below, it streamlines digital transformation with capabilities unique to manufacturing. Add to that PTC’s partnership with Microsoft, and you get capabilities such as integrating HoloLens devices into mixed reality experiences.

Azure IoT Hub integration

Azure IoT Hub has a central role on the platform. The service is accessed through the ThingWorx Azure IoT Connector. Features include:

Ingress processing: Devices that are running Azure IoT Hub SDK applications send messages to the Azure IoT Hub. These messages arrive through an Azure Event Hub endpoint that is provided by the IoT Hub. Communication with the ThingWorx platform is asynchronous to allow for optimal message throughput.
Egress processing: Egress messages arrive from the ThingWorx platform and are pushed to the Azure IoT Hub through its service client.
Device methods as remote services: The Azure IoT Hub enables you to invoke device (direct) methods on edge devices from the cloud.
Azure IoT Blob Storage: Allows integration with Azure Blob storage accounts.
File transfers: The Azure IoT Hub Connector supports transferring files between edge devices and an Azure Storage container.
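The asynchronous ingress pattern described above, where device messages are buffered and handed to the platform independently of their arrival rate, can be sketched as a producer/consumer queue. This is a generic illustration with hypothetical names, not the connector's actual code:

```python
import asyncio

async def device(hub_queue, device_id, count):
    # A device pushes telemetry without waiting for platform processing.
    for i in range(count):
        await hub_queue.put((device_id, i))

async def connector(hub_queue, processed, total):
    # The connector drains the queue asynchronously, decoupling ingest
    # rate from processing rate for better overall throughput.
    for _ in range(total):
        msg = await hub_queue.get()
        processed.append(msg)

async def main():
    queue = asyncio.Queue()
    processed = []
    await asyncio.gather(
        device(queue, "sensor-1", 3),
        device(queue, "sensor-2", 3),
        connector(queue, processed, 6),
    )
    return processed

messages = asyncio.run(main())
print(len(messages))  # 6
```

Because producers never block on the consumer, a burst of device traffic does not stall ingestion, which is the throughput property the connector's asynchronous design is after.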

Next steps

To learn more, go to ThingWorx for Azure in the Azure Marketplace and click Contact me.
To see more about Azure in manufacturing, go to Azure for Manufacturing.

Source: Azure

Geo Zone Redundant Storage in Azure now in preview

Announcing the preview of Geo Zone Redundant Storage in Azure. Geo Zone Redundant Storage provides a great balance of high performance, high availability, and disaster recovery and is beneficial when building highly available applications or services in Azure. Geo Zone Redundant Storage helps achieve higher data resiliency by doing the following:

Synchronously writing three replicas of your data across multiple Azure availability zones, as zone-redundant storage does today, protecting you from cluster, datacenter, or entire-zone failure.

Asynchronously replicating the data to a single zone in another region within the same geo, as locally redundant storage does, protecting you from a regional outage.

When using Geo Zone Redundant Storage, you can continue to read and write the data even if one of the availability zones in the primary region is unavailable. In the event of a regional failure, you can also use Read Access Geo Zone Redundant Storage to continue having read access.
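The two-step resiliency model, a synchronous zone quorum followed by asynchronous geo-replication, can be sketched as follows. This is a simplified toy model, not the storage service's actual protocol:

```python
class GzrsAccount:
    """Toy model: a write is acknowledged only after all three zone
    replicas store it; the geo copy catches up asynchronously."""

    def __init__(self):
        self.zone_replicas = [[], [], []]   # three availability zones
        self.secondary_region = []          # single-zone (LRS-style) geo copy

    def write(self, blob):
        # Synchronous: every zone must store the data before we ack.
        for replica in self.zone_replicas:
            replica.append(blob)
        return "acknowledged"

    def geo_replicate(self):
        # Asynchronous: the secondary region is brought up to date later.
        self.secondary_region = list(self.zone_replicas[0])

    def read(self, zone=0):
        # Reads keep working if one zone is down: serve from another replica.
        return self.zone_replicas[zone]

account = GzrsAccount()
account.write("invoice-001")
print(account.read(zone=2))       # ['invoice-001'] — durable in every zone
print(account.secondary_region)   # [] — geo copy lags until replication runs
account.geo_replicate()
print(account.secondary_region)   # ['invoice-001']
```

The gap between the write acknowledgment and `geo_replicate` is why a regional failover can lose the most recent writes, while the synchronous zone replicas never can.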

Please note that Read Access Geo Zone Redundant Storage requires a general purpose v2 account and is available for block blobs, non-disk page blobs, files, tables, queues, and Azure Data Lake Storage Gen2.

With the release of the Geo Zone Redundant Storage preview, Azure offers a compelling set of durability options for your storage needs:

| Scenario | Locally redundant storage | Geo-redundant storage | Read-access geo-redundant storage | Zone-redundant storage | Geo Zone Redundant Storage | Read Access Geo Zone Redundant Storage |
| --- | --- | --- | --- | --- | --- | --- |
| Node unavailability within a data center | Yes | Yes | Yes | Yes | Yes | Yes |
| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes (failover is required) | Yes (failover is required) | Yes | Yes | Yes |
| A region-wide outage | No | Yes (failover is required) | Yes (failover is required) | No | Yes (failover is required) | Yes (failover is required) |
| Read access to your data (in a remote, geo-replicated region) in the event of region-wide unavailability | No | No | Yes | No | No | Yes |
| Designed to provide X% durability of objects over a given year | At least 11 9's | At least 16 9's | At least 16 9's | At least 12 9's | At least 16 9's | At least 16 9's |
| Supported storage account types | GPv2, GPv1, Blob | GPv2, GPv1, Blob | GPv2, GPv1, Blob | GPv2 | GPv2 | GPv2 |
| Availability SLA for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.99% (99.9% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.99% (99.9% for cool access tier) |
| Availability SLA for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) |

Current Geo Zone Redundant Storage prices are discounted preview prices and will change at general availability. For details on the various redundancy options, please refer to the Azure Storage redundancy documentation. In regions where Geo Zone Redundant Storage is not yet available, you can still use zone-redundant storage to build highly available applications.

The preview of Geo Zone Redundant Storage and Read Access Geo Zone Redundant Storage is initially available in US East with more regions to follow in 2019. Please check our documentation for the latest list of regions where the preview is enabled.

You can create a Geo Zone Redundant Storage account using various methods including the Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager, and the Azure Storage Management SDK. Refer to Read Access Geo Zone Redundant Storage documentation for more details.

Converting from locally redundant storage, geo-redundant storage, read-access geo-redundant storage, or zone-redundant storage to Read Access Geo Zone Redundant Storage is supported. To convert from zone-redundant storage, you can use the Azure CLI, Azure PowerShell, the Azure portal, Azure Resource Manager, or the Azure Storage Management SDK.
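For example, a zone-redundant account can be converted in place with the Azure CLI. The resource names below are placeholders, and the SKU identifier shown is our assumption of the current naming; verify it against the documentation for your region:

```shell
# Convert an existing zone-redundant (ZRS) account to RA-GZRS.
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --sku Standard_RAGZRS
```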

There are two options for migrating to Read Access Geo Zone Redundant Storage from non-zone-redundant storage accounts:

Manually copy or move data to a new Read Access Geo Zone Redundant Storage account from an existing account.
Request a live migration.

Please let us know if you have any questions or need our assistance. We are looking forward to your participation in the preview and hearing your feedback.

Resources

For more details on the conversion process please refer to the Read Access Geo Zone Redundant Storage documentation.
Learn how to leverage Azure Storage in your applications with our quickstarts and tutorials.
Refer to the pricing page to learn more about the pricing.

Source: Azure

Improving Azure Virtual Machines resiliency with Project Tardigrade

“Our goal is to empower organizations to run their workloads reliably on Azure. With this as our guiding principle, we are continuously investing in evolving the Azure platform to become fault resilient, not only to boost business productivity but also to provide a seamless customer experience. Last month I published a blog post highlighting several initiatives underway to keep improving in this space, as part of our commitment to provide a trusted set of cloud services. Today I wanted to expand on the mention of Project Tardigrade – a platform resiliency initiative that preserves high availability of our services even in the rare cases of spontaneous platform failures. The post that follows was written by Pujitha Desiraju and Anupama Vedapuri from our compute platform fundamentals team, who are leading these efforts.” Mark Russinovich, CTO, Azure

This post was co-authored by Jim Cavalaris, Principal Software Engineer, Azure Compute. 

 

Codenamed Project Tardigrade, this effort draws its inspiration from an eight-legged microscopic creature: the tardigrade, also known as the water bear. Virtually impossible to kill, tardigrades can be exposed to extreme conditions but somehow still manage to wiggle their way to survival. This is exactly what we envision our servers emulating when we consider resiliency, hence the name Project Tardigrade. Similar to a tardigrade’s survival across a wide range of extreme conditions, this project involves building resiliency and self-healing mechanisms across multiple layers of the platform, ranging from hardware to software, all with a view towards safeguarding your virtual machines (VMs) as much as possible.

How does it work?

Project Tardigrade is a broad platform resiliency initiative that employs numerous mitigation strategies to ensure your VMs are not impacted by unanticipated host behavior. This includes enabling components to self-heal and quickly recover from potential failures to prevent impact to your workloads. Even in the rare cases of critical host faults, our priority is to preserve and protect your VMs from these spontaneous events so that your workloads run seamlessly.

One example recovery workflow is highlighted below, for the uncommon event in which a customer-initiated VM operation fails due to an underlying fault on the host server. To carry out the failed VM operation successfully, as well as to proactively prevent the issue from affecting other VMs on the server, the Tardigrade recovery service is notified and begins executing failover operations.

The following phases briefly describe the Tardigrade recovery workflow:

Phase 1:

This step has no impact on running customer VMs. It simply recycles all services running on the host. In the rare case that the faulted service does not successfully restart, we proceed to Phase 2.

Phase 2:

Our diagnostics service runs on the host to systematically collect all relevant logs and dumps, ensuring that we can thoroughly diagnose the reason for the failure in Phase 1. This comprehensive analysis allows us to root-cause the issue and thereby prevent recurrences in the future.

Phase 3:

At a high level, we reset the OS into a healthy state with minimal customer impact to mitigate the host issue. During this phase we preserve the state of each VM to RAM, after which we begin to reset the OS. While the OS swiftly resets underneath, applications running on the VMs hosted on the server briefly ‘freeze’ as the CPU is temporarily suspended. The experience is similar to a network connection being lost momentarily and then quickly restored by retry logic. After the OS is successfully reset, the VMs restore their saved state and resume normal activity, thereby avoiding any VM reboots.
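Conceptually, the three phases form an escalating recovery ladder: try the cheapest remediation first, and escalate only when it fails. A toy sketch of that control flow, with hypothetical names rather than the actual service code:

```python
def tardigrade_recover(restart_services, collect_diagnostics, reset_os):
    """Escalating recovery: each phase runs only if the previous one
    failed to bring the host back to health."""
    phases = []

    phases.append("phase1: recycle host services")   # no impact to running VMs
    if restart_services():
        return phases                                # host healthy again

    phases.append("phase2: collect logs and dumps")  # root-cause the failure
    collect_diagnostics()

    phases.append("phase3: preserve VM state, reset OS")  # VMs briefly freeze
    reset_os()
    return phases

# A run where the faulted service does not restart, forcing full escalation:
steps = tardigrade_recover(
    restart_services=lambda: False,
    collect_diagnostics=lambda: None,
    reset_os=lambda: None,
)
print(steps)
```

In the common case `restart_services` succeeds and recovery stops at Phase 1, which is why most host faults never become visible to the VMs.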

With the above principles we ensure that the failure of any single component in the host does not impact the entire system, making customer VMs more immune to unanticipated host faults. This also allows us to recover quickly from some of the most extreme forms of critical failures (like kernel level failures and firmware issues) while still retaining the virtual machine state that you care about.

Going forward

Currently we use the aforementioned Tardigrade recovery workflow to catch and quickly recover from potential software host failures in the Azure fleet. In parallel, we are continuously innovating our technical capabilities and expanding the range of host failure scenarios we can combat with this resiliency initiative.

We are also exploring the latest innovations in machine learning to harness the proactive capabilities of Project Tardigrade. For example, we plan to leverage machine learning to predict more types of host failures as early as possible, such as detecting abnormal resource utilization patterns on a host that may potentially impact its workloads. We will also leverage machine learning to help recommend appropriate repair actions (such as the Tardigrade recovery steps, or potentially live migration), thereby optimizing our fleetwide recovery options.

As customers continue to shift business-critical workloads onto the Microsoft Azure cloud platform, we are constantly learning and improving so that we can continue to meet customer expectations around interruptions from unplanned failures. Reliability is and continues to be a core tenet of our trusted cloud commitments, alongside compliance, security, privacy, and transparency. Across all of these areas, we know that customer trust is earned and must be maintained, not just by saying the right thing but by doing the right thing. Platform resiliency as practiced by Project Tardigrade is already strengthening VM availability by ensuring that underlying host issues do not affect your VMs.

We will continue to share further improvements on this project and others like it, to be as transparent as possible about how we’re constantly improving platform reliability to empower your organization.
Source: Azure

Your single source for Azure best practices

Optimizing your Azure workloads can feel like a time-consuming task. With so many constantly evolving services, it’s challenging to stay on top of, let alone implement, the latest best practices and to ensure you’re operating in a cost-efficient manner that delivers security, performance, and reliability.

Many Azure services offer best practices and advice. Examples include Azure Security Center, Azure Cost Management, and Azure SQL Database. But what if you want a single source for Azure best practices, a central location where you can see and act on every optimization recommendation available to you? That’s why we created Microsoft Azure Advisor, a service that helps you optimize your resources for high availability, security, performance, and cost, pulling in recommendations from across Azure and supplementing them with best practices of its own.

In this blog, we’ll explore how you can use Advisor as your single destination for resource optimization and start getting more out of Azure.

What is Azure Advisor and how does it work?

Advisor is your personalized guide to Azure best practices. It analyzes your usage and configurations and offers recommendations to help you optimize your Azure resources for high availability, security, performance, and cost. Each of Advisor’s recommendations includes suggested actions and sharing features to help you quickly and easily remediate your recommendations and optimize your deployments. You can also configure Advisor to only show recommendations for the subscriptions and resource groups that mean the most to you, so you can focus on critical fixes. Advisor is available from the Azure portal, command line, and via REST API, depending on your needs and preferences.

Ultimately, Advisor’s goal is to save you time while helping you get the most out of Azure. That’s why we’re making Advisor a single, central location for optimization that pulls in best practices from companion services like Azure Security Center.

How Azure Security Center integrates with Advisor

Our most recent integration with Advisor is Azure Security Center. Security Center helps you gain unmatched hybrid security management and threat protection. Microsoft uses a wide variety of physical, infrastructure, and operational controls to help secure Azure—but there are additional actions you need to take to help safeguard your workloads. Security Center can help you quickly strengthen your security posture and protect against threats.

Advisor has a new, streamlined experience for reviewing and remediating your security recommendations thanks to a tighter integration with Azure Security Center. As part of the enhanced integration, you’ll be able to:

See a detailed view of your security recommendations from Security Center directly in Advisor.
Get your security recommendations programmatically through the Advisor REST API, CLI, or PowerShell.
Review a summary of your security alerts from Security Center in Advisor.
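For the programmatic path, the Azure CLI can pull just the security category. This sketch assumes the `az advisor` command group available in recent CLI versions:

```shell
# List Advisor security recommendations for the current subscription.
az advisor recommendation list --category Security --output table
```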

The new Security Center experience in Advisor will help you more quickly and easily remediate security recommendations.

How Azure Cost Management integrates with Advisor

Another Azure service that provides best practice recommendations is Azure Cost Management, which helps you optimize cloud costs while maximizing your cloud potential. With Cost Management, you can monitor your spending, increase your organizational accountability, and boost your cloud efficiency.

Advisor and Cost Management are also tightly integrated. Cost Management’s integration with Advisor means that you can see any cost recommendation in either service and act to optimize your cloud costs by taking advantage of reservations, rightsizing, or removing idle resources.

Again, this will help you streamline your optimizations.

Azure SQL DB Advisor, Azure App Service Advisor, and more

There’s no shortage of advice in Azure. Many other services including Azure SQL Database and Azure App Service include Advisor-like tools designed to help you follow best practices for those services and succeed in the cloud.

Advisor pulls in and displays recommendations from these services, so the choice is yours. You can review the optimizations in context—in a given instance of an Azure SQL database, for example—or in a single, centralized location in Advisor.

We often recommend the Advisor approach. This way, you can see all your optimizations in a broader, more holistic context and remediate with the big picture in mind, without worrying that you’re missing anything. Plus, it’ll save you time switching between different resources.

Review your recommendations in one place with Advisor

Our recommendation? Use Advisor as your core resource optimization tool. You’ll find everything in a single location rather than having to visit different, more specialized locations. With the Advisor API, you can even integrate with your organization’s internal systems—like a ticketing application or dashboard—to get everything in one place on your end and plug into your own optimization workflows.
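As a hedged sketch of what such an integration calls, the Azure Resource Manager endpoint for listing recommendations follows the pattern below; the `api-version` shown is our assumption, so check the current REST reference before relying on it:

```python
def advisor_recommendations_url(subscription_id, api_version="2017-04-19"):
    """Build the ARM REST URL for listing Advisor recommendations.
    A GET to this URL with a bearer token returns the recommendation list,
    which an internal ticketing system could poll and file issues from."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Advisor/recommendations"
        f"?api-version={api_version}"
    )

url = advisor_recommendations_url("00000000-0000-0000-0000-000000000000")
print(url)
```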

Visit Advisor in the Azure portal to get started reviewing, sharing, and remediating your recommendations. For more in-depth guidance, visit the Azure Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea in the Advisor forums.
Source: Azure