Plan migration of your Hyper-V servers using Azure Migrate Server Assessment

Azure Migrate is focused on streamlining your migration journey to Azure. We recently announced the evolution of Azure Migrate, which provides a streamlined, comprehensive portfolio of Microsoft and partner tools to meet migration needs, all in one place. An important capability included in this release is upgrades to Server Assessment for at-scale assessments of VMware and Hyper-V virtual machines (VMs).

This is the first in a series of blogs about the new capabilities in Azure Migrate. In this post, I will talk about capabilities in Server Assessment that help you plan for migration of Hyper-V servers. This capability is now generally available as part of the Server Assessment feature of Azure Migrate. After assessing your servers for migration, you can migrate your servers using Microsoft’s Server Migration solution available on Azure Migrate. You can get started right away by creating an Azure Migrate project.

Server Assessment previously supported assessment of VMware VMs for migration to Azure. We’ve now included Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis for Hyper-V VMs. You can now plan at scale, assessing up to 35,000 Hyper-V servers in one Azure Migrate project. If you use VMware as well, you can discover and assess both Hyper-V and VMware servers in the same Azure Migrate project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.

Azure suitability analysis

The assessment determines whether a given server can be migrated as-is to Azure. Azure support is checked for each discovered server, and if a server is not ready to be migrated, remediation guidance is provided automatically. You can customize your assessment and regenerate the assessment reports, apply subscription offers and reserved instance pricing to the cost estimates, generate a cost estimate for a VM series of your choice, and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment reports provide detailed cost estimates. You can optimize costs using performance-based rightsizing assessments: the performance data of your on-premises servers is taken into consideration to recommend an appropriate Azure VM and disk SKU. This helps you right-size on cost as you migrate servers that might be over-provisioned in your on-premises data center.

Dependency analysis

Once you have established cost estimates and migration readiness, you can go ahead and plan your migration phases. Use the dependency analysis feature to understand the dependencies between your applications. This is helpful to understand which workloads are interdependent and need to be migrated together, ensuring you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration using this feature.

Assess your Hyper-V servers in four simple steps:

Create an Azure Migrate project and add the Server Assessment solution to the project.
Set up the Azure Migrate appliance and start discovery of your Hyper-V virtual machines. To set up discovery, the Hyper-V host or cluster names are required. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts. You can set up more than one appliance if required (see the sizing note after these steps).
Once you have successfully set up discovery, create assessments and review the assessment reports.
Use the application dependency analysis features to create and refine server groups to phase your migration.
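
For example, based on these limits, discovering the full 35,000 servers supported in a single project would require at least seven appliances (35,000 ÷ 5,000 VMs per appliance).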

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Azure Government, Canada, Europe, India, Japan, United Kingdom, and United States geographies.

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You will be able to automatically carry over the assessment recommendations from Server Assessment into Server Migration. You can read more in our documentation “Migrate Hyper-V VMs to Azure.”

In the coming months, we will add assessment capabilities for physical servers. You will also be able to run a quick assessment by adding inventory information using a CSV file. Stay tuned!

In the upcoming blogs, we will talk about tools for scale assessments, scale migrations, and the partner integrations available in Azure Migrate.

Resources to get started

Tutorial on how to assess Hyper-V servers using the server assessment feature of Azure Migrate.
Prerequisites for assessment of Hyper-V servers.
Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts.
Tutorial on how to migrate Hyper-V servers using the Server Migration feature of Azure Migrate.

Source: Azure

IoT Plug and Play is now available in preview

Today we are announcing that IoT Plug and Play is now available in preview! At Microsoft Build in May 2019, we announced IoT Plug and Play and described how it will work seamlessly with IoT Central. We demonstrated how IoT Plug and Play simplifies device integration by enabling solution developers to connect and interact with IoT devices using device capability models defined with the Digital Twin definition language. We also announced a set of partners who have launched devices and solutions that are IoT Plug and Play enabled. You can find their IoT Plug and Play certified devices at the Azure Certified for IoT device catalog.

With today’s announcement, solution developers can start using Azure IoT Central or Azure IoT Hub to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play. We have also launched a new Azure Certified for IoT portal for device partners interested in streamlining the device certification submission process and getting their devices into the Azure IoT device catalog quickly.

This article outlines how solution developers can use IoT Plug and Play devices in their IoT solutions, and how device partners can build and certify their products to be listed in the catalog.

Faster device integration for solution developers

Azure IoT Central is a fully managed IoT Software as a Service (SaaS) offering that makes it easy to connect, monitor, and manage your IoT devices and products. Azure IoT Central simplifies the initial setup of your IoT solution and cuts the management burden, operational costs, and overhead of a typical IoT project. Azure IoT Central integration with IoT Plug and Play takes this one step further by allowing solution developers to integrate devices without writing any embedded code. IoT solution developers can choose devices from a large set of IoT Plug and Play certified devices to quickly build and customize their IoT solutions end-to-end. Solution developers can start with a certified device from the device catalog and customize the experience for the device, such as editing display names or units.

Solution developers can also add dashboards for solution operators to visualize the data; as part of this new release, developers have a broader set of visualizations to choose from. There is also the option to auto-generate dashboards and visualizations to get up and running quickly. Once the dashboards and visualizations are created, solution developers can run simulations based on real models from the device catalog. Developers can also integrate with the commands and properties exposed by IoT Plug and Play capability models to enable operators to effectively manage their device fleets. IoT Central will automatically load the capability model of any certified device, enabling a true Plug and Play experience!

Another option available for developers who’d like more customization is to build IoT solutions with Azure IoT Hub and IoT Plug and Play devices. With today’s release, Azure IoT Hub now supports RESTful digital twin APIs that expose the capabilities of IoT Plug and Play device capability models and interfaces. Developers can set properties to configure settings like alarm thresholds, send commands for operations such as resetting a device, route telemetry, and query which devices support a specific interface. The most convenient way to work with these APIs is to use the Azure IoT SDK for Node.js (support for other languages is coming soon). And all devices enabled for IoT Plug and Play in the Azure Certified for IoT device catalog will work with IoT Hub just like they work with IoT Central.

Streamlined certification process for device partners

The Azure Certified for IoT device catalog allows customers to quickly find the right Azure IoT certified device to start building IoT solutions. To help our device partners certify their products as IoT Plug and Play compatible, we have revamped and streamlined the Azure Certified for IoT program by launching a new portal and submission process. With the Azure Certified for IoT portal, device partners can define new products to be listed in the Azure Certified for IoT device catalog and specify product details such as physical dimensions, description, and geo availability. Device partners can manage their IoT Plug and Play models in their company model repository, which limits access to their own employees and select partners, as well as in the public model repository. The portal also allows device partners to certify their products by submitting to an automated validation process that verifies correct implementation of the Digital Twin Definition Language and the required interfaces.

Device partners will also benefit from investments in developer tooling to support IoT Plug and Play. The Azure IoT Device Workbench extension for VS Code adds IntelliSense for easy authoring of IoT Plug and Play device models. It also enables code generation to create C device code that implements the IoT Plug and Play model and provides the logic to connect to IoT Central, without customers having to worry about provisioning or integration with IoT Device SDKs.

The new tooling capabilities also integrate with the model repository service for seamless publishing of device models. In addition to the Azure IoT Device Workbench, device developers can use tools like the Azure IoT explorer and the Azure IoT extension for the Azure Command-Line Interface. Device code can be developed with the Azure IoT SDK for C and for Node.js.

Connect sensors on Windows and Linux gateways to Azure

If you are using a Windows or Linux gateway device and you have sensors that are already connected to the gateway, then you can make these sensors available to Azure by simply editing a JSON configuration. We call this technology the IoT Plug and Play bridge. The bridge allows sensors on Windows and Linux to just work with Azure by bridging these sensors from the IoT gateway to IoT Central or IoT Hub. On the IoT gateway device, the sensor bridge leverages OS APIs and OS plug and play capabilities to connect to downstream sensors and uses the IoT Plug and Play APIs to communicate with IoT Central and IoT Hub on Azure. A solution builder can easily select from sensors enumerated on the IoT device and register them in IoT Central or IoT Hub. Once available in Azure, the sensors can be remotely accessed and managed. We have native support for Modbus and a simple serial protocol for managing and obtaining sensor data from MCUs or embedded devices and we are continuing to add native support for other protocols like MQTT. On Windows, we also support cameras, and general device health monitoring for any device the OS can recognize (such as USB peripherals). You can extend the bridge with your own adapters to talk to other types of devices (such as I2C/SPI), and we are working on adding support for more sensors and protocols (such as HID).

Next steps

Read IoT Central documentation to learn how to build solutions with IoT Plug and Play devices.
Read the IoT Plug and Play documentation to learn how to build solutions using the Azure IoT platform.
Learn how to build and certify IoT Plug and Play devices.
View the Digital Twin Definition Language specification on GitHub.
Tune in to the Internet of Things Show deep dive on September 11.
Browse IoT Plug and Play devices on the Azure IoT Device Catalog.
See a demo of IoT Plug and Play bridge with a MODBUS environmental sensor on the Channel 9 IoT Show.
Try IoT Plug and Play bridge on GitHub.
Learn how to implement IoT spatial analytics using Azure Maps and IoT Plug and Play location schema.

Source: Azure

IRAP protected compliance from infra to SAP application layer on Azure

Australian government organizations are looking for cloud managed services providers capable of providing deployment of a platform as a service (PaaS) environment suitable for the processing, storage, and transmission of AU-PROTECTED government data that is compliant with the objectives of the Australian Government Information Security Manual (ISM) produced by the Australian Signals Directorate (ASD).

One of Australia’s largest federal agencies that is responsible for improving and maintaining finances of the state was looking to implement the Information Security Registered Assessors Program (IRAP) which is critical to safeguard sensitive information and ensure security controls around transmission, storage, and retrieval.

The Information Security Registered Assessors Program is an Australian Signals Directorate initiative to provide high-quality information and communications technology (ICT) security assessment services to the government.

The Australian Signals Directorate endorses suitably-qualified information and communications technology professionals to provide relevant security services that aim to secure broader industry and Australian government information and associated systems.

Cloud4C took up this challenge to enable this federal client on the cloud delivery platforms. Cloud4C analyzed and assessed the stringent compliance requirements within the Information Security Registered Assessors Program guidelines.

Following internal baselining, Cloud4C divided the whole assessment into three distinct categories – physical, infrastructure, and managed services. The Information Security Registered Assessors Program has stringent security controls around these three specific areas.

Cloud4C realized that the best way to meet this challenge was to partner and share responsibilities to achieve this demanding assessment together. In April 2018, the Australian Cyber Security Center (ACSC) announced the certification of Azure and Office 365 at the PROTECTED classification. Microsoft became the first and only public cloud provider to achieve this level of certification. Cloud4C partnered with Microsoft to deploy the SAP applications and SAP HANA database on Azure and utilized all the Information Security Registered Assessors Program compliant infrastructure benefits to enable seamless integration of native and marketplace tools and technologies on Azure.

Cloud4C identified the right Azure regions in Australia, Australia Central and Australia Central 2, which had undergone a very stringent Information Security Registered Assessors Program assessment for physical security and information and communications equipment placement.

This Azure compliance for infrastructure and disaster recovery gave Cloud4C a tremendous head start as a managed service provider, allowing it to focus its energies on addressing the majority of the remaining controls, which applied solely to the cloud service provider.

The Information Security Registered Assessors Program assessment for Cloud4C involved addressing 412 high risks and 19 of the most critical security aspects distributed across 22 major categories, after excluding the controls that were addressed by Azure disaster recovery.

Solution overview

The scope of the engagement was to configure and manage the SAP landscape onto Azure with managed services up to the SAP basis layer while maintaining the Information Security Registered Assessors Program protected classification standards for the processing, storage, and retrieval of classified information. As the engagement model is PaaS, the responsibility matrix was up to the SAP basis layer and application managed services were outside the purview of this engagement.

Platform as a service with single service level agreement and Information Security Registered Assessors Program protected classification

The proposed solution included various SAP products, including SAP ERP, SAP BW, SAP CRM, SAP GRC, SAP IDM, SAP Portal, SAP Solution Manager, Web Dispatcher, and Cloud Connector, with a mix of databases including SAP HANA, SAP MaxDB, and former Sybase databases. Azure Australia Central, as the primary region, and Australia Central 2, as the secondary disaster recovery region, were identified as the physical locations for building the Information Security Registered Assessors Program protected compliant environment. The proposed architecture encompassed certified virtual machine stock keeping units (SKUs) for SAP workloads, optimized storage and disk configurations, the right network SKUs with adequate protection, mechanisms for high availability, disaster recovery, backup, and monitoring, an adequate mix of native and external security tools, and, most importantly, processes and guidelines around service delivery.

The following Azure services were considered as part of the proposed architecture:

Azure Availability Sets
Azure Active Directory
Azure Privileged Identity Management
Azure Multi-Factor Authentication
Azure ExpressRoute gateway
Azure application gateway with web application firewall
Azure Load Balancer
Azure Monitor
Azure Resource Manager
Azure Security Center
Azure storage and disk encryption
Azure DDoS Protection
Azure Virtual Machines (Certified virtual machines for SAP applications and SAP HANA database)
Azure Virtual Network
Azure Network Watcher
Network security groups

Information Security Registered Assessors Program compliance and assessment process

Cloud4C navigated the accreditation framework with the help of the Information Security Registered Assessors Program assessor, who helped Cloud4C understand and implement the Australian government security requirements and establish the technical feasibility of porting SAP applications and the SAP HANA database to the Information Security Registered Assessors Program protected setup on the Azure protected cloud.

The Information Security Registered Assessors Program assessor assessed the implementation, appropriateness, and effectiveness of the system's security controls. This was achieved through two security assessment stages, as dictated in the Australian Government Information Security Manual (ISM):

Stage 1: Security assessment identifies security deficiencies that the system owner rectifies or mitigates
Stage 2: Security assessment assesses residual compliance

Cloud4C has achieved successful assessment under all applicable Information Security Manual controls, ensuring a zero-risk environment and protection of the critical information systems, with support from Microsoft.

The Microsoft team provided guidance around best practices on how to leverage Azure native tools to achieve compliance. The Microsoft solution architect and engineering team participated in the design discussions and brought an existing knowledge base around Azure native security tools, integration scenarios for third party security tools, and possible optimizations in the architecture.

During the assessment, Cloud4C and the Information Security Registered Assessors Program assessor performed the following activities:

Designed the system architecture incorporating all components and stakeholders involved in the overall communication
Mapped security compliance against the Australian government security policy
Identified physical facilities, the Azure data centers for Australia Central and Australia Central 2, that are certified by the Information Security Registered Assessors Program
Implemented Information Security Manual security controls
Defined mitigation strategies for any non-compliance
Identified risks to the system and defined the mitigation strategy

Steps to ensure automation and process improvement

Quick deployment using Azure Resource Manager (ARM) templates combined with tools. This helped in the deployment of large landscapes comprising more than 100 virtual machines and 10 SAP solutions in less than a month.
Process automation using Robotic Process Automation (RPA) tools. This helped identify the business-as-usual state within the SAP ecosystem deployed for the Information Security Registered Assessors Program environment and enhanced processes to ensure minimal disruption to actual business processes, on top of infrastructure-level automation that ensures application availability.

Learnings and respective solutions that were implemented during the process

The Australia Central and Australia Central 2 regions were connected to each other over fibre links offering sub-millisecond latency, so SAP application and SAP HANA database replication ran in synchronous mode and a zero recovery point objective (RPO) was achieved.
Azure Active Directory Domain Services were not available in the Australia Central region, so the Australia Southeast region was leveraged to ensure seamless delivery.
Azure Site Recovery was successfully used for replication of an SAP Max DB database.
Traffic flowing over Azure ExpressRoute is not encrypted by default, so it was encrypted using a network virtual appliance from a Microsoft security partner.

Complying with the Information Security Registered Assessors Program requires Australian Signals Directorate defined qualifications to be fulfilled and to pass through assessment phases. Cloud4C offered the following benefits:

Reduced time to market – Cloud4C completed the assessment process in nine months, compared to the industry norm of nearly one to two years.
Cloud4C’s experience and knowledge of delivering multiple regions and industry specific compliances for customers on Azure helped in mapping the right controls with Azure native and external security tools.

The partnership with Microsoft helped Cloud4C reach another milestone and take advantage of all the security features that Azure Hyperscaler has to offer to meet stringent regulatory and geographic compliances.

Cloud4C has matured in its use of many of the security solutions that are readily available natively from Azure, as well as from the Azure Marketplace, to reduce time-to-market. Cloud4C utilized the Azure portfolio to its fullest in securing the customer's infrastructure and in encouraging a secure culture when supporting its clients as an Azure Expert Managed Service Provider (MSP). The Azure security portfolio has been growing, and so has Cloud4C's use of its solution offerings.

Cloud4C and Microsoft plan to take this partnership to even greater heights in terms of providing an unmatched cloud experience to customers in the marketplace across various geographies and industry verticals.

Learn more

Azure Security Solutions from Microsoft
Azure Native Products
Workloads Migration to Azure
Cloud4C Azure Managed Services
Cloud4C solutions for SAP on Azure

Source: Azure

Reducing SAP implementations from months to minutes with Azure Logic Apps

It's always been a tricky business to handle mission-critical processes. Much of the technical debt that companies assume comes from having to architect systems that have multiple layers of redundancy, to mitigate the chance of outages that may severely impact customers. The process of both architecting and subsequently maintaining these systems has resulted in huge losses in productivity and agility throughout many enterprises across all industries.

The solutions that cloud computing provides help enterprises shift away from this cumbersome work. Instead of spending countless weeks or even months trying to craft an effective solution to the problem of handling critical workloads, cloud providers such as Azure now provide an out-of-the-box way to run your critical processes, without fear of outages, and without incurring costs associated with managing your own infrastructure.

One of the latest innovations in this category, developed by the Azure Logic Apps team, is a new SAP connector that helps companies easily integrate with the ERP systems that are critical to the day-to-day success of a business. Often, implementing these solutions can take teams of people months to get right. However, with the SAP connector from Logic Apps, this process often only takes days, or even hours!

What are some of the benefits of creating workflows with Logic Apps and SAP?

In addition to the broad value that cloud infrastructure provides, Logic Apps can also help:

Mitigate risk and reduce time-to-success from months to days when implementing new SAP integrations.
Make your migration to the cloud smoother by moving at your own speed.
Connect best-in-class cloud services to your SAP instance, no matter where SAP is hosted.

Logic Apps help you turn your SAP instances from worrisome assets that need to be managed, to value-generation centers by opening new possibilities and solutions.

What's an example of this?

Take the following scenario—an on-premises instance of SAP receives sales orders from an e-commerce site for software purchases. In order to complete the entirety of this transaction, there are several points of integration that must happen—between the on-premises instance of the SAP ERP software, the service that generates new software license keys for the customer, the service that generates the customer invoice, and finally a service that emails the newly generated key to the customer, along with the final invoice.

In this scenario, it is necessary to move between both on-premises and cloud environments, which can often be tricky to accomplish in a secure way. Logic Apps solves this by connecting securely and bi-directionally via a virtual network, ensuring that data stays safe.

Leveraging both Azure and Logic Apps, this solution can be done with a team of one, in a minimal amount of time, and with diminished risk of impacting other key business activities.

If you’re interested in trying this for yourself, or learning more about how we implemented this solution, you can follow along with Microsoft Mechanics as they walk through, step-by-step, how they implemented this solution.

How do I get started?

Azure Logic Apps reduces the complexity of creating and managing critical workloads in the enterprise, freeing up your team to focus on delivering new processes that drive key business outcomes.

Get started today:

Logic Apps

Logic Apps and SAP
Source: Azure

Azure Sphere’s customized Linux-based OS

Security and resource constraints are often at odds with each other. While some security measures involve making code smaller by removing attack surfaces, others require adding new features, which consume precious flash and RAM. How did Microsoft manage to create a secure Linux-based OS that runs on the Azure Sphere MCU?

The Azure Sphere OS begins with a long-term support (LTS) Linux kernel. Then the Azure Sphere development team customizes the kernel to add additional security features, as well as some code targeted at slimming down resource utilization to fit within the limited resources available on an Azure Sphere chip. In addition, applications, including basic OS services, run isolated for security. Each application must opt in to use the peripherals or network resources it requires. The result is an OS purpose-built for Internet of Things (IoT) and security, which creates a trustworthy platform for IoT experiences.

At the 2018 Linux Security Summit, Ryan Fairfax, an Azure Sphere engineering lead, presented a deep dive into the Azure Sphere OS and the process of fitting Linux security in 4 MiB of RAM. In this talk, Ryan covers the security components of the system, including a custom Linux Security Module, modifications and extensions to existing kernel components, and user space components that form the security backbone of the OS. He also discusses the challenges of taking modern security techniques and fitting them in resource-constrained devices. I hope that you enjoy this presentation!

Watch the video to learn more about the development of Azure Sphere’s secure, Linux-based OS. You can also look forward to Ryan’s upcoming talk on Using Yocto to Build an IoT OS Targeting a Crossover SoC at the Embedded Linux Conference in San Diego on August 22.

Visit our website for documentation and more information on how to get started with Azure Sphere.


Source: Azure

Azure Security Center single click remediation and Azure Firewall JIT support

This blog post was co-authored by Rotem Lurie, Program Manager, Azure Security Center.​

Azure Security Center provides you with a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using secure score in Azure. Security Center helps you identify and perform the hardening tasks recommended as security best practices and implement them across your machines, data services, and apps. This includes managing and enforcing your security policies and making sure your Azure Virtual Machines, non-Azure servers, and Azure PaaS services are compliant.

Today, we are announcing two new capabilities—the preview of remediating recommendations on multiple resources in a single click using secure score, and the general availability (GA) of just-in-time (JIT) virtual machine (VM) access for Azure Firewall. Now you can secure your Azure Firewall protected environments with JIT, in addition to your network security group (NSG) protected environments.

Single click remediation for bulk resources in preview

With so many services offering security benefits, it's often hard to know what steps to take first to secure and harden your workload. Secure score in Azure reviews your security recommendations and prioritizes them for you, so you know which recommendations to perform first. This helps you find the most serious security vulnerabilities so you can prioritize investigation. Secure score is a tool that helps you assess your workload security posture.

In order to simplify remediation of security misconfigurations and help you quickly improve your secure score, we are introducing a new capability that allows you to remediate a recommendation on multiple resources in a single click.

This operation will allow you to select the resources you want to apply the remediation to and launch a remediation action that will configure the setting on your behalf. Single click remediation is available today for preview customers as part of the Security Center recommendations blade.

You can look for the 1-click fix label next to the recommendation and then select the recommendation.

Once you choose the resources you want to remediate and select Remediate, the remediation takes place and the resources move to the Healthy resources tab. Remediation actions are logged in the activity log to provide additional details in case of a failure.

Remediation is available for the following recommendations in preview:

Web Apps, Function Apps, and API Apps should only be accessible over HTTPS
Remote debugging should be turned off for Function Apps, Web Apps, and API Apps
CORS should not allow every resource to access your Function Apps, Web Apps, or API Apps
Secure transfer to storage accounts should be enabled
Transparent data encryption for Azure SQL Database should be enabled
Monitoring agent should be installed on your virtual machines
Diagnostic logs in Azure Key Vault and Azure Service Bus should be enabled
Diagnostic logs in Service Bus should be enabled
Vulnerability assessment should be enabled on your SQL servers
Advanced data security should be enabled on your SQL servers
Vulnerability assessment should be enabled on your SQL managed instances
Advanced data security should be enabled on your SQL managed instances

Single click remediation is part of Azure Security Center’s free tier.

Just-in-time virtual machine access for Azure Firewall is generally available

Announcing the general availability of just-in-time virtual machine access for Azure Firewall. Now you can secure your Azure Firewall protected environments with JIT, in addition to your NSG protected environments.

JIT VM access reduces your VM’s exposure to network volumetric attacks by providing controlled access to VMs only when needed, using your NSG and Azure Firewall rules.

When you enable JIT for your VMs, you create a policy that determines the ports to be protected, how long the ports are to remain open, and approved IP addresses from where these ports can be accessed. This policy helps you stay in control of what users can do when they request access.

Requests are logged in the activity log, so you can easily monitor and audit access. The JIT blade also helps you quickly identify existing virtual machines that have JIT enabled and virtual machines where JIT is recommended.

Azure Security Center displays your recently approved requests. The Configured VMs tab reflects the last user, the time, and the open ports for the previous approved JIT requests. When a user creates a JIT request for a VM protected by Azure Firewall, Security Center provides the user with the proper connection details to your virtual machine, translated directly from your Azure Firewall destination network address translation (DNAT).

This feature is available in the Standard pricing tier of Security Center, which you can try for free for the first 60 days.

To learn more about these features in Security Center, visit “Remediate recommendations in Azure Security Center,” just-in-time VM access documentation, and Azure Firewall documentation. To learn more about Azure Security Center, please visit the Azure Security Center home page.
Source: Azure

Announcing the general availability of Python support in Azure Functions

Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions 2.0 runtime. These can be published as code or Docker containers to a Linux-based serverless hosting platform in Azure. This stack powers the solution innovations of our early adopters, with customers such as General Electric Aviation and TCF Bank already using Azure Functions written in Python for their serverless production workloads. Our thanks to them for their continued partnership!

In the words of David Havera, blockchain Chief Technology Officer of the GE Aviation Digital Group, "GE Aviation Digital Group's hope is to have a common language that can be used for backend Data Engineering to front end Analytics and Machine Learning. Microsoft have been instrumental in supporting this vision by bringing Python support in Azure Functions from preview to life, enabling a real world data science and Blockchain implementation in our TRUEngine project."

Throughout the Python preview for Azure Functions we gathered feedback from the community to build easier authoring experiences, introduce an idiomatic programming model, and create a more performant and robust hosting platform on Linux. This post is a one-stop summary for everything you need to know about Python support in Azure Functions and includes resources to help you get started using the tools of your choice.

Bring your Python workloads to Azure Functions

Many Python workloads align very nicely with the serverless model, allowing you to focus on your unique business logic while letting Azure take care of how your code is run. We’ve been delighted by the interest from the Python community and by the productive solutions built using Python on Functions.

Workloads and design patterns

While this is by no means an exhaustive list, here are some examples of workloads and design patterns that translate well to Azure Functions written in Python.

Simplified data science pipelines

Python is a great language for data science and machine learning (ML). You can leverage the Python support in Azure Functions to provide serverless hosting for your intelligent applications. Consider a few ideas:

Use Azure Functions to deploy a trained ML model along with a scoring script to create an inferencing application.

Leverage triggers and data bindings to ingest, move, prepare, transform, and process data using Functions.
Use Functions to introduce event-driven triggers to re-training and model update pipelines when new datasets become available.

Automated resource management

As an increasing number of assets and workloads move to the cloud, there's a clear need to provide more powerful ways to manage, govern, and automate the corresponding cloud resources. Such automation scenarios require custom logic that can be easily expressed using Python. Here are some common scenarios:

Process Azure Monitor alerts generated by Azure services.
React to Azure events captured by Azure Event Grid and apply operational requirements on resources.

Leverage Azure Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a Python function.
Perform scheduled operational tasks on virtual machines, SQL Server, web apps, and other Azure resources.

Powerful programming model

To power accelerated Python development, Azure Functions provides a productive programming model based on event triggers and data bindings. The programming model is supported by a world class end-to-end developer experience that spans from building and debugging locally to deploying and monitoring in the cloud.

The programming model is designed to provide a seamless experience for Python developers so you can quickly start writing functions using code constructs that you're already familiar with, or import existing .py scripts and modules to build the function. For example, you can implement your functions as asynchronous coroutines using the async def qualifier or send monitoring traces to the host using the standard logging module. Additional dependencies to pip install can be configured using the requirements.txt file.

With the event-driven programming model in Functions, based on triggers and bindings, you can easily configure the events that will trigger the function execution and any data sources the function needs to orchestrate with. This model helps increase productivity when developing apps that interact with multiple data sources by reducing the amount of boilerplate code, SDKs, and dependencies that you need to manage and support. Once configured, you can quickly retrieve data from the bindings or write back using the method attributes of your entry-point function. The Python SDK for Azure Functions provides a rich API layer for binding to HTTP requests, timer events, and other Azure services, such as Azure Storage, Azure Cosmos DB, Service Bus, Event Hubs, or Event Grid, so you can use productivity enhancements like autocomplete and Intellisense when writing your code. By leveraging the Azure Functions extensibility model, you can also bring your own bindings to use with your function, so you can also connect to other streams of data like Kafka or SignalR.
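
As a minimal sketch of this programming model, the function below handles an HTTP trigger; the file layout and names are illustrative, and the trigger and route are assumed to be declared in the function's function.json bindings file.

```python
# __init__.py for an HTTP-triggered function; the trigger, route, and any
# output bindings are declared in the accompanying function.json file.
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Return a greeting for the 'name' passed on the query string or in the body."""
    logging.info("Python HTTP trigger function processed a request.")

    name = req.params.get("name")
    if not name:
        try:
            body = req.get_json()
        except ValueError:
            body = None
        if isinstance(body, dict):
            name = body.get("name")

    if name:
        return func.HttpResponse(f"Hello, {name}!")
    return func.HttpResponse(
        "Pass a 'name' on the query string or in the request body.",
        status_code=400,
    )
```

Any additional packages the function needs would be listed in requirements.txt, as noted above.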

Easier development

As a Python developer, you can use your preferred tools to develop your functions. The Azure Functions Core Tools will enable you to get started using trigger-based templates, run locally to test against real-time events coming from the actual cloud sources, and publish directly to Azure, while automatically invoking a server-side dependency build on deployment. The Core Tools can be used in conjunction with the IDE or text editor of your choice for an enhanced authoring experience.

You can also choose to take advantage of the Azure Functions extension for Visual Studio Code for a tightly integrated editing experience to help you create a new app, add functions, and deploy, all within a matter of minutes. The one-click debugging experience enables you to test your functions locally, set breakpoints in your code, and evaluate the call stack, simply with the press of F5. Combine this with the Python extension for Visual Studio Code, and you have an enhanced Python development experience with auto-complete, Intellisense, linting, and debugging.

For a complete continuous delivery experience, you can now leverage the integration with Azure Pipelines, one of the services in Azure DevOps, via an Azure Functions-optimized task to build the dependencies for your app and publish them to the cloud. The pipeline can be configured using an Azure DevOps template or through the Azure CLI.

Advanced observability and monitoring through Azure Application Insights is also available for functions written in Python, so you can monitor your apps using the live metrics stream, collect data, query execution logs, and view the distributed traces across a variety of services in Azure.

Host your Python apps with Azure Functions

Host your Python apps with the Azure Functions Consumption plan or the Azure Functions Premium plan on Linux.

The Consumption plan is now generally available for Linux-based hosting and ready for production workloads. This serverless plan provides event-driven dynamic scale and you are charged for compute resources only when your functions are running. Our Linux plan also now has support for managed identities, allowing your app to seamlessly work with Azure resources such as Azure Key Vault, without requiring additional secrets.

The Consumption plan for Linux hosting also includes a preview of integrated remote builds to simplify dependency management. This new capability is available as an option when publishing via the Azure Functions Core Tools and enables you to build in the cloud on the same environment used to host your apps as opposed to configuring your local build environment in alignment with Azure Functions hosting.

Workloads that require advanced features such as more powerful hardware, the ability to keep instances warm indefinitely, and virtual network connectivity can benefit from the Premium plan with Linux-based hosting now available in preview.

With the Premium plan for Linux hosting you can choose between bringing only your app code or bringing a custom Docker image to encapsulate all your dependencies, including the Azure Functions runtime as described in the documentation “Create a function on Linux using a custom image.” Both options benefit from avoiding cold start and from scaling dynamically based on events.

Next steps

Here are a few resources you can leverage to start building your Python apps in Azure Functions today:

Build your first Azure Functions in Python using the command line tools or Visual Studio Code.
Learn more about the programming model using the developer guide.
Explore the Serverless Library samples to find a suitable example for your data science, automation, or web workload.
Sign up for an Azure free account, if you don’t have one yet.

On the Azure Functions team, we are committed to providing a seamless and productive serverless experience for developing and hosting Python applications. With so much being released now and coming soon, we’d love to hear your feedback and learn more about your scenarios. You can reach the team on Twitter and on GitHub. We actively monitor StackOverflow and UserVoice as well, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!
Source: Azure

Azure Archive Storage expanded capabilities: faster, simpler, better

Since launching Azure Archive Storage, we have seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data such as application backups, healthcare records, autonomous driving recordings, etc. that might have been previously deleted could be stored in Azure Storage’s Archive tier in an offline state, then rehydrated to an online tier when needed. Earlier this month, we made Azure Archive Storage even more affordable by reducing prices by up to 50 percent in some regions, as part of our commitment to provide the most cost-effective data storage offering.

We’ve gathered your feedback regarding Azure Archive Storage, and today, we’re happy to share three archive improvements in public preview that make our service even better.

1. Priority retrieval from Azure Archive

To read data stored in Azure Archive Storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and takes a matter of hours to complete. Today we’re sharing the public preview release of priority retrieval from archive allowing for much faster offline data access. Priority retrieval allows you to flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high priority action. By paying a little bit more for the priority rehydration operation, your archive retrieval request is placed in front of other requests and your offline data is expected to be returned in less than one hour.

Priority retrieval is recommended to be used for emergency requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals which complete in less than 15 hours. But on rare occasions, a retrieval time of an hour or less is required. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see Blob Storage Rehydration.

The archive retrieval options now provided under the optional rehydrate-priority parameter are:

Standard rehydrate-priority is the new name for what Archive has provided over the past two years and is the default option for archive SetBlobTier and CopyBlob requests, with retrievals taking up to 15 hours.
High rehydrate-priority fulfills the need for urgent data access from archive, with retrievals for blobs under ten GB, typically taking less than one hour.

Regional priority retrieval demand at the time of request can affect the speed at which your data rehydration is completed. In most scenarios, a high rehydrate-priority request may return your Archive data in under one hour. In the rare scenario where archive receives an exceptionally large amount of concurrent high rehydrate-priority requests, your request will still be prioritized over standard rehydrate-priority but may take one to five hours to return your archive data. In the extremely rare case that any high rehydrate-priority requests take over five hours to return archive blobs under a few GB, you will not be charged the priority retrieval rates.
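
As an illustration, here is a minimal sketch of a high-priority rehydration request using the Azure Storage Blobs client library for Python (v12); the connection string, container, and blob names are placeholders, and the call corresponds to SetBlobTier with the rehydrate-priority option.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and names for this sketch.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="backups", blob="2019-q2-records.tar.gz")

# Move the archived blob back to the hot tier with high rehydrate priority;
# the blob stays offline until rehydration completes (typically under an hour).
blob.set_standard_blob_tier("Hot", rehydrate_priority="High")
```

If the faster retrieval is not needed, omitting the rehydrate priority gives the standard retrieval behavior described above.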

2. Upload blob direct to access tier of choice (hot, cool, or archive)

Blob-level tiering for general-purpose v2 and blob storage accounts allows you to easily store blobs in the hot, cool, or archive access tiers all within the same container. Previously when you uploaded an object to your container, it would inherit the access tier of your account and the blob’s access tier would show as hot (inferred) or cool (inferred) depending on your account configuration settings. As data usage patterns change, you would change the access tier of the blob manually with the SetBlobTier API or automate the process with blob lifecycle management rules.

Today we’re sharing the public preview release of Upload Blob Direct to Access tier, which allows you to upload your blob using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This new capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. For more information, please see Blob Storage Access Tiers.
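
For example, here is a minimal sketch of uploading straight into the archive tier with the Python client library (the connection string and names are placeholders); the same request can be made against the REST API by setting x-ms-access-tier on PutBlob or PutBlockList.

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="backups", blob="2019-q2-records.tar.gz")

# Upload directly into the archive tier, regardless of the account's default access tier.
with open("2019-q2-records.tar.gz", "rb") as data:
    blob.upload_blob(data, standard_blob_tier="Archive", overwrite=True)
```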

3. CopyBlob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. This holds especially true for data in Archive that needs to be read but still kept in Archive. The public preview release of CopyBlob enhanced capabilities builds upon our existing CopyBlob API with added support for the archive access tier, priority retrieval from archive, and direct to access tier of choice.

The CopyBlob API is now able to support the archive access tier; allowing you to copy data into and out of the archive access tier within the same storage account. With our access tier of choice enhancement, you are now able to set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you will also be able to specify the x-ms-rehydrate-priority of how quickly you want the copy created in the destination hot or cool tier. Please see Blob Storage Rehydration and the following table for information on the new CopyBlob access tier capabilities.

 

|                          | Hot tier source | Cool tier source | Archive tier source |
| ------------------------ | --------------- | ---------------- | ------------------- |
| Hot tier destination     | Supported       | Supported        | Supported within the same account; pending rehydrate |
| Cool tier destination    | Supported       | Supported        | Supported within the same account; pending rehydrate |
| Archive tier destination | Supported       | Supported        | Unsupported |
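
As a sketch of the "temporary copy" scenario above, the snippet below copies an archived blob into the cool tier within the same storage account using the Python client library; the names are placeholders, and the keyword arguments correspond to the x-ms-access-tier and x-ms-rehydrate-priority headers on CopyBlob.

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
source = service.get_blob_client(container="backups", blob="2019-q2-records.tar.gz")
copy = service.get_blob_client(container="scratch", blob="2019-q2-records-working.tar.gz")

# Copy the archived blob to a cool-tier working copy in the same account, requesting
# a high-priority rehydration of the copy; the source blob stays in the archive tier.
copy.start_copy_from_url(
    source.url,
    standard_blob_tier="Cool",
    rehydrate_priority="High",
)
```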

Getting Started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and CopyBlob enhancements) are supported by the most recent releases of the Azure Portal, .NET Client Library, Java Client Library, and Python Client Library. As always, you can also directly use the Storage Services REST API (version 2019-02-02 and greater). In general, we always recommend using the latest version regardless of whether you are using these new features.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and are looking forward to hearing your feedback about these features through email at ArchiveFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.

Thanks, from the entire Azure Storage Team!
Quelle: Azure

Announcing the general availability of Azure Ultra Disk Storage

Today, we are announcing the general availability (GA) of Microsoft Azure Ultra Disk Storage—a new Managed Disks offering that delivers unprecedented and extremely scalable performance with sub-millisecond latency for the most demanding Azure Virtual Machines and container workloads. With Ultra Disk Storage, customers are now able to lift and shift mission-critical enterprise applications to the cloud, including applications like SAP HANA, top-tier SQL databases such as SQL Server, Oracle DB, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB and Cassandra. With the introduction of Ultra Disk Storage, Azure now offers four types of persistent disks—Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives our customers a comprehensive set of disk offerings for every workload.

Ultra Disk Storage is designed to provide customers with extreme flexibility when choosing the right performance characteristics for their workloads. Customers can now have granular control on the size, IOPS, and bandwidth of Ultra Disk Storage to meet their specific performance requirements. Organizations can achieve the maximum I/O limit of a virtual machine (VM) with Ultra Disk Storage without having to stripe multiple disks. Check out the blog post “Azure Ultra Disk Storage: Microsoft's service for your most I/O demanding workloads” from Azure’s Chief Technology Officer, Mark Russinovich, for a deep under-the-hood view.

Since we launched the preview for Ultra Disk Storage last September, our customers have used this capability on Azure on a wide range of workloads and have achieved new levels of performance and scale on the public cloud to maximize their virtual machine performance.

Below are some quotes from customers in our preview program:

“Ultra Disk Storage enabled SEGA to seamlessly migrate from our on-premise datacenter to Azure and take advantage of flexible performance controls.”

– Takaya Segawa, General Manager/Creative Officer, SEGA

“Ultra Disk Storage allows us to achieve incredible write performance for our most demanding PostgreSQL database workloads – giving us the ability to scale our applications in Azure.” 

– Andrew Tindula, Senior IT Manager, Online Trading Academy

Ultra Disk Storage performance characteristics

Ultra Disk Storage offers sizes ranging from 4 GiB up to 64 TiB with granular increments. In addition, it is possible to dynamically configure and scale the IOPS and bandwidth on the disk independent of capacity.

Customers can now maximize disk performance by leveraging:

Up to 300 IOPS per GiB, to a maximum of 160K IOPS per disk
Up to a maximum of 2000 MBps per disk
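
For example, based on these limits, a 100 GiB Ultra disk can be provisioned with up to 30,000 IOPS (100 GiB × 300 IOPS per GiB), while a disk of roughly 534 GiB or larger can be dialed up to the 160K IOPS per-disk maximum.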

Pricing and availability

Ultra Disk is now available in East US 2, North Europe, and Southeast Asia. Please refer to the FAQ for latest supported regions. For pricing details for Ultra Disk, please refer to the pricing page. The general availability price takes effect from October 1, 2019 unless otherwise noted. Customers in preview will automatically transition to GA pricing on this date. No additional action is required by customers in preview.

Get started with Azure Ultra Disk Storage

You can request onboarding to Azure Ultra Disk Storage by submitting an online request or by reaching out to your Microsoft representative.
Source: Azure

Azure Ultra Disk Storage: Microsoft's service for your most I/O demanding workloads

Today, Tad Brockway, Corporate Vice President, Microsoft Azure, announced the general availability of Azure Ultra Disk Storage, an Azure Managed Disks offering that provides massive throughput with sub-millisecond latency for your most I/O demanding workloads. With the introduction of Ultra Disk Storage, Azure includes four types of persistent disks—Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives you price and performance options tailored to meet the requirements of every workload. Ultra Disk Storage delivers consistent performance and low latency for I/O intensive workloads like SAP HANA, OLTP databases, NoSQL, and other transaction-heavy workloads. Further, you can reach maximum virtual machine (VM) I/O limits with a single Ultra disk, without having to stripe multiple disks.

Durability of data is essential to business-critical enterprise workloads. To ensure we keep our durability promise, we built Ultra Disk Storage on our existing locally redundant storage (LRS) technology, which stores three copies of data within the same availability zone. Any application that writes to storage will receive an acknowledgement only after it has been durably replicated to our LRS system.

Below is a clip from a presentation I delivered at Microsoft Ignite demonstrating the leading performance of Ultra Disk Storage:

Microsoft Ignite 2018: Azure Ultra Disk Storage demo

Below are some quotes from customers in our preview program:

“With Ultra Disk Storage, we achieved consistent sub-millisecond latency at high IOPS and throughput levels on a wide range of disk sizes. Ultra Disk Storage also allows us to fine tune performance characteristics based on the workload.”

– Amit Patolia, Storage Engineer, DEVON ENERGY

“Ultra Disk Storage provides powerful configuration options that can leverage the full throughput of a VM SKU. The ability to control IOPS and MBps is remarkable.”

– Edward Pantaleone, IT Administrator, Tricore HCM

Inside Ultra Disk Storage

Ultra Disk Storage is our next generation distributed block storage service that provides disk semantics for Azure IaaS VMs and containers. We designed Ultra Disk Storage with the goal of providing consistent performance at high IOPS without compromising our durability promise. Hence, every write operation replicates to the storage in three different racks (fault domains) before being acknowledged to the client. Compared to Azure Premium Storage, Ultra Disk Storage provides its extreme performance without relying on Azure Blob storage cache, our on-server SSD-based cache, and hence it only supports un-cached reads and writes. We also introduced a new simplified client on the compute host that we call virtual disk client (VDC). VDC has full knowledge of virtual disk metadata mappings to disks in the Ultra Disk Storage cluster backing them. That enables the client to talk directly to storage servers, bypassing load balancers and front-end servers used for initial disk connections. This simplified approach minimizes the layers that a read or write operation traverses, reducing latency and delivering performance comparable to enterprise flash disk arrays.

Below is a figure comparing the different layers an operation traverses when issued on an Ultra disk compared to a Premium SSD disk. The operation flows from the client to Hyper-V to the corresponding driver. For an operation done on a Premium SSD disk, the operation will flow from the Azure Blob storage cache driver to the load balancers, front end servers, partition servers then down to the stream layer servers as documented in this paper. For an operation done on an Ultra disk, the operation will flow directly from the virtual disk client to the corresponding storage servers.

Comparison between the IO flow for Ultra Disk Storage versus Premium SSD Storage

One key benefit of Ultra Disk Storage is that you can dynamically tune disk performance without detaching your disk or restarting your virtual machines. Thus, you can scale performance along with your workload. When you adjust either IOPS or throughput, the new performance settings take effect in less than an hour.

Azure implements two levels of throttles that can cap disk performance: a “leaky bucket” VM-level throttle that is specific to each VM size (described in the documentation), and a new time-based throttle applied at the disk level, which is a key benefit of Ultra Disk Storage. This new throttle system provides more realistic behavior of a disk for a given IOPS and throughput setting. Hitting a leaky bucket throttle can cause erratic performance, while the new time-based throttle provides consistent performance even at the throttle limit. To take advantage of this smoother performance, set your disk throttles slightly below your VM throttle. We will publish another blog post in the future describing our new throttle system in more detail.

Available regions

Currently, Ultra Disk Storage is available in the following regions:

East US 2
North Europe
Southeast Asia

We will expand the service to more regions soon. Please refer to the FAQ for the latest on supported regions.

Virtual machine sizes

Ultra Disk Storage is supported on DSv3 and ESv3 virtual machine types. Additional virtual machine types will be supported soon. Refer to the FAQ for the latest on supported VM sizes.

Get started today

You can request onboarding to Azure Ultra Disk Storage by submitting an online request or by reaching out to your Microsoft representative. For general availability limitations refer to the documentation.
Source: Azure