Enhanced features in Azure Archive Storage now generally available

Since launching Azure Archive Storage, we've seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data, including application backups, healthcare records, autonomous driving recordings, and other data sets that might previously have been deleted, can be stored in Azure Storage's Archive tier in an offline state, then rehydrated to an online tier when needed.

With your usage and feedback, we’ve made our archive improvements generally available, making our service even better.

Priority retrieval from Azure Archive

Priority retrieval allows you to flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high-priority action. By paying a little more for the priority rehydration operation, your archive retrieval request is placed ahead of other requests, and your offline data is expected to be back online in less than one hour.

The two archive retrieval options are:

Standard priority is the default option for archive Set Blob Tier and Copy Blob requests, with retrievals taking up to 15 hours.
High priority fulfills the need for urgent data access from archive, with retrievals for blobs under 10 GB typically taking less than 1 hour.

We recommend priority retrieval for urgent requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals, which complete in less than 15 hours. On rare occasions, a retrieval time of an hour or less is required for business continuity. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see the Azure Blob rehydration documentation.
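The retrieval options above map onto two request headers of the Set Blob Tier operation. Below is a minimal Python sketch that assembles just those headers; authentication and the actual PUT request against the blob URL with the comp=tier query parameter are omitted, and the helper name is our own:

```python
def rehydrate_headers(target_tier, priority="Standard"):
    """Build the headers for a Set Blob Tier rehydration request.

    x-ms-access-tier selects the online destination tier (Hot or Cool);
    x-ms-rehydrate-priority selects Standard (up to 15 hours) or High
    (typically under 1 hour for blobs smaller than 10 GB).
    """
    if target_tier not in ("Hot", "Cool"):
        raise ValueError("rehydration target must be an online tier (Hot or Cool)")
    if priority not in ("Standard", "High"):
        raise ValueError("priority must be Standard or High")
    return {
        "x-ms-access-tier": target_tier,
        "x-ms-rehydrate-priority": priority,
        # 2019-02-02 is the first REST version supporting priority retrieval
        "x-ms-version": "2019-02-02",
    }
```

Sending these headers with a standard Set Blob Tier request is all that distinguishes a high-priority rehydration from a standard one.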

Upload blob direct to access tier of choice (hot, cool, or archive)

You can upload your blob data using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. Then, as data usage patterns change, you would change the access tier of the blob manually with the Set Blob Tier API or automate the process with blob lifecycle management rules. For more information, please see the Azure Blob storage access tiers documentation.
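Concretely, a direct-to-tier upload is an ordinary Put Blob request that carries the optional x-ms-access-tier header. A minimal sketch of assembling those headers (the helper name is ours; authentication and the PUT request itself are omitted):

```python
def put_blob_headers(content_length, tier=None):
    """Build the headers for a Put Blob request, optionally landing the
    blob directly in the hot, cool, or archive tier."""
    headers = {
        "x-ms-blob-type": "BlockBlob",  # only block blobs support access tiers
        "x-ms-version": "2019-02-02",
        "Content-Length": str(content_length),
    }
    if tier is not None:
        if tier not in ("Hot", "Cool", "Archive"):
            raise ValueError("tier must be Hot, Cool, or Archive")
        headers["x-ms-access-tier"] = tier
    return headers
```

Omitting the tier argument falls back to the account's default access tier setting, as described above.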

Copy Blob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. The Copy Blob API is now able to support the archive access tier, allowing you to copy data into and out of the archive access tier within the same storage account. With our access-tier-of-choice enhancement, you can set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you can also specify x-ms-rehydrate-priority to control how quickly you want the copy created in the destination hot or cool tier. Please see the Azure Blob rehydration documentation for more information.
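Putting the two optional parameters together, the headers for such a copy might be assembled as follows (a sketch; x-ms-copy-source carries the URL of the source blob, and the rest of the request, including authentication, is omitted):

```python
def copy_blob_headers(source_url, tier=None, rehydrate_priority=None):
    """Build the headers for a Copy Blob request that sets the destination
    access tier and, when copying out of archive, the rehydrate priority."""
    headers = {
        "x-ms-copy-source": source_url,
        "x-ms-version": "2019-02-02",
    }
    if tier is not None:
        headers["x-ms-access-tier"] = tier  # Hot, Cool, or Archive
    if rehydrate_priority is not None:
        # only meaningful when the source blob is in the archive tier
        headers["x-ms-rehydrate-priority"] = rehydrate_priority
    return headers
```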

Getting started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and Copy Blob enhancements) are supported by the most recent releases of the Azure Portal, AzCopy, .NET Client Library, Java Client Library, Python Client Library, and Storage Services REST API (version 2019-02-02 or higher). In general, we always recommend using the latest version of our tools and SDKs.

In addition to our first party tools, Archive Storage has an extensive network of partners who can help you discover and retain value from your data. As we improve our service with new features, we're also working to build our ecosystem and onboard additional partners. Please visit the Azure update to see the latest additions to our partner network.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and are looking forward to hearing your feedback about these features through email. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.
Source: Azure

Azure Maps updates offer new features and expanded availability

This blog post was co-authored by Chad Raynor, Principal Program Manager, Azure Maps.

Updates to Azure Maps services include new and recently added features, including the general availability of Azure Maps services on Microsoft Azure Government cloud. Here is a rundown of the new and recently added features for Azure Maps services:

Azure Maps is now generally available on Azure Government cloud

The general availability of Azure Maps for Azure Government cloud allows you to easily include geospatial and location intelligence capabilities in solutions deployed on Azure Government cloud with the quality, performance, and reliability required for enterprise grade applications. Microsoft Azure Government delivers a cloud platform built upon the foundational principles of security, privacy and control, compliance, and transparency. Public sector entities receive a physically isolated instance of Microsoft Azure that employs world-class security and compliance services critical to the US government for all systems and applications built on its architecture.

Azure Maps Batch services are generally available

Azure Maps Batch capabilities, available through the Search and Route services, are now generally available. Batch services allow customers to send batches of queries using just a single API request.

Batch capabilities are supported by the following APIs:

Post Search Address Batch
Post Search Address Reverse Batch
Post Search Fuzzy Batch
Post Route Directions Batch

What’s new for the Azure Maps Batch services?

Users now have the option to submit a synchronous (sync) request, which is designed for lightweight batch requests. When Azure Maps receives a sync request, it responds as soon as the batch items are calculated, instead of returning a 202 status along with a redirect URL; with the sync API there is no way to retrieve the results later. For large batches, we recommend continuing to use the asynchronous API, which is appropriate for processing big volumes of relatively complex route requests.

For the Search APIs, the asynchronous API allows developers to batch up to 10,000 queries and the sync API up to 100 queries. For the Route APIs, the asynchronous API allows developers to batch up to 700 queries and the sync API up to 100 queries.
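These limits can be enforced client-side when assembling the batch body. The sketch below targets Post Search Address Batch, where each batch item holds the query string of a single Search Address call; the batchItems shape follows the service's documented format, but the helper itself is ours, so verify the details against the API reference:

```python
from urllib.parse import quote

def build_search_batch(addresses, sync=False):
    """Build the POST body for an Azure Maps Post Search Address Batch
    request, enforcing the per-mode query limits."""
    limit = 100 if sync else 10000  # sync vs. asynchronous API limits
    if len(addresses) > limit:
        raise ValueError(f"batch of {len(addresses)} exceeds the limit of {limit}")
    return {"batchItems": [{"query": "?query=" + quote(a)} for a in addresses]}
```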

Azure Maps Matrix Routing service is generally available

The Matrix Routing API is now generally available. The service allows calculation of a matrix of route summaries for a set of routes defined by origin and destination locations. For every given origin, the service calculates the travel time and distance of routing from that origin to every given destination.

For example, let's say a food delivery company has 20 drivers and they need to find the closest driver to pick up the delivery from the restaurant. To solve this use case, they can call Matrix Route API.

What’s new in the Azure Maps Matrix Routing service?

The team worked to improve Matrix Routing performance and added support for submitting synchronous requests, as with the batch services described above. The maximum size of a matrix (the number of origins multiplied by the number of destinations) is 700 for an asynchronous request and 100 for a synchronous request.

For asynchronous API calls, we introduced a new waitForResults parameter. If this parameter is set to true, the user gets a 200 response if the request finishes in under 120 seconds. Otherwise, the user gets a 202 response right away, and the async API returns a URL in the Location header of the response for checking the progress of the async request.
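The size rule above is easy to check before submitting. The sketch below builds the Matrix Routing POST body, where origins and destinations are GeoJSON MultiPoint geometries (longitude first); the body shape matches the service's documented format, but treat the details as an assumption to verify against the API reference:

```python
def build_matrix_body(origins, destinations, sync=False):
    """Build the POST body for a Matrix Routing request and enforce the
    size limit (origins x destinations: 100 for sync, 700 for async)."""
    size = len(origins) * len(destinations)
    limit = 100 if sync else 700
    if size > limit:
        raise ValueError(f"matrix size {size} exceeds the limit of {limit}")

    def multipoint(points):
        # GeoJSON positions are [longitude, latitude]
        return {"type": "MultiPoint",
                "coordinates": [[lon, lat] for lon, lat in points]}

    return {"origins": multipoint(origins),
            "destinations": multipoint(destinations)}
```

For the delivery scenario above, the restaurant would be the single origin and the 20 driver locations the destinations, giving a 1 x 20 matrix.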

Updates for Render services

Introducing Get Map tile v2 API in preview

Like the Azure Maps Get Map Tiles API v1, our new Get Map Tile version 2 API, in preview, allows users to request map tiles in vector or raster format, typically for integration into a map control or SDK. The service allows you to request various map tiles, such as Azure Maps road tiles or real-time Weather Radar tiles. By default, Azure Maps uses vector map tiles for its SDKs.

The new version offers users a more consistent way to request data. It introduces the concept of a tileset: a collection of raster or vector data that is broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId that is used to request it, for example, microsoft.base.

Also, Get Map Tile v2 now supports the option to call imagery data that was earlier only available through the Get Map Imagery Tile API. In addition, Azure Maps Weather Service radar and infrared map tiles are only available through version 2.
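As a sketch of what a v2 request looks like, the helper below assembles the query parameters for a single tile. The parameter names (tilesetId, zoom, x, y) and the 2.0 api-version are assumptions based on the preview, so check the Render service reference; the range check reflects standard tile math, where the grid at zoom level z is 2^z by 2^z tiles.

```python
def tile_request_params(tileset_id, zoom, x, y):
    """Build the query parameters for a Get Map Tile v2 request."""
    grid = 2 ** zoom  # the tile grid is 2^zoom by 2^zoom
    if not (0 <= x < grid and 0 <= y < grid):
        raise ValueError(f"tile ({x}, {y}) is out of range at zoom {zoom}")
    return {
        "api-version": "2.0",
        "tilesetId": tileset_id,  # e.g. "microsoft.base"
        "zoom": zoom,
        "x": x,
        "y": y,
    }
```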

Dark grey map style available through Get Map Tile and Get Map Image APIs

In addition to serving the Azure Maps dark grey map style through our SDKs, customers can now also access it through the Get Map Tile APIs (version 1 and version 2) and the Get Map Image API in vector and raster formats. This empowers customers to create rich map visualizations, such as embedding a map image into a web page.

Azure Maps dark grey map style.

Route service: Avoid border crossings, pass in custom areas to avoid

The Azure Maps team has continued to make improvements to the Routing APIs. We have added the new parameter value avoid=borderCrossings to support routing scenarios where vehicles are required to avoid country/region border crossings and keep the route within one country.

To offer more advanced vehicle routing capabilities, customers can now include areas to avoid in their POST Route Directions API request. For example, a customer might want to avoid sending their vehicles into a specific area because they are not allowed to operate there without permission from the local authority. As a solution, users can now pass polygons in GeoJSON format in the route request POST body as a list of areas to avoid.
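The sketch below builds that fragment of the POST body from a list of rings. GeoJSON requires each polygon ring to be closed (first and last positions equal), which the helper takes care of; the avoidAreas key and the MultiPolygon shape are assumptions based on the description above, so verify them against the Route Directions reference:

```python
def avoid_areas_body(rings):
    """Build the POST body fragment listing polygonal areas to avoid.

    rings: a list of areas, each given as a list of (lon, lat) tuples
    describing the outer ring of a polygon.
    """
    polygons = []
    for ring in rings:
        pts = [[lon, lat] for lon, lat in ring]
        if pts[0] != pts[-1]:
            pts.append(list(pts[0]))  # GeoJSON rings must be closed
        polygons.append([pts])  # one outer ring per polygon, no holes
    return {"avoidAreas": {"type": "MultiPolygon", "coordinates": polygons}}
```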

Cartographic and styling updates

Display building models

Through the Azure Maps map control, users now have the option to render 2.5D building models on the map. By default, all buildings are rendered as just their footprints. By setting showBuildingModels to true, buildings are rendered with their 2.5D models. Try the feature now.

Display building models.

Islands, borders, and country/region polygons

To improve the user experience and give more detailed views, we reduced the amount of simplification applied to boundary data, offering a better visual experience at higher zoom levels. Users can now see more detailed polygon boundary data.

Left: Before boundary data simplification reduction. Right: After boundary data simplification reduction.

National Park labeling and data rendering

Based on feedback from our users, we simplified labels for scattered polygons by reducing the number of labels. Also, National Park and National Forest labels are now displayed starting at zoom level 6.

National Park and National Forest labels displayed on zoom level 6.

Send us your feedback

We always appreciate feedback from the community. Feel free to comment below, post questions to Stack Overflow, or submit feature requests to the Azure Maps Feedback UserVoice.
Source: Azure

Azure Security Center enhancements

At Microsoft Ignite 2019, we announced the preview of more than 15 new features. This blog provides an update for the features that are now generally available to our customers.

As the world comes together to combat COVID-19, and remote work becomes a critical capability for many companies, it’s extremely important to maintain the security posture of your cloud assets while enabling more remote workers to access them.

Azure Security Center can help prioritize the actions you need to take to protect your security posture and provide threat protection for all your cloud resources.

Enhanced threat protection for your cloud resources with Azure Security Center

Azure Security Center continues to extend its threat protection capabilities to counter sophisticated threats on cloud platforms:

Scan container images in Azure Container Registry for vulnerabilities generally available

Azure Security Center can scan container images in Azure Container Registry (ACR) for vulnerabilities.

The image scanning works by parsing through the packages or other dependencies defined in the container image file, then checking to see whether there are any known vulnerabilities in those packages or dependencies (powered by a Qualys vulnerability assessment database).

The scan is automatically triggered when pushing new container images to Azure Container Registry. Discovered vulnerabilities surface as Security Center recommendations and are included in the Secure Score, together with information on how to patch them to reduce the attack surface.

Since we launched the preview at Ignite 2019, registered subscriptions have initiated over 1.5 million container image scans. We have carefully analyzed the feedback we received and incorporated it into this generally available version, adding a scanning status to reflect the progress of the scan (Unscanned, Scan in progress, Scan error, and Completed) and improving the overall user experience.

Threat protection for Azure Kubernetes Service Support in Security Center generally available

The popular open-source platform Kubernetes has been adopted so widely that it is now an industry standard for container orchestration. Despite this widespread implementation, there’s still a lack of understanding regarding how to secure a Kubernetes environment. Defending the attack surfaces of a containerized application requires expertise. You need to ensure the infrastructure is configured securely, and constantly monitor for potential threats. Security Center support for Azure Kubernetes Service (AKS) is now generally available.
 
The capabilities include: 

Discovery and Visibility: Continuous discovery of managed AKS instances within Security Center’s registered subscriptions.
Secure Score recommendations: Actionable items to help customers comply with security best practices in AKS as part of the customer’s Secure Score, such as "Role-Based Access Control should be used to restrict access to a Kubernetes Service Cluster."
Threat Protection: Host and cluster-based analytics, such as “A privileged container detected.”

For the generally available release, we've added new alerts (for the full list, please visit the Alerts for Azure Kubernetes Service clusters and Alerts for containers – host level sections of the alerts reference table), and alert details have been fine-tuned to reduce false positives.
  

Cloud security posture management enhancements

Misconfiguration is the most common cause of security breaches for cloud workloads. Azure Security Center provides you with a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using the Azure Secure Score. Security Center helps manage and enforce your security policies to identify and fix such misconfigurations across your different resources and maintain compliance. We continue to expand our resource coverage and the depth of insights available in security posture management.

Support for custom policies generally available

Our customers have wanted to extend their current security assessment coverage in Security Center with their own security assessments, based on policies that they create in Azure Policy. With support for custom policies, now generally available, this is possible.

These new policies will be part of the Azure Security Center recommendations experience, the Secure Score, and the regulatory compliance standards dashboard. You can now create a custom initiative in Azure Policy, add it as a policy in Azure Security Center through a simple click-through onboarding experience, and visualize it as recommendations.

For this release, we've added the ability to edit the custom recommendation metadata to include severity, remediation steps, threat information, and more.

Assessment API generally available

We are introducing a new API for getting Azure Security Center recommendations, with information on why the underlying assessments failed. The new API includes two APIs:

Assessments metadata API: Gets recommendation metadata.
Assessments API: Provides the assessment results of each recommendation on a resource.

We advise customers using the existing Tasks API to move to the new Assessments API for their reporting.
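For illustration, listing assessment results is a GET against the Microsoft.Security resource provider at a chosen scope. The helper below only builds the URL; authentication (an Azure Resource Manager bearer token) is omitted, and the api-version value is our assumption, so check the REST reference:

```python
def assessments_url(scope, api_version="2020-01-01"):
    """Build the URL for listing Security Center assessment results at a
    scope such as a subscription, management group, or single resource."""
    return ("https://management.azure.com/"
            + scope.strip("/")
            + "/providers/Microsoft.Security/assessments"
            + "?api-version=" + api_version)
```

The Assessments metadata API follows the same pattern with an assessmentMetadata segment instead of assessments.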

Regulatory compliance dynamic compliance packages generally available  

You can now add ‘dynamic compliance packages,’ or additional standards beyond the ‘built-in’ compliance packages in regulatory compliance.

The regulatory compliance dashboard in Azure Security Center provides insights into your compliance posture relative to a set of industry standards, regulations, and benchmarks. Assessments continually monitor the security state of your resources and are used to analyze how well your environment is meeting the requirements for specific compliance controls. Those assessments also include actionable recommendations for how to remediate the state of your resources and thus improve your compliance status.

Initially, the compliance dashboard included a very limited set of standards that were ‘built-in’ to the dashboard and relied on a static set of rules included with Security Center. With the dynamic compliance packages feature, you can add new standards and benchmarks that are important to you to your dashboard. Compliance packages are essentially initiatives defined in Azure Policy. When you add a compliance package to your subscription or management group from the ASC Security Policy, that essentially assigns the regulatory initiative to your selected scope (subscription or management group). You can see that standard or benchmark appear in your compliance dashboard with all associated compliance data mapped as assessments.

In this way, you can track newly published regulatory initiatives as compliance standards in your Security Center regulatory compliance dashboard. Additionally, when Microsoft releases new content for the initiative (new policies that map to more controls in the standard), the additional content appears automatically in your dashboard. You can also download a summary report for any of the standards that have been onboarded to your dashboard.

There are several supported regulatory standards and benchmarks that can be onboarded to your dashboard. The newest one is the Azure Security Benchmark, which is the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported by the dashboard as they become available.  

For more information about dynamic compliance packages, see the documentation here.

Workflow automation with Azure Logic Apps generally available 

Organizations with centrally managed security and IT operations implement internal workflow processes to drive required action within the organization when discrepancies are discovered in their environments. In many cases, these workflows are repeatable processes, and automation can greatly reduce overhead and streamline processes within the organization.

Workflow automation in Azure Security Center, now generally available, allows customers to create automation configurations leveraging Azure Logic Apps and to create policies that will automatically trigger them based on specific Security Center findings, such as recommendations or alerts. An Azure Logic App can be configured to perform any custom action supported by the vast community of Logic Apps connectors, or to use one of the templates provided by Security Center, such as sending an email. In addition, users can now manually trigger a Logic App on an individual alert or recommendation directly from the recommendation (with a ‘quick fix’ option) or alert page in Azure Security Center.

Advanced integrations with export of Security Center recommendations and alerts generally available

The continuous export feature of Azure Security Center, which supports the export of your security alerts and recommendations, is now generally available and can also be configured via policies. Use it to easily connect the security data from your Security Center environment to the monitoring tools used by your organization by exporting to Azure Event Hubs or Azure Log Analytics workspaces.

This capability supports enterprise-scale scenarios, among others, via the following integrations:

Export to Azure Event Hubs enables integration with Azure Sentinel, third party SIEMs, Azure Data Explorer, and Azure Functions.
Export to Azure Log Analytics workspaces enables integration with Microsoft Power BI, custom dashboards, and Azure Monitor.

For more information, read about continuous export.

Building a secure foundation

With these additions, Azure continues to provide a secure foundation and gives you built-in native security tools and intelligent insights to help you rapidly improve your security posture in the cloud. Azure Security Center strengthens its role as the unified security management and advanced threat protection solution for your hybrid cloud.

Security can’t wait. Get started with Azure Security Center today and visit Azure Security Center Tech Community, where you can engage with other security-minded users like yourselves.
Source: Azure

Updates to Azure Maps Web SDK includes powerful new features

Today, we are announcing updates to the Azure Maps Web SDK, which adds support for common spatial file formats, introduces a new data driven template framework for popups, includes several OGC services, and much more.

Spatial IO module

 

With as little as three lines of code this module makes it easy to integrate spatial data with the Azure Maps Web SDK. The robust features in this module allow developers to:

Read and write common spatial data files to unlock great spatial data that already exists without having to manually convert between file types. Supported file formats include: KML, KMZ, GPX, GeoRSS, GML, GeoJSON, and CSV files containing columns with spatial information.
Use new tools for reading and writing Well-Known Text (WKT). Well-Known Text is a standard way to represent spatial geometries as a string and is supported by most GIS systems. (Docs)
Connect to Open Geospatial Consortium (OGC) services and integrate with Azure Maps web SDK.

Overlay Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers on the map. (Docs)
Query data in a Web Feature Service (WFS). (Docs)

Overlay complex data sets that contain style information and have them render automatically using minimal code. For example, if your data aligns with the GitHub GeoJSON styling schema, many of these style properties will automatically be used to customize how each shape is rendered. (Docs)
Leverage high-speed XML and delimited file reader and writer classes. (Docs)

Try out these features in the sample gallery.

WMS overlay of world geological survey.

Popup templates

Popup templates make it easy to create data driven layouts for popups. Templates allow you to define how data should be rendered in a popup. In the simplest case, passing a JSON object of data into a popup template will generate a key value table of the properties in the object. A string with placeholders for properties can be used as a template. Additionally, details about individual properties can be specified to alter how they are rendered. For example, URLs can be displayed as a string, an image, a link to a web page or as a mail-to link. (Docs | Samples)

A popup template displaying data using a template with multiple layouts.

Additional Web SDK enhancements

Popup auto-anchor—The popup now automatically repositions itself to try to stay within the map view. Previously, the popup always opened centered above the position it was anchored to. Now, if the position it is anchored to is near a corner or edge, the popup adjusts the direction it opens so that it stays within the map view. For example, if the anchored position is in the top right corner of the map, the popup opens down and to the left of the position.
Drawing tools events and editing—The drawing tools module now exposes events and supports editing of shapes. This is great for triggering post draw scenarios, such as searching within the area the user just drew. Additionally, shapes also support being dragged as a whole. This is useful in several scenarios, such as copying and pasting a shape then dragging it to a new location. (Docs | Samples)
Style picker layout options—The style picker now has two layout options: the standard flyout of icons, or a list view of all the styles. (Docs | Sample)

Style picker icon layout.

Code sample gallery

The Azure Maps code sample gallery has grown to well over 200 samples. Nearly every sample was created in response to a technical query from a developer using Azure Maps.

An Azure Maps Government Cloud sample gallery has also been created and contains all the same samples as the commercial cloud sample gallery, ported over to the government cloud.

Here are a few of the more recently added samples:

The Route along GeoJSON network sample loads a GeoJSON file of line data that represent a network of paths and calculates the shortest path between two points. Drag the pins around on the map to calculate a new path. The network can be any GeoJSON file containing a feature collection of linestrings, such as a transit network, maritime trade routes, or transmission line network. Try the feature out.

Map showing shortest path between points along shipping routes.

The Census group block analysis sample uses census block group data to estimate the population within an area drawn by the user. It takes into consideration not only the population of each census block group, but also the amount of overlap each has with the drawn area. Try the feature out.

Map showing aggregated population data for a drawn area.

The Get current weather at a location sample retrieves the current weather for anywhere the user clicks on the map and displays the details in a nicely formatted popup, complete with weather icon. Try the feature out.

Map showing weather information for Paris.

Send us your feedback

We always appreciate feedback from the community. Feel free to comment below, post questions to Stack Overflow, or submit feature requests to the Azure Maps Feedback UserVoice.
Source: Azure

Microsoft Receives 2020 SAP® Pinnacle Award: Public and Private Cloud Provider Partner of the Year

I’m pleased to share that SAP recently named Microsoft its Partner of the Year for the 2020 SAP® Pinnacle Award category of Public and Private Cloud Provider. SAP presents these awards annually to the top partners that have excelled in developing and growing their partnership with SAP and helping customers run better. Winners and finalists in multiple categories were chosen based on recommendations from the SAP field, customer feedback and performance indicators.

Microsoft and SAP have a long history of partnership to serve our mutual customers with enterprise-class products, service, and support they rely on to run their most mission-critical business processes. Customers like CONA Services have increased agility and performance to handle over 160,000 sales orders a day by running a 28 TB SAP HANA® system on Azure. Daimler AG reduced operational costs by 50 percent and increased agility by spinning up resources on-demand in 30 minutes with SAP S/4HANA® and Azure, empowering 400,000 global suppliers. Carlsberg modernized its infrastructure and optimized its SAP production landscape by migrating everything, 1,600 TB of data, to Azure within six months.

In October 2019, we deepened this relationship even more by announcing the Embrace initiative, whereby Microsoft and SAP signed a unique go-to-market partnership. As part of this initiative, SAP, Microsoft, and our joint partner ecosystem have been working together to give our customers a well-defined path to migrate their SAP ERP to SAP S/4HANA® in the cloud, where they can harness the power of their applications to truly drive innovation. Specifically, our teams have been working on:

A simplified migration from on-premises editions of SAP ERP to SAP S/4HANA® for customers with integrated product and industry solutions. Industry market bundles will create a roadmap to the cloud for customers in focused industries, with a singular reference architecture and path to streamline implementation.
Collaborative support model for simplified resolution. In response to customer feedback, a combined support model for Azure and SAP Cloud Platform will help ease migration and improve communication.
Jointly developed market journeys to support customer needs. Designed in collaboration with SAP, Microsoft and system integrator partners will provide roadmaps to the digital enterprise with recommended solutions and reference architectures for customers. These offer a harmonized approach by industry for products, services, and practices across Microsoft, SAP and system integrators.

Over the past few months, I have had the privilege of working even more closely with my colleagues in SAP marketing to make the promise of the Embrace initiative real for our customers. This promise is all about bringing value to accelerate customers’ transformation to the intelligent enterprise. But I am not alone! Sales, marketing, engineering, and support teams across the two organizations have been teaming up to make it easier and clearer for customers to build and run their mission-critical SAP solutions on Azure. In these current times of global disruption and uncertainty, I believe both companies feel a greater responsibility to help our customers reduce complexity, empower their employees, drive innovation, and run their businesses more efficiently.

SAP and Microsoft have been partners for more than 25 years. In fact, Microsoft is the only cloud provider that’s been running SAP for its own finance, HR and supply chains for the last 20+ years, including SAP S/4HANA®. Our Microsoft IT team will share their experience of migrating and running our SAP business applications in the cloud in this upcoming virtual session on April 22, 2020. Likewise, SAP has chosen Azure to run a growing number of its own internal system landscapes, also including those based on SAP S/4HANA®. Our commitment to work together to deliver a world-class experience for our customers has grown stronger over the years. I truly believe in and see every day how this partnership is taking a more unified approach to accelerate the value customers get in the cloud and open up new opportunities for growth, innovation and business transformation.

Learn more about SAP solutions on Azure.
Source: Azure

Next Generation SAP HANA Large Instances with Intel® Optane™ drive lower TCO

At Microsoft Ignite 2019, we announced general availability of the new SAP HANA Large Instances powered by 2nd Generation Intel Xeon Scalable processors (formerly code-named Cascade Lake) and supporting Intel® Optane™ persistent memory (PMem).

Microsoft’s largest SAP customers continue to consolidate their business functions and grow their footprint. S/4HANA workloads demand increasingly larger nodes as they scale up. Scenarios for high availability/disaster recovery (HA/DR) and multi-tier data needs add to the complexity of operations.

In partnership with Intel and SAP, we have worked to develop the new HANA Large Instances with Intel Optane PMem, offering higher memory density and in-memory data persistence capabilities. Coupled with 2nd Generation Intel Xeon Scalable processors, these instances provide higher performance and a higher memory-to-processor ratio.

For SAP HANA solutions, these new offerings help lower total cost of ownership (TCO), simplify the complex architectures for HA/DR and multi-tier data, and offer 22 times faster reload times. The new HANA Large Instances extend the broad array of the existing Large Instances offering with purpose-built capabilities critical for running SAP HANA workloads.

Available now

The new S224 HANA Large Instances support 3 TB to 9 TB of memory with four sockets and 224 vCPUs. The new instances support both DRAM-only and DRAM plus Intel® Optane™ persistent memory combinations.

The variety of SKUs gives our customers the ability to choose the best solution for their SAP HANA in-memory workload needs, with higher memory capacity at lower cost compared to DRAM-only instances. S224 SKUs with a higher core-to-working-memory ratio are performance-optimized for OLAP, while those with a higher working-memory-to-core ratio are better priced for OLTP.

The S224 instances with Intel Optane PMem come in 1:1, 1:2, and 1:4 ratios. Each ratio indicates the amount of DRAM paired with Intel Optane memory. The architecture options available with these offerings are discussed in the next section. The new HANA Large Instances are available in several of the Azure regions where HANA Large Instances are offered.

Key benefits of deploying S224 instances

Platform consolidation

SAP HANA is an in-memory data platform, and its hybrid structure for processing both OLTP and OLAP workloads in real time with low latency is a major benefit for enterprises using SAP HANA. The 2nd Generation Intel Xeon Scalable processors offer 50 percent higher performance and a higher memory-to-processor ratio(i) compared to the previous generation of processors. Coupled with Intel Optane, the new instances offer even higher memory densities, with more than 3 TB per socket.

SAP HANA uses Intel Optane PMem as an extension to DRAM by selectively placing data structures in persistent memory, in a mode called App Direct. The column store data, which accounts for the majority of the data in most HANA systems, is placed in Intel Optane persistent memory, whereas working DRAM is used for delta merges, row store, and cache data.

For organizations with growing data needs, the higher memory densities enable a deployment to scale up or scale out with fewer S224 nodes (seen in Figure 1) compared to a larger number of DRAM-only nodes on previous-generation processors. This enables organizations to consolidate their platform footprint and reduce operational complexity, realizing a reduced TCO.

Figure 1: Platform consolidation with higher-memory-density nodes, from a larger scale-out deployment to fewer scale-up nodes.

Faster reload times

The data stored in Intel Optane PMem is persistent. This means that for SAP HANA deployments using the new instances with Optane PMem, there is no need to load data from disks or slower storage tiers in the event of a system reboot. As mentioned previously, SAP HANA leverages App Direct mode to store most of the database in Optane persistent memory. When a system reboot occurs, during upgrades for example, the data reload time is cut down dramatically, enabling a faster return to normal operations compared to DRAM-only systems.

In recent testing conducted on two S224 instances (a DRAM-only system with 6 TB of memory, and a system with 9 TB of memory consisting of 3 TB DRAM and 6 TB of Optane PMem in a 1:2 ratio), the data reload time on the Optane system was 22 times faster than on the DRAM-only system. After a system reboot, the load time was around 44 minutes on the DRAM system versus 2 minutes on the Optane node.

Figure 2: Internal testing using a 3 TB HANA dataset shows a 22x improvement in DB restart times on the new SAP HANA Large Instances using Intel Optane.

The faster reload and recovery times may allow some non-production deployments to run without HA, with reduced service windows, and remove the clustering complexity and downtime needed for upgrades and patches. Each SAP HANA Large Instance region comes with hot spares to cover the scenario of a complete system failure and enable recovery of the DB.

Lower TCO for HA/DR

The higher memory density offered with the new instances also enables new deployment options for business continuity. A smaller DRAM-only node at the primary site can replicate its data to a larger Intel Optane node (offered in 1:2 and 1:4 ratios), with the data preloaded in persistent memory. The higher-density Optane node can be used as a dual-purpose node (as seen in Figure 3) for QA testing while also acting as the primary node in the event of a failover at the primary site, lowering cost by eliminating the need for standalone QA and DR instances. Because the data on the larger Optane node is preloaded into Optane PMem, there is no need to load it from disk, which cuts downtime and achieves better RTO and RPO times.

Figure 3: Lower TCO with a dual-purpose node at DR site serving the needs for QA/Dev test and DR.

Similarly, HANA System Replication (HSR) configurations in a scale-out S/4HANA setup can be replicated into a single shared HA Optane node in a 1:4 ratio (as seen in Figure 4), reducing the complexity of managing multiple HA instances, thereby lowering TCO and achieving reduced service windows.

Figure 4: Lower TCO for HA and DR using shared higher memory node for scale out deployments.

Enabling SAP HANA on Intel Optane

Supported OS versions

Below is guidance on the supported OS and SAP HANA versions for using Intel Optane persistent memory (PMem) technology.

The following OS versions support Intel Optane in App Direct mode:

RHEL 7.6 or later
SLES 12 SP4 or later
SLES 15 or later

SAP HANA support

SAP HANA 2.0 SPS 03 is the first SAP HANA version to support Intel Optane in App Direct mode. The recommended version for customers using Optane nodes is SAP HANA 2.0 SPS 04 or later. SAP HANA can leverage Intel Optane in App Direct mode by configuring PMem regions, namespaces, and file systems. The HANA Large Instance operations team drives the configuration setup before handing the Optane node over to customers.
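For context, the low-level setup looks roughly like the following sketch. On HANA Large Instances these steps are performed by the operations team, not by customers, and the device, region, and mount-point names here are illustrative assumptions:

```
# Create an App Direct goal across the Optane modules (takes effect after reboot).
# ipmctl is Intel's PMem management CLI.
ipmctl create -goal PersistentMemoryType=AppDirect

# After reboot, carve an fsdax namespace out of each resulting region with ndctl.
ndctl create-namespace --mode=fsdax --region=region0

# Put a DAX-capable file system on the resulting pmem device and mount it for HANA.
mkfs.xfs /dev/pmem0
mkdir -p /hana/pmem/nvmem0
mount -o dax /dev/pmem0 /hana/pmem/nvmem0
```

One namespace and mount point is typically created per PMem region; the resulting mount points are what the base path configuration in the SAP HANA configuration section refers to.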

SAP HANA configuration

SAP HANA needs to recognize the new Intel Optane PMem DIMMs. The directory that SAP HANA uses as its base path must be mounted on the file system that was created for PMem. SAP HANA 2.0 SPS 04 or later is a requirement for Optane usage. Below is the specific configuration that sets the base path for the PMem volumes:

In the [persistence] section of the global.ini file, provide a line with a semicolon-separated list of all mounted PMem volumes, as shown below. Once this is set, SAP HANA recognizes the PMem devices and loads column store data into the modules.

[persistence]
basepath_persistent_memory_volumes=/hana/pmem/nvmem0;/hana/pmem/nvmem1;/hana/pmem/nvmem2;/hana/pmem/nvmem3

Learn more

If you are interested in learning more about the S224 SKUs, please contact your Microsoft account team. To learn more about running SAP solutions on Azure, visit SAP on Azure or download a free SAP on Azure implementation guide.

(i) Intel Shows 1.59x Performance Improvement in Upcoming Intel Xeon Processor Scalable Family
Source: Azure

Accelerating digital transformation in manufacturing

Digital transformation in manufacturing has the potential to increase annual global economic value by $4.5 trillion, according to the IDC MarketScape.(i) With so much upside, manufacturers are looking at how technologies like IoT, machine learning, and artificial intelligence (AI) can be used to optimize supply chains, improve factory performance, accelerate product innovation, and enhance service offerings.

Digital transformation starts by collecting data from machines on the plant floor, assets in the supply chain, or products being used by customers. This data can be combined with other business data and then modeled and analyzed to gain actionable insights.

Let’s take a look at three manufacturers—Festo, Kao, and AkzoNobel—and see how each one is using technologies like IoT, machine learning, and AI to accelerate their digital transformation.

Providing predictive maintenance as a service

Based in Germany, Festo sells electric and pneumatic drive solutions to 300,000 customers in 176 countries. The company’s goal is to increase uptime for customers by delivering predictive maintenance as software-as-a-service (SaaS) offerings. Festo’s strategy is to connect machines to the cloud with Azure IoT and then enable customers to visualize data along the entire value chain.

One of the first SaaS offerings is Festo Dashboards, built on Azure. Festo Dashboards provides clear, intuitive status information for equipment, such as sensor temperatures and valve switches. With Festo Dashboards, manufacturers can more easily monitor energy consumption, quickly diagnose faults, and optimize production availability.

Anticipating consumer trends for better manufacturing forecasting

Kao, one of Japan’s leading consumer brands, sees the consumer market evolving. Today, consumers prioritize their product experience over product quality. They also look to social media for purchasing guidance. These behaviors lead to forecasting challenges. To keep up with these changes, Kao sought to better understand individual customers and categorize trends into micro-segments. The company terms this approach “small mass marketing.” Kao designed a data analysis platform using Microsoft Azure Synapse Analytics and Microsoft Power BI to predict consumer trends for their detergent, cosmetic, and toiletry products. The Kao team combined data from real-time purchases, social media, and historical sales. Kao competes more effectively using predictive models, and chain store employees are empowered with real-time information for selling.

Reducing the development time of new paint colors

Dutch paint and coatings leader, AkzoNobel, is active in more than 100 countries. The company has honed the art of color matching for two centuries for cars, buildings, and interiors. One of the company’s businesses is developing the paint to repair cars when drivers have an accident. Manufacturers in the car and other industries constantly dream up new finishes to give their models an edge on the competition.

To keep up with this rapid rate of change, AkzoNobel introduced Azure Machine Learning into its color prediction process. Previously, scientists labored painstakingly in labs to adjust, recalibrate, and tweak a color until it was just right. The company worked with its scientists and technicians to integrate machine learning into their development process. The main impact is seen in the lab, where teams are now able to create more color recipes, more accurately, in less time. Previously, it could take up to two years to get a car color ready. Now AkzoNobel is seeing new paint colors ready in one month.

Next steps

For ideas on accelerating your digital transformation journey, download The Road to Intelligent Manufacturing: Leveraging a Platform, co-authored by Microsoft and Capgemini.

(i) IDC MarketScape: Worldwide Industrial IoT Platforms in Manufacturing 2019 Vendor Assessment
Source: Azure

Solutions and guidance to help content producers and creators work remotely

The global health pandemic has impacted every organization on the planet—no matter the size—their employees, and the customers they serve. The emphasis on social distancing and shelter-in-place orders has disrupted virtually every industry and form of business. The Media & Entertainment (M&E) industry is no exception. Most physical productions have been shut down for the foreseeable future. Remote access to post-production tools and content is theoretically possible, but in practice it is fraught with numerous issues, given the historically evolved, fragmented nature of the available toolsets, the vendor landscape, and the overall structure of the business.

At the same time, more so today than ever before, people are turning to stories, content, and information to connect us with each other. If you need help or assistance with general remote work and collaboration, please visit this blog.

If you’d like to learn more about best practices and solutions for M&E workloads, such as VFX, editorial, and other post-production workflows—which are more sensitive to network latency, require specialized high-performance hardware and software in custom pipelines, and where assets are mostly stored on-premises (sometimes in air-gapped environments)—read on.

First, leveraging existing on-premises hardware can be a quick solution to get your creative teams up and running. This works when you have devices inside the perimeter firewall, tied to specific hardware and network configurations that can be hard to replicate in the cloud. It also enables cloud as a next step rather than a first step, helping you fully leverage existing assets and pay for cloud only as you need it. Solutions such as Teradici Cloud Access Software running on your artists’ machines enable full utilization of desktop computing power, while your networking teams provide a secure tunnel to each machine. No data movement is necessary, and latency impacts between storage and machine are minimized, making this a simple, fast solution to get your creatives working again. For more information, read Teradici’s Work-From-Home Rapid Response Guide and the specific guidance for standalone computers with consumer-grade NVIDIA GPUs.

Customers who need to enable remote artists with cloud workstations, while maintaining data on-premises, can also try out an experimental way to use Avere vFXT for Azure caching policies to further reduce latency. This new approach optimizes creation, deletion, and listing of files on remote NFS shares often impacted by increased latency. 

Second, several Azure partners have accelerated work already in progress to provide customers with new remote options, starting with editorial.

Avid has made their new Avid Edit on Demand solution immediately available through their Early Access Program. This is a great solution for broadcasters and studios who want to spin up editorial workgroups of up to 30 users. While the solution will work for customers anywhere in the world, it is currently deployed in US West 2, East US 2, North Europe, and Japan East so customers closest to those regions will have the best user experience. You can apply to the Early Access Program here, and applications take about two days to process. Avid is also working to create a standardized Bring Your Own License (BYOL) and Software as a Service (SaaS) that addresses enterprise post-production requirements.
Adobe customers who purchase Creative Cloud for individuals or teams can use Adobe Premiere Pro for editing in a variety of remote work scenarios. Adobe has also extended existing subscriptions for an additional two months. For qualified Enterprise customers who would like to virtualize and deploy Creative Cloud applications in their environments, Adobe wanted us to let you know, “it is permitted as outlined in the Creative Cloud Enterprise Terms of Use.” Customers can contact their Adobe Enterprise representative for more details and guidance on best practices and eligibility.
BeBop, powered by Microsoft Azure, enables visual effects artists, editors, animators, and post-production professionals to create and collaborate from any corner of the globe, with high security, using just a modest internet connection. Customers can remotely access Adobe Creative Cloud applications, Foundry software, and Autodesk products and subscriptions including Over the Shoulder capabilities and BeBop Rocket File Transfer. You can sign up at Bebop’s website.
StratusCore provides a comprehensive platform for the remote content creation workforce including industry leading software tools through StratusCore’s marketplace; virtual workstation, render nodes and fast storage; project management, budget and analytics for a variety of scenarios. Individuals and small teams can sign up here and enterprises can email them here.

Third, while these solutions work well for small to medium projects, teams, and creative workflows, we know major studios, enterprise broadcasters, advertisers, and publishers have unique needs. If you are in this segment and need help enabling creative or other Media and Entertainment-specific workflows for remote work, please reach out to your Microsoft sales, support, or product group contacts so we can help.

I know that we all want to get people in this industry back to work, while keeping everyone as healthy and safe as possible!

We’ll keep you updated as more guidance becomes available, but until then thank you for everything everyone is doing as we manage through an unprecedented time, together.
Source: Azure

Using Azure Monitor source map support to debug JavaScript errors

Azure Monitor’s new source map support expands a growing list of tools that empower developers to observe, diagnose, and debug their JavaScript applications.

Difficult to debug

As organizations rapidly adopt modern JavaScript frontend frameworks such as React, Angular, and Vue, they are left with an observability challenge. Developers frequently minify/uglify/bundle their JavaScript applications upon deployment to make their pages more performant and lightweight, which obfuscates the telemetry collected from uncaught errors and makes those errors difficult to discern.

Source maps help solve this challenge. However, it’s difficult to associate the captured stack trace with the correct source map. Add in the need to support multiple versions of a page, A/B testing, and safe-deploy flighting, and it’s nearly impossible to quickly troubleshoot and fix production errors.

Unminify with one-click

Azure Monitor’s new source map integration enables users to link an Azure Monitor Application Insights Resource to an Azure Blob Services Container and unminify their call stacks from the Azure Portal with a single click. Configure continuous integration and continuous delivery (CI/CD) pipelines to automatically upload your source maps to Blob storage for a seamless end-to-end experience.

Microsoft Cloud App Security’s story

The Microsoft Cloud App Security (MCAS) Team at Microsoft manages a highly scalable service with a React JavaScript frontend and uses Azure Monitor Application Insights for client-side observability.

Over the last five years, they’ve increased their agility to the point of deploying multiple versions per day. Each deployment results in hundreds of source map files, which are automatically uploaded to Azure Blob container folders according to version and type, and stored for 30 days.

Daniel Goltz, Senior Software Engineering Manager, on the MCAS Team explains, “The Source Map Integration is a game-changer for our team. Before it was very hard and sometimes impossible to debug and resolve JavaScript based on the unminified stack trace of exceptions. Now with the integration enabled, we are able to track errors to the exact line that faulted and fix the bug within minutes.”

Debugging JavaScript demo

Here’s an example scenario from a demo application.

Get started

Configure source map support once, and all users of the Application Insights Resource benefit. Here are three steps to get started:

1. Enable web monitoring using our JavaScript SDK.
2. Configure a Source Map storage account, from either the End-to-end transaction details blade or the Properties blade.
3. Configure your CI/CD pipeline.

Note: Add an Azure File Copy task to your Azure DevOps Build pipeline to upload source map files to Blob storage each time a new version of your application deploys, ensuring the relevant source map files are always available.
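As a sketch of what that step might look like in a YAML pipeline (the service connection, storage account, and container names below are placeholder assumptions to replace with your own):

```
# Azure DevOps pipeline step: copy source maps to Blob storage on each deploy.
# azureSubscription, storage, and ContainerName values are illustrative placeholders.
- task: AzureFileCopy@4
  displayName: Upload source maps to Blob storage
  inputs:
    SourcePath: '$(Build.SourcesDirectory)/dist/**/*.js.map'
    azureSubscription: 'my-service-connection'
    Destination: AzureBlob
    storage: 'mysourcemapstorage'
    ContainerName: 'sourcemaps'
    BlobPrefix: '$(Build.BuildNumber)'
```

Prefixing blobs with the build number keeps source maps for multiple deployed versions side by side, which matters for A/B tests and safe-deploy flighting.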

 

Manually drag source map

If source map storage is not yet configured or if your source map file is missing from the configured Azure Blob storage container, it’s still possible to manually drag and drop a source map file onto the call stack in the Azure Portal.

 

Submit your feedback

Finally, this feature is only possible because our Azure Monitor community spoke out on GitHub. Please keep talking, and we’ll keep listening. Join the conversation by entering an idea on UserVoice, creating a new issue on GitHub, asking a question on StackOverflow, or posting a comment below.
Source: Azure

Detect large-scale cryptocurrency mining attack against Kubernetes clusters

Azure Security Center's threat protection enables you to detect and prevent threats across a wide variety of services, from the Infrastructure as a Service (IaaS) layer to Platform as a Service (PaaS) resources in Azure, such as IoT and App Service, as well as on-premises virtual machines.

At Ignite 2019 we announced new threat protection capabilities to counter sophisticated threats on cloud platforms, including preview for threat protection for Azure Kubernetes Service (AKS) Support in Security Center and preview for vulnerability assessment for Azure Container Registry (ACR) images.

Azure Security Center and Kubernetes clusters 

In this blog, we will describe a large-scale cryptocurrency mining attack against Kubernetes clusters that was recently discovered by Azure Security Center. This is one of many examples of how Azure Security Center can help you protect your Kubernetes clusters from threats.

Crypto mining attacks in containerized environments aren’t new. In Azure Security Center, we regularly detect a wide range of mining activities that run inside containers. Usually, those activities are running inside vulnerable containers, such as web applications, with known vulnerabilities that are exploited.

Recently, Azure Security Center detected a new crypto mining campaign that specifically targets Kubernetes environments. What differentiates this attack from other crypto mining attacks is its scale: within only two hours, a malicious container was deployed on tens of Kubernetes clusters.

The containers ran an image from a public repository: kannix/monero-miner. This image runs XMRig, a very popular open source Monero miner.

Telemetry showed that the container was deployed by a Kubernetes Deployment named kube-control.

As can be seen in the Deployment configuration below, the Deployment in this case ensures that 10 replicas of the pod run on each cluster:
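Based on the details reported above (Deployment name kube-control, image kannix/monero-miner, 10 replicas), a hedged reconstruction of the manifest would look roughly like the following; the labels and container name are illustrative, not the attacker's exact values:

```
# Sketch of the observed Deployment: 10 replicas of the public miner image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-control
spec:
  replicas: 10
  selector:
    matchLabels:
      app: kube-control
  template:
    metadata:
      labels:
        app: kube-control
    spec:
      containers:
      - name: monero-miner
        image: kannix/monero-miner
```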

In addition, the same actor that deployed the crypto mining containers also enumerated the cluster resources including Kubernetes secrets. This might lead to exposure of connection strings, passwords, and other secrets which might enable lateral movement.

The interesting part is that the identity behind this activity is system:serviceaccount:kube-system:kubernetes-dashboard, which is the dashboard’s service account. This fact indicates that the malicious container was deployed by the Kubernetes dashboard. The resource enumeration was also initiated by the dashboard’s service account.

There are three options for how an attacker can take advantage of the Kubernetes dashboard:

Exposed dashboard: The cluster owner exposed the dashboard to the internet, and the attacker found it by scanning.
Internal access: The attacker gained access to a single container in the cluster and used the cluster’s internal networking to reach the dashboard (which the default behavior of Kubernetes makes possible).
Legitimate credentials: Legitimate browsing to the dashboard using cloud or cluster credentials.

The question is which of the three options above was involved in this attack. To answer it, we can use a hint that Azure Security Center gives: a security alert on the exposure of the Kubernetes dashboard. Azure Security Center alerts when the Kubernetes dashboard is exposed to the internet. The fact that this security alert was triggered on some of the attacked clusters implies that the access vector here was a dashboard exposed to the internet.

A representation of this attack on the Kubernetes attack matrix would look like:

 

Avoiding cryptocurrency mining attacks

How could this be avoided?

Do not expose the Kubernetes dashboard to the Internet: Exposing the dashboard to the Internet means exposing a management interface.
Apply RBAC in the cluster: When RBAC is enabled, the dashboard’s service account has, by default, very limited permissions that do not allow functionality such as deploying new containers.
Grant only necessary permissions to the service accounts: If the dashboard is used, make sure to apply only necessary permissions to the dashboard’s service account. For example, if the dashboard is used for monitoring only, grant only “get” permissions to the service account.
Allow only trusted images: Enforce deployment of only trusted containers, from trusted registries.
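For example, a read-only role for the dashboard's service account might be sketched as follows; the resource list here is an illustrative assumption that you should scope to what your dashboard actually needs:

```
# Minimal read-only role for the dashboard service account (illustrative sketch).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-readonly
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list"]   # no create/update: cannot deploy containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-readonly
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```

With a binding like this in place, a request from the dashboard's service account to create a Deployment would be rejected by the API server's RBAC check.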

Learn more

Kubernetes is quickly becoming the new standard for deploying and managing software in the cloud. Few people have extensive experience with Kubernetes, and many focus only on general engineering and administration while overlooking the security aspect. Kubernetes environments need to be configured carefully to be secure, making sure no container-focused attack surface doors are left open to attackers. Azure Security Center provides:

Discovery and Visibility: Continuous discovery of managed AKS instances within Security Center’s registered subscriptions.
Secure Score recommendations: Actionable items to help customers comply with security best practices in AKS as part of the customer’s Secure Score, such as "Role-Based Access Control should be used to restrict access to a Kubernetes Service Cluster."
Threat Detection: Host and cluster-based analytics, such as “A privileged container detected."

To learn more about AKS Support in Azure Security Center, please visit the documentation here.
Source: Azure