Microsoft Azure Government is First Commercial Cloud to Achieve DoD Impact Level 5 Provisional Authorization, General Availability of DoD Regions

Furthering our commitment to be the most trusted cloud for Government, today Microsoft is proud to announce two milestone achievements in support of the US Department of Defense.

Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency

Azure Government is the first commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency. This provisional authorization allows all US Department of Defense (DoD) customers to leverage Azure Government for the most sensitive controlled unclassified information (CUI), including CUI of National Security Systems. 

DoD Authorizing Officials can use this Provisional Authorization as a baseline input into their authorization decisions for mission owner systems running in the Azure Government DoD regions.

This achievement is the result of the collective efforts of Microsoft, DISA and its mission partners to work through requirements pertaining to the adoption of cloud computing for infrastructure, platform and productivity across the DoD enterprise.

General Availability of DoD Regions

Information Impact Level 5 requires processing in dedicated infrastructure that ensures physical separation of DoD customers from non-DoD customers. Over the past few months, we ran a preview program with more than 50 customers across the Department of Defense, including all branches of the military, unified combatant commands and defense agencies.

We are thrilled to announce the general availability of the DoD regions to all validated DoD customers. Key services covering compute, storage, networking, and database are available today with full service level agreements and dedicated Azure Government support.

Dave Milton, Chief Technology Officer for Permuta Technologies, a leading provider of business solutions tailored for the military, affirmed the significance of the general availability of the Azure Government DoD regions, saying:

“Azure Government DoD Regions has given us the ability to deploy our SaaS offering, DefenseReady Cloud, to the US Department of Defense in a scalable, secure, and cost-effective environment. The mission-critical nature of DefenseReady Cloud requires high availability, compliance with DoD’s SRG Impact Level 5 requirements, and scalability to support our customers’ changing demands, with a flexible pricing structure that allows us to offer capability to large enterprises as well as local commands. With the Azure Government DoD Region, we are now able to onboard a customer in weeks, not months, allowing for a time-to-value that is unparalleled when compared with on-premises or other government-sponsored options. Through our partnership, Microsoft provided direct access to product group engineers, compliance support, training, and other resources needed to bring our SaaS solution to DoD.”

These accomplishments and the commentary of our customers and partners further reinforce our commitment to, and the strength of, our long-standing partnership with the US Department of Defense. For more information on Microsoft Cloud for Government services with Information Impact Level 5 provisional authorization, visit the Microsoft in Government blog, and for more detail on the Information Impact Level 5 provisional authorization (including in-scope services), please visit our Microsoft Trust Center.

To get started today, customers and mission partners may request access to our Azure Government Trial program.
Source: Azure

Join Microsoft at the NVIDIA GPU Technology Conference

The world of computing is going deep and wide on issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that were traditionally limited to a few organizations, hard to scale, and slow to deliver. Microsoft Azure delivers High Performance Computing (HPC) capabilities and tools, integrated into a global-scale cloud platform, to power solutions that address these challenges.

Whether it’s a manufacturer running advanced simulations, an energy company optimizing drilling through real-time well monitoring, or a financial services company using AI to navigate market risk, Microsoft’s partnership with NVIDIA makes access to NVIDIA GPUs easier than ever.

Join us in San Jose next week at NVIDIA’s GPU Technology Conference to learn how Azure customers combine the flexibility and elasticity of the cloud with the capability of NVIDIA GPUs. We will share examples of work we’ve done in oil & gas, automotive, artificial intelligence, and much more. Also, be on the lookout for new and exciting integrations between Azure AI and NVIDIA that bring GPU acceleration to more developers.

Microsoft sessions at the conference include:

Using ONNX for Accelerated Inferencing on Cloud and Edge – Prasanth Pulavarthi (Microsoft)
Accelerated Data Science Pipeline with RAPIDS on Azure – Kaarthik Sivashanmugam (Microsoft) & Manuel Reyes-Gomez (NVIDIA)
Distributed Deep Learning – Ilia Karmanov & Mathew Salvaris (Microsoft)
Real-Time Streaming of 3D Enterprise Applications to Low-Powered Devices – Andrei Ermilov
Minimizing Risk While Maximizing Gain: Full Feature Space Representation While Upgrading Minimal Subset of PCs – Tom Drabas
Using Deep Learning to Transform Internet Scale Web Searches – Adi Oltean & Guhan Suriyanarayanan
Dask and V100s for Fast, Distributed Batch Scoring of Computer Vision Workloads – Danielle Dean, Fidan Boylu Uz & Mathew Salvaris (Microsoft)
Microsoft Azure: GPUs for Visualization, AI and HPC – Ian Finder (Microsoft)

If you are participating in any of the many NVIDIA DLI training classes, you will get a chance to experience firsthand the breadth of Azure GPU compute options through the interactive classes, which are now powered by Azure GPUs.

Please come by and say “hello” at the Microsoft booth (1122), where Microsoft and partners, including Teradici and Workspot, will have demos of customer use cases, and experts will be on hand to talk about how Azure is the cloud for any GPU workload. Additionally, we will be demoing Microsoft Bing, which uses the power of NVIDIA GPUs on Azure to execute a variety of tasks, such as generating instant answers to complex questions and analyzing images to help you find similar-looking items or products.

As you can see, NVIDIA GPUs are a key part of the Microsoft High Performance Computing strategy that Azure customers rely on to drive innovation.

We’re looking forward to talking to you next week.
Source: Azure

Maximize existing vision systems in quality assurance with Cognitive AI

Quality assurance matters to manufacturers. The reputation and bottom line of a company can be adversely affected if defective products are released. If a defect is not detected and the flawed product is not removed early in the production process, the damage can run into the hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. But the data may not always be useful. For example, cameras alone often struggle to identify defects when large volumes of images are moving at high speed. Now, a solution provider has developed a way to integrate such existing systems into quality assurance management. Mariner, with its Spyglass solution, uses AI from Azure to achieve visibility over the entire line and to prevent product defects before they become a problem.

Quality assurance expenses

Quality assurance (QA) management in manufacturing is time-consuming and expensive, but critical. The effects of poor quality are substantial, as they result in:

Re-work costs
Production inefficiencies
Wasted materials
Expensive and embarrassing recalls 

And worst of all, dissatisfied customers who demand returns.

Multiple variables across multiple facilities

Too many variables make product defect analysis and prediction difficult. Manufacturers need to perform a root cause analysis across a manufacturing process that has complex variables. They want to determine which combinations of variables create high-quality products versus those that create inferior products. But to achieve this precision, the manufacturer needs to aggregate data across multiple systems to return a comprehensive view.

Legacy vision systems lack the precision of AI-based defect detection systems. Manufacturing processes can be incredibly complex, and older vision systems are often unable to consistently and accurately identify small flaws that may have a large impact on customer satisfaction. Also, false positives can bog down production schedules.

Additionally, the inability to aggregate data from multiple production lines or factories to determine the cause of variations in quality across multiple sites prevents a holistic view of operational efficiency.

Integrating legacy systems and AI on Azure

Spyglass Visual Inspection, powered by Microsoft Azure, is an easily implemented, rapid time-to-value QA solution that can reduce costs associated with product defects and increase customer satisfaction. It works with images from any vision system, so companies that already have systems in place can leverage them for additional return on investment (ROI).

By using cameras and other devices already in use on the production floor, the solution takes a lean approach to implementing new and emerging technologies like IoT, Cognitive AI, and computer vision. This ensures that manufacturers control costs and achieve value at every stage of production.

The architecture of the solution places data from existing systems at the front. Edge computing provides on-premises processing. The data then moves to storage on Azure, where it is further processed. AI can then be applied, and the results viewed using Power BI for insights into the system.

Benefits

Spyglass Visual Inspection harnesses the power of AI, IoT, and machine vision. The result is that manufacturers minimize defects and reduce costs through advanced analytics. For the manufacturer, the benefits that matter are:

Rapid ROI: Easy implementation and ramp-up enable immediate process improvements and a rapid return on your investment.
Greater visibility: Predictive analytics and root cause analysis drive quality improvements across multiple lines or sites.
Leverages existing vision systems: Extracts more value from existing industrial cameras and devices by augmenting them with AI-driven real-time insights.

Azure services

Spyglass Visual Inspection is powered by Microsoft Azure. It leverages the following Azure services:

Microsoft Deep Learning Virtual Machine hosts a neural network that extracts rich information from images to identify defects.
Azure IoT Edge ingests images from industrial cameras on the production line and runs cloud AI algorithms locally.
Azure IoT Hub receives images, metadata from the images, and results from the defect detection analysis at the edge.
Azure Stream Analytics enables users to create dashboards that offer deep insights into the types and causes of defects that are occurring across a massive number of variables.
Azure Data Lake Storage/Blob Storage stores the data. Because heterogeneous data from multiple streams can be stored, additional data types can be added to image-based analysis.
Azure SQL Database is used to store the business rules that define what a good or bad product is and what alerts should be generated in the analytics (a hypothetical sketch of such a rule table follows this list).
Azure Functions/Service Bus generates rules that trigger alerts so you can capture the most meaningful data for business users.
Power BI provides interactive dashboards that make data easy to access and understand, so users can make analytics-driven decisions.
Power Apps creates additional applications for manufacturers to act on the data and insights they have received.
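
To make the business-rules item above a little more concrete, here is a minimal, purely hypothetical sketch of the kind of rule table such a solution might keep in Azure SQL Database. The table and column names are illustrative assumptions, not the actual Spyglass schema.

-- Hypothetical illustration only: table and column names are assumptions,
-- not the actual Spyglass schema.
CREATE TABLE dbo.QualityRules
(
    RuleId        INT IDENTITY(1,1) PRIMARY KEY,
    DefectType    NVARCHAR(100) NOT NULL,  -- e.g. 'scratch', 'misalignment'
    MinConfidence DECIMAL(5,4)  NOT NULL,  -- model confidence at or above which a unit is flagged
    AlertSeverity NVARCHAR(20)  NOT NULL,  -- e.g. 'Warning', 'Critical'
    IsActive      BIT           NOT NULL DEFAULT 1
);

-- Flag any unit where the model is at least 90 percent confident it sees a scratch.
INSERT INTO dbo.QualityRules (DefectType, MinConfidence, AlertSeverity)
VALUES (N'scratch', 0.9000, N'Critical');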

Recommended next steps

Go to the marketplace listing for Spyglass and select Contact me.
Source: Azure

Microsoft Azure portal March 2019 update

This month’s updates include an improved “All services” view, Virtual Network Gateway overview updates, an improved DNS Zone and Load Balancer creation experience, Management Group integration into Activity Log, redesigned overview screens for certain services within Azure DB, an improved creation experience for Azure SQL Database, multiple changes to the Security Center, and more updates to Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. Download the Azure mobile app to stay connected to your Azure resources anytime, anywhere.

Here’s the list of March updates to the Azure portal:

Shell

Improved “All services” view

IaaS

Virtual network gateway overview updates
New full-screen DNS zone and Load Balancer create blades

Management experiences

Management Group integration into Activity Log

SQL

Redesigned overview blade for Azure Database for MySQL, PostgreSQL, and MariaDB services
Improved creation experience for Azure SQL Database

Azure Security Center

Secure score added as a dashboard KPI
New regulatory compliance dashboard
Updated security policies
Updated security recommendations

Other

Updates to Microsoft Intune

Shell

Improved “All services” view

We have improved the “All services” view, the view that shows all available services and resources in Azure:

The entire screen’s real estate is now utilized to show more services.
A category index has been added at the left to help navigate the Azure offering.

IaaS

Virtual network gateway overview updates

We've made significant updates to the overview page for virtual network gateways. We've added shortcut tiles in the center of the page to make it easier to find troubleshooting tools, and we've added a tile that brings up documentation so you can quickly learn more about your resource. We've also added metric charts so you can see at a glance what the tunnel ingress and egress are for your gateway.

Go to any virtual network gateway resource to try out the changes.

Improved creation experience for DNS Zones and Load Balancer

We are continuing our efforts to bring improved and consistent instance creation experiences to our top-level resources. As part of that effort, we’ve just updated DNS Zones and Load Balancer. The updated flow eliminates horizontal scrolling during the creation workflow and follows the same UI patterns we use in other popular services, such as Virtual Machines, Storage, Cosmos DB, and Azure Kubernetes Service, resulting in customer experiences that are easier to learn and more consistent.

Bring up either the DNS zone or Load Balancer resource in the Azure portal
Select Add to launch the new create experience

Management experiences

Management Group integration into Activity Log

Azure Management Groups provide a level of scope above subscriptions and are being adopted across the Azure portal. Users had been asking to view Management Group events in the Activity Log, and now the integration of Management Group events and filtering into the Activity Log allows users to audit their Management Groups. An authorized user of a Management Group can go to the Activity Log and see all actions that have happened on a Management Group, such as create, edit, delete, and parent change. In addition, you can now audit Policy Assignments on Management Groups.

If you have access to Management Groups in your current tenant, simply navigate to the Activity Log.
Select which Management Group you want to filter by using the first pill on the left.

SQL

Redesigned overview blades for Azure Database for MySQL, PostgreSQL, and MariaDB services

We have redesigned the overview blade for MySQL, PostgreSQL, and MariaDB, which provides an at-a-glance understanding of the status of your server. It is also aligned with the overview design of Azure SQL Database, Elastic Pools, Managed Instance, and Data Warehouse. In the overview, you can now see the resource usage over the last hour, common tasks, features available, and whether the features have been configured. Clicking on any of these tiles in the overview takes you to the full details and settings.

Select All Services
Search and select either Azure Database for MySQL, Azure Database for PostgreSQL, or Azure Database for MariaDB
Select any server from the list
Observe the overview blade

Improved creation experience for Azure SQL Database

We are continuing our efforts to bring improved and consistent creation experiences to our top-level resources. As part of that effort, we’ve just updated the SQL database create workflow. The updated flow eliminates horizontal scrolling during the creation workflow and follows the same UI patterns we use in other popular services, such as VM, Storage, Cosmos DB, and AKS, resulting in customer experiences that are easier to learn and more consistent.

Azure Security Center

Secure score as a dashboard KPI

Secure score is now the main compliance KPI in the Azure Security Center dashboard, replacing the previous percentage-based compliance metric.

New regulatory compliance dashboard

The new Azure Security Center regulatory compliance dashboard helps streamline the process for meeting regulatory compliance requirements by providing insights into your compliance posture. The information provided is based on continuous assessments of your Azure environment.

Updated security policies

We are updating Azure Security Center policies to use Azure Policy. You will be migrated automatically; no action is required on your part. For more information, see our documentation, “Working with security policies.”

Updated security recommendations

Azure App Service security recommendations have been improved to provide greater accuracy and environment compatibility. For more information, see our documentation, “Protecting your machines and applications in Azure Security Center.”

Other

Updates to Microsoft Intune

The Microsoft Intune team has made updates to Microsoft Intune. You can find them on the What's new in Microsoft Intune page.

Did you know?

We now have several new videos in the recently launched Azure portal “how to” video series!  This weekly series highlights specific aspects of the portal so you can be more efficient and productive while deploying your cloud workloads from the portal. Recent videos include a demonstration of how to create, share, and use dashboards, how to manage virtual machines while on the go using the Azure mobile app, and how to configure a virtual machine with the Azure portal. Keep checking in to our playlist on YouTube for a new video each week.

Next steps

The Azure portal’s large team of engineers always wants to hear from you, so please keep providing us with your feedback in the comments section below or on Twitter @AzurePortal.

Don’t forget to sign in to the Azure portal and download the Azure mobile app today to see everything that’s new. See you next month!
Source: Azure

IoT in Action: Thriving partner ecosystem key to transformation

The Internet of Things (IoT) is an ongoing journey. Three years ago, when I entered this business, the world of IoT was in its infancy. Traditional industry technology adopters understood the importance of innovation and implemented isolated solutions to address discrete business issues such as inventory management, loss prevention, logistics management, and other such processes that could be automated.

Digital transformation requires that these solutions be connected so that the data can be collected and analyzed more effectively across systems to drive exponential improvements in operations, profitability, and customer and employee loyalty. The advent of sensors and analytics at the edge plus advancements in cloud platforms and data analytics is enabling this. Systems and services are now connected to provide more holistic solutions that deliver value through operational or profitability improvements and in many cases, through new revenue streams.

The creation of these solutions typically requires an ecosystem of partners. This is where Microsoft provides a distinct advantage: our partner-plus-platform approach is driving change in IoT technology adoption. Microsoft has committed $5 billion in IoT-focused investments to grow and support our partner ecosystem, specifically through unrelenting R&D innovation in critical areas like security, new development tools and intelligent services, artificial intelligence, and emerging technologies. Our goal is to create trusted, connected solutions that improve business and customer experiences as well as the daily lives of people all over the world.

Transformation is about reducing complexity and developing new ways of doing business

We have some excellent examples of partners who have developed business applications that leverage advanced analytics on the Microsoft platform to deliver intelligent action.

The retail industry is not new to technology. However, with the advancement of digital technology, customer expectations have evolved rapidly, which is impacting nearly every aspect of how retailers operate, including how they engage customers and handle their products. There is a growing need to increase connectivity across solutions such as digital signage, interactive kiosks, smart PoS systems, omni-channel, and other key systems, which typically consist of a range of platforms and applications from multiple vendors.

The meldCX platform, powered by Microsoft Azure, makes it simpler and more cost-effective for retailers to develop, deploy, and manage the applications and IoT solutions for these connected devices. meldCX, a partner headquartered in Australia with operations in key global markets including USA, EU, and Asia, provides a single, integrated, and powerful dashboard that enables real-time data analysis as well as device and application control, empowering the retailer to focus on the job of delivering the best customer experience. Leveraging the meldCX solution, retailers and retail suppliers can improve the customer experience, significantly reduce loss prevention challenges, and enable unstaffed retail stores to become a reality.

Importance of the partner ecosystem

Microsoft is committed to the success of our partners, and this is manifested in several ways. First, we continue to build, support, and engage our rich ecosystem of partners, which connects partners with complementary capabilities to accelerate solution development and delivery for customers. Partners are connected to opportunities that they may not have otherwise been part of.

Second, we have established a core group of solution aggregator partners that have the resources, expertise, and service offerings needed to deliver holistic, end-to-end solutions for customers, thereby simplifying the procurement process.

Third, we have built a set of solution accelerators. These first-party, open-source, and preconfigured solutions are based on a common framework and address specific IoT use cases (horizontal and vertical-based). Partners can leverage these accelerators as a starting point to fast-track solution development and speed time to market. Accelerators also help them to minimize risk, reduce development costs, and focus on unique value differentiation.

Microsoft partners are also creating third-party, open-source solution accelerators, building platforms that can be used by other partners and customers to speed their solution development and time to value. WillowTwin™ epitomizes this approach.

Built on the Azure Digital Twins platform, Willow is empowering people and organizations to connect with the built world in a whole new way. Willow has created a solution that brings together data from multiple sources, including static, historical, IoT device, and live operating data, to create actionable insights designed to transform the operation and experience of smart buildings and infrastructure networks.

Developers and owners can now make decisions around how their company is addressing energy efficiency, spatial utilization, occupant experience, and the regulatory compliance of buildings and infrastructure networks. WillowTwin goes beyond connecting and managing IoT devices. The data, managed in an open-protocol software platform, is capable of modeling the relationships and interactions between people, places, and devices. The WillowTwin solution has been implemented across a range of customers, including the Thyssenkrupp Elevator division, to optimize building usage and management.

Together with our partners, Microsoft delivers solutions that genuinely transform our customers' business.

Achieving successful transformation

Real business model transformation must be a planned outcome. Directly addressing operational efficiencies or the bottom line is not enough. In a world where people are always connected, expectations have shifted. We expect a value/information exchange, and organizations must be prepared to rise to those expectations. Organizations that reimagine their approach and transform the way their employees, customers, and constituents interact with and use their services have the highest chance of success in the digital transformation race. IoT is a great place to start.

To learn more, register for the Microsoft IoT in Action event in Sydney on Tuesday, March 19, 2019. This global in-person event provides a forum for partners and customers to meet and share their experiences and to hear first-hand from Microsoft how we are delivering digital transformation solutions. I hope to meet you there. Please also visit our event series website for an upcoming event in a city near you.
Source: Azure

Hardware innovation for data growth challenges at cloud-scale

The Open Compute Project (OCP) Global Summit 2019 kicks off today in San Jose where a vibrant and growing community is sharing the latest in innovation to make hardware more efficient, flexible, and scalable.

For Microsoft, our journey with OCP began in 2014, when we joined the foundation and contributed the very same server and datacenter designs that power our global Azure cloud, but it didn’t stop there. Each year at the OCP Summit, we contribute innovation that addresses the most pressing challenges for our industry, from a modular and globally compatible server design and universal motherboard with Project Olympus, to hardware security with Project Cerberus, to a next-generation specification for SSD storage with Project Denali.

This year we’re turning our attention to the exploding volume of data being created daily. Data is at the heart of digital transformation, and companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage.

Data – the engine of Digital Transformation

The Global Datasphere,* which quantifies and analyzes the amount of data created, captured, and replicated in any given year across the world, is growing exponentially, and the growth is seemingly never-ending. IDC predicts* that the Global Datasphere will grow from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025. To keep up with the storage demands stemming from all this data creation, IDC forecasts* that over 22 ZB of storage capacity must ship across all media types from 2018 to 2025, with nearly 59 percent of that capacity supplied by the HDD industry.

With this challenge on the horizon, the enterprise is fast becoming the world's data steward once again. In the recent past, consumers were responsible for much of their own data, but their reliance on and trust of today’s cloud services, especially from connectivity, performance, and convenience perspectives, continues to increase and the desire to store and manage data locally continues to decrease.

Moreover, businesses are looking to centralize data management and delivery (e.g., online video streaming, data analytics, data security, and privacy) as well as to leverage data to control their businesses and the user experience (e.g., machine-to-machine communication, IoT, and persistent personalization profiling). The responsibility to maintain and manage all this consumer and business data is driving the growth of cloud provider datacenters. As a result, the enterprise’s role as a data steward continues to grow, and consumers are not just allowing this, but expecting it. Beginning in 2019, more data will be stored in the enterprise core than in all the world's existing endpoints.

The demand for data storage

A few years ago, we started looking at scale challenges in the cloud regarding the growth of data and the future of data storage needs. The amount of data created in the Global Datasphere is the focus of the storage industry. Even with the amount of data that is discarded, overwritten, or sensed and never stored longer than milliseconds, there still exists a growing demand for storage capacity across industries, governments, enterprises, and consumers.

Living in a digitized world where artificial intelligence drives business processes, customer engagements, and autonomous infrastructure, or where consumers' lives are hyper-personalized in nearly every aspect of behavior (including what time we'll be awakened based on the previous day's activities, overnight sleep patterns, and the next day's calendar), will require creating and storing more data than ever before.

IDC currently calculates that Data Age 2025* storage capacity shipments across all media types (HDD, SSD, NVM-flash/other, tape, and optical) over the next four years (2018–2021) will need to exceed the 6.9 ZB shipped across all media types over the past 20 years. IDC forecasts* that over 22 ZB of storage capacity must ship across all media types from 2018 to 2025 to keep up with storage demands. Around 59 percent of that capacity will need to come from the HDD industry and 26 percent from flash technology over the same time frame, with optical storage the only medium showing signs of fatigue as consumers continue to abandon DVDs in favor of streaming video and audio.

Introducing Microsoft’s Project Zipline

The ability to store and process data extremely efficiently is core to the cloud’s value proposition. Azure continues to grow dramatically, as does the amount of data it stores across many very data-intensive workloads. To address this, we’ve developed a cutting-edge compression algorithm and optimized the hardware implementation for the types of data we see in our cloud storage workloads. By engineering innovation at the systems level, we’ve been able to simultaneously achieve higher compression ratios, higher throughput, and lower latency than the other algorithms that are currently available. This enables compression without compromise, allowing always-on data processing for various industry usage models ranging from the cloud to the edge.

Microsoft’s Project Zipline compression algorithm yields dramatically better results, with up to 2X higher compression ratios versus the commonly used Zlib-L4 64KB model. Enhancements like this can lead to direct customer benefits, such as potential cost savings; indirectly, cost-effective access to petabytes or exabytes of capacity could enable new scenarios for our customers.

We are open sourcing Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL), with initial content available today and more coming soon. This contribution will provide collateral for integration into a variety of silicon components across the industry, such as edge devices, networking, and offload accelerators, for this new high-performance compression standard. Contributing RTL at this level of detail as open source to OCP is industry leading. It sets a new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opens the door for hardware innovation at the silicon level. Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models, such as network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices.

Project Zipline is a cutting-edge compression technology optimized for a large variety of datasets, and our release of RTL allows hardware vendors to use the reference design to produce hardware chips that deliver the highest compression, lowest cost, and lowest power out of the algorithm. It is available to the OCP ecosystem so members can contribute to it and create further benefit for the entire ecosystem, including Azure and our customers.

Project Zipline partners and ecosystem

As a leader in the cloud storage space, I'm particularly proud that we're able to take all the investment and innovation we've created and share it through OCP so that our partners can provide better solutions for their customers as well.

I look forward to seeing more of the industry joining OCP and collaborating so their customers can also see the benefit.

You can follow these links to learn more about Microsoft’s Project Zipline from our GitHub specification and more about our open source hardware development.

* Source: Data Age 2025, sponsored by Seagate with data from IDC Global DataSphere, November 2018
Source: Azure

Now available for preview: Workload importance for Azure SQL Data Warehouse

Azure SQL Data Warehouse is a fast, flexible, and secure analytics platform for enterprises of all sizes. Today we are announcing the preview availability of workload importance on the Gen2 platform to help customers manage resources more efficiently. Workload importance gives data engineers the ability to classify requests by importance. Requests with higher importance are guaranteed quicker access to resources, which helps meet SLAs.

“More with less” is often the motto when it comes to operating data warehousing solutions. The ability to easily scale up compute resources gives data engineers tremendous flexibility. However, when there is budget pressure and scaling down is required, problems can arise. Workload importance allows high-business-value work to meet SLAs in a shared environment with fewer resources.

An example of workload importance is shown below. The CEO’s request was submitted last and classified with high importance. Because the CEO’s request has high importance, it is granted access to resources before the analyst’s requests, allowing it to complete sooner.

Get started now classifying requests with importance

Classifying requests is done with the new CREATE WORKLOAD CLASSIFIER syntax. Below is an example that maps the login for the ExecutiveReports role to ABOVE_NORMAL importance and the AdhocUsers role to BELOW_NORMAL importance. With this configuration, members of the ExecutiveReports role have their queries complete sooner because they get access to resources before members of the AdhocUsers role.

CREATE WORKLOAD CLASSIFIER ExecReportsClassifier
WITH (WORKLOAD_GROUP = 'mediumrc'
,MEMBERNAME = 'ExecutiveReports'
,IMPORTANCE = above_normal);

CREATE WORKLOAD CLASSIFIER AdhocClassifier
WITH (WORKLOAD_GROUP = 'smallrc'
,MEMBERNAME = 'AdhocUsers'
,IMPORTANCE = below_normal);
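
Once the classifiers are in place, you can check how requests are being classified. The snippet below is a minimal sketch; it assumes the workload-management catalog view sys.workload_management_workload_classifiers and the importance column on sys.dm_pdw_exec_requests are available on your Gen2 data warehouse.

-- List the classifiers defined above (assumes the catalog view is available on Gen2).
SELECT *
FROM sys.workload_management_workload_classifiers;

-- Watch recently submitted requests; the importance column (an assumption tied to
-- this preview feature) shows how each request was classified.
SELECT TOP 50 request_id, [status], submit_time, start_time, importance, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;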

For more information on workload importance, refer to the Classification and Importance overview topics in the documentation. Check out the CREATE WORKLOAD CLASSIFIER doc as well.

See workload importance in action in the below videos:

Workload Importance concepts
Workload Importance scenarios

Next Steps

To get started today, create an Azure SQL Data Warehouse.
For feature requests, please vote on our UserVoice.
To stay up-to-date on the latest Azure SQL Data Warehouse news and features, follow us on Twitter @AzureSQLDW.

Source: Azure

Achieve more with Microsoft Game Stack

This blog post was authored by Kareem Choudhry, Corporate Vice President, Microsoft Gaming Cloud.

Microsoft is built on the belief of empowering people and organizations to achieve more – it is the DNA of our company. Today we are announcing a new initiative, Microsoft Game Stack, in which we commit to bringing together Microsoft tools and services that will empower game developers like yourself, whether you’re an indie developer just starting out or a AAA studio, to achieve more.

This is the start of a new journey, and today we are only taking the first steps. We believe Microsoft is uniquely suited to deliver on that commitment. Our company has a long legacy in games – and in building developer-focused platforms.

There are 2 billion gamers in the world today, playing a broad range of games, on a broad range of devices. There is as much focus on video streaming, watching, and sharing within a community as there is on playing or competing. As game creators, you strive every day to continuously engage your players, to spark their imaginations, and inspire them, regardless of where they are, or what device they’re using. Today, we’re introducing Microsoft Game Stack, to help you do exactly that.

What exactly is Microsoft Game Stack?

Game Stack brings together all of our game-development platforms, tools, and services—such as Azure, PlayFab, DirectX, Visual Studio, Xbox Live, App Center, and Havok—into a robust ecosystem that any game developer can use. The goal of Game Stack is to help you easily discover the tools and services you need to create and operate your game.

The cloud plays a critical role in Game Stack, and Azure fills this vital need. Azure provides building blocks like compute and storage, as well as cloud-native services ranging from machine learning and AI to push notifications and mixed reality spatial anchors. Azure is already available in 54 regions globally, including China, and continues to invest in building highly secure and sustainable cloud infrastructure and additional services for game developers. Azure’s global scale is what will give Project xCloud streaming technology the scale to deliver a great gaming experience for players worldwide, regardless of their device and location.

Already with Azure, companies like Rare, Ubisoft, and Wizards of the Coast are hosting multiplayer game servers, safely and securely storing player data, analyzing game telemetry, protecting their games from DDoS attacks, and training AI to create more immersive gameplay.

While Azure is part of Game Stack, it’s important to call out that Game Stack is cloud, network, and device agnostic. And we’re not stopping here.

What’s new?

The next piece of Game Stack is PlayFab, a complete backend service for building and operating live games. A year ago, we welcomed PlayFab into Microsoft through an acquisition. Today we’re excited to announce we are bringing PlayFab into the Azure family. Together, Azure and PlayFab are a powerful combination: Azure brings reliability, global scale, and enterprise-level security; PlayFab provides Game Stack with managed game-development services, real-time analytics, and LiveOps capabilities. Last fall, we saw what these two platforms can do together with PlayFab Multiplayer Servers, which allows you to safely launch and scale up multiplayer games by dynamically hosting your servers with Azure cloud compute.

To quote PlayFab’s co-founder James Gwertzman, “Modern game creators are less like movie directors, and more like cruise directors. Long-term success requires engaging players in a continuous cycle of creation, experimentation, and operation. It’s no longer possible to just ship your game and move on.” This philosophy is why we welcomed PlayFab into Microsoft a year ago. PlayFab supports all major devices, from iOS and Android, to PC and Web, to Xbox, Sony PlayStation, and Nintendo Switch; and all major game engines, including Unity and Unreal. PlayFab will also continue to support all major clouds going forward.

Today we’re also excited to announce five new PlayFab services in preview.

In public preview today:

PlayFab Matchmaking: Powerful matchmaking for multiplayer games, adapted from Xbox Live matchmaking, but now available to all games and all devices.

In private preview today (contact us to join the preview):

PlayFab Party: Voice and chat services, adapted from Xbox Party Chat, but now available to all games and for all devices. Party leverages Azure Cognitive Services for real-time translation and transcription to make games accessible to more players.
PlayFab Game Insights: Combines robust real-time game telemetry with game data from multiple other sources to measure your game’s performance and create actionable insights. Powered by Azure Data Explorer, Game Insights will offer connectors to existing first- and third-party data sources including Xbox Live.
PlayFab Pub Sub: Subscribe your game client to messages pushed from PlayFab’s servers via a persistent connection, powered by Azure SignalR. This enables scenarios such as real-time content updates, matchmaking notifications, and simple multiplayer gameplay.
PlayFab User Generated Content: Engage your community by allowing players to create and safely share user generated content with other players. This technology was originally built to support the Minecraft marketplace.

Growing the Xbox Live community

Another major component of Game Stack is Xbox Live. Over the past 16 years, Xbox Live has become one of the most vibrant and engaged gaming communities in the world. It is also a safe and inclusive network that has broken down boundaries in how gamers connect across devices.

Today, we’re excited for Xbox Live to become part of Microsoft Game Stack, providing identity and community services. Under Game Stack, Xbox Live will expand its cross-platform capabilities, as we introduce a new SDK that brings this community to iOS and Android devices.

Mobile developers will now be able to reach some of the most highly engaged and passionate gamers on the planet with Xbox Live. These are just a few of the benefits for mobile developers:

Trusted Game Identity: With the new Xbox Live SDK, developers can focus on creating great games and leverage Microsoft‘s trusted identity network to support log-in, privacy, online safety, and child accounts. 
Frictionless Integration: New a la carte service offerings and no Xbox Live certification pass give mobile developers flexibility in how they build and update their games. Developers just use the services that best fit their needs.
Vibrant Gaming Community: Reach Xbox Live’s growing community and connect gamers across a multitude of platforms. Find creative ways to enable achievements, Gamerscore, and “hero” stats, which have their own out-of-game experience, to keep gamers engaged.

Other Game Stack components

Other components of Game Stack include Visual Studio, Visual Studio Code, Mixer, DirectX, Azure App Center, and Havok. In the coming months, as we work to improve and grow Game Stack, you’ll see deeper connections between these services as we unify them to work more seamlessly together.

As an example of how this integration is already underway, today we’re bringing together PlayFab and these Game Stack components:

App Center: Crash log data from App Center is now connected to PlayFab, allowing you to better understand and respond to problems in your game in real-time by tying crash logs back to individual player profiles.
Visual Studio Code: With PlayFab’s new plug-in for Visual Studio Code, editing and updating Cloud Script just got a lot easier.

Create your world today and achieve more

As we expand our focus to the cloud, the nature of the platform may be changing, but our commitment to empower game developers like yourself is unwavering, and we’re looking forward to the journey ahead with Microsoft Game Stack. Our teams are inspired and excited by the possibilities as we start to pull together all these great services and technologies. Please be sure to share your feedback with us as we go, so we can help you achieve more. If you’re at GDC, stop by the Microsoft booth in the South Hall of the Moscone Center to try out many of the new services, and to learn more about the exciting opportunities ahead.
Source: Azure

Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints

I’m excited to announce the release of our first Azure Blueprint built specifically for a compliance standard: the ISO 27001 Shared Services blueprint sample, which maps a set of foundational Azure infrastructure, such as virtual networks and policies, to specific ISO controls.

Microsoft Azure leads the industry with over 90 compliance offerings. Azure meets a broad set of international and industry-specific compliance standards, such as General Data Protection Regulation (GDPR), ISO 27001, HIPAA, PCI, SOC 1 and SOC 2, as well as country-specific standards, including FedRAMP and other NIST 800-53 derived standards, Australia IRAP, UK G-Cloud, and Singapore MTCS. Many of our customers have expressed their interest in being able to leverage and build upon our internal compliance practices for their environments with a service that maps compliance settings automatically.

To help our customers simplify the creation of their environments in Azure while successfully interpreting US and international governance requirements, we are announcing a series of built-in Blueprints Architectures that can be leveraged during your cloud-adoption journey. Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations.

The ISO 27001 Shared Services blueprint is already available in your Azure tenant. Simply navigate to the Blueprints page, click “Create blueprint”, and choose the ISO 27001 Shared Services blueprint from the list.

The ISO 27001 blueprint is designed to help you deploy production-ready, secure, end-to-end solutions in one click and includes:

Hardened infrastructure resources: Azure Resource Manager templates are used to automatically deploy the components of the architecture into Azure by specifying configuration parameters during setup. The infrastructure components include Azure Firewall, Active Directory, Key Vault, Azure Monitor, Log Analytics, Virtual Networks with subnets, Network Security Groups, and Role Based Access Control definitions. Additionally, these resources can be locked by Blueprints as a security measure to protect the consistency of the defined blueprint and the environment it was designed to create.
Policy controls: Set of Azure policies that help provide real-time enforcement, compliance assessment, and remediation.
Proven virtual datacenter architectures: The infrastructure resources provided are based on the Microsoft approved virtual datacenter (VDC) architectures which take into consideration scale, performance, security, and governance.
Security and compliance controls: You still benefit from all the controls for which Microsoft is responsible as your cloud provider, and now this blueprint helps you configure a number of the remaining controls to meet ISO 27001 requirements.
Documentation: Step by step deployment guide outlining the shared services infrastructure and the policy control mapping matrix.
Migration runway: Provides a prescriptive set of instructions for deploying an Azure recommended foundation to accelerate migrations via the Azure migration center.

At Microsoft, we are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months, you will continue to see new built-in blueprints released for HITRUST, PCI DSS, the UK National Health Service (NHS) Information Governance (IG) Toolkit, FedRAMP, and the Center for Internet Security (CIS) Benchmark. If you would like to participate in any early previews, please sign up, or if you have a suggestion for a compliance blueprint, please share it via the Azure Governance Feedback Forum.

Learn more about the Azure ISO 27001 Blueprints.
Source: Azure

Monitoring on HDInsight Part 1: An Overview

Azure HDInsight offers several ways to monitor your Hadoop, Spark, or Kafka clusters. Monitoring on HDInsight can be broken down into three main categories:

Cluster health and availability
Resource utilization and performance
Job status and logs

Azure HDInsight offers two main monitoring tools: Apache Ambari, which is included with all HDInsight clusters, and optional integration with Azure Monitor logs, which can be enabled on all HDInsight clusters. While these tools contain some of the same information, each has advantages in certain scenarios. Read on for an overview of the best way to monitor various aspects of your HDInsight clusters using these tools.

Cluster health and availability

Azure HDInsight is a high-availability service that has redundant gateway nodes, head nodes, and ZooKeeper nodes to keep your HDInsight clusters running smoothly. While this ensures that a single failure will not affect the functionality of a cluster, you may still want to monitor cluster health so you are alerted when an issue does arise. Monitoring cluster health refers to monitoring whether all nodes in your cluster and the components that run on them are available and functioning correctly. Ambari is the recommended way to monitor the health for any given HDInsight cluster. You can learn more about monitoring cluster availability using Ambari in our documentation, “Availability and reliability of Apache Hadoop clusters in HDInsight.”

Ambari portal view showing the status of all components on a head node

Cluster resource utilization and performance

To maintain optimal performance on your cluster, it is essential to monitor resource utilization. This can be accomplished using Ambari and Azure Monitor logs.

With Ambari

Ambari is the recommended way to monitor utilization across the whole cluster. The Ambari dashboard shows easily glanceable widgets that display metrics such as CPU, network, YARN memory, and HDFS disk usage. The “Hosts” tab shows metrics for individual nodes so you can ensure the load on your cluster is evenly distributed. The “YARN Queue Manager” is also accessible through Ambari. This allows you to manage the capacity of each of your job queues to see how jobs are distributed between them and whether any jobs are resource constrained. Read more about using Ambari to monitor cluster performance in our documentation, “Monitor cluster performance.”

The Ambari Portal dashboard that shows the utilization of your entire cluster at a glance

With Azure Monitor logs

You can monitor resource utilization at the virtual machine (VM) level using Azure Monitor logs. All VMs in an HDInsight cluster push performance counters, including CPU, memory, and disk usage, into the Perf table in your Log Analytics workspace. As with any other Log Analytics table, you can query the Perf table, create visualizations with view designer, and configure alerts. One of the key benefits of Log Analytics is that you can push metrics and logs from multiple HDInsight clusters to the same Log Analytics workspace, allowing you to monitor multiple clusters in one place. You can read more about working with performance data in Azure Monitor logs by visiting our documentation, “View or analyze data collected with Log Analytics log search.”

Job status and logs

Another key part of monitoring HDInsight clusters is monitoring the status of submitted jobs and viewing relevant logs to assist with debugging. You may want to know how many jobs are currently running or when a job fails.

With Azure Monitor logs

The recommended way to do this on Azure HDInsight is through Azure Monitor logs. HDInsight clusters emit workload-specific logs and metrics from the OSS components, with each line stored as a record. Examples include the number of apps pending, failed, and killed for Spark/Hadoop clusters, and incoming messages for Kafka clusters. You can query the tables and set up alerts when certain metrics meet your defined thresholds. For example, you could set up an alert that fires and sends you an email, or takes some other action, whenever a Spark job fails.

HDInsight monitoring solutions

Workload-specific HDInsight monitoring solutions that build on top of the Azure Monitor logs integration are also available. These solutions are premade dashboards that contain visualizations for the aforementioned workload metrics. For example, the Spark solution shows graphs of metrics like pending, failed, and killed apps over time. Because these solutions are backed by a Log Analytics workspace, the visualizations show data for all clusters that emit metrics to the workspace. As a result, you can see visualizations of these workload metrics from multiple clusters of the same type, all in one place.

The HDInsight Spark monitoring solution

With Ambari

You can also view workload information from Spark/Hadoop clusters in the YARN ResourceManager UI, which is accessible via the Ambari portal. The YARN UI shows detailed information about all job submissions and provides a link to the capacity scheduler, where you can view information about your job queues. You can also access raw ResourceManager log files through the Ambari portal if you need to further debug jobs.

Try HDInsight now

Between Apache Ambari and the Azure Log Analytics integration, HDInsight offers comprehensive tools for monitoring all aspects of your HDInsight cluster. We hope you will take full advantage of monitoring on HDInsight, and we are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #AzureHDInsight and @AzureHDInsight. For questions and feedback, reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 36 public regions and Azure Government and National Clouds. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.
Source: Azure