Expanded Jobs functionality in Azure IoT Central

Since announcing the release of our Jobs feature at the Azure IoT Central general availability launch, we are excited to share how we are improving your device management workflow with additional Jobs functionality. You can now copy an existing job you’ve created, save a job to continue working on later, stop or resume a running job, and download a job details report once your job has finished running. These additions make managing your devices at scale much easier.

To copy a job you’ve created, simply select it from your main jobs list and select “Copy”. This opens a copy of the job, where you can optionally update any part of the job configuration. If your device set has changed since the original job was created, the copied job will reflect those changes for you to edit.

While you are editing your job, you now have the option to save the job to continue working on later by selecting “Save”. This saved job will appear on your main jobs list with a status of “Saved” and you can open it again at any time to continue editing.

Once you have chosen to run your job, you can select the “Stop” button to stop the job from executing any further. You can open a stopped job from your list and select “Run” again at any time you’d like.

Whether your job has been stopped or has completed, you can select “Download Device Report” near your device list to download a .csv file that lists each device ID, the time the job completed or stopped for that device, the device’s status, and an error message (if applicable). You can use this report to troubleshoot devices or to sort and filter results.
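Because the report is a plain .csv file, it is easy to post-process. Below is a minimal sketch that filters the report down to failed devices; the column headers are assumptions based on the fields listed above, so the actual headers in your downloaded report may differ:

```python
import csv
import io

# Hypothetical contents of a downloaded job device report. The column
# names are illustrative, inferred from the fields the report is
# described as containing (device ID, time, status, error message).
report_csv = """deviceId,timeCompleted,status,errorMessage
thermostat-001,2019-03-20T10:15:00Z,Completed,
thermostat-002,2019-03-20T10:16:12Z,Failed,Device unreachable
thermostat-003,2019-03-20T10:14:45Z,Completed,
"""

def failed_devices(csv_text):
    """Return the rows whose status is not 'Completed', for triage."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows if row["status"] != "Completed"]

failures = failed_devices(report_csv)
for row in failures:
    print(row["deviceId"], "->", row["errorMessage"])
```

A filter like this is also a quick way to feed a follow-up job: copy the original job and retarget it at only the devices that reported errors.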

We are continually working on improving your device management experience to make managing devices at scale easier than ever. If you have any suggestions for the device management or Jobs functionalities you would find useful in your workflow, please leave us feedback.

Learn more about how to run a job in Azure IoT Central.
Source: Azure

Data integration with ADLS Gen2 and Azure Data Explorer using Data Factory

Microsoft announced the general availability of Azure Data Lake Storage (ADLS) Gen2 and Azure Data Explorer in early February, arming Azure with unmatched price-performance and security as one of the best clouds for analytics. Azure Data Factory (ADF) is a fully managed data integration service that empowers you to copy data from over 80 data sources with a simple drag-and-drop experience, and to operationalize and manage ETL/ELT flows with flexible control flow, rich monitoring, and continuous integration and continuous delivery (CI/CD) capabilities. In this blog post, we’re excited to update you on the latest Azure Data Factory integration with ADLS Gen2 and Azure Data Explorer, which you can now leverage to meet the advanced needs of your analytics workloads.

Ingest and transform data with ADLS Gen2

Azure Data Lake Storage is a no-compromises data lake platform that combines the rich feature set of advanced data lake solutions with the economics, global scale, and enterprise grade security of Azure Blob Storage. Our recent post provides you with a comprehensive insider view on this powerful service.

Azure Data Factory has supported ADLS Gen2 as a preview connector since the ADLS Gen2 limited public preview. The connector has now reached general availability alongside ADLS Gen2. With ADF, you can now:

Ingest data from over 80 data sources located on-premises and in the cloud into ADLS Gen2 with great performance.
Orchestrate data transformation using Databricks Notebook, Apache Spark in Python, and Spark JAR against data stored in ADLS Gen2.
Orchestrate data transformation using HDInsight with ADLS Gen2 as the primary store and script store, on either a bring-your-own or an on-demand cluster.
Egress data from ADLS Gen2 to a data warehouse for reporting.
Leverage Azure role-based access control (RBAC) and Portable Operating System Interface (POSIX) compliant access control lists (ACLs) to restrict access to authorized accounts only.
Invoke control flow operations like Lookup and GetMetadata against ADLS Gen2.
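To make the ingestion scenario concrete, here is a minimal sketch of the JSON shape of an ADF pipeline with a copy activity that lands data in ADLS Gen2, built in Python. The dataset names are hypothetical and the property shapes are illustrative of the general ADF pipeline schema, not a definitive reference:

```python
import json

# Illustrative ADF copy activity: read from a (hypothetical) SQL
# dataset and sink into an ADLS Gen2 dataset. In ADF, ADLS Gen2 is
# surfaced via the "AzureBlobFS" family of dataset/sink types.
copy_activity = {
    "name": "CopyToAdlsGen2",
    "type": "Copy",
    "inputs": [{"referenceName": "OnPremSqlDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "AdlsGen2Dataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "SqlSource"},
        "sink": {"type": "AzureBlobFSSink"},  # ADLS Gen2 sink
    },
}

pipeline = {"name": "IngestToDataLake", "properties": {"activities": [copy_activity]}}
print(json.dumps(pipeline, indent=2))
```

In practice you would author this through the ADF drag-and-drop UI rather than by hand; the JSON is what gets stored and deployed behind the scenes.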

Get started today

Tutorial on ingesting data into ADLS Gen2
ADLS Gen2 connector
Databricks Notebook activity to transform data in ADLS Gen2
HDInsight activity to transform data in ADLS Gen2

Populate Azure Data Explorer for real-time analysis

Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. It helps you handle the many data streams emitted by modern software and is designed for analyzing large volumes of diverse data.

Bringing data into Azure Data Explorer is often the first challenge customers face when adopting the service. Complementary to Azure Data Explorer’s native support for continuous data ingestion from event streams, Azure Data Factory enables you to ingress data in batches from a broad set of data stores in a codeless manner. With simple drag-and-drop features in ADF, you can now:

Ingest data from over 80 data sources – on-premises and cloud-based, structured, semi-structured, and unstructured into Azure Data Explorer for real-time analysis.
Egress data from Azure Data Explorer based on a Kusto Query Language (KQL) query.
Use the Lookup activity against Azure Data Explorer in control flow operations.
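For the egress scenario, the copy activity’s Azure Data Explorer source takes a KQL query that selects the data to copy out. A hedged sketch, with hypothetical table and column names:

```python
# A sketch of the kind of Kusto Query Language (KQL) query an ADF copy
# activity can run against Azure Data Explorer. The table and column
# names below are invented for illustration.
table = "DeviceTelemetry"
lookback = "1h"
query = (
    f"{table}\n"
    f"| where Timestamp > ago({lookback})\n"
    f"| summarize avg(Temperature) by DeviceId"
)
print(query)

# The query is then embedded in the activity's source definition:
adx_source = {"type": "AzureDataExplorerSource", "query": query}
```

Anything you can express in KQL, including aggregations like the `summarize` above, can shape the data before it ever leaves Azure Data Explorer.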

Get started

Azure Data Explorer connector


We will keep adding new features in ADF to tighten the integration with ADLS Gen2 and Azure Data Explorer. Stay tuned and let us know your feedback!
Source: Azure

Windows Virtual Desktop now in public preview on Azure

We recently shared the public preview of the Windows Virtual Desktop service on Azure. Now customers can access the only service that delivers simplified management, multi-session Windows 10, optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes, while enjoying built-in security and compliance.

This means customers can now move their multi-session Windows 10, Windows 7, and Windows Server (RDS) desktops and apps to Windows Virtual Desktop for a simplified management and deployment experience on Azure. We also built Windows Virtual Desktop as an extensible solution for our partners, including Citrix, Samsung, and Microsoft Cloud Solution Providers (CSPs).

Access to Windows Virtual Desktop is available through applicable RDS and Windows Enterprise licenses. With the appropriate license, you just need to set up an Azure subscription to get started today. You can choose the types of virtual machines and storage that suit your environment, and you can optimize costs by taking advantage of Reserved Instances (up to a 72 percent discount) and multi-session Windows 10.

You can read more detail about Windows Virtual Desktop in the Microsoft 365 blog published today by Julia White and Brad Anderson.

Get started with the public preview today.
Source: Azure

Microsoft’s Azure Cosmos DB is named a leader in the Forrester Wave: Big Data NoSQL

We’re excited to announce that Forrester has named Microsoft as a Leader in The Forrester Wave™: Big Data NoSQL, Q1 2019 based on their evaluation of Azure Cosmos DB. We believe Forrester’s findings validate the exceptional market momentum of Azure Cosmos DB and how happy our customers are with the product.

NoSQL platforms are on the rise

According to Forrester, “half of global data and analytics technology decision makers have either implemented or are implementing NoSQL platforms, taking advantage of the benefits of a flexible database that serves a broad range of use cases…While many organizations are complementing their relational databases with NoSQL, some have started to replace them to support improved performance, scale, and lower their database costs.”

Azure Cosmos DB has market momentum

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service for mission-critical workloads. Azure Cosmos DB provides turnkey global distribution with unlimited endpoint scalability, elastic scaling of throughput (at multiple granularities, e.g., database, key-space, tables, and collections) and storage worldwide, single-digit-millisecond latencies at the 99th percentile, five well-defined consistency models, and guaranteed high availability, all backed by industry-leading, comprehensive SLAs. Azure Cosmos DB automatically indexes all data without requiring developers to deal with schema or index management. It is a multi-model service that natively supports document, key-value, graph, and column-family data models. Born natively in the cloud, Azure Cosmos DB is carefully engineered with multitenancy and global distribution from the ground up. As a foundational service in Azure, it is ubiquitous, running in all public regions and in DoD and sovereign clouds, with an industry-leading list of compliance certifications and enterprise-grade security, all without any extra cost.

Azure Cosmos DB’s unique approach of providing wire-protocol-compatible APIs for popular open-source databases ensures that you can use Azure Cosmos DB in a cloud-agnostic manner while still leveraging a robust database platform natively designed for the cloud. You get the flexibility to run your Cassandra, Gremlin, and MongoDB apps fully managed, with no vendor lock-in. While Azure Cosmos DB exposes APIs for these popular open-source databases, it does not rely on their implementations for realizing the semantics of the corresponding APIs.

According to the Forrester report, Azure Cosmos DB is starting to achieve strong traction and “Its simplified database with relaxed consistency levels and low-latency access makes it easier to develop globally distributed apps.” Forrester mentioned specifically that “Customer references like its resilience, low maintenance, cost effectiveness, high scalability, multi-model support, and faster time-to-value.”

Forrester notes Azure Cosmos DB’s global availability across all Azure regions and how customers use it for operational apps, real-time analytics, streaming analytics, and Internet of Things (IoT) analytics. Azure Cosmos DB powers many worldwide enterprises and Microsoft services such as Xbox, Skype, Teams, Azure, Office 365, and LinkedIn.

To fulfill their vision, in addition to operational data processing, organizations using Azure Cosmos DB increasingly invest in artificial intelligence (AI) and machine learning (ML) running on top of their globally distributed data. Azure Cosmos DB enables customers to seamlessly build, deploy, and operate low-latency machine learning solutions on planet-scale data. Deep integration between Spark and Azure Cosmos DB enables the end-to-end ML workflow: managing, training, and inferencing machine learning models on top of multi-model, globally distributed data for time-series forecasting, deep learning, predictive analytics, fraud detection, and many other use cases.

Azure Cosmos DB’s commitment

We are committed to making Azure Cosmos DB the best globally distributed database for all businesses and modern applications. With Azure Cosmos DB, we believe that you will be able to write amazingly powerful, intelligent, modern apps and transform the world.

If you are using our service, please feel free to reach out to us at AskCosmosDB@microsoft.com any time. If you are not yet using Azure Cosmos DB, you can try Azure Cosmos DB for free today; no sign-up or credit card is required. If you need any help or have questions or feedback, please reach out to us any time. For the latest Azure Cosmos DB news and features, please stay up to date by following us on Twitter at @AzureCosmosDB and #CosmosDB. We look forward to seeing what you will build with Azure Cosmos DB!

Download the full Forrester report and learn more about Azure Cosmos DB.
Source: Azure

Azure Stack IaaS – part five

Self-service is core to Infrastructure-as-a-Service (IaaS). Back in the virtualization days, you had to wait for someone to create a VLAN for you, carve out a LUN, and find space on a host. If Microsoft Azure ran that way, we would have needed to hire more and more admins as our cloud business grew.

Do it yourself

A different approach was required, which is why IaaS is important. Azure's IaaS gives the owner of the subscription everything they need to create virtual machines (VMs) and other resources on their own, without involving an administrator. To learn more visit our documentation, “Introduction to Azure Virtual Machines” and “Introduction to Azure Stack virtual machines.”

Let me give you a few examples that show Azure and Azure Stack self-service management of VMs.

Deployment

Creating a VM is as simple as going through a wizard. You create the VM by specifying everything it needs in the “Create virtual machine” blade: the operating system image or marketplace template, the size (memory, CPUs, number of disks, and NICs), high availability, storage, networking, monitoring, and even in-guest configuration.

Learn more by visiting the following resources:

Deploy Azure Linux VM – five minute quickstart
Deploy Azure Windows VM – five minute quickstart
Azure Stack VM Sizes
Azure Stack Marketplace
Azure Stack Supported Guest OSes
Azure Stack VM Considerations
Azure Stack Networking Considerations

Daily operations

That’s great for deployment, but what about later down the road, when you need to quickly change the VM? Azure and Azure Stack have you covered there too. The settings section of the VM lets you make changes to networking, disks, size (CPUs and memory), in-guest configuration extensions, high availability, and more.

One thing that was always a pain in the virtualization days was getting the right firewall ports open. Now you can manage this on your own without waiting on the networking team. In Azure and Azure Stack firewall rules are called network security groups. This can all be configured in a self-service manner as shown below.

Learn more about managing Azure VM firewall ports by visiting our documentation, “How to open ports to a virtual machine with the Azure portal.”
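Under the hood, each network security group rule is a small, declarative object. Here is a minimal sketch of a rule that opens inbound HTTPS; the field names mirror the general shape of an ARM securityRules entry and should be treated as illustrative rather than a complete schema:

```python
# Illustrative NSG rule allowing inbound HTTPS (TCP 443) to a VM.
nsg_rule = {
    "name": "allow-https-inbound",
    "properties": {
        "priority": 100,                # lower number = evaluated first
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "*",     # any source
        "sourcePortRange": "*",
        "destinationAddressPrefix": "*",
        "destinationPortRange": "443",  # the port being opened
    },
}
print(nsg_rule["name"], "opens port", nsg_rule["properties"]["destinationPortRange"])
```

The portal experience described above fills in exactly this kind of rule for you; the self-service part is that you no longer need a networking team to author it.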

Disk and image self-service is important too. In the virtualization days this was also a big pain point: I had to hand my disks and images to my admin to get them into the system for use. Fortunately, storage is self-service in Azure and Azure Stack. Your IaaS subscription includes access to both storage accounts and managed disks, from which you can upload and download your disks and images.

You can learn more by visiting our documentation, “Upload a generalized VHD and use it to create new VMs in Azure” and “Download a Linux VHD from Azure.”

Managed disks also give you the option to create and export snapshots.

Find more information by visiting the following resources:

Azure Managed Disks Overview
Managed Disks Snapshots
Azure Stack Managed Disks Considerations
Attach a managed data disk to an Azure VM

Other resources a VM owner can manage include load balancer configuration, DNS, VPN gateways, subnets, attaching and detaching disks, scaling up and down, scaling in and out, and much more.

Support and troubleshooting

When there is a problem, no one wants to wait for someone else to help. The more tools you have to correct the situation, the better. While operating one of the largest public clouds, the Azure IaaS team has learned which issues customers face most often and what their support needs are. To empower VM owners to solve these issues themselves, the team has created a number of self-service support and troubleshooting features. Perhaps the most widely used is the Reset Password feature. Why wasn’t this feature around in the virtualization days?

Learn more by visiting our documentation for resetting access on an Azure Windows VM and resetting access on an Azure Linux VM.

I need to mention a setting that has saved me from creating support problems through my own absentmindedness: the Lock feature. A lock can prevent any change to, or deletion of, a VM or any other resource.

Learn more about locking VMs and other Azure resources by visiting our documentation, “Locking resources to prevent unexpected changes.”

Other useful troubleshooting and support features include redeploying your VM to another host if you suspect it is having problems on its current host, checking boot diagnostics to see the state of the VM before it fully boots and is ready for connections, and reviewing performance diagnostics. As we learn and build these features in Azure, they eventually find their way to Azure Stack, so your admins don’t have to work so hard to support you.

Learn more by visiting our documentation, “Troubleshooting Azure Virtual Machines.”

Happy infrastructure admins

When you can take care of yourself, your admins can manage the underlying infrastructure without being interrupted by you. This means they can work on the things important to them and you can focus on what is important to you.

In this blog series

We hope you come back to read future posts in this series. Here are some of our planned upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Foundation of Azure Stack IaaS
Protect your stuff
Pay for what you use
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Breaking the wall between data scientists and app developers with Azure DevOps

As data scientists, we are used to developing and training machine learning models in our favorite Python notebook or an integrated development environment (IDE) like Visual Studio Code (VS Code). We then hand off the resulting model to an app developer, who integrates it into the larger application and deploys it. Often, bugs and performance issues go undiscovered until the application has already been deployed, and the resulting friction between app developers and data scientists as they identify and fix the root cause can be slow, frustrating, and expensive.

As AI is infused into more business-critical applications, it is increasingly clear that we need to collaborate closely with our app developer colleagues to build and deploy AI-powered applications more efficiently. As data scientists, we are focused on the data science lifecycle, namely data ingestion and preparation, model development, and deployment. We are also interested in periodically retraining and redeploying the model to adjust for freshly labeled data, data drift, user feedback, or changes in model inputs.

The app developer is focused on the application lifecycle – building, maintaining, and continuously updating the larger business application that the model is part of. Both parties are motivated to make the business application and model work well together to meet end-to-end performance, quality, and reliability goals.

What is needed is a way to bridge the data science and application lifecycles more effectively. This is where Azure Machine Learning and Azure DevOps come in. Together, these platform features enable data scientists and app developers to collaborate more efficiently while continuing to use the tools and languages we are already familiar and comfortable with.

The data science lifecycle or “inner loop” for (re)training your model, including data ingestion, preparation, and machine learning experimentation, can be automated with the Azure Machine Learning pipeline. Likewise, the application lifecycle or “outer loop”, including unit and integration testing of the model and the larger business application, can also be automated with the Azure DevOps pipeline. In short, the data science process is now part of the enterprise application’s Continuous Integration (CI) and Continuous Delivery (CD) pipeline. No more finger pointing when there are unexpected delays in deploying apps, or when bugs are discovered after the app has been deployed in production. 

Azure DevOps: Integrating the data science and app development cycles

Let’s walk through the diagram below to understand how this integration between the data science cycle and the app development cycle is achieved.

A starting assumption is that both the data scientists and app developers in your enterprise use Git as their code repository. As a data scientist, any changes you make to training code will trigger the Azure DevOps CI/CD pipeline to orchestrate and execute multiple steps, including unit tests, training, integration tests, and a code deployment push. Likewise, any changes the app developer or you make to application or inferencing code will trigger integration tests followed by a code deployment push. You can also set specific triggers on your data lake to execute both model retraining and code deployment steps. Your model is also registered in the model store, which lets you look up the exact experiment run that generated the deployed model.

With this approach, you as the data scientist retain full control over model training. You can continue to write and train models in your favorite Python environment. You get to decide when to execute a new ETL / ELT run to refresh the data to retrain your model. Likewise, you continue to own the Azure Machine Learning pipeline definition including the specifics for each of its data wrangling, feature extraction, and experimentation steps, such as compute target, framework, and algorithm. At the same time, your app developer counterpart can sleep comfortably knowing that any changes you commit will pass through the required unit, integration testing, and human approval steps for the overall application.
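The flow described above can be sketched as an Azure DevOps YAML pipeline. The trigger paths, script names, and test layout below are assumptions for illustration, not a prescribed project structure:

```yaml
# Illustrative CI pipeline: retrain and test the model whenever
# training code changes, then run integration tests on the app.
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - training/*        # hypothetical folder holding training code

pool:
  vmImage: 'ubuntu-16.04'

steps:
  - script: python -m pytest tests/unit
    displayName: Run unit tests
  - script: python training/submit_training_run.py   # submits the Azure ML pipeline
    displayName: Train model
  - script: python -m pytest tests/integration
    displayName: Run integration tests on model + app
```

A human approval gate on the release stage then stands between a green build and production deployment, which is what lets your app developer counterpart sleep comfortably.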

With the soon-to-be-released Data Prep Services (box at the bottom left of the diagram above), you will also be able to set thresholds for data drift and automate the retraining of your models!

In subsequent blog posts, we will cover in detail more topics related to CI/CD, including the following:

Best practices to manage compute costs with Azure DevOps for Machine Learning
Managing model drift with Azure Machine Learning Data Prep Services
Best practices for controlled rollout and A/B testing of deployed models

Learn more

Azure CI/CD Pipeline documentation
Azure Machine Learning Pipeline documentation
Learn more about the Azure Machine Learning service.
Get started with a trial of Azure Machine Learning service.

Source: Azure

The Value of IoT-Enabled Intelligent Manufacturing

As the manufacturing industry tackles some significant challenges including an aging workforce, compliance issues, and declining revenue, the Internet of Things (IoT) is helping reinvent factories and key processes. At the heart of this transformation journey is the design and use of IoT-enabled machines that help lead to reduced downtime, increased productivity, and optimized equipment performance.

Learn how you can apply insights from real-world use cases of IoT-enabled intelligent manufacturing when you attend the Manufacturing IoT webinar on March 28th. For additional hands-on, actionable insights around intelligent edge and intelligent cloud IoT solutions, join us on April 19th for the Houston Solution Builder Conference.

Using IoT solutions to move from a reactive to predictive model

In the past, factory managers often had no way of knowing when a machine might begin to perform poorly or completely shut down. When something went wrong, getting the equipment back up and running was often time consuming and based on trial-and-error troubleshooting. And for the company, any unplanned downtime meant slowed or halted production, resulting in lower productivity and higher costs.

The development of IoT-enabled machines with sensors allows companies to improve overall efficiency, performance, and profitability. For example, Rockwell Automation found it time consuming and challenging to monitor its equipment in remote locations. Using Microsoft Azure to connect that equipment, Rockwell Automation now sees real-time performance information and can proactively maintain equipment before an incident occurs.

Kontron S&T, a Microsoft partner, also recently developed the SUSiEtec platform, an end-to-end IoT solution that enables companies to build scalable edge computing solutions using Microsoft Azure IoT Edge integration and customization services. With SUSiEtec, companies can dynamically decide where data analysis will take place and manage distributed IoT devices regardless of where they’re located or how many devices are used. Join the Manufacturing IoT webinar to learn more about SUSiEtec and how to develop secure, manageable IoT solutions for manufacturing.

Keeping IoT data secure with Azure Sphere

Using IoT to create the factory of the future also means additional access points into the factory network and systems, so creating a secure network is a top priority. Factory managers typically access IoT data using mobile devices, which creates even more access points. For a truly connected IoT experience and factory, security is foundational.

Azure Sphere provides a foundation of security and connectivity that starts in the silicon and extends to the cloud. Together, Azure Sphere microcontrollers (MCUs), secured OS, and turnkey cloud security service guard every Azure Sphere device accessing IoT data, IoT sensors, and IoT-enabled machines. By adding useful software to Edge hardware, factories are protected with IT-proven standards as well as new Operational Technology (OT) network security.

Getting ready to develop IoT solutions

Moving to a factory of the future starts with determining what you want to achieve through the IoT-enabled machine. If predictive maintenance is the end goal, start by conducting an inventory of data sources. Identify all potential sources and types of relevant data to determine what is most essential. Then you’ll need to lay the groundwork for a robust predictive model by pulling in data that includes both expected behavior and failure logs.

With the initial logistics determined, the next step is to create a model, then test and iterate to figure out which model best forecasts the timing of unit failures. By moving to a live operational setting, you can apply the model to live, streaming data and observe how it works in real-world conditions. After adjusting your maintenance processes, systems, and resources to act on the new insights, the final step is to integrate the model into operations with Azure IoT Central.
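The learn-from-history-then-score-live-data loop can be illustrated with a deliberately tiny, framework-free sketch. The vibration readings and the threshold heuristic below are invented for illustration and stand in for a real trained model:

```python
# Toy predictive-maintenance sketch: learn an alert threshold from
# historical data containing both expected behavior and failure logs,
# then apply it to "live" readings.

# (vibration reading, failed_within_24h) pairs from historical logs
history = [(0.2, False), (0.3, False), (0.4, False),
           (0.9, True), (1.1, True), (0.8, True)]

def fit_threshold(samples):
    """Midpoint between the highest healthy and lowest pre-failure reading."""
    healthy = max(v for v, failed in samples if not failed)
    failing = min(v for v, failed in samples if failed)
    return (healthy + failing) / 2

def needs_maintenance(reading, threshold):
    """Flag a live reading that crosses the learned threshold."""
    return reading >= threshold

threshold = fit_threshold(history)          # ~0.6 for the data above
print(needs_maintenance(0.95, threshold))   # score an incoming live reading
```

A real deployment would replace this heuristic with a model trained on the full data inventory described above, but the loop is the same: fit on labeled history, score the live stream, and act on the flags.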

Of course, not all companies have the skillset or resources to develop an IoT solution from scratch. To accelerate the design, development, and implementation process, partners can utilize the Microsoft Accelerator program. By using open-source code or leveraging proven architectures, companies can create a fully customizable solution and quickly connect devices to existing systems in minutes. For instance, the Predictive Maintenance solution accelerator combines key Azure IoT services like IoT Hub and Stream analytics to proactively optimize maintenance and create automatic alerts and actions for remote diagnostics, maintenance requests, and other workflows.

Digitally transforming your own business and building or deploying IoT solutions that are highly scalable and economical to manage takes partnerships. Join Microsoft and Kontron S&T on March 28th for the webinar, Go from Reaction to Prediction – IoT in Manufacturing, and discover new approaches for achieving your business goals.
Source: Azure

Microsoft and NVIDIA bring GPU-accelerated machine learning to more developers

With ever-increasing data volume and latency requirements, GPUs have become an indispensable tool for doing machine learning (ML) at scale. This week, we are excited to announce two integrations that Microsoft and NVIDIA have built together to unlock industry-leading GPU acceleration for more developers and data scientists.

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, an open source software library from NVIDIA that allows traditional machine learning practitioners to easily accelerate their pipelines with NVIDIA GPUs.
ONNX Runtime has integrated the NVIDIA TensorRT acceleration library, enabling deep learning practitioners to achieve lightning-fast inferencing regardless of their choice of framework.

These integrations build on an already-rich infusion of NVIDIA GPU technology on Azure to speed up the entire ML pipeline.

“NVIDIA and Microsoft are committed to accelerating the end-to-end data science pipeline for developers and data scientists regardless of their choice of framework,” says Kari Briski, Senior Director of Product Management for Accelerated Computing Software at NVIDIA. “By integrating NVIDIA TensorRT with ONNX Runtime and RAPIDS with Azure Machine Learning service, we’ve made it easier for machine learning practitioners to leverage NVIDIA GPUs across their data science workflows.”

Azure Machine Learning service integration with NVIDIA RAPIDS

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, providing up to 20x speedup for traditional machine learning pipelines. RAPIDS is a suite of libraries built on NVIDIA CUDA for doing GPU-accelerated machine learning, enabling faster data preparation and model training. RAPIDS dramatically accelerates common data science tasks by leveraging the power of NVIDIA GPUs.

Exposed on Azure Machine Learning service as a simple Jupyter notebook, RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a dataframe library called cuDF, which will be familiar to pandas users, as well as an ML library called cuML that provides GPU versions of machine learning algorithms available in scikit-learn. And with Dask, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.

Learn more about RAPIDS on Azure Machine Learning service or attend the RAPIDS on Azure session at NVIDIA GTC.

ONNX Runtime integration with NVIDIA TensorRT in preview

We are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, MXNet and many other popular frameworks. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.

With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.

To learn more, check out our in-depth blog on the ONNX Runtime and TensorRT integration or attend the ONNX session at NVIDIA GTC.

Accelerating machine learning for all

Our collaboration with NVIDIA marks another milestone in our venture to help developers and data scientists deliver innovation faster. We are committed to accelerating the productivity of all machine learning practitioners regardless of their choice of framework, tool, and application. We hope these new integrations make it easier to drive AI innovation, and we strongly encourage the community to try them out. We look forward to your feedback!
Source: Azure

Microsoft Azure for the Gaming Industry

This blog post was co-authored by Patrick Mendenall, Principal Program Manager, Azure. 

We are excited to join the Game Developers Conference (GDC) this week to learn what’s new and share our work in Azure focused on enabling modern, global games via cloud and cloud-native technologies.

Cloud computing is increasingly important for today’s global gaming ecosystem, empowering developers of any size to reach gamers in any part of the world. Azure’s 54 datacenter regions and its robust global network provide globally available, high-performance services on a platform that is secure, reliable, and scalable to meet current and emerging infrastructure needs. For example, earlier this month we announced the availability of Azure South Africa regions. Azure services enable every phase of the game development lifecycle, from design and build through testing, publishing, monetization, measurement, engagement, and growth, providing:

Compute: Gaming services rely on a robust, reliable, and scalable compute platform. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workloads, services, and servers, including auto-scaling, microservices, and functions for modern, cloud-native games.
Data: The cloud is changing the way applications are designed, including how data is processed and stored. Azure provides high availability, global data, and analytics solutions based on both relational databases as well as big data solutions.
Networking: Azure operates one of the largest dedicated long-haul network infrastructures worldwide, with over 70,000 miles of fiber and subsea cable and more than 130 edge sites. Azure offers customizable networking options for fast, scalable, and secure connectivity between customer premises and global Azure regions.
Scalability: Azure offers nearly unlimited scalability. Given the cyclical usage patterns of many games, using Azure enables organizations to rapidly increase and/or decrease the number of cores needed, while only having to pay for the resources that are used.
Security: Azure offers a wide array of security tools and capabilities, to enable customers to secure their platform, maintain privacy and controls, meet compliance requirements (including GDPR), and ensure transparency.
Global presence: Azure has more regions globally than any other cloud provider, offering the scale needed to bring games and data closer to users around the world, preserving data residency, and providing comprehensive compliance and resiliency options for customers. Using Azure’s footprint, the cost, the time, and the complexity of operating a game at global scale can be reduced.
Open: With Azure you can use the software you choose, whether operating systems, engines, database solutions, or open source, and run it on Azure.

We’re also excited to bring PlayFab into the Azure family. Together, Azure and PlayFab are a powerful combination for game developers. Azure brings reliability, global scale, and enterprise-level security, while PlayFab provides Game Stack with managed game services, real-time analytics, and comprehensive LiveOps capabilities.

We look forward to meeting many of you at GDC 2019 to learn about your ideas in gaming, discussing where cloud and cloud-native technologies can enable your vision, and sharing more details on Azure for gaming. Join us at the conference or contact our gaming industry team at azuregaming@microsoft.com.

Details on all of these are available via links below.

Learn more about Microsoft Game Stack.
Talks at GDC:

Thursday, March 21, 2019 at 11:30 AM: Best Practices for Building Resilient, Scalable, Game Services in Microsoft Azure
Thursday, March 21, 2019 at 12:45 PM: Save Time for Creativity: Unlocking the Potential for Your Game's Data with Microsoft Azure

Azure Gaming Reference Architectures: Landing Page

Multiplayer/Game Servers
Analytics
Leaderboards
Cognitive Services

GDC Booth demos for Azure:

AI Training with Containers – Use Azure and Kubernetes to power Unity ML Agents
Game Telemetry – Build better game balance and design
Build NoSQL Data Platforms – Azure Cosmos DB: a globally distributed, massively scalable NoSQL database service
Cross Realms with SQL – Build powerful databases with Azure SQL

Source: Azure

March 2019 changes to Azure Monitor Availability Testing

Azure Monitor Availability Testing allows you to monitor the availability and responsiveness of any HTTP or HTTPS endpoint that is accessible from the public internet. You don't have to add anything to the web site you're testing. It doesn't even have to be your site, you could test a REST API service you depend on. This service sends web requests to your application at regular intervals from points around the world. It alerts you if your application doesn't respond, or if it responds slowly.
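Conceptually, each availability test is a timed HTTP request with a pass/fail rule applied to the response. The sketch below is a minimal, illustrative stand-in for what a single probe does; the `classify` helper and its thresholds are our own simplification, not the service's actual logic.

```python
import time
import urllib.error
import urllib.request

def classify(status_code, elapsed_seconds, timeout=30.0):
    """Reduce one probe result to a simple verdict."""
    if status_code is None or elapsed_seconds > timeout:
        return "unresponsive"  # no answer, or answered too slowly
    return "healthy" if 200 <= status_code < 400 else "failing"

def probe(url, timeout=30.0):
    """Send one timed GET request and classify the outcome."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code          # server answered with an error status
    except (urllib.error.URLError, OSError):
        status = None              # no response at all
    return classify(status, time.monotonic() - start, timeout)
```

A real availability test additionally runs this from multiple regions at regular intervals and raises an alert only when enough locations report failures.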

At the end of this month we are deploying some major changes to this service. These changes will improve performance and reliability, and they will allow us to make further improvements to the service in the future. This post highlights the changes, including those you should be aware of to ensure that your tests continue running without interruption.

Reliability improvements

We are deploying a new version of the availability testing service. This new version should improve the reliability of the service, resulting in fewer false alarms. This change also increases the capacity for the creation of new availability tests, which is greatly needed as Application Insights usage continues to grow. Additionally, the architecture of this new design enables us to add new regions much more easily. Expect to see additional regions from which you can test your app’s availability in the future!

New UI

Along with the new backend architecture, we are updating the availability testing UI with a brand new design. See the image below for a sneak peek of the UI that we will be rolling out for all customers in the next few weeks. 

The new design is more consistent with other experiences in Application Insights. It reduces the number of clicks needed to reach frequently requested information and surfaces insights about your availability tests to the right of the availability scatter plot. The new chart supports time brushing: you can click and drag over a section of the chart to zoom into just that time period. Additionally, this design loads faster than the previous one!

IP address changes

If your web server is restricted to serving specific clients and you have therefore whitelisted the IP addresses our web tests run from, be aware that we are deploying our service on new IP ranges. We are increasing the capacity of our service, and this requires adding additional test agents.

Effective March 20, 2019, we will begin running tests from our new test agents, and this will require you to update your whitelist. The list of all necessary whitelisted IPs, including both our previous and new IP ranges, is published in our documentation, “IP addresses used by Application Insights and Log Analytics.”
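When updating such an allow list, the `ipaddress` module in the Python standard library can help confirm that a given test agent address falls inside the ranges you have permitted. The ranges below are illustrative placeholders only; use the ranges from the published documentation.

```python
import ipaddress

# Illustrative placeholder ranges only -- substitute the ranges from the
# "IP addresses used by Application Insights and Log Analytics" doc.
ALLOWED_RANGES = [
    ipaddress.ip_network("13.86.97.224/27"),
    ipaddress.ip_network("20.37.156.64/27"),
]

def is_allowed(address):
    """Return True if the address falls inside any whitelisted range."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_allowed("13.86.97.230"))  # an address inside the first /27
```

Running a quick check like this against both the old and new ranges before March 20 is an easy way to verify the updated whitelist covers every agent.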

France South changes

France South will no longer be offered as a region from which you can perform availability tests. All existing tests in France South will be moved to a duplicate service running in France Central, which will appear in the portal as “France Central (formerly France South).” If you already have a test running in France Central, your test will run from France Central twice per time period. Your existing alert rules will not be affected.

New testing region

We will be adding an additional region within Europe from which to run availability tests. An announcement will be made when this region is available.

Next steps

Log in to your Azure account today to get started with the new Application Insights availability UX. You can also learn more by visiting our “Azure Monitor Documentation.”
Source: Azure