Protecting Windows Virtual Desktop environments with Azure Security Center

With massive workforces now remote, IT admins and security professionals are under increased pressure to keep everyone productive and connected while combatting evolving threats.

Windows Virtual Desktop is a comprehensive desktop and application virtualization service running in Azure, delivering simplified management for virtual desktop infrastructure (VDI).

As organizations go through this transformation to keep their employees productive, IT and security professionals are required to ensure that Windows Virtual Desktop is deployed in accordance with security best practices so it doesn’t add unnecessary risk to the business. In this blog, we will explore how Azure Security Center can help you maintain your Windows Virtual Desktop environment’s configuration hygiene and compliance, and protect it against threats.

Overview of Windows Virtual Desktop Host Pool architecture

When setting up your Windows Virtual Desktop environment, you first need to create a Host Pool, which is a collection of one or more identical virtual machines (VMs). To support the remote workforce use case, these VMs will usually run a Windows 10 multi-session OS. Below is an overview of the architecture:
You can find the VMs running in your host pool by checking the Host Pool details and clicking on the Resource Group name:

This will bring up the resource group details. Filtering by Virtual Machine will show the list of VMs:
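
If you prefer to script this lookup, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-compute); the subscription ID and resource group name are placeholders you would substitute with your own values:

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders: substitute your subscription ID and the resource group
# that backs your Windows Virtual Desktop host pool.
SUBSCRIPTION_ID = "<subscription-id>"
HOST_POOL_RESOURCE_GROUP = "<host-pool-resource-group>"

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# List every VM in the host pool's resource group, mirroring the
# "filter by Virtual Machine" step in the portal.
for vm in compute_client.virtual_machines.list(HOST_POOL_RESOURCE_GROUP):
    print(vm.name, vm.location)
```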

Securing Windows Virtual Desktop deployment with Azure Security Center

Considering the shared responsibility model, here are the security needs customers are responsible for in a Windows Virtual Desktop deployment:

Network.
Deployment Configuration.
Session host OS.
Application security.
Identity.

These needs should be examined in the context of both security posture and threat protection. Here are two examples:

Misconfiguration at the VM network layer can increase the attack surface and result in a compromised endpoint. One thing to ensure is that all management ports are closed on your Windows Virtual Desktop virtual machines (the sketch after these examples shows one way to script such a check).
Once your users are connected to their Windows Virtual Desktop session, they might be tricked into browsing to a malicious site or connecting to a malicious machine; this can also happen if there is malware on the machine. Analyzing network traffic to detect that a machine has communicated with a possible command-and-control server adds another protection layer.
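
As a companion to the first example, here is a minimal sketch, using the Azure SDK for Python, of how one might scan network security groups for inbound rules that leave management ports open. It is a deliberately simplified check (it only matches exact single-port rules, not ranges or port lists), and the subscription ID is a placeholder:

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
MANAGEMENT_PORTS = {"3389", "22"}      # RDP and SSH

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, SUBSCRIPTION_ID)

# Walk every network security group and flag inbound Allow rules that
# expose a management port; these are candidates for just-in-time access.
# Simplification: rules using port ranges or port lists are not matched.
for nsg in network_client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        if (
            rule.direction == "Inbound"
            and rule.access == "Allow"
            and rule.destination_port_range in MANAGEMENT_PORTS
        ):
            print(f"{nsg.name}: rule '{rule.name}' opens port {rule.destination_port_range}")
```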

Azure Security Center provides the following security posture management and threat protection capabilities for Windows Virtual Desktop VMs:

Secure configuration assessment and Secure Score.
Industry-tested vulnerability assessment.
Host-level detections.
Agentless cloud network micro-segmentation and detection.
File integrity monitoring.
Just-in-time VM access.
Adaptive application controls.

Here is a table that maps Azure Security Center protection capabilities to Windows Virtual Desktop security needs:

You can find the complete list of recommendations and alerts in the following Azure Security Center reference guides:

Security Recommendations.
Alerts list.

Switching to the Azure Security Center portal, we can see the Windows Virtual Desktop host pool VMs under Compute & apps, followed by the VMs and Servers tab, along with their respective Secure Score and status:

Drilling down to a specific VM shows the full list of recommendations along with their severity levels:

These VMs are also assessed for compliance with different regulatory requirements, both built-in and custom, and any compliance issues are flagged in the Regulatory Compliance dashboard.

In addition, security alerts will show under Threat Protection, followed by Security Alerts:

Both security alerts and recommendations can be consumed and managed in the Security Center portal, or exported to other tools for further analysis and remediation. One great example is integrating Azure Security Center with Azure Sentinel as part of monitoring the Windows Virtual Desktop environment.
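
If you want to pull those alerts programmatically, here is a minimal sketch against the Azure Resource Manager REST API. The Microsoft.Security/alerts endpoint is documented; the api-version and the exact property names printed at the end are assumptions you should verify against the current API reference:

```python
# pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
URL = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Security/alerts"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
response = requests.get(
    URL,
    headers={"Authorization": f"Bearer {token.token}"},
    # Assumption: an api-version current at the time of writing.
    params={"api-version": "2019-01-01"},
)
response.raise_for_status()

# First page only; follow "nextLink" in the payload for more results.
# Property names vary by api-version, hence the defensive .get() calls.
for alert in response.json().get("value", []):
    props = alert.get("properties", {})
    print(props.get("alertDisplayName"), "-", props.get("reportedSeverity"))
```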

Enabling Azure Security Center for your Windows Virtual Desktop environment

Azure Security Center Free tier provides security recommendations and Secure Score for Windows Virtual Desktop deployments.

To enable all protection capabilities, follow these two steps (a scripted alternative is sketched after the list):

Make sure you have the Azure Security Center Standard tier (as shown below).
Enable threat protection for Virtual Machines.
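
For automation scenarios, the Standard tier for VMs can also be set through the Security Center pricings API. Below is a minimal sketch; the api-version is an assumption to verify against the current reference, and the subscription ID is a placeholder:

```python
# pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
# The "VirtualMachines" pricing bundle controls threat protection for VMs.
URL = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Security/pricings/VirtualMachines"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
response = requests.put(
    URL,
    headers={"Authorization": f"Bearer {token.token}"},
    # Assumption: an api-version current at the time of writing.
    params={"api-version": "2018-06-01"},
    json={"properties": {"pricingTier": "Standard"}},
)
response.raise_for_status()
print("VirtualMachines pricing tier set to Standard")
```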

And one last tip: if you are using Azure DevOps CI/CD pipelines together with a Windows 10 Azure VM image as a solution for continuous build and deployment of your Windows Virtual Desktop solution, you’re most likely using Azure Key Vault for secret management. If not already enabled, setting up threat protection for Azure Key Vault should be your next stop.

How are you protecting your Windows Virtual Desktop environment? We are sure there are plenty more ideas out there and we would love to see the community submitting them to our GitHub repo.
Quelle: Azure

Preparing for what’s next: Financial considerations for cloud migration

Co-authored by Jorge Magana, Director, Azure Finance (Financial Planning and Analysis).

In the kickoff blog of this series, I shared our top recommendations to accelerate your cloud migration journey, one of which was around aligning key stakeholders across your organization. As you move through assessments and plan your migration, it is critical to get buy-in from your CFO and other financial stakeholders, even more so in today’s challenging macro-climate.

IT and finance organizations need to align on how to stay agile in the face of rapidly shifting demands while keeping their cost structure lean enough to weather tough market conditions. With this dual focus, it is critical to understand not only the technical benefits of a cloud transition but also the financial and economic opportunities associated with it. Today I’m sharing my own experience of partnering with finance, along with the wisdom customers have shared about their journeys.

How can cloud migration affect CFO priorities?

Here are three key areas that IT organizations need to internalize and align on with their finance organization as they plan cloud migration:

What’s the holistic impact to the organization’s financial posture? 
What will the impact be on external and internal finance KPIs and processes?
What operational changes are required during and after migration to ensure that budget/ROI controls are met? 

How is the organization’s financial posture going to change?

As they migrate workloads, Azure customers consistently unlock new, positive-ROI projects that were previously not possible on-premises. By design, Azure is built to facilitate business agility, creating opportunities for true competitive advantage and a substantial decrease in time to market. As a result, our customers see significant financial benefits, driven in large part by cloud flexibility and elasticity and by changes in their financial operating models that reduce asset purchases and upfront cash investments.

Cloud flexibility and elasticity

First, Azure customers can adjust their cost structure to improve their organization’s bottom line, which is table stakes in today’s environment. In recent earnings calls, CFOs of companies not leveraging the cloud mentioned their inability to reduce fixed expenses, which hurt profitability. As our customers migrate to Azure, they are shifting to a cost structure that is variable by design:

Figure 1: Cloud cost structure provides flexibility

Next, Azure customers can maximize resource efficiency. We have worked directly with large and small customers alike who were running on-premises workloads at very low resource utilization. These customers purchased assets to cover peak demand and procurement lead times, but most of the time those servers, and even some datacenters, sat idle and underused. By rightsizing and optimizing capacity when migrating to Azure, customers can realize economic benefits from cloud scale and elasticity. As an example, the built-in scalability in Azure has helped Maersk quickly scale up on demand, eliminating the need to maintain idle resources during off-peak times.

“Scalability is one of the big benefits we get from Azure. In the past, it might have taken us months to procure and configure servers and get them into production. Now, we can scale up on demand in Azure in a matter of minutes." – Musaddique Alatoor, Head of Equipment Innovation, A.P. Moller – Maersk

Finally, shifting to a cloud model can reduce costs by enabling customers to consume additional resources only during peak usage periods and to reduce capacity when demand drops.

Changes in the financial operating model

Key financial benefits of Azure are driven by a fundamental shift in the IT operating model, which benefits the organization’s core financial statements in the following ways:

Balance sheet: Prior to migrating to Azure, many of our customers owned or operated their datacenters. These were expensive long-term assets that tied up cash and capital needed to grow the business, support strategic initiatives, and respond to market conditions. Once on Azure, our customers avoid buying equipment, repurpose expensive real estate, and shift datacenter operations costs into developing cloud applications and other projects that drive business growth. This makes their balance sheet more agile, shifting fixed assets to cash. This is what drove Maersk to move their five regional datacenters to Azure to lower the company’s risks and position it for continued growth.
Cash flow statement: Azure customers save immediate cash by avoiding cyclical and sporadic IT asset purchases. With the “pay for what you use” model, along with platform capabilities like policy and tagging, CFOs gain visibility and predictability and can defer cash spend.
Income statement (profit and loss): Over time, Azure customers can improve profitability by reducing the cost of delivering equal or greater IT value, taking advantage of Azure’s flexibility, low management costs, and broad portfolio of services and pricing models. Learn how CYTI was able to take advantage of Azure’s flexibility to reduce infrastructure costs.

"We're now saving about 30 percent a year on infrastructure costs just by moving to Azure, with more flexibility, better servers, greater customization, and more freedom to do what we want." – Darren Gourley, Chief Technology Officer, CYTI

How will financial KPIs and processes change?

When migrating from on-premises to Azure, there are several financial benefits that subsequently impact KPIs and finance processes. The two most prominent are: 1) budget and financial reporting processes, as expense shifts from capital expenditure (CAPEX) to operational expenditure (OPEX); and 2) the impact on EBITDA (earnings before interest, taxes, depreciation, and amortization).

CAPEX to OPEX: During an Azure migration, spend that was previously allocated to CAPEX is redeployed to OPEX. This is optimal from a cash flow timing and balance sheet flexibility perspective, but it requires CFOs to shift budgets to support the new model. Capstone Mining used this approach to significantly lower their capital costs by moving to Azure.
"We wanted to eliminate $3 million (USD) in capital costs over about three years, and to reduce our operating costs by approximately the same amount. At the same time, we wanted to improve our quality of service. With Azure, we're confident about meeting these goals." – Jim Slattery, Chief Financial Officer, Capstone Mining
EBITDA: EBITDA is a financial metric that companies use to measure profitability. Because it excludes depreciation, the cost of capitalized assets like servers never reduces it; cloud spend, by contrast, is an operating expense that does. If your company tracks EBITDA, the metric will therefore likely be impacted by a migration (the illustrative calculation after this list makes the mechanics concrete). Rather than over-focusing on EBITDA, many customers choose to identify additional financial metrics that better measure business value improvements (such as cash flows, operating income, or cost of goods sold efficiency).
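
To make the EBITDA mechanics concrete, here is a small illustrative calculation in Python. Every number in it is hypothetical; it only demonstrates why EBITDA can look worse after a migration even when operating income improves:

```python
# Hypothetical numbers, purely to illustrate the EBITDA mechanics
# described above; this is not a financial model.
revenue = 1_000_000
other_opex = 600_000

# On-premises: a 300k server purchase is capitalized and depreciated
# over 3 years. Depreciation is excluded from EBITDA by definition.
server_capex = 300_000
annual_depreciation = server_capex / 3               # 100,000/year
ebitda_on_prem = revenue - other_opex                # 400,000

# Cloud: the equivalent capacity is rented as an operating expense,
# so it reduces EBITDA directly.
annual_cloud_spend = 90_000
ebitda_cloud = revenue - other_opex - annual_cloud_spend  # 310,000

print(f"On-prem EBITDA: {ebitda_on_prem:,.0f} "
      f"(depreciation of {annual_depreciation:,.0f} excluded)")
print(f"Cloud EBITDA:   {ebitda_cloud:,.0f} (cloud spend counted as OPEX)")
# Operating income tells a different story: on-prem it is
# 400,000 - 100,000 = 300,000, versus 310,000 in the cloud.
```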

Managing financial KPIs and processes is a critical component of a CFO’s job. By creating a channel of communication with your financial stakeholders and walking them through how a cloud migration affects these KPIs and processes, you can begin working with your finance team to proactively reset expectations around both capital/operating budgets and EBITDA targets in a cloud versus on-premises world.

Implementing the business case: Ongoing cost-optimization and management

Once the cloud migration project begins, here are a few tips and best financial practices for success:

Reducing on-premises asset acquisitions: There must be broad internal alignment and processes to evaluate and control how and when teams buy new on-premises assets. Every new purchase adds fixed costs that delay cloud savings.
Initial resource clean-up, rightsizing, and optimization: When migrating to Azure, consider which workloads are no longer needed and can be turned off. For workloads still needed, consider what can be done to optimize those resources and operational hours, leveraging tools such as Azure Migrate.
Continuous cost optimization: Workloads aren’t static. Once in Azure, leverage our tools (including Azure Cost Management and Azure Advisor) and establish processes to monitor resources and patterns to continuously optimize cloud costs.
Resource tagging and spend categorization: Azure allows for simplified resource tagging and cost allocation compared with on-premises. This helps increase spend accountability while evaluating workload ROI. Through resource tagging, you can better align your spend to cost categories like cost of goods sold (COGS) or research and development, and allocate the costs of workloads directly to the underlying business units. Targeted cost allocation can directly help drive efficiencies and reductions.
Billing models: Azure billing models like reserved instances and spot pricing are great opportunities to save money. As an example, Azure three-year Reserved Instances (RIs) do not require upfront payment, offer substantial flexibility, and provide discounts of up to 72 percent (see the illustrative calculation after this list).
Azure Hybrid Benefit: With Azure you can take advantage of your existing Microsoft licenses with Software Assurance to avoid incremental licensing costs for migrating workloads and maximize previous investments.
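
To see what the reserved instance discount means in practice, here is an illustrative calculation. The hourly rate is hypothetical; the 72 percent figure is the upper bound cited above:

```python
# Illustrative only: hypothetical pay-as-you-go rate, with the
# up-to-72% three-year Reserved Instance discount cited above.
HOURS_PER_YEAR = 8_760

payg_hourly_rate = 0.40   # hypothetical $/hour for a given VM size
ri_discount = 0.72        # "up to 72 percent" from the text

payg_annual = payg_hourly_rate * HOURS_PER_YEAR
ri_annual = payg_annual * (1 - ri_discount)

print(f"Pay-as-you-go, always on: ${payg_annual:,.0f}/year")
print(f"3-year RI (72% discount): ${ri_annual:,.0f}/year")
print(f"Annual savings:           ${payg_annual - ri_annual:,.0f}")
```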

Figure 2: Well-optimized cloud usage can free up excess capacity

Aligning cloud spend with underlying workload usage

A) Idle capacity: Azure allows customers to eliminate idle capacity intended to cover future growth across workloads. Actions like rightsizing or eliminating unnecessary workloads can help you reduce your idle capacity when moving to the cloud.

B) Variable workloads: Azure customers pay only for the hours they need when demand temporarily peaks above average levels on variable workloads. Taking advantage of tools and actions like virtual machine scale sets and “snoozing” can help you pay only for the resources you need (the sketch after these examples illustrates the arithmetic).

C) Predictable workloads: Azure customers can minimize the cost of predictable workloads by taking advantage of Azure Reserved Instances and Spot pricing.
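
Here is a small illustrative calculation of the variable-workload case (B above). All numbers are hypothetical; the point is that paying per hour for the peak beats provisioning for it around the clock:

```python
# Illustrative only: compare provisioning for peak on-premises with
# paying per hour for a variable workload in the cloud.
HOURS_PER_MONTH = 730

baseline_instances = 4    # steady-state need
peak_instances = 10       # needed only at peak
peak_hours = 60           # hypothetical peak hours per month
hourly_rate = 0.50        # hypothetical $/instance-hour

# Peak-sized provisioning: capacity for 10 instances runs all month.
provisioned_for_peak = peak_instances * HOURS_PER_MONTH * hourly_rate

# Pay per use: baseline runs all month, the extra 6 instances only at peak.
pay_per_use = (
    baseline_instances * HOURS_PER_MONTH
    + (peak_instances - baseline_instances) * peak_hours
) * hourly_rate

print(f"Provisioned for peak: ${provisioned_for_peak:,.0f}/month")
print(f"Pay for what you use: ${pay_per_use:,.0f}/month")
```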

What’s next?

As the cloud migration team in IT, ensure finance partners and key stakeholders are brought in from the beginning, and include them in appropriate decision-making and progress review forums. Reach out to your finance peers to better understand their expectations and how you can collaborate as you embark on your cloud migration project. Use the Cloud Adoption Framework for Azure for best practice guidance around aligning your organization to a common vision and approach.
Leverage cost-savings offers (including Azure Hybrid Benefit and Reserved Instances) and free tools (Azure TCO calculator, Azure pricing calculator, Azure Migrate) as you plan and prepare for cloud migration.
Use tools like Azure Cost Management and Azure Advisor once on Azure to drive continuous optimization; ensure financial stakeholders have appropriate access and visibility.
For expert assistance from Microsoft or our qualified partners, check out our Cloud Solution Assessment offerings or join the Azure Migration Program (AMP).

We hope this gives you a good understanding of the critical intersection between IT and finance in the context of your organization’s cloud migration journey. Engaging the migration leadership team within your organization to collaboratively create both the technical roadmap and the correlating financial roadmap ensures alignment and facilitates both migration success and long-term organizational success. In the coming weeks, we will continue this blog series with deeper dives on topics like assessments, landing zones, infrastructure, data, and application migration best practices.

Share your feedback

Please share your experiences or thoughts in the comments section below—we appreciate your feedback.
Quelle: Azure

The Road to Kata Containers 2.0

thenewstack.io – The open source Kata Containers project, launched in late 2017, aims to unite the security advantages of virtual machines (VMs) with the speed and manageability of containers. What has the project ac…
Quelle: news.kubernauts.io

Powering past limits with financial services in the cloud

Editor’s note: We asked financial institution KeyBank to share their story of moving their data warehouse from Teradata to Google Cloud. Here are details on why they moved to cloud, how they did their research, and what benefits cloud can bring.

At KeyBank, we serve our 3.5 million customers online and in person, and managing and analyzing data is essential to providing great service. We process more than four billion records every single day and move that data to more than 40 downstream systems. Our teams use that data in many ways; we have about 400 SAS users and 4,000 Tableau users exploring analytics results and running reports.

We introduced Hadoop four or five years ago as our data lake architecture, using Teradata for high-performance analytics. We stored more than a petabyte of data in Hadoop on about 150 servers, and more than 30 petabytes in our Teradata environment. We decided to move operations to the cloud when we started hitting the limits of what an on-premises data warehouse could do to meet our business needs. We wanted to move to cloud quickly and open up new analytics capabilities for our teams.

Considering and testing cloud platforms

Teradata had worked well for us when we first deployed it. Back then, Teradata was a market leader in data warehousing, and many of the leading banks were invested in it. We chose it for its high-performance analytics capabilities, and our marketing and risk management teams used it heavily. It also worked well with other tools we were using, and SAS remains a good tool for accessing our mainframe.

Ten years into using Teradata, we had a lot of product-specific data stores; it wasn’t a fully formed data lake architecture. We also maintain more than 200 SAS models. In 2019, our Teradata appliances were nearing capacity, and we knew they would need a refresh in 2021. We wanted to avoid that refresh, and started doing proof-of-concept cloud testing with both Snowflake and Google Cloud.

When we did those trials, we ran comparative benchmarks for load time, ETL time, performance, and query time. Snowflake looked just like Teradata, but in the cloud. With Google, we looked at all the surrounding technology of the platform. We couldn’t be on a single cloud platform if we chose Snowflake. We picked Google Cloud, since it would let us simplify and offer us a lot more options to grow over time.

Adapting to a cloud platform

Along with changing technology, our teams would have to learn some new skills with this cloud migration. Our primary goal when moving to a cloud architecture was getting the performance of Teradata at the cost of Hadoop, but on a single platform. Managing a Hadoop data lake alongside a Teradata environment is complicated; it really takes two different skill sets.

There are some big considerations that go into making these kinds of legacy versus modern enterprise technology decisions. With an on-premises data warehouse like Teradata, you govern in capacity, so performance varies based on the load on the hardware at any given time. That led to analytics users hitting the limits during month-end processing, for example. With Google Cloud, there are options for virtually unlimited capacity.

Cost savings was a big reason for our move to cloud. Pricing models are very different with cloud, but ultimately we’re aiming not to pay for storage that’s just sitting there, not in use. Cloud gives us the opportunity to scale up for a month if needed, then back down after the peak, managing costs better. Figuring this out is a new skill we’ve learned.
For example, running a bad query in Teradata or Hadoop wouldn’t change the on-premises cost for that query, but it would consume horsepower. Running that query on Google Cloud won’t interfere with other users’ performance, but it would cost us money. So we’re running training to ensure people aren’t making those types of mistakes, and that they’re running the right types of queries.

Shifting to cloud computing

The actual cloud migration involved working closely with the security team to meet their requirements. We also needed to align data formats. For example, we had to make sure our ETL processing could talk to Google Cloud Storage buckets and BigQuery datasets. We’re finding that for the most part the queries port over seamlessly to BigQuery (a minimal sketch of such a ported query appears at the end of this post); we’ve had to tweak just a handful of data types. Since moving to cloud, the early results are very promising: we’re seeing 3 to 4x faster query performance, and we can easily turn capacity up or down. We have five data marts in testing, using real-world data volumes to get comparison queries.

We’re still making modifications to how we set up and configure services in the cloud. That’s all part of the change that comes when you’re now owning and operating data assets securely in the cloud. We had to make sure that any personally identifiable information (PII) was stored securely and tokenized. We’ll also continue to tune cost management over time as we onboard more production data.

Managing change and planning for the future

The change management of cloud is an important component of the migration process. Even with our modern data architecture, we’re still shifting established patterns and use cases as we move workloads to Google Cloud. It’s a big change to go to a capacity-based model, where we can change capacity on demand to meet our needs, versus needing more hardware with our old Teradata method. Helping 400 users migrate to newer tools requires some time and planning. We hosted training sessions with help from Google, and made sure business analysts were involved up front to give feedback. We also invested in training and certifications for our analysts.

We’re on our way to demonstrating that Google can give us better performance based on cost per query than Teradata did. And using BigQuery means we can do more analytics in place now, rather than the previous process of copying, storing, and manipulating data, then creating a report.

As we think through how to organize our analytics resources, we want to get the business focused on priorities and consumer relationships. For example, we want to know the top five or so areas where analytics can add value, so we can all be focused there. To make sure we would get the most out of these new analytics capabilities, we set up a charter and included cross-functional leaders so we know we’re all keeping that focus and executing on it. We’re retraining with these new skills, and even finding new roles that are developing. We built a dedicated cloud-native team (really an extension of our DevOps team) focused on setting up infrastructure and using infrastructure as code. The program we’ve built is ready for the future. With our people and technology working together, we’re well set up for a successful future.
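
As a concrete illustration of running a ported query in place, here is a minimal sketch using the google-cloud-bigquery client library. The project, dataset, and table names are hypothetical stand-ins for a migrated data mart:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

# Relies on application default credentials; the project, dataset, and
# table names below are hypothetical stand-ins for a migrated data mart.
client = bigquery.Client()

query = """
    SELECT account_segment, COUNT(*) AS record_count
    FROM `my-project.migrated_mart.daily_transactions`
    GROUP BY account_segment
    ORDER BY record_count DESC
"""

# BigQuery separates compute from storage, so this query runs against
# data in place; no copy into a separate analytics appliance required.
for row in client.query(query).result():
    print(row.account_segment, row.record_count)
```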
Quelle: Google Cloud Platform