Cert-manager v0.15 and beyond

blog.jetstack.io – This post will explore the new features in the recently released cert-manager v0.15, as well as give an overview of our plans for the future of the project. Jump to the bottom for more information on…
Source: news.kubernauts.io

Get to know Google Cloud with our new Architecture Framework

Are you using Google Cloud or thinking about making the move to the cloud? Are you a cloud architect or cloud engineer who needs to ensure your services are secure and reliable, yet also manageable during day-to-day operations? We have heard feedback from many of you that you need a structured approach for efficiently running your business on Google Cloud, and today we're excited to deliver just that: we are making Google Cloud's Architecture Framework available to everyone.

This framework provides architecture best practices and implementation guidance on products and services to aid your application design choices based on your unique business needs. With the help of this framework, you can quickly identify areas where your approach differs from recommended best practices, so you can apply them across your organization to ensure standardization and achieve consistency.

The framework provides a foundation for building and improving your Google Cloud deployments using four key principles:

Operational excellence – Guidance on how to make design choices in the cloud to improve your operational efficiency, including approaches for automating the build process, implementing monitoring, and disaster recovery planning.
Security, privacy, and compliance – Guidance on the security controls you can choose, along with a list of products and features best suited to support the security needs of your deployments.
Reliability – How to build reliable and highly available solutions. Recommendations include defining reliability goals, improving your approach to observability (including monitoring), establishing an incident management function, and techniques to measure and reduce the operational burden on your teams.
Performance and cost optimization – Suggestions on the available tools to tune your applications for a better end-user experience and to analyze the cost of operation on Google Cloud, while maintaining an acceptable level of service.

Each section provides details on strategies, best practices, design questions, recommendations, and more. You can use this framework during various stages of your cloud journey, from evaluating design choices across various products to incorporating various aspects of security and reliability into your design. You can also use the framework for your existing deployments to help you increase efficiency, or incorporate new products and features into your solutions to simplify ongoing management.

How to use the framework

We recommend reviewing the "System Design Considerations" section first and then diving into other sections based on your needs.

Discover: Use the framework as a discovery guide for Google Cloud Platform offerings and learn how the various pieces fit together to build solutions.
Evaluate: Use the design questions outlined in each section to guide your thought process while you're thinking about your system design. If you're unable to answer a design question, you can review the highlighted Google Cloud services and features to address it.
Review: If you're already on Google Cloud, use the recommendations section to verify that you are following best practices, or as a pulse check before deploying to production.

The framework is modular, so you can pick and choose the sections most relevant to you, but we recommend reading all of them, because why not! You can learn more about the Google Cloud Architecture Framework here and contact us for additional insights.
A special thanks to the village of Googlers who helped deliver this framework: Matt Salisbury, Gustavo Franco, Charles Baer, Tiffany Lewis, Vivek Rau, Shylaja Nukala, Jan Bultmann, Ryan Martin, Dom Jimenez, Hamidou Dia, Lindsey Scrase, Lakshmi Sharma, Amr Awadallah, Ben Jackson, and Jim Travis.
Source: Google Cloud Platform

Using logging for your apps running on Kubernetes Engine

Whether you're a developer debugging an application or on the DevOps team monitoring applications across several production clusters, logs are the lifeblood of the IT organization. And if you run on top of Google Kubernetes Engine (GKE), you can use Cloud Logging, one of the many services integrated into GKE, to find that useful information. Cloud Logging, and its companion tool Cloud Monitoring, are full-featured products that are both deeply integrated into GKE. In this blog post, we'll go over how logging works on GKE and some best practices for log collection. Then we'll go over some common logging use cases, so you can make the most out of the extensive logging functionality built into GKE and Google Cloud Platform.

What's included in Cloud Logging on GKE

By default, GKE clusters are natively integrated with Cloud Logging (and Monitoring). When you create a GKE cluster, both Monitoring and Cloud Logging are enabled by default. That means you get a monitoring dashboard specifically tailored for Kubernetes, and your logs are sent to Cloud Logging's dedicated, persistent datastore and indexed for both searches and visualization in the Cloud Logs Viewer.

If you have an existing cluster with Cloud Logging and Monitoring disabled, you can still enable logging and monitoring for the cluster. That's important because with Cloud Logging disabled, a GKE-based application temporarily writes logs to the worker node, where they may be removed when a pod is removed, or overwritten when log files are rotated. Nor are these logs centrally accessible, making it difficult to troubleshoot your system or application.

In addition to cluster audit logs and logs for the worker nodes, GKE automatically collects application logs written to either STDOUT or STDERR. If you'd prefer not to collect application logs, you can now choose to collect only system logs. Collecting system logs is critical for production clusters, as it significantly accelerates the troubleshooting process. No matter how you plan to use logs, GKE and Cloud Logging make it simple and easy: start your cluster, deploy your applications, and your logs appear in Cloud Logging!

How GKE collects logs

GKE deploys a per-node logging agent that reads container logs, adds helpful metadata, and then sends the logs to the logs router, which forwards them to Cloud Logging and to any of the Logging sink destinations that you have configured. Cloud Logging stores logs for the duration that you specify, or 30 days by default. Because Cloud Logging automatically collects standard output and error logs for containerized processes, you can start viewing your logs as soon as your application is deployed.

Where to find your logs

There are several different ways to access your logs in Logging depending on your use case. Assuming you've already enabled the workspace, you can access your logs using:

Cloud Logging console – You can see your logs directly from the Cloud Logging console by using the appropriate logging filters to select the Kubernetes resources, such as cluster, node, namespace, pod, or container logs. Here are some sample Kubernetes-related queries to help get you started.
GKE console – In the Kubernetes Engine section of the Google Cloud Console, select the Kubernetes resources listed in Workloads, and then the Container or Audit Logs links.
Monitoring console – In the Kubernetes Engine section of the Monitoring console, select the appropriate cluster, nodes, pod, or containers to view the associated logs.
gcloud command line tool – Using the gcloud logging read command, select the appropriate cluster, node, pod, and container logs.

For custom log aggregation, log analytics, or integration with third-party systems, you can also use the logging sinks feature to export logs to BigQuery, Cloud Storage, and Pub/Sub. For example, you can export logs to BigQuery and then use SQL queries to analyze application logs over an entire year. Or you may need to export specific logs to an existing third-party system using an integration with Pub/Sub. The best way to access your logs depends on your use case.

Logging recommendations for containerized applications

Before we dive into some typical use cases for logging in GKE, let's first review some best practices for using Cloud Logging with containerized applications:

Use the native logging mechanisms of containers to write the logs to stdout and stderr.
If your application cannot be easily configured to write logs to stdout and stderr, you can use a sidecar pattern for logging.
Log directly with structured logging with different fields. You can then search your logs more effectively based on those fields.
Use severities for better filtering and reducing noise. By default, logs written to standard output are on the INFO level and logs written to standard error are on the ERROR level. Structured logs with a JSON payload can include a severity field, which defines the log's severity.
Use the links to the logs directly from the Kubernetes Engine section of the Cloud Console for containers, which makes it quick to find the logs corresponding to a container.
Understand the pricing, quota, and limits of Cloud Logging to understand the associated costs.

Use cases

Now, let's look at some simple yet common use cases for logs in a GKE environment: diagnosing application errors, analyzing simple log data, analyzing complex log data, and integrating Cloud Logging with third-party applications. Read on for more.

Using Cloud Logging to diagnose application errors

Imagine you're a developer and need to diagnose an application error in a development cluster. To use a concrete example, we will work through a scenario based on a sample microservices demo app deployed to a GKE cluster. You can deploy this demo app in your own Google Cloud project, or you can go through the Site Reliability Troubleshooting Qwiklab to deploy a version of this demo app that includes an error. In the demo app, there are many microservices and dependencies among them.

Let's say you start receiving '500' Internal Server Errors from the app when you try to place an order. Let the debugging begin! There are two quick ways to find the logs:

1. Use the Kubernetes Engine console – Start by opening the checkout service in the Kubernetes Engine console. There, you can find the technical details about the serving pod and container, along with the links for container and audit logs. If you click the log link for the container, you will be directed to Cloud Logging's Logs Viewer with a pre-populated search query, similar in shape to the sketch shown after this list, that points you to the specific container logs for the application running in the checkoutservice pod.

2. Use the Logs Viewer in the Cloud Logging console – You can go directly to the Cloud Logging console and use the Logs Viewer to search for error messages across specific logs.
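As an illustration of that pre-populated query, here is a minimal Python sketch using the google-cloud-logging client library; the filter shows the typical shape of a container-scoped query, with placeholder cluster, namespace, and container names rather than the demo app's actual values:

```python
# pip install google-cloud-logging
from google.cloud import logging

client = logging.Client()  # uses Application Default Credentials

# A typical container-scoped filter; replace the resource labels with
# your own cluster, namespace, and container names (placeholders here).
log_filter = """
resource.type="k8s_container"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="default"
resource.labels.container_name="server"
severity>=ERROR
"""

# Newlines in the filter act as AND; print the newest matching entries.
for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    print(entry.timestamp, entry.severity, entry.payload)
```

The same filter text works in the Logs Viewer search box, so you can prototype a query in the console and reuse it in scripts.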
Whichever way you get there, you can specify the resource types, search fields, and a time range to speed up your query (more tips here). The Logs Viewer provides both a Classic and a Preview option. The Query Builder in the Logs Viewer Preview lets you specify those filtering conditions quickly; for instance, you can select the resources in the dropdown menus for the cluster, namespace, and container, and the Query Builder yields a query with those conditions.

If you are not familiar with the codebase for the app, you'll need to do some debugging with the logs to fix the issue. One good starting point is to search for the error message in the logs to understand the context of the error. You can add the field jsonPayload.error to your query to look for the specific log message that you received. To keep your queries most efficient, make sure to include the resource.type field.

One of the helpful features included in the Preview of the Logs Viewer is a histogram, which lets you visualize the frequency of the logs matched by your query. In this example, this helps us understand how often our error appears in the logs. Next, you can look at the specific log entries that matched the query. If you expand the log entries, the payment-related log entry provides you with details about the pod, container, and a stack trace of the error. The logs point to the exact location of the defective code. Alternatively, if you prefer the command-line interface, you can run the same queries via Cloud Shell, using the same conditions and searching the stderr log.

Analyzing log data

Another common use case for logging is to analyze the log data with complex and powerful queries using the built-in logging query language. You can use the Query Builder to build your queries or use the autocomplete to build a custom query. To find log entries quickly, include exact values for the indexed log fields such as resource.type, logName, and severity. For example, you can write a query that checks whether an authorized user is trying to execute a command inside a container (replacing cluster_name and location with your specific cluster's name and zone), a query that checks whether a specific user is trying to execute a command inside a container (replacing cluster_name, location, and principalEmail with your specific cluster's name, zone, and email address), or a query that filters pod-related log entries within a given time period (replacing cluster_name, location, pod, and timestamp with your specific cluster's name, zone, pod, and time values). You can find more sample queries for product-specific examples of querying logs across Google Cloud, as well as specific GKE audit log query examples to help answer your audit logging questions.

Using Cloud Logging for advanced analytics

For more advanced analytics, you may want to export your logs to BigQuery. You can then use standard SQL queries to analyze the logs, correlate data from other sources, and enrich the output. For example, a SQL query can return log data related to the email 'user@example.com' from the default namespace on a GKE cluster running the microservices demo app. This log data may also provide valuable business insights: for example, a query can tell you how many times a particular product was recommended by the recommendationservice in the last two days, as sketched below.
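To give a flavor of that kind of analysis, the sketch below counts how often each product was recommended over the last two days using the google-cloud-bigquery Python client. The project, dataset, table, and field names are assumptions; the real schema depends on how your logging sink and your application's structured logs are set up:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table and field names for logs exported via a BigQuery sink;
# adjust them to match your own dataset and log structure.
query = """
SELECT
  jsonPayload.product_id AS product_id,
  COUNT(*) AS recommendation_count
FROM `my-project.gke_logs.stdout`
WHERE resource.labels.container_name = 'recommendationservice'
  AND timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY)
GROUP BY product_id
ORDER BY recommendation_count DESC
"""

for row in client.query(query):  # iterating waits for the job to finish
    print(row.product_id, row.recommendation_count)
```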
You can also analyze the activity and audit logs; for example, a query can return all kubelet warnings in a specific timeframe. If you are interested, you can find more sample queries in the Scenarios for exporting Cloud Logging: security and access analytics article.

Using Cloud Logging for third-party tools or automation

The last use case we want to mention is integrating Cloud Logging with Pub/Sub. You can create sinks and export logs to Pub/Sub topics. This is more than simply exporting the log data: with Pub/Sub, you can create an event-driven architecture and process the log events in real time in an automated fashion. If you implement this event-driven architecture with serverless technologies such as Cloud Run or Cloud Functions, you can significantly reduce the cost and management overhead of the automation.

Learn more about Cloud Logging and GKE

We built our logging capabilities for GKE into Cloud Logging to make it easy for you to store, search, analyze, and monitor your logs. If you haven't already, get started with Cloud Logging on GKE and join the discussion on our mailing list.
Source: Google Cloud Platform

Automating cybersecurity guardrails with new Zero Trust blueprint and Azure integrations

In our day-to-day work, we focus on helping customers advance the security of their digital estate using the native capabilities of Azure. In the process, we frequently find that using Azure to improve an organization’s cybersecurity posture can also help these customers achieve compliance more rapidly.

Today, many of our customers in regulated industries are adopting a Zero Trust architecture, moving to a security model that more effectively adapts to the complexity of the modern environment, embraces the mobile workforce, and protects people, devices, applications, and data wherever they’re located.

Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.” In a Zero Trust model, every access request is strongly authenticated, authorized within policy constraints, and inspected for anomalies before granting access. This approach can aid the process of achieving compliance for industries that use NIST-based controls including financial services, defense industrial base, and government.

A Zero Trust approach should extend throughout the entire digital estate and serve as an integrated security philosophy and end-to-end strategy, across three primary principles: (1) verify explicitly, (2) enforce least privilege access, and (3) assume breach.

Use the Azure blueprint for faster configuration of Zero Trust

The Azure blueprint for Zero Trust enables application developers and security administrators to more easily create hardened environments for their application workloads. Essentially, the blueprint will help you implement Zero Trust controls across six foundational elements: identities, devices, applications, data, infrastructure, and networks.

Using the Azure Blueprints service, the Zero Trust blueprint will first configure your VNET to deny all network traffic by default, enabling you to extend it and/or set rules for selective traffic based on your business needs. In addition, the blueprint will enforce and maintain Azure resource behaviors and configuration in compliance with specific NIST SP 800-53 security control requirements using Azure Policy.
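The blueprint automates this configuration, but the core idea is easy to see in isolation. As a rough illustration only (not the blueprint itself), the Python sketch below uses the azure-mgmt-network SDK to add a lowest-priority deny-all inbound rule to a network security group; the subscription, resource group, and NSG names are placeholders:

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Lowest-priority rule (4096) denying all inbound traffic; selective allow
# rules with lower priority numbers take precedence, mirroring the
# deny-by-default posture the blueprint applies.
deny_all = SecurityRule(
    name="DenyAllInbound",
    priority=4096,
    access="Deny",
    direction="Inbound",
    protocol="*",
    source_address_prefix="*",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="*",
)

client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", deny_all.name, deny_all
).result()
```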

The blueprint includes Azure Resource Manager templates to deploy and configure Azure resources such as Virtual Network, Network Security Groups, Azure Key Vault, Azure Monitor, Azure Security Center, and more. If you’re working with applications that need to comply with FedRAMP High or DoD Impact Level 4 requirements or just want to improve the security posture of your cloud deployment, the blueprint for Zero Trust is designed to help you get there faster.

The Azure blueprint for Zero Trust is currently in preview with limited support. To learn more and find instructions to deploy into Azure, see Azure blueprint for Zero Trust. For more information, questions, and feedback, please contact us at Zero Trust blueprint feedback.

In addition to this new blueprint, we’re announcing two new integrations with Azure to bring faster authorization and increased flexibility to the public sector and regulated industries:

Accelerate risk management for Azure deployments with Xacta

Increasing the speed with which cloud-based initiatives achieve authorization is a critical part of modernization. Often this process is highly manual and lacks the ability to provide a clear picture for continuous monitoring.

Xacta now integrates with Azure Policy and Azure Blueprints, enabling customers to centrally manage compliance policies, track their compliance status, and more easily enforce policies to ensure ongoing compliance. For example, Xacta streamlines and automates many labor-intensive tasks associated with key security frameworks such as the NIST Risk Management Framework (RMF), NIST Cybersecurity Framework (CSF), FedRAMP, and ISO 27001.

Through this new integration, Azure Policy automatically feeds a significant portion of the required accreditation package directly into Xacta, instantiating a risk management framework and reducing the manual effort required of risk professionals, freeing up their time to focus on critical risk decisions.

Enable continuous monitoring of containers using Anchore

Customers using containers to achieve greater flexibility within regulated environments commonly encounter security and governance challenges. To address those challenges, Anchore recently announced support for Windows containers, delivering more choice for public sector agencies and enterprises developing container-based applications and implementing broad DevSecOps initiatives. Anchore Enterprise 2.3 performs deep image inspection of Windows container images, helping teams establish policy-based approaches to container compliance without compromising velocity.

Whether you’re using containers today or evaluating services, such as Azure Kubernetes Service, you can count on us to continue to provide world-class cybersecurity technology, controls, and best practices to help you accelerate both security and compliance.

Learn more

To learn more about how to implement Zero Trust architecture on Azure, read the six-part blog series on the Azure Government Dev blog. You may also want to bookmark the Security blog to keep up with our coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Source: Azure

Use Azure Firewall for secure and cost-effective Windows Virtual Desktop protection

This post was co-authored by Pavithra Thiruvengadam, Program Manager, Windows Virtual Desktop

Work from home policies require many IT organizations to address fundamental changes in capacity, network, security, and governance. Many employees aren’t protected by the layered security policies associated with on-premises services while working from home. Virtual desktop infrastructure (VDI) deployments on Azure can help organizations rapidly respond to this changing environment.  However, you need a way to protect inbound or outbound internet access to and from these VDI deployments.

Windows Virtual Desktop is a comprehensive desktop and application virtualization service running in Azure. It’s the only VDI that delivers simplified management, multi-session Windows 10, and optimizations for Office 365. You can deploy and scale your Windows desktops and apps on Azure in minutes and get built-in security and compliance features. In this post, we explore how to use Azure Firewall for secure and cost-effective Windows Virtual Desktop protection.

Windows Virtual Desktop components

The Windows Virtual Desktop service is delivered in a shared responsibility model:

Customer-managed RD clients connect to Windows desktops and applications from their favorite client device from anywhere on the internet.
Microsoft-managed Azure service handles connections between RD clients and Windows Virtual Machines in Azure (including Windows 10 multi-session).
Customer-managed virtual network in Azure hosts Windows 10 multi-session virtual machines in host pools.

Windows Virtual Desktop doesn’t require you to open any inbound access to your virtual network. However, to ensure platform connectivity between customer-managed virtual machines and the service, a set of outbound network connections must be enabled for the host pool virtual network. While these dependencies can be configured using Network Security Groups, this configuration is limited to network-level traffic filtering only. For application-level protection, you can use Azure Firewall or a third party network virtual appliance (NVA). For best practices to consider before deploying an NVA, see Best practices to consider before deploying a network virtual appliance.

Host pool outbound access to Windows Virtual Desktop

Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

Azure Firewall provides a Windows Virtual Desktop FQDN Tag to simplify host pool outbound access to Windows Virtual Desktop. Use the following steps to allow outbound platform traffic:

Deploy Azure Firewall and configure your Windows Virtual Desktop host pool subnet User Defined Route (UDR) to route all traffic via the Azure Firewall.
Create an application rule collection and add a rule to enable the WindowsVirtualDesktop FQDN tag. The source IP address range is the host pool virtual network, the protocol is https, and the destination is WindowsVirtualDesktop.
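As a rough sketch of step 2 using the azure-mgmt-network Python SDK (the subscription, resource group, firewall name, and host pool address range are placeholders):

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    AzureFirewallApplicationRule,
    AzureFirewallApplicationRuleCollection,
    AzureFirewallApplicationRuleProtocol,
    AzureFirewallRCAction,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the deployed firewall, then append an application rule collection
# that allows the WindowsVirtualDesktop FQDN tag from the host pool subnet.
firewall = client.azure_firewalls.get("my-resource-group", "my-firewall")
firewall.application_rule_collections.append(
    AzureFirewallApplicationRuleCollection(
        name="wvd-platform",
        priority=200,
        action=AzureFirewallRCAction(type="Allow"),
        rules=[
            AzureFirewallApplicationRule(
                name="allow-wvd",
                source_addresses=["10.0.0.0/24"],  # host pool subnet (placeholder)
                protocols=[
                    AzureFirewallApplicationRuleProtocol(protocol_type="Https", port=443)
                ],
                fqdn_tags=["WindowsVirtualDesktop"],
            )
        ],
    )
)
client.azure_firewalls.begin_create_or_update(
    "my-resource-group", "my-firewall", firewall
).result()
```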


The set of required storage and service bus accounts for your Windows Virtual Desktop host pool is deployment specific and isn’t yet captured in the WindowsVirtualDesktop FQDN tag. Additionally, a network rule collection is needed to allow DNS access from your Active Directory Domain Services (ADDS) deployment and KMS access from your virtual machines to Windows Activation Service. To configure access for these additional dependencies, see Use Azure Firewall to protect Windows Virtual Desktop deployments.

Host pool outbound access to the internet

Depending on your organization needs, you may want to enable secure outbound internet access for your end users. As Windows Virtual Desktop sessions are running on customer-managed virtual machines, they are also subject to your virtual network security controls. In cases where the list of allowed destinations is well-defined (for example, Office 365 access), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance.

If you want to filter outbound user internet traffic using an existing on-premises secure web gateway, you can configure web browsers or other applications running on the Windows Virtual Desktop host pool with an explicit proxy configuration. For example, see How to use Microsoft Edge command-line options to configure proxy settings. These proxy settings influence only end-user internet access, while platform outbound traffic continues to flow directly via Azure Firewall.

Next steps

For more information on everything we covered above, please see the following blogs, documentation, and videos.

What is Windows Virtual Desktop?
Azure Firewall documentation.
Use Azure Firewall to protect Windows Virtual Desktop deployments.
Azure Firewall February 2020 blog: New Azure Firewall certification and features in Q1 CY2020.

Source: Azure

Azure Virtual Machine Scale Sets now provide simpler management during scale-in

We recently announced the general availability of three features for Azure Virtual Machine Scale Sets. Instance protection, custom scale-in policy, and terminate notification provide new capabilities to simplify management of virtual machine instances during scale-in.

Azure Virtual Machine Scale Sets are a way to collectively deploy and easily manage a number of virtual machine (VM) instances in a group. You can also configure autoscaling rules for your scale set that enable you to dynamically increase or decrease the number of instances based on what the workload requires.

With these new features, you now have more control over gracefully handling the removal of instances during scale-in, enabling you to achieve better user experience for your applications and services. These new features are available across all Azure regions for public cloud as well as sovereign clouds. There is no extra charge for using these features with Azure Virtual Machine Scale Sets.

Let’s take a look at how these features provide you better control during scale-in.

Instance protection—protect one or more instances from scale-in

You can apply the policy Protect from scale-in to one or more instances in your scale set if you do not want these instances to be deleted when a scale-in occurs. This is useful when you have a few special instances that you would like to preserve while dynamically scaling in or out other instances in your scale set. These instances might be performing certain specialized tasks different from other instances in the scale set and you may want these special instances to not be removed from the scale set. Instance protection provides you the capability to enable such scenarios for your workload.

Protect one or more instances from scale-set actions

Instance protection also allows you to protect one or more of your instances from getting modified during other scale-set operations like reimage or upgrade. This can be done by applying the policy Protect from scale-set actions to specific instances. Applying this policy to an instance also automatically protects it from scale-in.
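A minimal sketch of applying these protection policies with the azure-mgmt-compute Python SDK (subscription, resource group, scale set name, and instance ID are placeholders):

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineScaleSetVMProtectionPolicy

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Protect instance "0" from scale-in and from other scale-set operations
# such as reimage or upgrade.
vm = client.virtual_machine_scale_set_vms.get("my-resource-group", "my-vmss", "0")
vm.protection_policy = VirtualMachineScaleSetVMProtectionPolicy(
    protect_from_scale_in=True,
    protect_from_scale_set_actions=True,
)
client.virtual_machine_scale_set_vms.begin_update(
    "my-resource-group", "my-vmss", "0", vm
).result()
```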

Custom scale-in policy—configure the order of instance removal during scale-in

When one or more instances need to be removed from a scale set during scale-in, instances are selected for deletion in such a way that the scale set remains balanced across availability zones and fault domains, if applicable. Custom scale-in policies allow you to further specify and control the order in which instances should be selected for deletion during scale-in. You can use the OldestVM scale-in policy to remove the oldest created instance first, or the NewestVM scale-in policy to remove the newest created instance first. In both scenarios, balancing across availability zones is given preference. If you have applied either of the protection policies to an instance, it will not be picked for deletion during scale-in.
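As a sketch, the policy can be configured with the azure-mgmt-compute Python SDK roughly as follows (resource names are placeholders):

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import ScaleInPolicy, VirtualMachineScaleSetUpdate

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Delete the oldest-created, unprotected VMs first during scale-in;
# balancing across availability zones still takes precedence.
client.virtual_machine_scale_sets.begin_update(
    "my-resource-group",
    "my-vmss",
    VirtualMachineScaleSetUpdate(scale_in_policy=ScaleInPolicy(rules=["OldestVM"])),
).result()
```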

Consider, for example, a scale set with three availability zones and an initial instance count of 9, where the VM with the smallest instance ID was created first and the VM with the highest instance ID was created last. With the OldestVM policy, the unprotected instance with the smallest instance ID is selected for deletion first; with the NewestVM policy, it is the unprotected instance with the highest instance ID. In both cases, balancing across the three zones takes precedence, and protected instances are never selected.


Terminate notification—receive in-VM notification of instance deletion

When an instance is about to be deleted from a scale set, you may want to perform certain custom actions on it, such as de-registering it from the load balancer or copying its logs. When instance deletions are triggered by the platform, for example due to a scale-in, these actions need to be performed programmatically to ensure that the application is not interrupted and that useful logs are properly retained. With the terminate notification feature, you can configure your instances to receive in-VM notifications about upcoming instance deletion and pause the delete operation for 5 to 15 minutes to perform such custom actions.

The terminate notifications are sent through the Azure metadata service (Scheduled Events) and can be received using a REST endpoint accessible from within the VM instance. Specific actions or scripts can be configured to run when an instance receives a terminate notification at that endpoint. Once these actions are completed, if you do not want to wait for the pre-configured pause timeout to finish, you can approve the deletion by issuing a POST call to the metadata service, which allows deletion of the instance to continue.
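A minimal in-VM sketch of this flow in Python, polling the Scheduled Events endpoint and approving the Terminate event once cleanup is done (the cleanup step itself is a placeholder):

```python
# Runs inside the VM instance; no Azure credentials are needed because the
# Scheduled Events endpoint is only reachable from within the VM.
import requests

ENDPOINT = "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"
HEADERS = {"Metadata": "true"}

doc = requests.get(ENDPOINT, headers=HEADERS).json()
for event in doc.get("Events", []):
    if event["EventType"] == "Terminate":
        # ... de-register from the load balancer, copy logs, etc. ...

        # Approve the event to end the pause early instead of waiting for
        # the pre-configured timeout to expire.
        requests.post(
            ENDPOINT,
            headers=HEADERS,
            json={"StartRequests": [{"EventId": event["EventId"]}]},
        )
```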

Get started

You can enable these features for your scale set using REST API, Azure CLI, Azure PowerShell or Azure Portal. Below are the links to the documentation pages for detailed instructions.

Instance protection
Custom scale-in policy
Terminate notification

Source: Azure

Azure Blob Storage enhancing data protection and recovery capabilities

Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. Today, we are announcing the general availability of two features: Geo-Zone-Redundant Storage (GZRS), which provides protection against regional disasters, and Account failover, which allows you to determine when to initiate a failover rather than waiting for Microsoft to do so.

Additionally, we are releasing two new preview features: Versioning and Point in time restore. These new functionalities expand upon Azure Blob Storage’s existing capabilities such as data redundancy, soft delete, account delete locking, and immutable blobs, making our data protection and restore capabilities even better.

Geo-Zone-Redundant Storage (GZRS)

Geo-Zone-Redundant Storage (GZRS) and Read-Access Geo-Zone-Redundant Storage (RA-GZRS) are now generally available, offering intra-regional and inter-regional high availability and disaster protection for your applications.

GZRS writes three copies of your data synchronously across multiple Azure Availability zones, similar to Zone redundant storage (ZRS), providing you continued read and write access even if a datacenter or availability zone is unavailable. In addition, GZRS asynchronously replicates your data to the secondary geo pair region to protect against regional unavailability. RA-GZRS exposes a read endpoint on this secondary replica allowing you to read data in the event of primary region unavailability.

To learn more, see Azure Storage redundancy.
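If you manage storage accounts programmatically, here is a minimal sketch of creating a GZRS account with the azure-mgmt-storage Python SDK (subscription, resource group, account name, and region are placeholders; GZRS is available in select regions):

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Use Standard_RAGZRS instead if you also want read access to the secondary.
account = client.storage_accounts.begin_create(
    "my-resource-group",
    "mygzrsaccount",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_GZRS"),
        kind="StorageV2",
        location="westeurope",
    ),
).result()
print(account.primary_endpoints.blob)
```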

Account failover

Customer-initiated storage account failover is now generally available, allowing you to determine when to initiate a failover instead of waiting for Microsoft to do so. When you perform a failover, the secondary replica of the storage account becomes the new primary. The DNS records for all storage service endpoints—blob, file, queue, and table—are updated to point to this new primary. Once the failover is complete, clients will automatically begin reading from and writing to the storage account in the new primary region, with no code changes.

Customer-initiated failover is available for GRS, RA-GRS, GZRS, and RA-GZRS accounts. To learn more, see our Disaster recovery and account failover documentation.
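As a sketch, a customer-initiated failover can be triggered with the azure-mgmt-storage Python SDK (resource names are placeholders; the operation is long-running, so the call blocks until the secondary has been promoted):

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Promote the secondary replica to primary; DNS for all storage endpoints
# is updated automatically once the failover completes.
client.storage_accounts.begin_failover("my-resource-group", "mygzrsaccount").result()
```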

Versioning preview

Applications create, update, and delete data continuously. A common requirement is the ability to access and manage both current and previous versions of the data. Versioning automatically maintains prior versions of an object and identifies them with version IDs. You can restore a prior version of a blob to recover your data if it is erroneously modified or deleted.

A version captures a committed blob state at a given point in time. When versioning is enabled for a storage account, Azure Storage automatically creates a new version of a blob each time that blob is modified or deleted.

Versioning and soft delete work together to provide you with optimal data protection. To learn more, see our documentation on Blob versioning.
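As an illustration of working with versions, the sketch below uses the azure-storage-blob Python library, assuming versioning is already enabled on the account; the connection string, container, blob name, and version ID are placeholders:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("my-container")

# Enumerate all versions of a blob, marking which one is current.
for props in container.list_blobs(name_starts_with="data.csv", include=["versions"]):
    print(props.name, props.version_id, props.is_current_version)

# Restore a prior version by copying it over the current blob; the
# version ID below is a placeholder taken from the listing above.
blob = container.get_blob_client("data.csv")
blob.start_copy_from_url(f"{blob.url}?versionid=<version-id>")
```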

Point in time restore preview

Point in time restore for Azure Blob Storage gives storage account administrators the ability to restore a subset of containers or blobs within a storage account to a previous state, at a specific past date and time, in the event of an application corrupting data, a user inadvertently deleting contents, or a test run of a machine learning model.

Point in time restore makes use of Blob Change feed, currently in preview. Change feed enables recording of all blob creation, modification, and deletion operations that occur in your storage account. Today we are expanding the Change feed preview by enabling four new regions and adding support for two new blob event types: BlobPropertiesUpdated and BlobSnapshotCreated.

This improvement now captures change records caused by the SetBlobMetadata, SetBlobProperties, and SnapshotBlob operations. To learn more, read Change feed support in Azure Blob Storage (Preview).

Point in time restore is intended for ISV partners and customers who want to implement their own restore workflow on top of Azure Storage. To learn more, see Point in time restore.
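For a sense of what a restore workflow might look like, here is a rough sketch using the azure-mgmt-storage Python SDK. The names are placeholders, and the blob range shown (start inclusive, end exclusive, compared lexically) is an assumption to adapt to your own containers:

```python
# pip install azure-identity azure-mgmt-storage
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobRestoreParameters, BlobRestoreRange

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Roll blobs in the "orders" container back to their state two hours ago.
# "orders0" sorts immediately after every name beginning with "orders/",
# so the range covers that container's entire namespace.
restore_point = datetime.now(timezone.utc) - timedelta(hours=2)
client.storage_accounts.begin_restore_blob_ranges(
    "my-resource-group",
    "mygzrsaccount",
    BlobRestoreParameters(
        time_to_restore=restore_point,
        blob_ranges=[BlobRestoreRange(start_range="orders/", end_range="orders0")],
    ),
).result()
```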

Build it, use it, and tell us about it

These new capabilities provide greater data protection control for all users of Azure Storage. The general availability release of GZRS adds region-replicated zone redundancy types. Account failover allows customers to control geo-replicated failover for their storage accounts. In addition, the previews of Versioning and Point in time restore allow greater control of data protection and restoration to a previous date and time.

We look forward to hearing your feedback on these features and suggestions for future improvements through email at AzureStorageFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at the Azure Storage feedback forum.
Source: Azure