Docker Desktop 4.30: Proxy Support with SOCKS5, NTLM and Kerberos, ECI for Build Commands, Build View Features, and Docker Desktop on RHEL Beta

In this post:

Enhancing connectivity with SOCKS proxy support in Docker Desktop

Seamless integration of Docker Desktop with NTLM and Kerberos proxies

Docker Desktop with Enhanced Container Isolation for build commands

Docker Desktop for WSL 2: A leap toward simplification and speed

Enhance your Docker builds experience with new Docker Desktop Build features

Reimagining dev environments: Streamlining development workflows

Docker Desktop support for RHEL beta

Docker Desktop 4.30 delivers updates that streamline development workflows and enhance security for developers and enterprises alike. Key enhancements include new SOCKS5 proxy support for flexible network connectivity, integration with NTLM and Kerberos proxies for smoother authentication, and Enhanced Container Isolation (ECI) extended to cover build commands. Administrative tasks are also easier, with sign-in enforcement handled through familiar system settings, and the WSL 2 setup has been simplified to improve startup performance.

In this blog post, we’ll describe these enhancements and also provide information on future features and available beta features such as Docker Desktop on Red Hat Enterprise Linux (RHEL). Read on to learn more about how these updates are designed to maximize the efficiency and security of your Docker Desktop experience.

Enhancing connectivity with SOCKS proxy support in Docker Desktop

Docker Desktop now supports SOCKS5 proxies, a significant enhancement that broadens its usability in corporate environments where SOCKS proxy is the primary means for internet access or is used to connect to company intranets. This new feature allows users to configure Docker Desktop to route HTTP/HTTPS traffic through SOCKS proxies, enhancing network flexibility and security.

Users can easily configure Docker Desktop to access the internet using socks5:// proxy URLs. This ensures that all outgoing requests, including Docker pulls and other internet access on ports 80/443, are routed through the chosen SOCKS proxy.

The proxy configuration can be specified manually in Settings > Resources > Proxies > Manual proxy configuration by adding the socks5://host:port URL in the Secure Web Server HTTPS box.

Automatic detection of SOCKS proxies specified in .pac files is also supported.
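As an illustration, a minimal .pac file that routes traffic through a SOCKS5 proxy might look like the following sketch. The proxy host, port, and the single-label intranet check are placeholders, and support for the SOCKS5 return keyword depends on the PAC implementation in use:

```javascript
// Hypothetical PAC file: send everything except single-label intranet hosts
// through a SOCKS5 proxy, with a direct connection as fallback.
// FindProxyForURL is the entry point every PAC file must define.
function FindProxyForURL(url, host) {
  // Single-label hosts (no dot) are treated as intranet and bypass the proxy.
  if (host.indexOf(".") === -1) {
    return "DIRECT";
  }
  return "SOCKS5 proxy.example.com:1080; DIRECT";
}
```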

This advancement not only improves Docker Desktop’s functionality for developers needing robust proxy support but also aligns with business needs for secure and versatile networking solutions. This new feature is available to Docker Business subscribers. 

Visit Docker Docs for detailed information on setting up and utilizing SOCKS proxy support in Docker Desktop.

Seamless integration of Docker Desktop with NTLM and Kerberos proxies

Proxy servers are vital in corporate networks, ensuring security and efficient traffic management. Recognizing their importance, Docker Desktop has evolved to enhance integration with these secured environments, particularly on Windows. Traditional basic authentication often presented challenges, such as repeated login prompts and security concerns. 

Docker Desktop 4.30 introduces major upgrades by supporting advanced authentication protocols such as Kerberos and NTLM, which streamline the user experience by handling the proxy handshake invisibly and reducing interruptions.

These updates simplify workflows and improve security and performance, allowing developers and admins to focus more on their tasks and less on managing access issues. The new version promises a seamless, secure, and more efficient interaction with corporate proxies, making Docker Desktop a more robust tool in today’s security-conscious corporate settings.

For a deeper dive into how Docker Desktop is simplifying proxy navigation and enhancing your development workflow within the Docker Business subscription, be sure to read the full blog post.

Docker Desktop with Enhanced Container Isolation for build commands

Docker Desktop’s latest update marks an important advancement in container security by extending Enhanced Container Isolation (ECI) to docker build and docker buildx commands. This means docker build/buildx commands run in rootless mode when ECI is enabled, thereby protecting the host machine against malicious containers inadvertently used as dependencies while building container images.

This update is significant as it addresses previous limitations where ECI protected containers initiated with docker run but did not extend the same level of security to containers created during the build processes — unless the build was done with the docker-container build driver. 

Prior limitations:

Limited protection: Before this update, while ECI effectively safeguarded containers started with docker run, those spawned by docker build or docker buildx commands, using the default “docker” build driver, did not benefit from this isolation, posing potential security risks.

Security vulnerabilities: Given the nature of build processes, they can be susceptible to various security vulnerabilities, which previously might not have been adequately mitigated. This gap in protection could expose Docker Desktop users to risks during the build phase.

Enhancements in Docker Desktop 4.30:

Rootless build operations: By extending ECI to include build commands, Docker Desktop now ensures that builds run rootless, significantly enhancing security.

Comprehensive protection: ECI now covers docker builds on all platforms (Mac, Windows on Hyper-V, and Linux), except Windows on WSL, ensuring that both the runtime and build phases of container operation are securely isolated.
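For admins who manage Docker Desktop centrally, ECI is typically enabled and locked through the admin-settings.json file. The fragment below is a minimal sketch assuming the schema documented for recent Docker Desktop releases; consult the Enhanced Container Isolation docs for the authoritative format:

```json
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true
  }
}
```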

This development not only strengthens security across Docker Desktop’s operations but also aligns with Docker’s commitment to providing comprehensive security solutions. By safeguarding the entire lifecycle of container management, Docker ensures that users are protected against potential vulnerabilities from development to deployment.

To understand the full scope of these changes and how to leverage them within your Docker Business Subscription, visit the Enhanced Container Isolation docs for additional guidance.

Docker Desktop for WSL 2: A leap toward simplification and speed

We’re excited to announce an update to Docker Desktop that enhances its performance on Windows Subsystem for Linux (WSL 2) by reducing the complexity of the setup process. This update simplifies the WSL 2 setup by consolidating the previously required two Docker Desktop WSL distributions into one.

The simplification of Docker Desktop’s WSL 2 setup is designed to make the codebase easier to understand and maintain, improving our ability to handle failures more effectively. Most importantly, this change will also enhance the startup speed of Docker Desktop on WSL 2, allowing you to get to work faster than ever before.

What’s changing?

Phase 1: Starting with Docker Desktop 4.30, we are rolling out this update incrementally on all fresh installations. If you’re setting up Docker Desktop for the first time, you’ll experience a more streamlined installation process with reduced setup complexity right away.

Phase 2: We plan to introduce data migration in a future update, further enhancing the system’s efficiency and user experience. This upcoming phase will ensure that existing users also benefit from these improvements without any hassle.

To take advantage of phase 1, we encourage all new and existing users to upgrade to Docker Desktop 4.30. By doing so, you’ll be prepared to seamlessly transition to the enhanced version as we roll out subsequent phases.

Keep an eye out for more updates as we continue to refine Docker Desktop and enrich your development experience. 

Enhance your Docker Builds experience with new Docker Desktop Build features

Docker Desktop’s latest updates bring significant improvements to the Builds View, enhancing both the management and transparency of your build processes. These updates are designed to make Docker Desktop an indispensable tool for developers seeking efficiency and detailed insights into their builds.

Bulk delete enhancements:

Extended bulk delete capability: The ability to bulk delete builds has been expanded beyond the current page. Now, by defining a search or query, you can effortlessly delete all builds that match your specified criteria across multiple pages.

Simplified user experience: With the new Select all link next to the header, managing old or unnecessary builds becomes more straightforward, allowing you to maintain a clean and organized build environment with minimal effort (Figure 1).

Figure 1: Docker Desktop Build history view showing the new Select all option for selecting multiple builds to act on.

Build provenance and OpenTelemetry traces:

Provenance and dependency insights: The updated Builds View now includes an action menu that offers access to the dependencies and provenance of each build (Figure 2). This feature enables access to the origin details and the context of the builds for deeper inspection, enhancing security and compliance.

OpenTelemetry integration: For advanced debugging, Docker Desktop lets you download OpenTelemetry traces to inspect build performance in Jaeger. This integration is crucial for identifying and addressing performance bottlenecks efficiently. Also, depending on your build configuration, you can now download the provenance to inspect the origin details for the build.

Figure 2: Docker Desktop Builds View displaying Dependencies and Build results in more detail.

Overall, these features work together to provide a more streamlined and insightful build management experience, enabling developers to focus more on innovation and less on administrative tasks. 

For more detailed information on how to leverage these new functionalities and optimize your Docker Desktop experience, make sure to visit Builds documentation.

Reimagining Dev Environments: Streamlining development workflows

We are evolving our approach to development environments as part of our continuous effort to refine Docker Desktop and enhance user experience. Since its launch in 2021, Docker Desktop’s Dev Environments feature has been a valuable tool for developers to quickly start projects from GitHub repositories or local directories. However, to better align with our users’ evolving needs and feedback, we will be transitioning from the existing Dev Environments feature to a more robust and integrated solution in the near future. 

What does that mean for those using Dev Environments today? The feature is unchanged. Starting with the Docker Desktop 4.30 release, though, new users trying out Dev Environments will need to explicitly turn it on in Beta features settings. This change is part of our broader initiative to streamline Docker Desktop functionalities and introduce new features in the future (Figure 3).

Figure 3: Docker Desktop Settings page displaying available features in development and beta features.

We understand the importance of a smooth transition and are committed to providing detailed guidance and support to our users when we officially announce the evolution of Dev Environments. Until then, you can continue to leverage Dev Environments and look forward to additional functionality to come.

Docker Desktop support for Red Hat Enterprise Linux beta

As part of Docker’s commitment to broadening its support for enterprise-grade operating systems, we are excited to announce the expansion of Docker Desktop to include compatibility with Red Hat Enterprise Linux (RHEL) distributions, specifically versions 8 and 9. This development is designed to support our users in enterprise environments where RHEL is widely used, providing them with the same seamless Docker experience they expect on other platforms.

To provide feedback on this new beta functionality, engage your Account Executive or join the Docker Desktop Preview Program.

As Docker Desktop continues to evolve, the latest updates are set to significantly enhance the platform’s efficiency and security. From integrating advanced proxy support with SOCKS5, NTLM, and Kerberos to streamlining administrative processes and optimizing WSL 2 setups, these improvements are tailored to meet the needs of modern developers and enterprises. 

With the addition of exciting upcoming features and beta opportunities like Docker Desktop on Red Hat Enterprise Linux, Docker remains committed to providing robust, secure, and user-friendly solutions. Stay connected with us to explore how these continuous advancements can transform your development workflows and enhance your Docker experience.

Learn more

Read Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30.

Authenticate and update to receive the newest Docker Desktop features per your subscription level.

New to Docker? Create an account. 

Learn how Docker Build Cloud in Docker Desktop can accelerate builds.

Secure your supply chain with Docker Scout in Docker Desktop.

Subscribe to the Docker Newsletter.

Have questions? The Docker community is here to help.

Source: https://blog.docker.com/feed/

Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30

Within the ecosystem of corporate networks, proxy servers stand as guardians, orchestrating the flow of information with a watchful eye toward security. These sentinels, while adding layers to navigate, play a crucial role in safeguarding an organization’s digital boundaries and keeping its network denizens — developers and admins alike — secure from external threats. 

Recognizing proxy servers’ critical position, Docker Desktop 4.30 offers new enhancements, especially on the Windows front, to ensure seamless integration and interaction within these secured environments.

Traditional approach

The realm of proxy servers is intricate, a testament to their importance in modern corporate infrastructure. They’re not just barriers but sophisticated filters and conduits that enhance security, optimize network performance, and ensure efficient internet traffic management. In this light, the dance of authentication — while complex — is necessary to maintain this secure environment, ensuring that only verified users and applications gain access.

Traditionally, Docker Desktop approached corporate networks with a single option: basic authentication. Although functional, this approach often felt like navigating with an outdated map. It was a method that, while simple, sometimes led to moments of vulnerability and the occasional hiccup in access for those venturing into more secure or differently configured spaces within the network. 

This approach could also create roadblocks for users and admins, such as:

Repeated login prompts: A constant buzzkill.

Security faux pas: Your credentials, base64 encoded, might as well be on a billboard.

Access denied: Use a different authentication method? Docker Desktop is out of the loop.

Workflow whiplash: Nothing like a login prompt to break your coding stride.

Performance hiccups: Waiting on auth can slow down your Docker development endeavors.

Seamless interaction

Enter Docker Desktop 4.30, where the roadblocks are removed. Embracing the advanced authentication protocols of Kerberos and NTLM, Docker Desktop now ensures a more secure, seamless interaction with corporate proxies while creating a streamlined and frictionless experience. 

This upgrade is designed to help you easily navigate the complexities of proxy authentication, providing a more intuitive and unobtrusive experience that both developers and admins can appreciate:

Invisible authentication: Docker Desktop handles the proxy handshake behind the scenes.

No more interruptions: Focus on your code, not on login prompts.

Simplicity: No extra steps compared to basic auth. 

Performance perks: Less time waiting, more time doing.

A new workflow with Kerberos authentication scheme is shown in Figure 1:

Figure 1: Workflow with Kerberos authentication.

A new workflow with NTLM auth scheme is shown in Figure 2:

Figure 2: Workflow with NTLM authentication scheme.

Integrating Docker Desktop into environments guarded by NTLM or Kerberos proxies no longer feels like a challenge but an opportunity. 

With Docker Desktop 4.30, we’re committed to facilitating this transition, prioritizing secure, efficient workflows catering to developers and admins who orchestrate these digital environments. Our focus is on bridging gaps and ensuring Docker Desktop aligns with today’s corporate networks’ security and operational standards.

FAQ

Who benefits? Both Windows-based developers and admins.

Continued basic auth support? Yes, providing flexibility while encouraging a shift to more secure protocols.

How to get started? Upgrade to Docker Desktop 4.30 for Windows.

Impact on internal networking? Absolutely none. It’s smooth sailing for container networking.

Validity of authentication? Enjoy 10 hours of secure access with Kerberos, automatically renewed with system logins.

Docker Desktop is more than just a tool — it’s a bridge to a more streamlined, secure, and productive coding environment, respecting the intricate dance with proxy servers and ensuring that everyone involved, from developers to admins, moves in harmony with the secure protocols of their networks. Welcome to a smoother and more secure journey with Docker Desktop.

Learn more

Update to Docker Desktop 4.30.

Read Docker Desktop 4.30: Proxy Support with SOCKS5, NTLM and Kerberos, ECI for Build Commands, Build View Features, and Docker Desktop on RHEL Beta.

Create a Docker Account and choose Business subscription.

Authenticate using your corporate credentials.

Subscribe to the Docker Newsletter.

Vote on what’s next! Check out our public roadmap.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Automating Docker Image Builds with Pulumi and Docker Build Cloud

This guest post was contributed by Diana Esteves, Solutions Architect, Pulumi.

Pulumi is an Infrastructure as Code (IaC) platform that simplifies resource management across any cloud or SaaS provider, including Docker. Pulumi providers are integrations with useful tools and vendors. Pulumi’s new Docker Build provider is about making your builds even easier, faster, and more reliable. 

In this post, we will dive into how Pulumi’s new Docker Build provider works with Docker Build Cloud to streamline building, deploying, and managing containerized applications. First, we’ll set up a project using Docker Build Cloud and Pulumi. Then, we’ll explore cool use cases that showcase how you can leverage this provider to simplify your build and deployment pipelines.  

Pulumi Docker Build provider features

Top features of the Pulumi Docker Build provider include the following:

Docker Build Cloud support: Offload your builds to the cloud and free up your local resources. Faster builds mean fewer headaches.

Multi-platform support: Build Docker images that work on different hardware architectures without breaking a sweat.

Advanced caching: Say goodbye to redundant builds. In addition to the shared caching available when you use Docker Build Cloud, this provider supports multiple cache backends, like Amazon S3, GitHub Actions, and even local disk, to keep your builds efficient.

Flexible export options: Customize where your Docker images go after they’re built — export to registries, filesystems, or wherever your workflow needs.

Getting started with Docker Build Cloud and Pulumi  

Docker Build Cloud is Docker’s newest offering that provides a pair of AMD and Arm builders in the cloud and shared cache for your team, resulting in up to 39x faster image builds. Docker Personal, Pro, Team, and Business plans include a set number of Build Cloud minutes, or you can purchase a Build Cloud Team plan to add minutes. Learn more about Docker Build Cloud plans. 

The example builds an NGINX Dockerfile using a Docker Build Cloud builder. We will create a Docker Build Cloud builder, create a Pulumi program in TypeScript, and build our image.

Prerequisites

Docker

Pulumi CLI

Pulumi Cloud account (Individual free tier works)

A Pulumi-supported language

Docker Build Cloud Team plan or a Docker Core Subscription with available Docker Build Cloud minutes

Step 1: Set up your Docker Build Cloud builder

Building images locally means being subject to local compute and storage availability. Pulumi allows users to build images with Docker Build Cloud.

The Pulumi Docker Build provider fully supports Docker Build Cloud, which unlocks new capabilities, as individual team members or a CI/CD pipeline can fully take advantage of improved build speeds, shared build cache, and native multi-platform builds.

If you still need to create a builder, follow the steps below; otherwise, skip to step 1C.

A. Log in to your Docker Build Cloud account.

B. Create a new cloud builder named my-cool-builder. 

Figure 1: Create the new cloud builder and call it my-cool-builder.

C. In your local machine, sign in to your Docker account.

$ docker login

D. Add your existing cloud builder endpoint.

$ docker buildx create --driver cloud ORG/BUILDER_NAME
# Replace ORG with the Docker Hub namespace of your Docker organization.
# This creates a builder named cloud-ORG-BUILDER_NAME.

# Example:
$ docker buildx create --driver cloud pulumi/my-cool-builder
# cloud-pulumi-my-cool-builder

# check your new builder is configured
$ docker buildx ls

E. Optionally, see that your new builder is available in Docker Desktop.

Figure 2: The Builders view in the Docker Desktop settings lists all available local and Docker Build Cloud builders available to the logged-in account.

For additional guidance on setting up Docker Build Cloud, refer to the Docker docs.

Step 2: Set up your Pulumi project

To create your first Pulumi project, start with a Pulumi template. Pulumi has curated hundreds of templates that are directly integrated with the Pulumi CLI via pulumi new. In particular, the Pulumi team has created a Pulumi template for Docker Build Cloud to get you started.

The Pulumi programming model centers around defining infrastructure using popular programming languages. This approach allows you to leverage existing programming tools and define cloud resources using familiar syntaxes such as loops and conditionals.

To copy the Pulumi template locally:

$ pulumi new https://github.com/pulumi/examples/tree/master/dockerbuildcloud-ts --dir hello-dbc
# project name: hello-dbc
# project description: (default)
# stack name: dev
# Note: Update the builder value to match yours
# builder: cloud-pulumi-my-cool-builder
$ cd hello-dbc

# update all npm packages (recommended)
$ npm update --save

Optionally, explore your Pulumi program. The hello-dbc folder has everything you need to build a Dockerfile into an image with Pulumi. Your Pulumi program starts with an entry point, typically a function written in your chosen programming language. This function defines the infrastructure resources and configurations for your project. For TypeScript, that file is index.ts, and the contents are shown below:

import * as dockerBuild from "@pulumi/docker-build";
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const builder = config.require("builder");

const image = new dockerBuild.Image("image", {
    // Configures the name of your existing buildx builder to use.
    // See the Pulumi.<stack>.yaml project file for the builder configuration.
    builder: {
        name: builder, // Example: "cloud-pulumi-my-cool-builder"
    },
    context: {
        location: "app",
    },
    // Enable exec to run a custom docker-buildx binary with support
    // for Docker Build Cloud (DBC).
    exec: true,
    push: false,
});

Step 3: Build your Docker image

Run the pulumi up command to see the image being built with the newly configured builder:

$ pulumi up --yes

You can follow the browser link to the Pulumi Cloud dashboard and navigate to the Image resource to confirm it’s properly configured by noting the builder parameter.

Figure 3: Navigate to the Image resource to check the configuration.

Optionally, also check your Docker Build Cloud dashboard for build minutes usage:

Figure 4: The build.docker.com view with Cloud builders selected in the left menu and the product dashboard displayed on the right.

Congratulations! You have built an NGINX Dockerfile with Docker Build Cloud and Pulumi. This was achieved by creating a new Docker Build Cloud builder and passing that to a Pulumi template. The Pulumi CLI is then used to deploy the changes.

Advanced use cases with buildx and BuildKit

To showcase popular buildx and BuildKit features, test one or more of the following Pulumi code samples. These include multi-platform builds, advanced caching, and exports. Note that each feature is available as an input (or parameter) in the Pulumi Docker Build Image resource. 

Multi-platform image builds for Docker Build Cloud

Docker images can support multiple platforms, meaning a single image may contain variants for different architectures and operating systems. 

The following code snippet is analogous to invoking a build from the Docker CLI with the --platform flag to specify the target platform for the build output.

import * as dockerBuild from "@pulumi/docker-build";

const image = new dockerBuild.Image("image", {
    // Build a multi-platform image manifest for ARM and AMD.
    platforms: [
        dockerBuild.Platform.Linux_amd64,
        dockerBuild.Platform.Linux_arm64,
    ],
    push: false,
});

Deploy the changes made to the Pulumi program:

$ pulumi up --yes

Caching from and to AWS ECR

Maintaining cached layers while building Docker images saves precious time by enabling faster builds. However, utilizing cached layers has been historically challenging in CI/CD pipelines due to recycled environments between builds. The cacheFrom and cacheTo parameters allow programmatic builds to optimize caching behavior. 

Update your Docker image resource to take advantage of caching:

import * as dockerBuild from "@pulumi/docker-build";
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws"; // Required for ECR

// Create an ECR repository for pushing.
const ecrRepository = new aws.ecr.Repository("ecr-repository", {});

// Grab auth credentials for ECR.
const authToken = aws.ecr.getAuthorizationTokenOutput({
    registryId: ecrRepository.registryId,
});

const image = new dockerBuild.Image("image", {
    push: true,
    // Use the pushed image as a cache source.
    cacheFrom: [{
        registry: {
            ref: pulumi.interpolate`${ecrRepository.repositoryUrl}:cache`,
        },
    }],
    cacheTo: [{
        registry: {
            imageManifest: true,
            ociMediaTypes: true,
            ref: pulumi.interpolate`${ecrRepository.repositoryUrl}:cache`,
        },
    }],
    // Provide our ECR credentials.
    registries: [{
        address: ecrRepository.repositoryUrl,
        password: authToken.password,
        username: authToken.userName,
    }],
});

Notice the declaration of additional resources for AWS ECR. 

Export builds as a tar file

Exporting allows us to share or back up the resulting image from a build invocation. 

To export the build as a local .tar file, modify your resource to include the exports Input:

const image = new dockerBuild.Image("image", {
    push: false,
    exports: [{
        docker: {
            tar: true,
        },
    }],
});

Deploy the changes made to the Pulumi program:

$ pulumi up --yes

Review the Pulumi Docker Build provider guide to explore other Docker Build features, such as build arguments, build contexts, remote contexts, and more.

Next steps

Infrastructure as Code (IaC) is key to managing modern cloud-native development, and Docker lets developers create and control images with Dockerfiles and Docker Compose files. But when the situation gets more complex, like deploying across different cloud platforms, Pulumi can offer additional flexibility and advanced infrastructure features. The Docker Build provider supports Docker Build Cloud, streamlining building, deploying, and managing containerized applications, which helps development teams work together more effectively and maintain agility.

Pulumi’s latest Docker Build provider, powered by BuildKit, improves flexibility and efficiency in Docker builds. By applying IaC principles, developers manage infrastructure with code, even in intricate scenarios. This means you can focus on building and deploying your containerized workloads without the hassle of complex infrastructure challenges.

Visit Pulumi’s launch announcement and the provider documentation to get started with the Docker Build provider. 

Register for the June 25 Pulumi and Docker virtual workshop: Automating Docker Image Builds using Pulumi and Docker.

Learn more

Get started with the Pulumi Docker Build provider.

Read the Pulumi Docker Build provider documentation.

Read the Docker Build documentation.

Learn about Docker Build Cloud.

See Docker Build Cloud pricing.

Find out about Docker Core Subscriptions.

Source: https://blog.docker.com/feed/

IP prefix visibility in the Amazon CloudWatch Internet Monitor console

Amazon CloudWatch Internet Monitor, a service for monitoring internet traffic to AWS applications that gives you a global view of traffic patterns and health events, now displays IPv4 prefixes in the console dashboard. With this data in Internet Monitor, you can learn more about application traffic and health events. For example, you can:

View IPv4 prefixes that belong to a client location affected by a health event.
View IPv4 prefixes that belong to a client location.
Filter and search traffic data by the network that an IPv4 prefix or IPv4 address belongs to.

Source: aws.amazon.com

Introducing file commit history in Amazon CodeCatalyst

Today, AWS announces the general availability of file commit history in Amazon CodeCatalyst. Customers can now view a file's Git commit history in the CodeCatalyst console. Amazon CodeCatalyst helps teams plan, code, build, test, and deploy applications on AWS. Viewing the history of commits associated with a file is one way to reduce the effort developers spend trying to understand the history of changes to a codebase.
Source: aws.amazon.com

Amazon Personalize now makes it easier than ever to delete users from datasets

With a new delete API, Amazon Personalize now makes it easier than ever to remove users from datasets. Amazon Personalize uses datasets provided by customers to train individualized personalization models on their behalf. With this new capability, you can delete records about users from your datasets, including user metadata and user interactions, making it easier to manage your data in accordance with your compliance programs and to keep it current as your user base changes. Once deletion is complete, Personalize no longer stores any information about the deleted users and will not consider them during model training. 
Source: aws.amazon.com