Amazon VPC Route Server now available in new regions

Amazon VPC Route Server is now available in 16 new regions in addition to the 14 existing ones. VPC Route Server simplifies dynamic routing between virtual appliances in your Amazon VPC. It allows you to advertise routing information through Border Gateway Protocol (BGP) from virtual appliances and dynamically update the VPC route tables associated with subnets and internet gateways. With this launch, Amazon VPC Route Server is available in 30 AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), US West (N. California), Canada West (Calgary), Asia Pacific (Malaysia), Europe (Milan), Europe (Paris), Asia Pacific (Sydney), Europe (London), Canada (Central), Mexico (Central), South America (Sao Paulo), Asia Pacific (Seoul), Europe (Zurich), Europe (Stockholm), Middle East (UAE), Israel (Tel Aviv), Asia Pacific (Taipei), Asia Pacific (New Zealand), Asia Pacific (Melbourne), Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Osaka), and Asia Pacific (Thailand). To learn more about Amazon VPC Route Server, visit this page.
Source: aws.amazon.com

Safer Docker Hub Pulls via a Sonatype-Protected Proxy

Why a “protected repo”?

Modern teams depend on public container images, yet most environments lack a single, auditable control point for what gets pulled and when. This often leads to three operational challenges:

Inconsistent or improvised base images that drift across teams and pipelines.

Exposure to new CVEs when tags remain unchanged but upstream content does not.

Unreliable workflows due to rate limiting, throttling, or pull interruptions.

A protected repository addresses these challenges by evaluating images at the boundary between public sources and internal systems, ensuring only trusted content is available to the build process. Routing upstream pulls through a Nexus Repository Docker proxy that authenticates to Docker Hub and caches approved layers creates a security and reliability checkpoint. Repository Firewall inspects image layers and their components against configured policies and enforces the appropriate action, such as allow, quarantine, or block, based on the findings. This gives teams a standard, dependable entry point for base images. Approved content is cached to accelerate subsequent pulls, while malware and high-severity vulnerabilities are blocked before any layer reaches the developer’s environment. Combining this workflow with curated sources such as Docker Official Images or Docker Hardened Images provides a stable, vetted baseline for the entire organization.

Docker Hub authentication (PAT/OAT) quick setup

Before configuring a Nexus Docker proxy, set up authenticated access to Docker Hub. Authentication avoids the stricter rate limits applied to anonymous pulls and ensures that shared systems do not rely on personal developer credentials. Docker Hub supports two types of access tokens, and for proxies or CI/CD systems the recommended option is an Organization Access Token (OAT).

Choose the appropriate token type

Personal Access Token (PAT): Use a PAT when authentication is tied to an individual account, such as local development or small teams.

Tied to a single user account

Required for CLI logins when the user enables two-factor authentication

Not recommended for shared infrastructure

Organization Access Token (OAT) (recommended): Use an OAT when authentication is needed for systems that serve multiple users or teams.

Associated with an organization rather than an individual

Suitable for CI/CD systems, build infrastructure, and Nexus Docker proxies

Compatible with SSO and 2FA enforcement

Supports granular permissions and revocation

Requires a Docker Hub Team or Business plan

Create an access token

To create a Personal Access Token (PAT):

Open Docker Hub account settings (click your Docker Hub avatar in the top-right corner).

Select “Personal access tokens”.

Click on “Generate new token”.

Define token Name, Expiration and Access permissions.

Choose “Generate” and save the value immediately, as it cannot be viewed again.
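
Once generated, the PAT is used in place of your account password when logging in from the CLI. A minimal usage sketch, assuming the token has been stored in an environment variable named DOCKER_PAT (a placeholder) and your own Docker Hub username replaces the one shown:

echo "$DOCKER_PAT" | docker login -u your-dockerhub-username --password-stdin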

To create an Organization Access Token (OAT):

Sign in to Docker Home and select your organization.

Select Admin Console, then Access tokens.

Select Generate access token.

Expand the Repository drop-down and assign only the required permissions, typically read/pull for proxies or CI systems.

Select Generate token. Copy the token that appears on the screen and save it. You won’t be able to retrieve the token once you exit the screen.
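
Usage is the same as with a PAT, except that the organization name is typically supplied as the username when authenticating with an OAT. A minimal sketch for a CI runner or the host running the Nexus proxy, assuming the token is stored in a secret named ORG_ACCESS_TOKEN (both names are placeholders):

echo "$ORG_ACCESS_TOKEN" | docker login -u your-org-name --password-stdin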

Recommended practices

Scope tokens to the minimum necessary permissions

Rotate tokens periodically

Revoke tokens immediately if they are exposed

Monitor last-used timestamps to confirm expected usage patterns

Step-by-step: create a Docker Hub proxy

The next step after configuring authentication is to make your protected repo operational by turning Nexus into your organization’s Docker Hub proxy. A Docker proxy repository in Nexus Repository provides a single, policy-enforced registry endpoint that performs upstream pulls on behalf of developers and CI, caches layers locally for faster and more reliable builds, and centralizes access and audit trails so teams can manage credentials and image usage from one place.

To create the proxy:

As an administrator, navigate to the Settings view (gear icon).

Open Repositories and select Create repository.

Choose docker (proxy) as the repository type.

Configure the following settings:

Remote storage: https://registry-1.docker.io

Docker V1 API: Enabled

Index type: Select “Use Docker Hub”

Blob store and network settings as appropriate for your environment

Save the repository to finalize the configuration.
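
If you prefer to script the setup, Nexus Repository 3 also exposes a REST endpoint for creating Docker proxy repositories. The sketch below is illustrative only: the Nexus hostname, repository name, blob store, and credentials are placeholders, and payload field names may differ slightly between Nexus versions, so verify them against your instance’s API reference.

curl -u admin:your-admin-password -X POST \
  "https://nexus.company.internal/service/rest/v1/repositories/docker/proxy" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "docker-hub-proxy",
    "online": true,
    "storage": { "blobStoreName": "default", "strictContentTypeValidation": true },
    "proxy": { "remoteUrl": "https://registry-1.docker.io", "contentMaxAge": 1440, "metadataMaxAge": 1440 },
    "negativeCache": { "enabled": true, "timeToLive": 1440 },
    "httpClient": {
      "blocked": false,
      "autoBlock": true,
      "authentication": { "type": "username", "username": "your-org-or-username", "password": "your-docker-hub-token" }
    },
    "docker": { "v1Enabled": true, "forceBasicAuth": true },
    "dockerProxy": { "indexType": "HUB" }
  }'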

Provide a Clean Pull Endpoint

To keep developer workflows simple, expose the proxy at a stable, organization-wide hostname. This avoids custom ports or per-team configurations and makes the proxy a transparent drop-in replacement for direct Docker Hub pulls. Common examples include:

docker-proxy.company.com

hub.company.internal

Use a reverse proxy or ingress controller to route this hostname to the Nexus proxy repository.
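
A minimal nginx sketch of such a route, assuming the Nexus Docker connector for this repository listens on port 8083, TLS terminates at the reverse proxy, and the hostnames and certificate paths shown are placeholders:

server {
    listen 443 ssl;
    server_name docker-proxy.company.com;

    ssl_certificate     /etc/nginx/tls/docker-proxy.crt;
    ssl_certificate_key /etc/nginx/tls/docker-proxy.key;

    # Image layers can be large; do not cap request/response sizes
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        # 8083 is the assumed HTTP connector port of the Nexus Docker proxy repository
        proxy_pass http://nexus.company.internal:8083;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 900;
    }
}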

Validate Connectivity

Once the proxy is exposed, verify that it responds correctly and can authenticate to Docker Hub. Run:

docker login docker-proxy.company.com
docker pull docker-proxy.company.com/dhi/node:24

A successful pull confirms that the proxy is functioning correctly, upstream connectivity is working, and authenticated access is in place.
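
You can also probe the Docker Registry V2 API directly; an HTTP 200 (or 401 when anonymous access is disabled) shows the proxy endpoint is answering. The credentials and hostname below are placeholders:

curl -i -u your-username:your-token https://docker-proxy.company.com/v2/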

Turn on Repository Firewall for containers

Once the Docker proxy is in place, enable Repository Firewall so images are inspected before they reach internal systems. Repository Firewall enforces policy at download time, stopping malware and high-severity vulnerabilities at the registry edge, reducing the blast radius of newly disclosed issues and cutting remediation work for engineering teams.

To enable Firewall for the proxy repository:

As an administrator, navigate to the Settings view (gear icon).

Navigate to Capabilities under the System menu.

Create a ‘Firewall Audit and Quarantine’ capability for your Docker proxy repository.

Configure your policies to quarantine new violating components and protect against introducing risk.

Inform your development teams of the change to set expectations.

Understanding “Quarantine” vs. “Audit”

Repository Firewall evaluates each image as it is requested:

Quarantine – Images that violate a policy are blocked and isolated. They do not reach the developer or CI system. The user receives clear feedback indicating the reason for the failure.

Audit – Images that pass the policies are served normally and cached. This improves performance and makes the proxy a consistent, reliable source of trusted base images.

Enabling Repository Firewall gives you immediate, download-time protection and the telemetry to operate it confidently. Start with conservative policies (quarantine on malware, and on CVSS ≥ 8), monitor violations and cache hit rate, tune thresholds based on real-world telemetry, and move to stricter block enforcement once false positives are resolved and teams are comfortable with the workflow.

What a blocked pull looks like

After enabling Repository Firewall and configuring your baseline policies, any pull that fails those checks is denied at the registry edge and no image layers are downloaded. By default, Nexus returns a non-descriptive 404 to avoid exposing policy or vulnerability details, though you can surface a short, internal-facing failure message. As an example, if Firewall is enabled and your CVSS threshold policy is configured correctly, the following pull should fail with a 404 message.

docker pull docker-proxy.company.com/library/node:20
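
The exact wording varies with the Docker client version, but because the proxy answers the manifest request with a 404, the failure typically surfaces as something like:

Error response from daemon: manifest for docker-proxy.company.com/library/node:20 not found: manifest unknown: manifest unknown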

This confirms that:

The request is passing through the proxy.

Repository Firewall is inspecting the image metadata.

Policy violations are blocked before any image layers are downloaded.

In the Firewall UI, you can open the proxy repository and view the recorded violations. The details can include detected CVEs, severity information, and the policy that triggered the denial. This provides administrators with visibility and confirms that enforcement is functioning as expected.

Additionally, the Quarantined Containers dashboard lists every image that Repository Firewall has blocked, showing the triggering policy and severity so teams can triage with full context. Administrators use this view to review evidence, add remediation notes, and release or delete quarantined items; note that malware is quarantined by default while other violations are quarantined only when their rules are set to Fail at the Proxy stage.

Fix forward: choose an approved base and succeed

Once policy enforcement is validated, the next step is to pull a base image that complies with your organization’s security rules. This shows what the normal developer experience looks like when using approved and trusted content.

Pull a compliant tag through the proxy:

docker pull docker-proxy.company.com/dhi/node:24

This request passes the Repository Firewall checks, and the image is pulled successfully. The proxy caches each layer locally so that future pulls are faster and no longer affected by upstream rate limits or registry availability. If you repeat the pull, the second request is noticeably quicker because it is served directly from the cache. This illustrates the everyday workflow developers should expect: trusted images, predictable performance, and fewer interruptions.
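
A quick way to observe the caching effect (the local copy is removed first so the repeat pull goes back to the proxy, which now serves the layers from its own cache rather than from Docker Hub):

docker image rm docker-proxy.company.com/dhi/node:24
time docker pull docker-proxy.company.com/dhi/node:24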

Get started: protect your Docker pulls

A Sonatype-protected Docker proxy gives developers one policy-compliant registry endpoint for image pulls. Layers are cached for speed, policy violations surface with actionable guidance, and teams work with vetted base images using the same Docker CLI workflows they already rely on. When paired with trusted sources such as Docker Hardened Images, this pattern delivers predictable baselines with minimal developer friction. Ready to try this pattern? Check the following pages:

Sonatype Nexus Repository basic documentation

Integration with Docker Hub

Register for Nexus Repository trial here

Source: https://blog.docker.com/feed/

Amazon Connect makes it easier to manage recurring overrides for hours of operation

Amazon Connect now makes it easier to manage contact center operating hours for recurring events like holidays, maintenance windows, and promotional periods, with a visual calendar that provides at-a-glance visibility by day, month, or year. You can set up recurring overrides that automatically take effect weekly, monthly, or every other Friday, and use them to provide customers with personalized experiences, all without having to manually revisit configurations. For example, every January 1st you can automatically greet customers with “Happy New Year!” and route them to a special holiday message before checking if agents are available, then on January 2nd your contact center automatically returns to normal operations. These additional hours of operation override capabilities are available in all AWS regions where Amazon Connect is available and offer public API and AWS CloudFormation support. To learn more, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, please visit the Amazon Connect website.
Source: aws.amazon.com

Amazon VPC IPAM policies now support RDS and Application Load Balancers

Amazon Virtual Private Cloud (VPC) IP Address Manager (IPAM) now supports policies for Amazon Relational Database Service (RDS) instances and Application Load Balancers (ALB). This feature enables IP administrators to centrally configure and enforce IP allocation strategies for these resources, improving operational posture and simplifying network and security management. Using IPAM policies, IP administrators can centrally define public IP allocation rules for AWS resources, such as RDS instances, Application Load Balancers and Network Address Translation (NAT) Gateways when used in regional availability mode, and Elastic IP addresses. The IP allocation policy configured centrally cannot be superseded by individual application teams, ensuring compliance at all times. Before this feature, IP administrators had to educate database administrators and application developers about IP allocation requirements for RDS instances and Application Load Balancers, and rely on them to always comply with best practices. Now, you can add IP-based filters for RDS and ALB traffic in your networking and security constructs like access control lists, route tables, security groups, and firewalls, with confidence that public IPv4 address assignments to these resources always come from specific IPAM pools. The feature is available in all AWS commercial regions and the AWS GovCloud (US) Regions, in both Free Tier and Advanced Tier of VPC IPAM. When used with the Advanced Tier of VPC IPAM, customers can set policies across AWS accounts and AWS regions. To get started please see the IPAM policies documentation page. To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.
Source: aws.amazon.com

AWS IoT Device Management launches Wi-Fi Simple Setup for managed integrations

AWS IoT Device Management now offers Wi-Fi Simple Setup (WSS) for managed integrations, enabling developers to implement simplified Wi-Fi provisioning in Internet of Things (IoT) solutions. With WSS, developers can now integrate QR code scanning functionality that empowers end users to connect their Wi-Fi-enabled devices using simple bar code scans, reducing device setup time and minimizing the need for technical support compared to manual configurations. The WSS capability operates through the managed integrations feature of AWS IoT Device Management. Managed integrations enables developers to control and manage devices across different vendors and connectivity protocols, while WSS helps streamline the device onboarding process. Once users securely store their Wi-Fi credentials in managed integrations, new device setup becomes nearly automatic. Users simply power on their new IoT device and scan its QR code using the solution provider’s mobile app. The new device discovers and connects to a hidden network broadcast by the IoT hub, which securely transmits the user’s pre-stored Wi-Fi credentials to complete the onboarding process. This creates a near zero-touch experience for end users to securely and conveniently onboard Wi-Fi-connected devices into managed integrations-based IoT solutions. The managed integrations feature is available in Canada (Central) and Europe (Ireland). To learn more, refer to the developer guide and get started on the AWS IoT console.
Source: aws.amazon.com