Amazon EMR Serverless adds support for job run level cost allocation

Amazon EMR Serverless now supports job run-level cost allocation, giving you better visibility into the charges for individual job runs. You can filter and track costs in AWS Cost Explorer and AWS Cost and Usage Reports by specific job run IDs and by the cost allocation tags associated with job runs. Amazon EMR Serverless is a deployment option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers.

Previously, you could assign cost allocation tags only to EMR Serverless applications, so cost attribution stopped at the application level. Now you can assign cost allocation tags to each job run, enabling fine-grained billing attribution at the individual job run level. Job run-level tags also let you track costs by domain within a single application. For example, a single application could run jobs for both the finance and marketing domains, with costs tracked separately for each. Tracking costs per job run makes it easier to benchmark the cost of each run, focus cost optimization efforts more precisely, and gain deeper insight into resource utilization and spending patterns across different jobs and domains.

This feature is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) and China Regions. To learn more, see Enabling Job Level Cost Allocation in the Amazon EMR Serverless User Guide.
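As a rough illustration of what job run-level tagging might look like in practice, here is a minimal boto3 sketch that attaches cost allocation tags when starting a job run. The application ID, role ARN, S3 path, and tag keys are placeholders, and the sketch assumes job-level cost allocation has been enabled as described in the user guide.

```python
import boto3

# Hypothetical client and identifiers; replace with your own values.
emr = boto3.client("emr-serverless", region_name="us-east-1")

response = emr.start_job_run(
    applicationId="00fexampleapp123",  # placeholder application ID
    executionRoleArn="arn:aws:iam::123456789012:role/EMRServerlessJobRole",  # placeholder role
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://example-bucket/scripts/etl_job.py",  # placeholder script location
        }
    },
    # Cost allocation tags attached to this specific job run. Once activated
    # in the Billing console, they can be used to filter costs in Cost
    # Explorer and Cost and Usage Reports.
    tags={
        "Domain": "finance",
        "Team": "data-platform",
    },
)

print("Started job run:", response["jobRunId"])
```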
Source: aws.amazon.com

Announcing larger managed database bundles for Amazon Lightsail

Amazon Lightsail now offers two larger database bundles with up to 8 vCPUs, 32 GB of memory, and 960 GB of SSD storage. The new bundles are available in both standard and high-availability plans, and you can use them to create MySQL and PostgreSQL managed databases.

The larger bundles let you scale your database workloads and run more data-intensive applications in Lightsail. They are well suited to production workloads that need more storage capacity and processing power to handle growing datasets and concurrent connections, such as e-commerce platforms, content management systems, business intelligence applications, and SaaS products.

The new bundles are available in all AWS Regions where Amazon Lightsail is available. For more information on pricing, or to get started with your free trial, see the Amazon Lightsail pricing page.
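For illustration, here is a minimal boto3 sketch that lists the available database bundles and creates a PostgreSQL database on one of the larger plans. The database name, blueprint ID, bundle ID, and username below are placeholders; look up the real IDs with the list calls shown.

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# List the available bundles and pick the ID that matches the new larger plan
# (for example, 8 vCPUs / 32 GB RAM / 960 GB SSD).
for bundle in lightsail.get_relational_database_bundles()["bundles"]:
    print(bundle["bundleId"], bundle["cpuCount"], bundle["ramSizeInGb"], bundle["diskSizeInGb"])

# Create a PostgreSQL database on the chosen bundle. All identifiers here are
# placeholders; valid blueprint IDs come from get_relational_database_blueprints().
lightsail.create_relational_database(
    relationalDatabaseName="analytics-db",
    relationalDatabaseBlueprintId="postgres_16",  # placeholder blueprint ID
    relationalDatabaseBundleId="large_ha_3_0",    # placeholder; use a bundle ID listed above
    masterDatabaseName="analytics",
    masterUsername="dbadmin",
    publiclyAccessible=False,
)
```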
Source: aws.amazon.com

Security Is a Developer Experience Problem, Rooted in Our Foundations

For more than a decade, the industry has tried to improve software security by pushing it closer to developers. We moved scanners into CI, added security checks to pull requests, and asked teams to respond faster to an ever-growing stream of vulnerabilities. And yet, the underlying problems have not gone away.

The issue is not that developers care too little about security. It is that we keep trying to fix security at the edges, instead of fixing the foundations. Hardened container images change that dynamic by reducing attack surface and eliminating much of the low-signal security noise before it ever reaches development teams.

Security Fails When It Becomes Noise

Most developers I know care deeply about building secure software. What they do not care about is security theater.

The way we handle security issues today, especially CVEs, often creates a steady stream of low-signal work for development teams. Alerts fire constantly. Many are technically valid but practically irrelevant. Others ask developers to patch components they did not choose and do not meaningfully control. Over time, this turns security into background noise.

When that happens, the system has already failed. Developers are forced to context switch, teams burn time debating severity scores, and real risk gets buried alongside issues that do not matter. This is not a motivation problem. It is a system design problem.

The industry responded by trying to “shift left” and push security earlier in the development cycle. In practice, this often meant pushing more work onto developers without giving them better defaults or foundations. The result was more toil, more alerts, and more reasons to tune it all out.

Shifting left was the right instinct but the wrong execution. The goal should not be making developers do more security work. It should be making secure choices the painless, obvious default so developers do less security work while achieving better outcomes.

Why Large Images Were the Default

To understand how we got here, it helps to be honest about why most teams start with large, generic base images.

When Docker launched in 2013, containers were unfamiliar. Developers reached for what they knew: full Linux distributions and familiar Debian or Ubuntu environments with all the debugging tools they relied on. 

Large images that had everything were a rational default. This approach optimized for ease and flexibility. When everything you might ever need is already present, development friction goes down. Builds fail less often. Debugging is simpler. Unknown dependencies are less likely to surprise you at the worst possible time.

For a long time, doing something more secure required real investment. Teams needed a platform group that could design, harden, and continuously maintain custom base images. That work had to compete with product features and infrastructure priorities. Most organizations never made that tradeoff, and understandably so.

So the industry converged on a familiar pattern. Start with a big image. Ship faster in the short term. Deal with the consequences later.

Those consequences compound. Large images dramatically increase the attack surface. They accumulate stale dependencies. They generate endless CVEs that developers are asked to triage long after the original choice was made. What began as a convenience slowly turns into persistent security and operational drag that slows development and delays shipping software.

Secure Foundations Can Improve Developer Experience

There is a widely held belief that better security requires worse developer experience. In practice, the opposite is often true.

Starting from a secure, purpose-built foundation, like Docker Hardened Images, reduces complexity rather than adding to it. Smaller images contain fewer packages, which means fewer vulnerabilities and fewer alerts. Developers spend less time chasing low-impact CVEs and more time building actual product.

The key is that security is built into the foundation itself. Image contents are explicit and reproducible. Supply chain metadata like signatures, SBOMs, and provenance are part of the image by default, not additional steps developers have to wire together themselves. At the same time, these foundations are easy to customize securely. Teams can extend or tweak their images without undoing the hardening, thanks to predictable layering and supported customization patterns. This eliminates entire categories of hidden dependencies and security toil that would otherwise fall on individual teams.

There are also tangible performance benefits. Smaller images pull faster, build faster, and deploy faster. In larger environments, these gains add up quickly.

Importantly, this does not require sacrificing flexibility. Developers can still use rich build environments and familiar tools, while shipping minimal, hardened runtime images into production.

This is one of the rare cases where improving security directly improves developer experience. The tradeoff we have accepted for years is not inevitable.

What Changes When Secure Foundations Are the Default

When secure foundations and hardened images become the default starting point, the system behaves differently. Developers keep using the same Docker workflows they already know. The difference is the base they start from. 

Security hardening, patching, and supply chain hygiene are handled once in the foundation instead of repeatedly in every service. Secure foundations are not limited to operating system base images. The same principles apply to the software teams actually build on top of, such as databases, runtimes, and common services. Starting from a hardened MySQL or application image removes an entire class of security and maintenance work before a single line of application code is written.

This is the problem Docker Hardened Images are designed to address. The same hardening principles are applied consistently across widely used open source container images, not just at the operating system layer, so teams can start from secure defaults wherever their applications actually begin. The goal is not to introduce another security workflow or tool. It is to give developers better building blocks from day one.

Because the foundation is maintained by experts, teams see fewer interruptions. Fewer emergency rebuilds. Fewer organization-wide scrambles when a widely exploited vulnerability appears. Security teams can focus on adoption and posture instead of asking dozens of teams to solve the same problem independently.

The result is less security toil and more time spent on product work. That is a win for developers, security teams, and the business.

Build on Better Defaults

For years, we have tried to improve security by asking developers to do more. Patch faster. Respond to more alerts. Learn more tools. That approach does not scale.

Security scales when defaults are strong. When foundations are designed to be secure and maintained over time. When developers are not forced to constantly compensate for decisions that were made far below their code.

If we want better security outcomes without slowing teams down, we should start where software actually starts. That requires secure foundations, like hardened images, that are safe by default. With better foundations, security becomes quieter, development becomes smoother, and the entire system works the way it should.

That is the bar we should be aiming for.
Source: https://blog.docker.com/feed/

Amazon RDS for SQL Server now supports cross-region read replica in additional AWS Regions

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports setting up cross-region read replicas in 16 additional AWS Regions. Cross-region read replicas let customers place a replica database for read-only applications closer to users in a different Region and scale out read-only workloads. Because a read replica can be “promoted” to a standalone production database, cross-region read replicas can also be used for disaster recovery in case of regional failures. Customers can create up to fifteen read replicas in the same Region as the primary database instance or in a different Region.

This launch adds cross-region read replica support for RDS for SQL Server in the following AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Taipei), Asia Pacific (Thailand), Canada West (Calgary), Europe (Milan), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), Mexico (Central), Middle East (Bahrain), and Middle East (UAE).

To get started, visit the Amazon RDS SQL Server User Guide.
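As a sketch of how a cross-region read replica might be created with boto3, the call below is made in the destination Region and references the primary instance by its ARN. The identifiers, account number, and instance class are placeholders; encrypted sources may also require a KMS key in the destination Region.

```python
import boto3

# Call RDS in the *destination* Region (here Asia Pacific (Jakarta), one of the
# newly supported Regions) and reference the primary instance by its ARN.
rds = boto3.client("rds", region_name="ap-southeast-3")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-replica-jakarta",  # placeholder replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:sqlserver-primary",  # placeholder primary ARN
    DBInstanceClass="db.m6i.xlarge",  # placeholder instance class
    SourceRegion="us-east-1",  # lets boto3 generate the presigned URL needed for encrypted cross-region sources
)
```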
Source: aws.amazon.com