Amazon Nova now supports the customization of content moderation settings

Amazon Nova models now support customization of content moderation settings for approved business use cases that require processing or generating sensitive content. Organizations with approved use cases can adjust content moderation across four domains: safety, sensitive content, fairness, and security, tailoring the specific controls relevant to their business requirements. Amazon Nova continues to enforce essential, non-configurable controls to ensure responsible use of AI, such as controls that prevent harm to children and preserve privacy. Customization of content moderation settings is available for Amazon Nova Lite and Amazon Nova Pro in the US East (N. Virginia) Region. To learn more about Amazon Nova, visit the Amazon Nova product page; to learn about responsible use of AI with Amazon Nova, see the AWS AI Service Cards or the User Guide. To find out whether your business use case qualifies for customized content moderation settings, contact your AWS Account Manager.
Source: aws.amazon.com

AWS announces Nitro Enclaves are now available in all AWS Regions

AWS Nitro Enclaves is an Amazon EC2 capability that enables customers to create isolated compute environments (enclaves) to further protect and securely process highly sensitive data within their EC2 instances. Nitro Enclaves helps customers reduce the attack surface for their most sensitive data processing applications. There is no additional cost beyond the Amazon EC2 instances and any other AWS services used with Nitro Enclaves. Nitro Enclaves is now available across all AWS Regions, expanding to include new regions in Asia Pacific (New Zealand, Thailand, Jakarta, Hyderabad, Malaysia, Melbourne, and Taipei), Europe (Spain and Zurich), Middle East (UAE and Tel Aviv), and North America (Central Mexico and Calgary). To learn more about AWS Nitro Enclaves and how to get started, visit the AWS Nitro Enclaves page.
Source: aws.amazon.com

Introducing a Richer “docker model run” Experience

The command line is where developers live and breathe. A powerful and intuitive CLI can make the difference between a frustrating task and a joyful one. That’s why we’re excited to announce a major upgrade to the interactive chat experience in Docker Model Runner, our tool for running AI workloads locally.

We’ve rolled out a new, fully-featured interactive prompt for the “docker model run” command that brings a host of quality-of-life improvements, making it faster, easier, and more intuitive to chat with your local models. Let’s dive into what’s new.

A True Readline-Style Prompt with Keyboard Shortcuts

The most significant change is the move to a new readline-like implementation. If you spend any time in a modern terminal, you’ll feel right at home. This brings advanced keyboard support for navigating and editing your prompts right on the command line.

You can now use familiar keyboard shortcuts to work with your text more efficiently. Here are some of the new key bindings you can start using immediately:

Move to Start/End: Use “Ctrl + a” to jump to the beginning of the line and “Ctrl + e” to jump to the end.

Word-by-Word Navigation: Quickly move through your prompt using “Alt + b” to go back one word and “Alt + f” to go forward one word.

Efficient Deletions:

“Ctrl + k”: Delete everything from the cursor to the end of the line.

“Ctrl + u”: Delete everything from the cursor to the beginning of the line.

“Ctrl + w”: Delete the word immediately before the cursor.

Screen and Session Management:

“Ctrl + l”: Clear the terminal screen to reduce clutter.

“Ctrl + d”: Exit the chat session, just like the /bye command.

Take Back Control with “Ctrl + c”

We’ve all been there: you send a prompt to a model, and it starts generating a long, incorrect, or unwanted response. Previously, you had to wait for it to finish. Not anymore.

You can now press “Ctrl + c” at any time while the model is generating a response to stop it immediately. We’ve implemented this using context cancellation in our client, which sends a signal to halt the streaming response from the model. This gives you full control over the interaction, saving you time and frustration. The same behavior is available in the basic interactive mode for users who aren’t in a standard terminal environment: there, “Ctrl + c” stops the current response without exiting, while “Ctrl + d” exits “docker model run”.

Improved Multi-line and History Support

Working with multi-line prompts, like pasting in code snippets, is now much smoother. The prompt intelligently changes from > to a more subtle . to indicate that you’re in multi-line mode.

Furthermore, the new prompt includes command history. Simply use the Up and Down arrow keys to cycle through your previous prompts, making it easy to experiment, correct mistakes, or ask follow-up questions without retyping everything. For privacy or scripting purposes, you can disable history writing by setting the DOCKER_MODEL_NOHISTORY environment variable.

Get Started Today!

These improvements make “docker model run” a more powerful and pleasant tool for all your local AI experiments. Pull a model from Docker Hub and start a chat to experience the new prompt yourself:

$ docker model run ai/gemma3
> Tell me a joke about docker containers.
Why did the Docker container break up with the Linux host?

… Because it said, "I need some space!"

Would you like to hear another one?

Help Us Build the Future of Local AI

Docker Model Runner is an open-source project, and we’re building it for the community. These updates are a direct result of our effort to create the best possible experience for developers working with AI.

We invite you to get involved!

Star, fork, and contribute to the project on GitHub: https://github.com/docker/model-runner

Report issues and suggest new features you’d like to see.

Share your feedback with us and the community.

Your contributions help shape the future of local AI development and make powerful tools accessible to everyone. We can’t wait to see what you build!

Learn more

Check out the Docker Model Runner General Availability announcement

Visit our Model Runner GitHub repo! Docker Model Runner is open-source, and we welcome collaboration and contributions from the community!

Get started with Docker Model Runner with a simple hello GenAI application

Source: https://blog.docker.com/feed/

Amazon U7i instances now available in Europe (London) Region

Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are available in the Europe (London) region. U7i-6tb instances are part of the AWS 7th generation and are powered by custom 4th Generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

Amazon CloudWatch Database Insights now provides on-demand analysis for RDS for SQL Server

Amazon CloudWatch Database Insights expands the availability of its on-demand analysis experience to the RDS for SQL Server database engine. CloudWatch Database Insights is a monitoring and diagnostics solution that helps database administrators and developers optimize database performance by providing comprehensive visibility into database metrics, query analysis, and resource utilization patterns. This feature leverages machine learning models to help identify performance bottlenecks during the selected time period and gives advice on what to do next. Previously, database administrators had to manually analyze performance data, correlate metrics, and investigate root causes, a process that was time-consuming and required deep database expertise. With this launch, you can now analyze database performance monitoring data for any time period with automated intelligence. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution. This automated analysis and recommendation system reduces mean time to diagnosis from hours to minutes. You can get started with this feature by enabling the Advanced mode of CloudWatch Database Insights on your RDS for SQL Server databases using the RDS service console, AWS APIs, the AWS SDK, or AWS CloudFormation. Please refer to the RDS documentation and Aurora documentation for information regarding the availability of Database Insights across different regions, engines, and instance classes.
Source: aws.amazon.com