Empowering Data-Driven Development: Docker's Collaboration with Snowflake and Docker AI Advancements

Docker, in collaboration with Snowflake, is bringing a new level of developer productivity to teams that pair Docker Desktop with Snowpark Container Services (private preview). At Snowflake BUILD, Docker presented a session showcasing the streamlined process of building, iterating, and efficiently managing data through containerization within Snowflake using Snowpark Container Services.

Watch the session to learn more about how this collaboration helps streamline development and application innovation with Docker, and read on for more details. 

Docker Desktop with Snowpark Container Services helps empower developers, data engineers, and data scientists with the tools and insights needed to seamlessly navigate the intricacies of incorporating data, including AI/ML, into their workflows. Furthermore, the advancements in Docker AI within the development ecosystem promise to elevate GenAI development efforts now and in the future.

Through the collaborative efforts showcased between Docker and Snowflake, we aim to continue supporting and guiding developers, data engineers, and data scientists in leveraging these technologies effectively.

Accelerating deployment of data workloads with Docker and Snowpark

Why is Docker, a containerization platform, collaborating with Snowflake, a data-as-a-service company? Many organizations lack formal coordination between data and engineering teams, meaning every change might have to go through DevOps, slowing project delivery. Docker Desktop and Snowpark Container Services (private preview) improve collaboration between developers and data teams. 

This collaboration allows data and engineering teams to work together, removing barriers to enable:

Ownership by streamlining development and deployment

Independence by removing traditional dependence on engineering stacks 

Efficiency by reducing resources and improving cross-team coordination

With the growing number of applications that rely on data, Docker is invested in ensuring that containerization supports the changing development landscape to provide consistent value within your organization.

Streamlining Snowpark deployments with Docker Desktop 

Docker Desktop provides many benefits to data teams, including streamlining data ingestion and enrichment and reducing the workarounds typically needed when working with a data stack. Watch the video from Snowflake BUILD for a demo showing the power of Docker Desktop and Snowpark Container Services working together. We walk through:

How to create a Docker image using Docker Desktop, helping you drive consistency by encapsulating your code, libraries, dependencies, and configurations in an image.

How to push that image to a registry to make it portable and available to others with the correct permissions.

How to run the container as a job in Snowpark Container Services to help you scale your work with versioning and distributed deployments. 
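Assuming a project with a Dockerfile in the current directory, the three steps above might look like the following sketch. The registry path, database, schema, repository, and compute pool names are hypothetical placeholders; substitute the values for your own Snowflake account and image repository.

```shell
# 1. Build an image that encapsulates code, libraries, dependencies,
#    and configuration.
docker build -t my-data-job:v1 .

# 2. Tag and push the image to your Snowflake image repository
#    (hypothetical org/account/database/schema/repository names).
docker tag my-data-job:v1 \
  myorg-myaccount.registry.snowflakecomputing.com/mydb/myschema/myrepo/my-data-job:v1
docker push \
  myorg-myaccount.registry.snowflakecomputing.com/mydb/myschema/myrepo/my-data-job:v1

# 3. Run the container as a job in Snowpark Container Services. This
#    step is executed from a Snowflake SQL session, not the shell —
#    roughly of the form (see the Snowpark Container Services docs):
#
#    EXECUTE JOB SERVICE IN COMPUTE POOL my_pool
#      NAME = my_job
#      FROM SPECIFICATION $$ ... $$;
```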

Using Docker Desktop with Snowpark Container Services provides an enhanced development experience for data engineers, who can develop in one environment and deploy in another. For example, with Docker Desktop you can build on an Arm64 platform yet deploy to Snowpark, an AMD64 platform. Multi-platform images make this possible, so you can keep a great local development environment and still deploy to Snowpark without any difficulty.
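One way to get that portability, assuming Docker Desktop's bundled Buildx builder, is to build a multi-platform image in a single step; the image name below is a placeholder:

```shell
# Build for both AMD64 (Snowpark) and Arm64 (e.g. local Apple silicon
# development) and push the multi-platform image to a registry in one step.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry.example.com/my-data-job:v1 \
  --push .
```

Building only `--platform linux/amd64` from an Arm64 machine also works, via emulation, if the target registry only needs the Snowpark-compatible variant.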

Boosting developer productivity with Docker AI 

In alignment with Docker’s mission to increase the time developers spend on innovation and decrease the time they spend on everything else, Docker AI assists in streamlining the development lifecycle for both development and data teams. Docker AI, available in early access now, aims to simplify current tasks, boosting developer productivity by offering context-specific, automated guidance. 

When using Snowpark Container Services, once you've built your image, the next step is deploying the project to Snowpark. Leveraging a model trained on Snowpark documentation, Docker AI offers relevant recommendations within your project's context. For example, it autocompletes Dockerfiles with best-practice suggestions and continually updates its recommendations as projects evolve and security measures change. 
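As an illustration (not Docker AI's literal output), the suggestions tend toward common Dockerfile best-practice patterns such as pinned base images, cache-friendly layer ordering, and non-root users:

```dockerfile
# Pin the base image to a specific tag rather than "latest".
FROM python:3.11-slim

# Copy the dependency manifest first so the install layer is cached
# across code-only changes.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Create and switch to a non-root user — a common security best practice.
RUN useradd --create-home appuser
USER appuser

# Copy the application code, owned by the non-root user.
COPY --chown=appuser:appuser . .
CMD ["python", "main.py"]
```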

This marks the initial phase of Docker's effort to help the community simplify work with big data and implement context-specific AI guidance across the software development lifecycle. Despite the rising complexity of projects involving vast data sets, Docker AI provides support, streamlining processes and enhancing your experience throughout the development lifecycle.

Docker AI aims to deliver tailored, automated advice while you edit Dockerfiles or Docker Compose files, debug local docker builds, and run local tests. Docker AI leverages the wealth of knowledge from millions of long-time Docker users to autogenerate best practices and recommend secure, up-to-date images. With Docker AI, developers can spend more time innovating in their applications and less time on tools and infrastructure. Sign up for the Docker AI Early Access Program now.

Improving the collaboration across development and data teams

Our continued investment in Docker Desktop and Docker AI, along with key collaborations such as the one with Snowflake, helps you streamline the process of building, iterating, and efficiently managing data through containerization.

Download Docker Desktop to get started today. Check with your admins — you may be surprised to find out your organization is already using Docker! 

Learn more

Review Snowpark Container Services GitHub documentation.

Follow the Snowflake tutorial to leverage your Snowflake data and build a Docker image. 

Learn more about LLMs and Hugging Face. 

Sign up for the Docker AI Early Access Program.

Source: https://blog.docker.com/feed/

Amazon Redshift Serverless: Announcing Improved Manageability and Usability Features

Today, Amazon Redshift announces enhanced manageability and monitoring features for Amazon Redshift Serverless, including cross-account cross-VPC support, custom domain names (CNAME), snapshot scheduling, cross-region copying (CRC), improved visibility into Serverless billing in the Redshift console, and version tracking. These features give you seamless data access, reliable data protection, and cost-effective operations.
Source: aws.amazon.com

Amazon SageMaker Introduces New Inference Capabilities to Reduce Cost and Latency

We are excited to introduce new capabilities in Amazon SageMaker that help customers reduce model deployment costs by 50% on average and inference latency by 20% on average. Customers can deploy multiple models on the same instance to better utilize the underlying accelerators. SageMaker actively monitors instances that are processing inference requests and intelligently routes requests based on which instances are available.
Source: aws.amazon.com

Vector Engine for Amazon OpenSearch Serverless Now Generally Available

Today, AWS announces the general availability of the vector engine for Amazon OpenSearch Serverless. The vector engine for OpenSearch Serverless is a simple, scalable, high-performance vector database that makes it easier for developers to build machine learning (ML)-augmented search experiences and generative artificial intelligence (AI) applications without having to manage the underlying vector database infrastructure. Developers can rely on the vector engine's cost-effective, secure, and mature serverless platform to move seamlessly from application prototyping to production. 
Source: aws.amazon.com