Mirantis is Now a CVE Numbering Authority

We’re pleased to announce that as of January 2022, the Common Vulnerabilities and Exposures (CVE) Program has designated Mirantis a CVE Numbering Authority (CNA). The CVE Program, sponsored by the U.S. Department of Homeland Security, identifies and catalogs vulnerabilities, fostering a more widespread and standardized understanding of the cybersecurity environment. As a CVE Numbering Authority, the … Continued
Quelle: Mirantis

Introducing Compute Optimized VMs powered by AMD EPYC processors

Over the last six months, we launched 3rd Gen AMD EPYC™ CPUs (formerly code-named “Milan”) across our Compute Engine virtual machine (VM) families. We introduced the Tau VM family, targeting scale-out workloads; Tau VMs lead any major provider available today in both performance and workload total cost of ownership (TCO). We also refreshed our general-purpose N2D instances with 3rd Gen AMD EPYC processors, providing a 30% boost in price-performance. Today, we’re excited to announce the General Availability of the newest instance series in our Compute Optimized family, C2D, also powered by 3rd Gen AMD EPYC processors.

“AMD EPYC processors continue to showcase their capabilities for HPC and compute-focused workloads. Whether that’s running drug simulations for the latest vaccines, exploring the cosmos, or helping design critical hardware and electronics for the future of the industry,” said Lynn Comp, corporate vice president, Cloud Business, AMD. “The Google Cloud C2D instances with AMD EPYC processors show the continued growth of the AMD and Google Cloud collaboration, by now offering some of the highest performance instances for demanding, performance-intensive workloads.”

New larger machine shapes for the Compute Optimized family

C2D instances take advantage of advances in processor architecture from the latest generation AMD EPYC™ CPUs, including the “Zen 3” core. C2D supports Persistent Disks, Advanced Networking, Compact Placement Policies, and soon-to-follow Sole Tenant nodes. Instances are configurable with up to 112 vCPUs (56 cores), 896 GB of memory, and 3 TB of Local SSD. C2D is available in standard, high-cpu, and high-mem variants, each with seven machine types for an optimal memory-to-core ratio, to better align with your workload.

Improved performance for a wide variety of workloads

The Compute Optimized VM family is ideal for customers with performance-intensive workloads.
C2D instances provide the largest VM sizes within the Compute Optimized VM family and are best-suited for memory-bound workloads such as high-performance databases, gaming, and high-performance computing (HPC) workloads such as electronic design automation (EDA) and computational fluid dynamics (CFD). C2D high-cpu and standard instances serve existing compute-intensive workloads, including high-performance web servers, media transcoding, and AAA gaming. C2D high-mem machine configurations are well suited for workloads such as HPC and EDA that require higher memory configurations. For optimal HPC workload performance, check out Google’s best practices for running tightly-coupled HPC applications on Compute Engine.

Performance report

We’ve illustrated below how C2D with 3rd Gen EPYC compares against N2D with 2nd Gen EPYC (formerly code-named “Rome”) in GCP’s preferred set of benchmarks for compute-intensive performance, media transcoding, and gaming. We worked with AMD engineers to benchmark some key applications in the HPC industry. The improvements in the Compute Optimized family are clear when C2D is compared directly to AMD’s previous generation of EPYC processors, specifically the n2d-standard-128 machine shape, the closest to C2D’s 112 vCPUs. We first compare performance on industry-standard measures of memory bandwidth (STREAM Triad) and floating-point performance (HPL).

Compared to the N2D VM’s baseline performance, the C2D’s 3rd Gen EPYC processor improvements, including larger L3 cache per core and full NUMA exposure, have a direct benefit on memory performance. This is empirically observed in the 30% improved STREAM Triad results. C2D’s floating-point improvements can also be seen in the 7% performance increase in the HPL results, despite being run with 12.5% fewer cores than the previous-generation EPYC processor.
Looking at application benchmarks across some key areas of focus in HPC, we can see that C2D VMs provide material gains on representative benchmarks in areas such as weather forecasting (WRF CONUS 2.5km), molecular dynamics (NAMD), and CFD (OpenFOAM).

What customers are saying

Not only is the c2d-standard-112 machine shape faster overall in the above workloads, but it’s also ~6% cheaper than the baseline n2d-standard-128 machine shape. It’s no wonder that customers are choosing it for their memory-intensive and HPC workloads. Here’s a sampling.

AirShaper is a cloud-based CFD platform that helps designers and engineers easily run aerodynamic simulations to improve the performance and efficiency of cars, drones, motorbikes — even athletes themselves.

“Getting the best performance helps us drastically reduce run times, improving user experience and cutting costs at the same time. By running our CFD simulations on C2D, we’ve been able to reduce our costs by almost 50% and reduce simulation times by 30% compared to previous-generation high-performance computing instances. Also, compared to our on-prem instances we’ve been able to reduce our simulation times by more than a factor of three.” – Wouter Remmerie, CEO, AirShaper

Clutch’s Integrated Customer Data and Marketing platform delivers customer intelligence and personalized engagements for brands to identify, understand, and motivate each segment of their customer base. Clutch offers solutions for CDP, Loyalty, Offer Management, Marketing Orchestration, and Stored Value that use embedded machine learning to increase the lifetime value of each customer.

“We moved our compute- and memory-intensive Data Analytics platform to Compute Optimized on AMD EPYC Milan instances.
The C2D instances provide a sweet spot of memory and CPU performance.” – Ed Dunkelberger, SVP Technology

Google Kubernetes Engine support

Google Kubernetes Engine (GKE) is the leading platform for organizations looking for advanced container orchestration, delivering the highest levels of reliability, security, and scalability. GKE supports C2D VMs, helping you get the most out of your containerized workloads. You can add C2D 3rd Gen EPYC CPU-based VMs to your GKE clusters by choosing the C2D machine type in your GKE node pools.

Confidential Computing (coming soon)

Confidential Computing is an industry-wide effort to protect data in use, including encrypting data in memory while it’s being processed. With Confidential Computing, you can run your most sensitive applications and services on C2D VMs. We’re committed to delivering a portfolio of Confidential Computing VM instances and services such as GKE and Dataproc using the AMD Secure Encrypted Virtualization (SEV) security feature. We’ll support SEV on this latest generation of AMD EPYC™ processors in the near term and plan to add more security capabilities in the future.

Get started with C2D today

C2D instances are available today in regions around the globe: us-central1 (Iowa), asia-southeast1 (Singapore), us-east1 (South Carolina), us-east4 (Northern Virginia), asia-east1 (Taiwan), and europe-west4 (Netherlands), with additional regions to follow in the coming months. C2D instances are available on-demand, as Spot VMs, and via reservations. You can also take advantage of further cost savings by purchasing Committed Use Discounts (CUDs) in one- and three-year terms. To start using C2D instances, simply choose the C2D option when creating a new VM or GKE node in the Google Cloud Console.

Related article: Compute Engine explained: Choosing the right machine family and type (an overview of Google Compute Engine machine families and machine types).
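As a rough sketch of the "choose the C2D option" step from the CLI, creating a C2D VM might look like the following; the instance name, zone, and machine shape here are illustrative assumptions, not recommendations:

```shell
# Create a C2D standard VM with 8 vCPUs in one of the launch regions.
# Instance name and zone are placeholders; pick a shape that fits your workload.
gcloud compute instances create my-c2d-vm \
    --zone=us-central1-a \
    --machine-type=c2d-standard-8
```

The same machine type can be selected for a GKE node pool with the `--machine-type` flag of `gcloud container node-pools create`.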
Quelle: Google Cloud Platform

Build, deploy, and scale ML models faster with Vertex AI’s new training features

Vertex AI includes over a dozen powerful MLOps tools in one unified interface, so you can build, deploy, and scale ML models faster. We’re constantly updating these tools, and we recently enhanced Vertex AI Training with an improved Local Mode to speed up your debugging process and Auto-Container Packaging to simplify cloud job submissions. In this article, we’ll look at these updates and how you can use them to accelerate your model training workflow.

Debugging is an inherently repetitive process with small code change iterations. Vertex AI Training is a managed cloud environment that spins up VMs, loads dependencies, brings in data, executes code, and tears down the cluster for you. That’s a lot of overhead to test simple code changes, which can greatly slow down your debugging process. Before submitting a cloud job, it’s common for developers to first test code locally.

Now, with Vertex AI Training’s improved Local Mode, you can iterate and test your work locally on a small sample data set without waiting for the full Cloud VM lifecycle. This is a friendly and fast way to debug code before running it at cloud scale. By leveraging the environment consistency made possible by Docker containers, Local Mode lets you submit your code as a local run with the expectation that it will be processed in an environment similar to the one executing a cloud job. This results in greater reliability and reproducibility. With this new capability, you can debug simple runtime errors faster, since you don’t need to submit the job to the cloud and wait for the VM cluster lifecycle overhead. Once you have set up the environment, you can launch a local run with gcloud.

Once you are ready to run your code at cloud scale, Auto-Container Packaging simplifies the cloud job submission process. To run a training application, you need to upload your code and any dependencies.
Previously this process took three steps:

1. Build the Docker container locally.
2. Push the built container to a container repository.
3. Create a Vertex AI Training job.

With Auto-Container Packaging, that three-step process is brought down to a single create step. Additionally, even if you are not familiar with Docker, Auto-Container Packaging lets you take advantage of the consistency and reproducibility benefits of containerization.

These new Vertex AI Training features further simplify and speed up your model training workflow. Local Mode helps you iterate faster with small code changes to quickly debug runtime errors. Auto-Container Packaging reduces the steps it takes to submit your local Python code as a scaled-up cloud job.

You can try this codelab to gain hands-on experience with these features. To learn more about the improved Local Mode, visit our Local Mode documentation guide. Auto-Container Packaging documentation can be found on the Create a Custom Job documentation page under “gcloud.” To learn about Vertex AI, check out this blog post from our developer advocates.

Related article: Bio-pharma organizations can now leverage the groundbreaking protein folding system, AlphaFold, with Vertex AI (how to run DeepMind’s AlphaFold on Google Cloud’s Vertex AI).
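The local run and the single-step cloud submission described above can be sketched with the gcloud CLI. The executor image URI, script name, region, and machine type below are illustrative assumptions; substitute the prebuilt or custom training container and entry point for your own project:

```shell
# Local Mode: run the trainer in a container on your machine, no cloud VMs involved.
# (task.py and the executor image are placeholders.)
gcloud ai custom-jobs local-run \
    --executor-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest \
    --local-package-path=. \
    --script=task.py

# Auto-Container Packaging: one create command builds the container from your
# local code, pushes it, and submits the Vertex AI Training job.
gcloud ai custom-jobs create \
    --region=us-central1 \
    --display-name=my-training-job \
    --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,executor-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest,local-package-path=.,script=task.py
```

Note that the same `local-package-path` and `script` keys drive both the local run and the packaged cloud submission, which is what makes the two environments consistent.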
Quelle: Google Cloud Platform

New Docker Menu & Improved Release Highlights with Docker Desktop 4.5

We’re excited to announce the release of Docker Desktop 4.5, which includes enhancements we can’t wait for you to try out. 

New Docker Menu: Improved Speed and Unified Experience Across Operating Systems

We’ve launched a new version of the Docker Menu, which creates a consistent user experience across all operating systems (including the upcoming Docker Desktop for Linux; follow the roadmap item for updates and pre-release builds!). The Docker Menu looks and works exactly as it did before, so there’s nothing new to learn; just look forward to potential enhancements in the future. This change has also significantly sped up the time it takes to open the Docker Dashboard, so actions from the Docker Menu that take you to the Docker Dashboard are now instantaneous.

If you do run into any issues, you can still go back to the old version by doing the following:

Quit Docker Desktop, then add a features-overrides.json file with the following content:

{
  "WhaleMenuRedesign": {
    "enabled": false
  }
}

Depending on your operating system, you will need to place this file in one of the following locations:

On Mac: ~/Library/Group Containers/group.com.docker/features-overrides.json
On Windows: %APPDATA%\Docker\features-overrides.json
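As an illustration, on a Mac the file can be created from a terminal with a heredoc; this is a sketch of the steps above, and on Windows the file goes under %APPDATA%\Docker instead:

```shell
# Write the feature-override file to the macOS Docker Desktop config location.
# Run this after quitting Docker Desktop.
CONFIG_DIR="$HOME/Library/Group Containers/group.com.docker"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/features-overrides.json" <<'EOF'
{
  "WhaleMenuRedesign": {
    "enabled": false
  }
}
EOF
```

Restart Docker Desktop afterwards for the override to take effect.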

Docker Dashboard Release Highlights

Continuing the revamp of the update experience, we’ve moved the release highlights into the software updates section in the Docker Dashboard, creating one centralized place for all update information, so you can easily refer back to it. We’ve also included new information about the update: the version number of the newest version available, as well as the build number for both the version you are on and the latest version. 

For now, when you manually check for updates from the Docker Menu, you will still see the release highlights pop-up outside of the Docker Dashboard; in future versions that pop-up will be removed and you will be directed instead to the Software Updates section.

Reducing the Frequency of Docker Desktop Feedback Prompts

We’ve seen your comments that we’re asking for feedback too often and that it’s disrupting your workflows. We really appreciate the time you take to let us know how our product is doing, so we’ve made sure you get asked less often. 

To give you an overview: previously, we asked for feedback 14 days after a new installation and prompted users again every 90 days after that. Now, new installations of Docker Desktop prompt for initial feedback after 30 days of having the product installed. Users can then choose to give feedback or decline, and won’t be asked again until 180 days after the last prompt.

These scores help us understand how the user experience of the product is trending so we can continue to improve it, and the comments you leave help us make changes like this when we’ve missed the mark. 

What’s missing from making Docker great for you?

We strive to put developers first in everything we do. As we mentioned in this blog, your feedback is how we prioritize features and is why we’re working on improving Mac filesystem performance (check out the roadmap item for the latest build) and implementing Docker Desktop for Linux. We’d love to know what you think we should work on next. Upvote, comment, or add new ideas to our public roadmap. 

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/
Quelle: https://blog.docker.com/feed/