RDO Victoria Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Victoria for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Victoria is the 22nd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/.

The RDO community project curates, packages, builds, tests, and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public, or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: RDO Victoria provides packages for CentOS 8 and Python 3 only. Please use the Train release for CentOS 7 and Python 2.7.

Interesting things in the Victoria release include:

With the Victoria release, source tarballs are validated using the upstream GPG signature. This certifies that the source is identical to what is released upstream and ensures the integrity of the packaged source code.
With the Victoria release, openvswitch/ovn are not shipped as part of RDO. Instead, RDO relies on builds from the CentOS NFV SIG.
Some new packages have been added to RDO during the Victoria release:

ansible-collections-openstack: This package includes OpenStack modules and plugins which are supported by the OpenStack community to help with the management of OpenStack infrastructure.
ansible-tripleo-ipa-server: This package contains Ansible for configuring the FreeIPA server for TripleO.
python-ibmcclient: This package contains the Python library to communicate with HUAWEI iBMC-based systems.
puppet-powerflex: This package contains the puppet module needed to deploy PowerFlex with TripleO.
The following packages have been retired from the RDO OpenStack distribution in the Victoria release:

The Congress project, an open policy framework for the cloud, has been retired upstream and from the RDO project in the Victoria release.
neutron-fwaas, the Firewall as a Service driver for neutron, is no longer maintained and has been removed from RDO.

Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/victoria/highlights.

Contributors

During the Victoria cycle, we saw the following new RDO contributors: Amy Marrich (spotz), Daniel Pawlik, Douglas Mendizábal, Lance Bragstad, Martin Chacon Piza, Paul Leimer, Pooja Jadhav, Qianbiao NG, Rajini Karthik, Sandeep Yadav, Sergii Golovatiuk, and Steve Baker. Welcome to all of you, and thank you so much for participating!

But we wouldn't want to overlook anyone. A super massive thank you to all 58 contributors who participated in producing this release. This list includes commits to the rdo-packages, rdo-infra, and redhat-website repositories: Adam Kimball, Ade Lee, Alan Pevec, Alex Schultz, Alfredo Moralejo, Amol Kahat, Amy Marrich (spotz), Arx Cruz, Bhagyashri Shewale, Bogdan Dobrelya, Cédric Jeanneret, Chandan Kumar, Damien Ciabrini, Daniel Pawlik, Dmitry Tantsur, Douglas Mendizábal, Emilien Macchi, Eric Harney, Francesco Pantano, Gabriele Cerami, Gael Chamoulaud, Gorka Eguileor, Grzegorz Grasza, Harald Jensås, Iury Gregory Melo Ferreira, Jakub Libosvar, Javier Pena, Joel Capitao, Jon Schlueter, Lance Bragstad, Lon Hohberger, Luigi Toscano, Marios Andreou, Martin Chacon Piza, Mathieu Bultel, Matthias Runge, Michele Baldessari, Mike Turek, Nicolas Hicher, Paul Leimer, Pooja Jadhav, Qianbiao.NG, Rabi Mishra, Rafael Folco, Rain Leander, Rajini Karthik, Riccardo Pittau, Ronelle Landy, Sagi Shnaidman, Sandeep Yadav, Sergii Golovatiuk, Slawek Kaplonski, Soniya Vyas, Sorin Sbarnea, Steve Baker, Tobias Urdin, Wes Hayutin, and Yatin Karel.

The Next Release Cycle

At the end of one release, focus shifts immediately to the next release, Wallaby.

Get Started

There are three ways to get started with RDO.

To spin up a proof-of-concept cloud quickly and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use TripleO and you'll be running a production cloud in short order.

Finally, for those who don't have any hardware or physical resources, there's the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance, and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help

The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content, we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience within the RDO venues.

Get Involved

To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Source: RDO

Turbocharge your software supply chain with Artifact Registry, now GA

As enterprises modernize their applications with improved software delivery processes, they face increasing challenges in managing their dependencies—the artifacts that make up their applications, deployed in accordance with security and compliance best practices. Today, we’re excited to announce that Artifact Registry is generally available. With support for container images, Maven, and npm packages, and additional formats coming soon, Artifact Registry helps your organization benefit from scale, security, and standardization across your software supply chain.

As the evolution of Google Container Registry, Artifact Registry is the single place to store container images as well as language and OS packages. As a fully managed platform, Artifact Registry helps you get total control of the software delivery process with numerous new features, including support for regional repositories, VPC Service Controls, granular per-repository access controls, and Customer-Managed Encryption Keys (CMEK). It also offers built-in vulnerability scanning for container images and integrates with Binary Authorization, so you can enforce validation and define policies to ensure only verified images make it to production.
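To make the container workflow concrete, here is a hedged sketch of tagging and pushing a locally built image to a regional Artifact Registry repository using the Docker SDK for Python. The project, repository, and image names are hypothetical placeholders, and the sketch assumes Docker credentials for pkg.dev have already been configured (for example with gcloud auth configure-docker).

    # A minimal sketch, assuming the Docker SDK for Python (docker-py) is installed,
    # a local image tagged "my-app:latest" exists, and credentials for *.pkg.dev were
    # configured (e.g. "gcloud auth configure-docker us-central1-docker.pkg.dev").
    # Project, repository, and image names are hypothetical placeholders.
    import docker

    client = docker.from_env()

    # Artifact Registry image paths follow REGION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE.
    target = "us-central1-docker.pkg.dev/my-project/my-repo/my-app"

    image = client.images.get("my-app:latest")  # the locally built image
    image.tag(target, tag="v1")                 # retag it for the regional repository

    # Push the tag; pushed container images are then scanned for OS package vulnerabilities.
    for status in client.images.push(target, tag="v1", stream=True, decode=True):
        print(status)

Because Artifact Registry speaks the standard Docker registry protocol, the same kind of push works from any CI system or tooling that can run Docker.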
Delivering software both fast and safely is an important goal of enterprise software development. Data from DevOps Research & Assessment (DORA) shows that there’s a vast gap between elite DevOps teams and everyone else in their ability to meet this goal. Artifact Registry brings together many of the best practices employed by elite DevOps teams so that any organization can deliver software at scale, reduce operational overhead, and free developers to focus on building differentiated value for customers.

Swiss financial services provider Leonteq Securities is an early adopter of Artifact Registry, and reports that it has allowed them to streamline their software delivery process:

“The migration from our on-prem registry to Artifact Registry has been a smooth experience. Artifact Registry builds upon Container Registry by providing us a single place to store, manage, secure, and share both Maven and Docker artifacts. And given Artifact Registry is fully serverless, unlike our on-prem registry, we never run out of space and pay for what we actually use.” – Imants Firsts, Senior Software Engineer, Leonteq Securities

Let’s take a deeper look at the features you’ll find in Artifact Registry, and how to get started.

Integrate security into your CI/CD pipeline

Artifact Registry gives you the freedom to integrate with tools you use and love on a day-to-day basis. It is fully integrated with Cloud Build, Google Cloud’s CI/CD platform, automatically storing, managing, and securing any artifacts it creates. And with baked-in vulnerability scanning, container images are automatically scanned for OS package vulnerabilities.

Artifact Registry is also integrated with Google Cloud runtimes such as Google Kubernetes Engine (GKE), Cloud Run, and Compute Engine. So whether you’re deploying to serverless, Kubernetes, or a virtual machine environment, Artifact Registry supports your DevOps processes.

In addition, because Artifact Registry supports standard protocols, you can easily integrate it with popular CI/CD and security tooling. This enables you to benefit from Artifact Registry’s increased capabilities without having to change all of your existing CI/CD workflow and tooling. StackRox, Qualys, Palo Alto Networks, and Sysdig are early partners who have integrated and verified their tooling with Artifact Registry.

StackRox is a Kubernetes-native container security platform that protects cloud-native applications across the entire software life cycle—from build, to deploy, to runtime—and delivers better security, accelerates development velocity, and lowers operational risks.
Qualys Container Security, built on the Qualys Cloud Platform, provides comprehensive inventory, security assessment, and runtime defense capabilities for containers across the build-ship-run container lifecycle in your hybrid IT environment.
Palo Alto Networks Prisma Cloud provides full-lifecycle, full-stack security for any cloud-native workload or application running on Google Cloud.
Sysdig secures and monitors containers on Anthos with GKE and GKE On-Prem. It provides deep visibility into the risk, health, and performance of cloud-native apps across public, hybrid, and multi-cloud deployments, enabling secure and reliable software delivery.

Artifact Registry: Evolution of Container Registry

With more features, Artifact Registry builds upon the benefits already available in Container Registry. Following are just some of the benefits enterprises can get with Artifact Registry:

*Some features are in pre-GA release stages. For full details, please see Artifact Registry’s documentation.

We’ll continue to develop Artifact Registry with even greater control and security features for both container and non-container artifacts. To take advantage of these improvements and additions, you can learn more about transitioning from Container Registry here.

Try it today!

With Artifact Registry, you now have an easy way to manage artifacts and improve security within your CI/CD pipeline. Here are more ways you can learn more about Artifact Registry:

Artifact Registry for Java application development and delivery
Intro to Artifact Registry
Deploying from Artifact Registry to GKE
Source: Google Cloud Platform

The democratization of data and insights: Expanding Machine Learning Access

In the first blog in this series, we discussed how data availability, data access, and insight access have evolved over time, and what Google Cloud is doing today to help customers democratize the production of insights across organizational personas. In this blog, we’ll discuss why artificial intelligence (AI) and machine learning (ML) are critical to generating insights in today’s world of big data, as well as what Google Cloud is doing to expand access to this powerful method of analysis.

A report by McKinsey highlights the stakes at play: by 2030, companies that fully absorb AI could double their cash flow, while companies that don’t could see a 20% decline. ML and AI have traditionally been seen as the domain of experts and specialists with PhDs, so it’s no surprise that many business leaders frame their ML goals around HR challenges: creating new departments, hiring new employees, developing retraining programs for the existing workforce, and so on. But this isn’t the way it has to be. At Google Cloud, we’re focused not only on making the experts more efficient but also on driving ML capabilities into the day-to-day work of anyone who works with data.

For experts, the traditional ML audience, we’ve built an entire suite of tools. Our AI Platform makes it easy for them to rapidly iterate and turn ideas into deployments efficiently. Across ML teams, AI Hub makes it easier to collaborate with teammates to avoid duplicating work streams and get work done faster. Finally, TensorFlow Enterprise delivers supported and scalable TensorFlow in the cloud, directly from the leading contributors to the OSS project (us!).

Making existing experts nimbler and faster helps them increase their output, which expands access to ML within an organization. However, to truly integrate ML throughout an entire organization, we need to create tools that more personas can use to drive actionable insights. Let’s take a look at what Google Cloud is doing to democratize ML across three key personas: data analysts, developers, and data engineers.

Data Analysts

Data analysts, as we mentioned in our first blog, are the data analytics backbone of many Fortune 500 companies. They’re experts within a data warehouse, very comfortable with SQL, and knowledgeable about the needs of the business. We knew that to drive ML capabilities to this persona, we would need to meet them where their expertise already was. That’s exactly what BigQuery ML does: it brings ML inside the data warehouse, and it’s deployed using just a few easy-to-use SQL statements—much more familiar to analysts than the Python, R, and Scala-reliant tools on which many data scientists rely.

When combined with BigQuery’s ability to scale to larger data volumes than traditional enterprise data warehouses, BigQuery ML gives data analysts the ability to drive ML across vast amounts of data to uncover previously unseen insights. There is a wide variety of models available within BigQuery that can help customers drive use cases as varied as recommendations, segmentation, anomaly detection, forecasting, and prediction. Further, if there’s a need for custom models, ML experts can build models to import into BigQuery, where analysts can use them at scale. We’ve seen customers in very different industries with very different use cases successfully deploy BigQuery ML.
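Before turning to customer examples, here is a hedged sketch of what the SQL-only workflow can look like; BigQuery ML is driven entirely from SQL, and the Python client below only submits that SQL. The dataset, table, and column names are hypothetical placeholders, not from this post.

    # A minimal sketch, assuming the google-cloud-bigquery client library is
    # installed and application-default credentials are configured.
    # Dataset, table, and column names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Train a logistic regression model directly in the warehouse with SQL.
    train_sql = """
    CREATE OR REPLACE MODEL `mydataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT churned, tenure_months, monthly_spend, support_tickets
    FROM `mydataset.customers`
    WHERE churned IS NOT NULL
    """
    client.query(train_sql).result()  # waits for the training job to finish

    # Score new rows with ML.PREDICT using the same SQL-only workflow.
    predict_sql = """
    SELECT customer_id, predicted_churned, predicted_churned_probs
    FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                    (SELECT * FROM `mydataset.new_customers`))
    """
    for row in client.query(predict_sql).result():
        print(dict(row.items()))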
Telus has used ML to deploy anomaly detection that secures its network; UPS has used it to achieve precise package volume forecasting; Geotab is driving smarter cities by blending ML and geospatial analytics; and we’ve even seen BigQuery ML deployed to predict movie audiences. Beyond that, we see retailers predicting purchasing, financial services institutions determining insurance risk, and gaming companies forecasting long-term customer value. This analysis would have been impossible for data analysts to drive in the past. Today, it’s not only efficient, but it also has a very quick path to production.

“With the growing functionality of BigQuery ML, data-savvy team members have less need to also build expertise in transferring large amounts of data into and out of the BigQuery environment, and learning how to parallelize and scale data pipelines to handle deployment. By working directly in BigQuery for data cleaning, model training, and deployment, you can spend more time focused on understanding the data and delivering value from it, rather than moving it around.” – Daniel Lewis, Senior Data Scientist, R&D Specialist, Geotab

Developers

For the developer audience, we’ve developed two different types of services that democratize ML and serve as “building blocks” in creating applications. The first is a set of pre-trained models that are easily accessible via APIs. These APIs tackle many common use cases around sight, language, conversation, and more. For models that require more specificity, such as identifying all trucks of a particular make and model versus general identification of a truck, we offer AutoML custom models, which empower developers to build domain-specific custom models. These tools have enabled companies like Keller Williams, USA Today, PwC, AES Corporation, and more.

“With AutoML Vision, nearly half of our inspection images no longer need human review. Google is a great partner, because their technology is consistently among the world leaders.” – Nicholas Osborn, Director, AES Digital Hub

When it comes to building machine learning models at scale, AutoML Tables gives developers (as well as data scientists and analysts) the ability to automatically build and deploy ML models on structured data with incredible speed. A codeless interface not only makes it easy for anyone to build models and incorporate them into broader applications, but it also saves time, saves money, and increases the quality of deployed ML models. Using AutoML Tables, we’ve seen customers deliver marketing programs that delivered 150% more subscribers per dollar spent and user engagement at 140% of industry averages, all by communicating with the right user in the right place at the right time.

Further, these ML APIs do more than enable application developers. For ETL developers using Cloud Data Fusion, it’s easy to integrate these APIs into your data integration pipelines to enhance and prepare analysis for downstream applications and users. ML is now as easy as point, click, drag, and drop.
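As a hedged illustration of the pre-trained APIs mentioned above, here is what a label-detection call to the Cloud Vision API can look like from Python, assuming a recent version of the google-cloud-vision client library; the Cloud Storage path is a hypothetical placeholder.

    # A minimal sketch of calling a pre-trained ML API (Cloud Vision) from Python.
    # Assumes the google-cloud-vision client library is installed and credentials
    # are configured; the bucket and file name are hypothetical placeholders.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Point the API at an image stored in Cloud Storage (a local file works too).
    image = vision.Image()
    image.source.image_uri = "gs://my-example-bucket/warehouse-photo.jpg"

    # Ask the pre-trained model for labels; no model training is required.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")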
Data Engineers

The final persona in our discussion of ML democratization is the data engineer. It’s worth mentioning that all of the personas we’ve discussed benefit from the autoscaling nature of Google Cloud’s platform, which eliminates the need for time-intensive tuning and provisioning of infrastructure to run ML models. This work can disproportionately fall to data engineers (or can turn data scientists into de facto data engineers as they try to productionize their models).

We’ve worked to embed ML capabilities in both buckets of data engineering we see at Google: the Dataproc-oriented open source path, as well as the cloud-native Dataflow path. Let’s examine both.

For open source adherents and those familiar with Hadoop and Spark environments, we make it easy to run Spark ML jobs that you may be comfortable building, or have previously built. We have an easy-to-run Qwiklab that can introduce you to the concept of ML with Spark on Dataproc, and you can try it out with free credits. We also give customers the ability to build custom OSS clusters on custom machines – and do it fast – to bring GPU-powered ML to our customers. Together with features announced earlier this year, Dataproc users can now quickly deploy ML leveraging easy-to-use notebooks, scheduled cluster deletion, and more.

For data engineers using Dataflow, Google Cloud has made it easy to use TensorFlow Extended (TFX) to build and manage ML workflows in production. Working through Apache Beam (Dataflow’s SDK), this integration yields a toolkit for building ML pipelines, a set of standard components you can use as part of a pipeline or ML training script, and libraries for the base functionality of many standard components. Our solutions teams are working to make this even easier, releasing common patterns like anomaly detection, which telco customers are putting to use for cybersecurity while banks use it to detect financial fraud. (A minimal Beam sketch of this path appears at the end of this post.)

Wrapping up

Bringing ML capabilities to this broad set of new personas democratizes the most important aspect of big data: generating insights that help businesses drive predictions, new customer segments, recommendations, and more. The deeper insights provided by ML are going to become more and more critical to business success, which means the businesses that succeed are going to be the ones that can deploy ML and artificial intelligence widely.

At Google, we know the best ideas tend to bubble up rather than get pushed down. When your full organization has access to both data and the tools to analyze the data, you’re ready for whatever comes next. If you’d like to give machine learning a try today, the BigQuery sandbox is a great (and free!) place to get started trying out BigQuery ML.

Having discussed the importance of democratizing data, insights, and ML, our next blog will address how to take advantage of these insights in real time—a critical piece of delighting customers and staying ahead of the competition.
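To close, here is a hedged, minimal Apache Beam sketch of the Dataflow path described above. It is a plain preprocessing pipeline rather than a full TFX pipeline, and the project, bucket, and file names are hypothetical placeholders.

    # A minimal, hypothetical Apache Beam pipeline: clean records before they
    # reach a downstream training step. Names below are placeholders.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_csv(line):
        """Turn one CSV line into a dict a downstream ML step can consume."""
        user_id, spend = line.split(",")
        return {"user_id": user_id, "spend": float(spend)}

    options = PipelineOptions(
        runner="DataflowRunner",  # swap for "DirectRunner" to test locally
        project="my-example-project",
        region="us-central1",
        temp_location="gs://my-example-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromText("gs://my-example-bucket/events.csv")
            | "ParseCsv" >> beam.Map(parse_csv)
            | "KeepPositiveSpend" >> beam.Filter(lambda row: row["spend"] > 0)
            | "FormatForTraining" >> beam.Map(lambda row: f"{row['user_id']},{row['spend']}")
            | "WriteForTraining" >> beam.io.WriteToText("gs://my-example-bucket/clean/part")
        )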
Source: Google Cloud Platform

Apple Silicon M1 Chips and Docker

At Apple’s ‘One More Thing’ event on Nov 10th, Apple revealed new Macs featuring Apple silicon and its M1 chip, and Docker was excited to see them. At Docker we have been looking at the new hypervisor features and support that are required for the Mac to continue to delight our millions of customers. We saw the first spotlight of these efforts at Apple WWDC in June, when Apple highlighted Docker Desktop on stage. Our goal at Docker is to provide the same great experience on the new Macs as we do today for our millions of users on Docker Desktop for Mac, and to make this transition as seamless as possible. 

Building the right experience for our customers means getting quite a few things right before we push a release. Although Apple has released Rosetta 2 to help move applications over to the new M1 chips, this does not get us all the way with Docker Desktop. Under the hood of Docker Desktop, we run a virtual machine; to achieve this on Apple’s new hardware, we need to move onto Apple’s new hypervisor framework. We also need to do all the plumbing that provides the core experience of Docker Desktop, allowing you to docker run from your terminal as you can today.

Along with this, we have technical dependencies upstream of us that need to make changes before we can make a new version of Docker Desktop generally available. We rely on things like Go for the backend of Docker Desktop and Electron for the Docker Dashboard to view your Desktop content. We know these projects are hard at work getting ready for M1 chips, and we are watching them closely. 

We also want to make sure we get the quality of our release right, which means putting the right tooling in place for our team to support repeatable, reliable testing. To do this, we need to complete work such as setting up CI for M1 chips to supplement the 25 Mac Minis that we use for automated testing of Docker Desktop. Apple’s announcement means we can now get these set up and put in place to automate the testing of Docker Desktop on M1 chips. 

Last but by no means least, we also need to review the experience in the product for docker build. We know that developers will be doing more multi-architecture builds than before. We have support for multi-architecture builds today behind buildx, and we will need to work on how we are going to make this simpler as part of this release. We want developers to continue to work locally in Docker and have the same confidence that you can just build – share – run your content as easily as you do now, regardless of the architecture. 
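For readers who want to experiment with multi-architecture builds today, here is a hedged sketch that simply shells out to the existing buildx command from Python; the image name, registry, and platform list are placeholders, and buildx must already be available in your Docker installation.

    # A hedged sketch of a multi-architecture build using today's buildx support,
    # invoked via subprocess. Image name and registry are hypothetical placeholders.
    import subprocess

    subprocess.run(
        [
            "docker", "buildx", "build",
            "--platform", "linux/amd64,linux/arm64",  # build for Intel and Apple silicon
            "--tag", "example.registry.io/myapp:latest",
            "--push",  # push the resulting multi-arch manifest to the registry
            ".",
        ],
        check=True,
    )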

If you are excited about the new Mac hardware and want to be kept up to date on the status of Docker on M1 chips, please sign up for a Docker ID to get our newsletter for the latest updates. We are also happy to let you know that the latest version of Docker Desktop runs on Big Sur. If you have any feedback, please let us know via our issue tracker or our public roadmap!

Also, a big thank you to all of you who have engaged on the public roadmap, Twitter, and our issue trackers; your engagement highlights how much you care about Docker for Mac. Your interest and energy are greatly appreciated! Keep providing feedback and check in with us as we work on this going forward.
Source: https://blog.docker.com/feed/

Memcached 1.6.6 now available on Amazon ElastiCache

Amazon ElastiCache for Memcached has extended its support to the latest version, Memcached 1.6.6. This version brings numerous improvements, such as improved memory management that reduces the memory used by idle client connections and lowers the risk of memory fragmentation from a large number of connections. Additionally, this version introduces the experimental meta protocol and meta commands. 
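As a hedged illustration of how an application might talk to an upgraded cluster from Python, here is a minimal sketch using the pymemcache client; the node endpoint is a hypothetical placeholder, and any client that speaks the standard memcached protocol would work the same way.

    # A minimal sketch using the pymemcache client library; the endpoint below is a
    # hypothetical placeholder for one of your ElastiCache Memcached node endpoints.
    from pymemcache.client.base import Client

    client = Client(("my-cluster-node.example.cache.amazonaws.com", 11211))

    # Standard get/set commands work unchanged against Memcached 1.6.6.
    client.set("greeting", "hello from ElastiCache", expire=300)
    print(client.get("greeting"))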
Source: aws.amazon.com

AWS Systems Manager Explorer now provides a multi-account, multi-Region summary of AWS Config compliance

Effective immediately, AWS Systems Manager Explorer provides a summary of AWS Config rules and associated resource compliance, helping you review your overall compliance status and quickly find non-compliant resources. Systems Manager Explorer is an operations dashboard that gives you an overview of your operational data across your AWS accounts and Regions, helping you identify where you may need to investigate and remediate operational issues. With AWS Config, you can assess, audit, and evaluate the configurations of your AWS resources.
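Explorer itself is a console dashboard; as a hedged sketch of pulling the same underlying AWS Config compliance data programmatically for a single Region, here is a minimal boto3 example (the Region is a placeholder).

    # A minimal sketch using boto3: list Config rules that currently have
    # non-compliant resources in one Region. Region name is a placeholder.
    import boto3

    config = boto3.client("config", region_name="us-east-1")

    resp = config.describe_compliance_by_config_rule(ComplianceTypes=["NON_COMPLIANT"])
    for item in resp["ComplianceByConfigRules"]:
        print(item["ConfigRuleName"], item["Compliance"]["ComplianceType"])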
Source: aws.amazon.com

NICE DCV releases version 2020.2 with a new Session Manager and performance improvements for high frame rate interactive workloads

We are pleased to announce the release of NICE DCV version 2020.2 with the following new features:

DCV Session Manager – This is an optional component that provides REST APIs to create and manage the lifecycle of sessions across a fleet of DCV servers. It is available at no charge for AWS customers running DCV on Amazon EC2 instances, and for on-premises customers with DCV Plus and DCV Professional Plus licenses.*
Improved support for high frame rate use cases – The DCV frame rate limiter now defaults to 60 FPS for console sessions on servers and Amazon EC2 instances with an NVIDIA GPU. By enabling the new QUIC-based transport protocol, customers with high frame rate, highly dynamic workloads such as gaming can also experience smoother and more responsive streaming quality, especially under suboptimal network conditions.
Support for SLES 15 and Ubuntu 20.04 – Customers can now use DCV server and client components on SUSE Linux Enterprise 15 and Ubuntu 20.04 hosts.
Smart card redirection for Windows Server – Applications running in a remote Windows session can now use smart cards connected to the customer's client machine. This feature was already available for Linux servers.

Source: aws.amazon.com