Get more insights with the new version of the Node.js library

We're thrilled to announce a new update to the Cloud Logging Library for Node.js. The key new features are improved error handling and writing structured logs to standard output, which comes in handy if you run applications in serverless environments like Cloud Functions! The latest v9.9.0 of the Cloud Logging Library for Node.js makes it even easier for Node.js developers to send and read logs from Google Cloud, providing real-time insight into what is happening in your application through comprehensive tools like Logs Explorer. If you are a Node.js developer working with Google Cloud, now is a great time to try out Cloud Logging.

The latest features of the Node.js library are also integrated and available in other packages that are based on the Cloud Logging Library for Node.js:

- @google-cloud/logging-winston – integrates Cloud Logging with the Winston logging library.
- @google-cloud/logging-bunyan – integrates Cloud Logging with the Bunyan logging library.

If you are unfamiliar with the Cloud Logging Library for Node.js, start by running the following command to add the library to your project:

```
npm install @google-cloud/logging
```

Once the library is installed, you can use it in your project.
Below, I demonstrate how to initialize the logging library, create a client configured with a project ID, and log a single entry 'Your log message':

```javascript
// Imports the Google Cloud client library
const {Logging} = require('@google-cloud/logging');

// Creates a client with a predefined project ID and a path to a
// credentials JSON file to be used for auth with Cloud Logging
const logging = new Logging({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
});

// Create a log with the desired log name
const log = logging.log('your-log-name');

// Create a simple log entry without any metadata
const entry = log.entry({}, 'Your log message');

// Log your record!
log.info(entry);
```

The log entry generated by this code then appears in Logs Explorer.

Two critical features of the latest Cloud Logging Library for Node.js release are writing structured log entries to standard output and error handling with a default callback. Let's dig in deeper.

Writing structured log entries to standard output

The LogSync class helps users write context-rich structured logs to stdout or any other Writable interface. This class extracts additional log properties, such as trace context from HTTP headers, and can be used to toggle between writing to the Cloud Logging endpoint or to stdout during local development.

In addition, structured logging to stdout can be integrated with a Logging agent. Once a log is written to stdout, a Logging agent picks it up and delivers it to Cloud Logging out-of-process. Logging agents can add more properties to each entry before streaming it to the Logging API.

We recommend that serverless applications (i.e. applications running in Cloud Functions and Cloud Run) use the LogSync class, because asynchronous log delivery may drop entries due to lack of CPU or other environmental factors that prevent the logs from being sent immediately to the Logging API.
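To make the "structured log entry on stdout" idea concrete, here is a minimal, dependency-free sketch of the kind of single-line JSON entry this approach produces. The field names follow Cloud Logging's structured-logging conventions, but this is an illustration only (not the library's implementation), and the project and trace values are made-up placeholders:

```javascript
// Minimal sketch (plain Node, no Cloud dependencies) of a structured
// log line that a Logging agent can pick up from stdout.
function toStructuredLine(severity, message, trace) {
  const entry = {
    severity,                           // e.g. 'INFO', 'ERROR'
    message,                            // the human-readable payload
    timestamp: new Date().toISOString() // RFC 3339 timestamp
  };
  if (trace) {
    // Special key that maps to the log entry's trace field
    entry['logging.googleapis.com/trace'] = trace;
  }
  // One JSON object per line, so the agent can parse each entry
  return JSON.stringify(entry);
}

const line = toStructuredLine(
  'INFO',
  'Your log message',
  'projects/your-project-id/traces/abc123' // placeholder trace id
);
process.stdout.write(line + '\n');
```

Because each entry is a self-contained JSON object on its own line, an out-of-process agent can enrich and forward it without any coordination with the application.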
Cloud Functions and Cloud Run applications are by nature ephemeral and can have a short lifespan, which causes logging data to be dropped when an instance is shut down before the logs have been sent to Cloud Logging servers. Today, Google Cloud managed services automatically install Logging agents in all Google serverless environments on the resources that they provision – this means that you can use LogSync in your application to seamlessly deliver logs to Cloud Logging through standard output.

Below is a sample of how to use the LogSync class:

```javascript
const {Logging} = require('@google-cloud/logging');
const logging = new Logging({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
});

// Create a LogSync transport, defaulting to `process.stdout`
const log = logging.logSync('Your-log-name');
const entry = log.entry({}, 'Your log message');
log.write(entry);
```

If you use the @google-cloud/logging-winston or @google-cloud/logging-bunyan library, you can set the redirectToStdout parameter in the LoggingWinston or LoggingBunyan constructor options, respectively. Below is sample code showing how to redirect structured logging output to stdout for the LoggingWinston class:

```javascript
// Imports the Google Cloud client library for Winston
const {LoggingWinston} = require('@google-cloud/logging-winston');

// Creates a client that writes logs to stdout
const loggingWinston = new LoggingWinston({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
  redirectToStdout: true,
});
```

Error handling with a default callback

The Log class provides users the ability to write and delete logs asynchronously. However, there are cases when log entries cannot be written or deleted and an error is thrown – if the error is not handled properly, it can crash the application.
One possible way to handle the error is to await the log write/delete calls and wrap them in try/catch. However, waiting on every write or delete call may introduce delays that can be avoided by simply adding a callback, as shown below:

```javascript
// Asynchronously write the log entry and handle the response or
// any errors in the provided callback
log.write(entry, err => {
  if (err) {
    // The log entry was not written.
    console.log(err.message);
  } else {
    console.log('No error in write callback!');
  }
});
```

Adding a callback to each write or delete call duplicates code, and remembering to include it for each call can be toilsome, especially if the code handling the error is always the same. To eliminate this burden, we introduced the ability to provide a default callback for the Log class, which can be set through the LogOptions passed to the Log constructor, as in the example below:

```javascript
const {Logging} = require('@google-cloud/logging');
const logging = new Logging();

// Create options with a default callback to be called on
// every write/delete response or error
const options = {
  defaultWriteDeleteCallback: function (err) {
    if (err) {
      console.log('Error is: ' + err);
    } else {
      console.log('No error, all is good!');
    }
  },
};

const log = logging.log('my-log', options);
```

If you use the @google-cloud/logging-winston or @google-cloud/logging-bunyan library, you can set the callback through the defaultCallback parameter in the LoggingWinston or LoggingBunyan constructor options, respectively.
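The idea behind a default callback can be illustrated with a dependency-free toy class. This is a sketch of the pattern only, not the library's implementation; the class name and the simulated failure are invented for illustration:

```javascript
// Toy illustration of the default-callback pattern: the callback
// supplied once at construction time is invoked for every write,
// instead of being passed to each individual call.
class ToyLog {
  constructor(name, options = {}) {
    this.name = name;
    this.defaultWriteDeleteCallback = options.defaultWriteDeleteCallback;
  }

  write(entry) {
    // Simulate an asynchronous write that fails on an empty entry
    const err = entry ? null : new Error('empty entry');
    if (this.defaultWriteDeleteCallback) {
      this.defaultWriteDeleteCallback(err);
    }
  }
}

const seen = [];
const log = new ToyLog('my-log', {
  defaultWriteDeleteCallback: err => seen.push(err ? err.message : 'ok'),
});

log.write('Your log message'); // callback observes no error
log.write(null);               // callback observes the simulated error
```

The error-handling code lives in exactly one place, so every write is covered even when the call site forgets about failures.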
Here is an example of how to set a default callback for the LoggingWinston class:

```javascript
// Imports the Google Cloud client library for Winston
const {LoggingWinston} = require('@google-cloud/logging-winston');

// Creates a client
const loggingWinston = new LoggingWinston({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
  defaultCallback: err => {
    if (err) {
      console.log('Error occurred: ' + err);
    }
  },
});
```

Next Steps

Now, when you integrate the Cloud Logging Library for Node.js in your project, you can start using the latest features. To try the latest Node.js library in Google Cloud, you can follow the quickstart walkthrough guide. For more information on the latest features, check out the Cloud Logging Library for Node.js user guide.

For any feedback or contributions, feel free to open issues in our Cloud Logging Library for Node.js GitHub repo. Issues can also be opened for bugs, questions about library usage, and new feature requests.
Source: Google Cloud Platform

Run your fault-tolerant workloads cost-effectively with Google Cloud Spot VMs, now GA

Available in GA today, you can now begin deploying Spot VMs in your Google Cloud projects to start saving. For an overview of Spot VMs, see our Preview launch blog, and for a deeper dive, check out our Spot VM documentation. Modern applications such as microservices, containerized workloads, and horizontally scalable applications are engineered to persist even when the underlying machine does not. This architecture allows you to leverage Spot VMs to access capacity and run applications at a low price: you save 60–91% off the price of our on-demand VMs with Spot VMs. To make it even easier to utilize Spot VMs, we've incorporated Spot VM support into a variety of tools.

Google Kubernetes Engine (GKE)

Containerized workloads are often a good fit for Spot VMs, as they are generally stateless and fault tolerant. Google Kubernetes Engine (GKE) provides container orchestration, and now, with native support for Spot VMs, you can use GKE to manage your Spot VMs and get cost savings. On clusters running GKE version 1.20 and later, the kubelet graceful node shutdown feature is enabled by default, which allows the kubelet to notice the preemption notice, gracefully terminate Pods that are running on the node, restart Spot VMs, and reschedule Pods. As part of this launch, Spot VM support in GKE is now GA. For best practices on how to use GKE with Spot VMs, see our architectural walkthrough on running web applications on GKE using cost-optimized Spot VMs, as well as our GKE Spot VM documentation.

GKE Autopilot Spot Pods

Kubernetes is a powerful and highly configurable system. However, not everyone needs that much control and choice. GKE Autopilot provides a new mode of using GKE that automatically applies industry best practices to help minimize the burden of node management operations. When using GKE Autopilot, your compute capacity is automatically adjusted and optimized based on your workload needs.
To take your efficiency to the next level, mix in Spot Pods to drastically reduce the cost of your nodes. GKE Autopilot gracefully handles preemption events by redirecting requests away from nodes with preempted Spot Pods, and it manages autoscaling and scheduling to ensure new replacement nodes are created to maintain sufficient resources. Spot Pods for GKE Autopilot are now GA, and you can learn more through the GKE Autopilot and Spot Pods documentation.

Terraform

Terraform makes managing infrastructure as code easy, and Spot VM support is now available for Terraform on Google Cloud. Using Terraform templates to define your entire environment, including the networking, disks, and service accounts to use with Spot VMs, makes continuous spin-up and tear-down of deployments a convenient, repeatable process. Terraform is especially important when working with Spot VMs, as the resources should be treated as ephemeral. Terraform works even better in conjunction with GKE to define and manage a node pool separately from the cluster control plane. This combination gives you the best of both worlds: Terraform sets up your compute resources, while GKE handles autoscaling and autohealing to make sure you have sufficient VMs after preemptions.

Slurm

Slurm is one of the leading open-source HPC workload managers, used in TOP500 supercomputers around the world. Over the past five years, we've worked with SchedMD, the company behind Slurm, to release ever-improving versions of Slurm on Google Cloud. SchedMD recently released the newest Slurm for Google Cloud scripts, available through the Google Cloud Marketplace and in SchedMD's GitHub repository. This latest version of Slurm for Google Cloud includes support for Spot VMs via the Bulk API. You can read more about the release in the Google Cloud blog post.
Source: Google Cloud Platform

Google Cloud establishes European Advisory Board

Customers around the globe turn to Google Cloud as their trusted partner to digitally transform, enable growth, and solve their most critical business problems. To help inform Google Cloud on how it can continually improve the value and experience it delivers for its customers in Europe, the company has set up a European Advisory Board comprising accomplished leaders from across industries. Rather than representing Google Cloud, the European Advisory Board serves as an important feedback channel and critical voice to the company in Europe, helping ensure its products and services meet European requirements. The group also helps Google Cloud accelerate its understanding of the key challenges that enterprises across industries and the public sector face, and helps further drive the company's expertise and differentiation in the region. Members of the European Advisory Board offer proven expertise and a distinct understanding of key market dynamics in Europe. Google Cloud's European Advisory Board members are:

Michael Diekmann
Michael Diekmann is currently Chairman of the Supervisory Board of Allianz SE, having served as Chairman of the Board of Management and CEO from 2003 to 2015. He is also Vice Chairman of the Supervisory Board of Fresenius SE & Co. KGaA, and a member of the Supervisory Board of Siemens AG. Mr. Diekmann presently holds seats on various international advisory boards and is an Honorary Chairman of the International Business Leaders Advisory Council for the Mayor of Shanghai (IBLAC).

Brent Hoberman
Brent Hoberman is Co-Founder and Executive Chairman of Founders Factory (global venture studios, seed programmes and accelerator programmes), Founders Forum (a global community of founders, corporates and tech leaders), and firstminute capital (a $300m seed fund with a global remit, backed by more than 100 unicorn founders).
Previously, he co-founded Made.com in 2010, which went public in 2021 with a valuation of $1.1bn, and lastminute.com in 1998, where he was CEO from its inception until its sale to Sabre in 2005 for $1.1bn. Mr. Hoberman has backed nine unicorns at seed stage, and the technology businesses he has co-founded, which include Karakuri, have raised over $1bn.

Anne-Marie Idrac
Anne-Marie Idrac is a former French Minister of State for Foreign Trade, Minister of State for Transport, and member of the Assemblée Nationale. Ms. Idrac's other roles include chair and CEO of RATP and of the French railways SNCF, as well as chair of Toulouse–Blagnac Airport. She is currently a director of Saint-Gobain, Total, and Air France-KLM. Ms. Idrac also chairs the advisory board of the public affairs school of Sciences Po in Paris, as well as France's Logistics Association. She is also a special senior representative to the French autonomous vehicles strategy group.

Julia Jaekel
Julia Jaekel served for almost ten years as CEO of Gruner + Jahr, a leading media and publishing company, and held various leadership positions in Bertelsmann SE & Co. KGaA, including on Bertelsmann's Group Management Committee. She is currently on the Board of Adevinta ASA and Holtzbrinck Publishing Group.

Jim Snabe (Lead Advisor)
Jim Snabe currently serves as Chairman at Siemens and board member at C3.ai. He is also a member of the World Economic Forum Board of Trustees and Adjunct Professor at Copenhagen Business School. Mr. Snabe was previously co-CEO of SAP and Chairman of A.P. Moller Maersk.

Delphine Gény-Stephann
Delphine Gény-Stephann is the former Secretary of State to the Minister of the Economy and Finance in France. She held various leadership positions in Saint-Gobain, including on the group's General Management Committee. She is currently on the Board of Eagle Genomics, EDF and Thales.

Jos White
Jos White is a founding partner at Notion Capital, a venture capital firm focused on SaaS and cloud.
Jos is a pioneer in Europe's Internet and SaaS industry, having co-founded Star, one of the UK's first Internet providers, and MessageLabs, one of the world's first SaaS companies. Through Notion, he has made more than 70 investments in European SaaS companies, including Arqit, CurrencyCloud, Dixa, GoCardless, Mews, Paddle, Unbabel and Yulife.
Source: Google Cloud Platform

Amazon Redshift now supports the Linear Learner algorithm with Redshift ML

Amazon Redshift ML lets you create, train, and deploy machine learning (ML) models using familiar SQL commands. With Amazon Redshift ML, you can now leverage Amazon SageMaker, a fully managed machine learning service, without moving your data or learning new skills. Amazon Redshift now supports the Amazon SageMaker Linear Learner algorithm for building models with Amazon Redshift ML.
Source: aws.amazon.com

AWS App Mesh now supports Internet Protocol version 6 (IPv6)

AWS App Mesh now supports IPv6, allowing customers to support workloads running in IPv6 networks and to call App Mesh APIs over IPv6. This helps customers meet IPv6 compliance requirements and removes the need for expensive networking equipment to translate addresses between IPv4 and IPv6. AWS App Mesh is a service mesh that provides application-level networking, so your services can easily communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate and gives you end-to-end visibility, along with options for tuning your applications for high availability.
Source: aws.amazon.com

AWS Backup adds support for Amazon FSx for OpenZFS

With AWS Backup, you can now protect your Amazon FSx for OpenZFS file systems, helping you meet your centralized data protection and compliance requirements. With AWS Backup's seamless integration with AWS Organizations, you can centrally create and manage immutable backups of Amazon FSx for OpenZFS file systems across all your accounts, protect your data from accidental or malicious actions, and restore the data with a few simple clicks. In addition, you can generate unified, audit-ready reports to demonstrate the compliance status of your data protection policies.
Source: aws.amazon.com

Announcing the general availability of 1-click public embedding with Amazon QuickSight

Amazon QuickSight now supports 1-click public embedding, a feature that lets you embed your dashboards into public applications, wikis, and portals without any coding or development. Once enabled, anyone on the internet can instantly access these embedded dashboards with up-to-date information, with no server deployments or infrastructure licensing required! With 1-click public embedding, you can give your end users access to insights within minutes.
Source: aws.amazon.com