The AWS Systems Manager Agent is now preinstalled on Amazon ECS-optimized Amazon Linux 2 AMIs

The AWS Systems Manager (SSM) Agent is now preinstalled on Amazon ECS-optimized Amazon Linux 2 Amazon Machine Images (AMIs). The SSM Agent enables Systems Manager to update, manage, and configure the EC2 instances in a customer's ECS cluster. ECS customers who previously installed the SSM Agent manually on their ECS-optimized AMI now get these capabilities without any additional configuration.
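Because the agent is already running on the container instances, you can, for example, run a command across a cluster's instances through Systems Manager. Below is a minimal sketch using the boto3 SDK; the tag key/value and the shell command are hypothetical placeholders, and the sketch assumes the instances have an instance profile that lets the SSM Agent register with Systems Manager.

```python
# Minimal sketch: run a shell command on ECS container instances via Systems Manager.
# Assumes the instances carry a hypothetical tag "ecs-cluster=demo" and that the
# preinstalled SSM Agent has registered them as managed instances.
import boto3

ssm = boto3.client("ssm", region_name="eu-central-1")

response = ssm.send_command(
    Targets=[{"Key": "tag:ecs-cluster", "Values": ["demo"]}],   # hypothetical tag
    DocumentName="AWS-RunShellScript",                          # built-in SSM document
    Parameters={"commands": ["yum update -y ecs-init"]},        # example command
)
print(response["Command"]["CommandId"])
```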
Source: aws.amazon.com

Showing the C++ developer love with new client libraries

We use a lot of C++ at Google, and we’ve heard that many of you do as well. So whether you’re using C++ for your next amazing game, your high-frequency trading platform, your massively parallel scientific computations, or any of a variety of other applications, we want Google Cloud to be an excellent platform for you.

To that end, we are happy to say that we’re now building open-source C++ client libraries to help you access Google Cloud services. These are idiomatic C++ libraries that we intend to work well with your application and development workflow. Already, hundreds of GCP projects use generally available C++ libraries every day, including Google Cloud Storage (example code) and Cloud Bigtable (example code). We also have a beta release of our Cloud Spanner C++ library (example code), and we expect it to become generally available very soon. And we’re actively working on open-source client libraries for all the remaining cloud services. Several of these libraries are already being used by important Google services handling $IMPRESSIVE_NUMBER of data per $TIME_UNIT.

If you’re looking for more C++ client libraries, please let us know how we can help. You can contact your sales rep, or even feel free to directly contact the engineers by filing issues on our GitHub project page. We look forward to hearing from you and helping you get the most out of Google Cloud with your C++ application!
Source: Google Cloud Platform

Logging + Trace: love at first insight

Meet Stackdriver Logging, a gregarious individual who loves large-scale data and is openly friendly to structured and unstructured data alike. Although they grew up at Google, Stackdriver Logging welcomes data from any cloud or even on-prem. Logging has many close friends, including Monitoring, BigQuery, Pub/Sub, Cloud Storage and all the other Google Cloud services that integrate with them. Recently, however, they have been looking for a deeper relationship to find insight.

Now meet Stackdriver Trace, a brilliant and organized being. Trace also grew up at Google and is a bit more particular about data, making sense out of the chaos of distributed systems. Logging and Trace were brought together by mutual friends, such as Alex Van Boxel, cloud architect at Veepee. “Tracking down performance issues is like solving a murder mystery; having tracing and logging linked together is a big help for the forensics team,” he says.

With a strong union, Trace and Logging are a match made in heaven: developers are able to see exactly what is happening with their code, and how it fits within other services in the ecosystem. By embedding logs, Trace is able to show the detail of what happened during a particular service call. Combined with Trace’s ability to show the complete request, this gives the user full-stack observability. By adding trace IDs to logs, Logging is able to filter for logs within a trace and link into end-to-end traces. You can see not only how your code functions, but the context in which it does.

What Trace loves most about Logging

“Logging, you complete me.” — Trace

- Complete your workflow by showing logs inline for each service call. In the Trace UI, you can understand logs in context by showing logs inline as events for each service call.
- Drill into the logs that relate to a particular service in the logs view. You can understand how your code is operating at a deeper level by linking from the Trace UI right into the relevant log entry in Stackdriver Logging.
- Search across the entire request. In the Trace UI, you can filter for labels on any service in the trace, showing logs for a downstream service when an upstream condition is true.

What Logging loves most about Trace

“Trace, you help me be a better platform.” — Logging

- See logs from the entire request. In the logs viewer, filtering by trace ID shows you all logs for that specific request.
- Drill into a trace of the complete request. In the logs viewer, you can drill into the trace of the complete request right from the log entry of interest, which helps you understand richer and more complete context.
- Diagnose the root cause of errors. In the Trace UI, you can search for error traces and easily see which downstream service is responsible for the error.

Here’s how you can share the happy couple’s love:
- Explore the hidden superpowers of Stackdriver Logging
- Get started with Stackdriver Trace
- Get started with Logging
- Add trace IDs to your application logs
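To illustrate that last point, here is a minimal sketch of how an application log entry can carry a trace ID so Logging and Trace can be correlated. The project ID, logger name, and trace ID below are placeholders; the sketch assumes the google-cloud-logging Python client and a trace ID obtained from your tracing instrumentation (for example, parsed from the X-Cloud-Trace-Context request header).

```python
# Minimal sketch: attach a trace ID to a structured log entry so it can be
# shown inline in the Trace UI and filtered by trace in Stackdriver Logging.
from google.cloud import logging as cloud_logging

PROJECT_ID = "my-project"                       # placeholder
TRACE_ID = "0123456789abcdef0123456789abcdef"   # placeholder, e.g. taken from the
                                                # X-Cloud-Trace-Context header

client = cloud_logging.Client(project=PROJECT_ID)
logger = client.logger("checkout-service")      # hypothetical logger name

logger.log_struct(
    {"message": "charge completed", "order_id": "42"},
    trace=f"projects/{PROJECT_ID}/traces/{TRACE_ID}",  # links the entry to its trace
    severity="INFO",
)
```

In the logs viewer you can then filter with trace="projects/PROJECT_ID/traces/TRACE_ID" to see every entry belonging to that request.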
Source: Google Cloud Platform

Taking a practical approach to BigQuery slot usage analysis

Google BigQuery is a fully managed serverless solution for your enterprise data warehouse workloads. Nothing could be easier than that: just upload your data via batch or streaming and start running your queries. The underlying system will seamlessly take care of providing the infrastructure resources needed to complete your different jobs. It seems like magic, right? Especially when you consider that, behind the scenes, there is a large-scale distributed system with a multitude of parallel tasks executed by dozens of microservices spread across several availability zones in your selected Google Cloud region (find more details about BigQuery technology).

But what if you want more visibility into the power under the hood? In this blog post, we’ll dive into all the currently available options to monitor and analyze resource usage. We’ll also describe the newly available extension to the INFORMATION_SCHEMA views, now in beta, that offers you practical access to the underlying data. Let’s start by exploring the computational capacity unit we use to understand the load of the system.

BigQuery’s computational capacity unit: the slot

Every time you run a query, several back-end tasks need to be completed (such as reading data from tables, pruning data that is no longer useful, performing eventual aggregations, etc.). Each task, executed on an ad-hoc microservice, requires an adequate amount of computational power to be fulfilled. The slot is the computational capacity unit used to measure that power. The BigQuery engine dynamically identifies the number of slots needed to perform a single query, and background processes transparently allocate the computational power needed to accomplish the task.

So it’s essential to understand how to monitor and analyze slot usage, because that lets your technical team spot bottlenecks and allows the business to choose the best pricing model (on-demand vs. flat-rate).

We’ll now look at three different strategies to gain better visibility into slot usage and see how you can get started using them:
- Slot usage analysis with system tables
- Real-time usage monitoring with Stackdriver
- Slot usage analysis with BigQuery audit logs

Slot usage analysis with system tables

We’re announcing an extended version of the INFORMATION_SCHEMA views that contains real-time information about BigQuery jobs. This is part of our internal series of views, called INFORMATION_SCHEMA, that lets you extract useful information related to datasets, routines, tables, and views. With a simple query, you can access metadata that facilitates the analysis of the current data warehouse definition (e.g., the list of tables, descriptions of the fields, etc.).

This new extended version makes it easier to analyze slot usage (and other resources as well), because all the job information needed is just a query away.

Here’s an example. Let’s assume that you want to understand last month’s daily consumption of slots, split by user (together with other information), within a single project; a sketch of such a query is shown below. The pivotal element is the total_slot_ms field, which contains the total slot-milliseconds consumed by a query, i.e., the number of slots used by the query over its entire execution time, measured in milliseconds. If you want to compute the average slot usage of the query, divide that value by the duration of the query in milliseconds.
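Here is a minimal sketch of such a query, run through the BigQuery Python client. It is an illustrative reconstruction rather than the exact query from the original post: the project ID is a placeholder, and the view path assumes the beta JOBS_BY_PROJECT view in `region-us`.INFORMATION_SCHEMA (adjust the region to match your project).

```python
# Minimal sketch: daily slot consumption per user over the last 30 days,
# read from the (beta) INFORMATION_SCHEMA jobs views.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

query = """
SELECT
  DATE(creation_time) AS usage_date,
  user_email,
  SUM(total_slot_ms) AS total_slot_ms,
  COUNT(*) AS job_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT   -- adjust the region as needed
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
GROUP BY usage_date, user_email
ORDER BY usage_date, total_slot_ms DESC
"""

for row in client.query(query).result():
    print(row.usage_date, row.user_email, row.total_slot_ms, row.job_count)
```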
To get the duration, subtract the value of the creation_time field from the end_time field. For example, if a 10-second query used 20,000 total_slot_ms, the average slot usage is 20,000 / (10 × 1,000) = 2.

If you continue digging into the column definitions of the views, you will find a lot of information that will enable you to implement different kinds of analysis. You can, for example, easily compute the most expensive queries within your organization, find users who are heavily issuing queries, understand what the most expensive projects are, and also perform deep analysis on single queries by studying their query stages. Since details are available within one second of job completion, you can use that information to implement your own triggers (e.g., once a load job has been successfully completed, launch the query that cleans and transforms the data for production usage).

Note that the data is typically available within seconds, and jobs data is retained for a period of 180 days. If you want to keep the data longer for historical reasons or for later analysis, use scheduled queries to automate exporting it to an ad-hoc (partitioned) table. And keep in mind that real-time slot usage can fluctuate over the runtime of a query. To get deeper into the details, try this open-source solution to visualize slot consumption.

Real-time slot usage monitoring with Stackdriver

If you instead want to monitor the slot usage of your project in real time, Stackdriver is the place to go. It lets you:
- Get an immediate overview of the project’s slot usage, thanks to the native slot utilization chart
- Create ad-hoc charts using several available metrics (e.g., slots allocated, or slots allocated per job type)
- Create ad-hoc alerts to receive notifications when a certain event occurs (e.g., when the number of available slots is under a certain threshold for more than five minutes)

Check out this guide on implementing monitoring for BigQuery: https://cloud.google.com/bigquery/docs/monitoring.

Slot usage analysis with BigQuery audit logs

Another option for getting at the underlying information is exploring the BigQuery audit logs. These logs contain a multitude of information, including slot usage. You have two ways to query the data:
- Stackdriver Logging, if you want to perform a quick search for a precise value of interest
- BigQuery, if you want to perform a more in-depth analysis

Stackdriver Logging

From the Stackdriver Logging page:
- Select BigQuery as the resource
- Select the desired time frame (such as last hour, or no limit)
- Filter for all the INFO entries
- Search for the ones containing the getQueryResults method name

Once you’ve found the query you were looking for, expand its payload and look for the entry of interest. For example, protoPayload.serviceData.jobGetQueryResultsResponse.jobStatistics.totalSlotMs represents the total amount of slot-milliseconds, as described earlier.

If you want to perform more in-depth analysis, you can use BigQuery. First, create a sink to transfer the data from Stackdriver Logging to BigQuery (a sketch of this is shown below). Then you can run more complex queries to analyze the slot usage of your project; for example, you can run the same kind of analysis as the query above directly against the exported audit-log data.

We’re excited to see how you’ll use these tools to build your customized solution. Learn more here about organizing slots. And your next step might be to use Data Studio to generate custom dashboards to be shared within your organization.
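As an illustration of that sink step, here is a minimal sketch using the Python client library. The sink name, destination dataset, and filter are placeholder assumptions (the filter simply selects BigQuery audit-log entries); the destination dataset must already exist, and the sink’s writer identity needs write access to it before entries will flow.

```python
# Minimal sketch: route BigQuery audit logs into a BigQuery dataset via a log sink.
from google.cloud import logging as cloud_logging

PROJECT_ID = "my-project"    # placeholder
DATASET = "bigquery_audit"   # placeholder destination dataset (must already exist)

client = cloud_logging.Client(project=PROJECT_ID)

sink = client.sink(
    "bigquery-audit-sink",                        # hypothetical sink name
    filter_='resource.type="bigquery_resource"',  # selects BigQuery audit entries
    destination=f"bigquery.googleapis.com/projects/{PROJECT_ID}/datasets/{DATASET}",
)
sink.create()
# Remember to grant the sink's writer identity (e.g., BigQuery Data Editor)
# on the destination dataset so exported entries can be written.
print("Created sink:", sink.name)
```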
Source: Google Cloud Platform