Top 10 takeaways from Looker’s 2021 JOIN@Home conference

JOIN@Home was an incredible celebration of the achievements the Looker community has made over the last year, and I was proud to be a part of it. Prominent leaders in the data world shared their successes, tips, and plans for the future. In the spirit of keeping the learning alive, I’ve summarized the top two takeaways from each of the keynotes. They’re accompanied by illustrations that were captured live during the sessions by a local artist. Plus, there’s a fun surprise for you at the end.

“Celebrating Data Heroes – Transforming Our World with Data”

Our opening keynote featured a number of inspiring data professionals who use Looker in their work every day to see trends, drive decision making, and grow their customer base. Some of their main takeaways were:

You can use analytics to make change for the greater good. Surgeon scientist Dr. Cherisse Berry spoke of cross-referencing healthcare outcomes data (trauma care survival rates, how long patients wait before being seen, and whether patients were appropriately triaged) with demographic data to find gender and racial disparities in healthcare. For instance, she found that critically injured women receive trauma care less often than men. Because her analysis made the disparity known, informed decisions and actions can now be taken to bring greater equality to New York state’s trauma care system.

Provide templates to make insights more easily available to more users, especially non-technical ones. Michelle Yurovsky of UIPath, an automation platform that helps customers avoid repetitive tasks, shared one of the key ways UIPath gets customers engaged: by providing dashboard templates that answer common automation questions. Customers get useful insights the second they click on the product. They can copy and modify the templates according to their business needs, so they’re less intimidated to start working with analytics – especially if they have no previous experience building dashboards.

“Developing a Better Future with Data”

This keynote looked to the future of analytics. Two major themes were:

Composable analytics capabilities help make application development faster, easier, and more accessible. Composable analytics means creating a custom analytics solution from readily available components. You have access to composable analytics with Looker through the extension framework, which offers downloadable components you can use to build your application right on top of the Looker platform. Filter and visualization components enable you to more easily create the visual side of these data experiences.

Augmented analytics help make it easier to handle the scale and complexity of data in modern business – and to make smarter decisions about probable future outcomes. Augmented analytics integrates machine learning (ML) and artificial intelligence (AI) with your data to generate sophisticated analyses. The Looker team has worked to make augmented analytics more accessible to everyone this year. In particular, new Blocks give you access to ML insights through the familiar Looker interface, enabling you to more quickly prototype ML- and AI-driven solutions. For instance, the Time-series Forecasting Block (which uses BigQuery ML) can be installed to give analysts deeper insights into future demand for better inventory and supply chain management. CCAI Insights gives call centers access to Contact Center AI Insights data with analysis they can use immediately.
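To make the idea of ML-augmented analysis a bit more concrete, here is a minimal sketch of the kind of BigQuery ML time-series forecasting the Time-series Forecasting Block builds on. This is not the Block itself: the dataset, table, and column names are hypothetical, and it assumes the google-cloud-bigquery Python client with application-default credentials.

```python
from google.cloud import bigquery

# Assumes a GCP project with a hypothetical `demo.daily_orders` table
# containing order_date / units_sold columns.
client = bigquery.Client()

# Train an ARIMA_PLUS time-series model on historical demand.
client.query("""
    CREATE OR REPLACE MODEL `demo.demand_forecast`
    OPTIONS (
      model_type = 'ARIMA_PLUS',
      time_series_timestamp_col = 'order_date',
      time_series_data_col = 'units_sold'
    ) AS
    SELECT order_date, units_sold FROM `demo.daily_orders`
""").result()

# Forecast the next 30 days; a Block surfaces results like these in Looker.
rows = client.query("""
    SELECT forecast_timestamp, forecast_value
    FROM ML.FORECAST(MODEL `demo.demand_forecast`, STRUCT(30 AS horizon))
""").result()

for row in rows:
    print(row.forecast_timestamp, round(row.forecast_value, 1))
```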
“The Looker Difference”

Product Managers Ani Jain and Tej Toor highlighted many recent features you might find useful for activating and enabling users with Looker. Here are two moments that stood out:

Giving your teams better starting points can lead to more engagement with analytics. Two improved ways to find insights from this year: Quick Starts and board improvements. Quick Starts function as pre-built Explore pages that your users can open with a click, helping to make ad hoc analysis more accessible and less intimidating. They’re also a convenient way to save an analysis you find yourself doing frequently – and they even save your filter settings. And, with new navigation improvements in Looker, boards are easier to find and use. Now you can pin links to a board, whether they point to a dashboard, a Look, an Explore, or something else, including external links. So go ahead. Try your hand at creating a useful data hub for your team with a new board.

Natural language processing and Looker can help you make sense of relationships within data, quickly. A great example of this is the Healthcare NLP API Block, which creates an interactive user interface where healthcare providers, payers, pharma companies, and others in the healthcare industry can more easily access intelligent insights. Under the hood, this Block works on top of the GCP Healthcare NLP API, which offers pre-trained natural language models that extract medical concepts and relationships from medical text. The NLP API helps to structure the data, and the Looker Block makes the insights within that data more accessible.

“Building and Monetizing Custom Data Experiences with Looker”

Pedro Arellano, Product Director at Looker, and Jawad Laraqui, CEO of Boston-based consultancy Data Driven, chatted about embedded analytics, the remarkable speed at which you can build data applications with Looker, and monetization strategies. Points you don’t want to miss from this one:

Looker can help you augment an existing customer experience and create a new revenue stream with embedded data. For example, you can provide personalized insights to a customer engaged with your product, or automate business processes, such as using data to trigger a service order workflow when an issue is encountered with a particular product. Embedding data in these ways can make the customer experience smoother all around. To take it a step further, you can monetize a data product you build to help create a new revenue stream.

Building for the Looker Marketplace can help you find more customers for your app and can promote a better user experience. Jawad compared using the extension framework to build for the Looker Marketplace to having an app in the Apple App Store. Being in the Marketplace is a way for customers to find and use his product organically, and it helps give end users a streamlined experience. He said: “We were able to quickly copy and paste our whole application from a stand-alone website into something that is inside of Looker. And we did this quickly—one developer did this in one day. It’s a lot easier than you think, so I encourage everyone to give it a try. Just go build!”
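To ground the embedded-analytics discussion, here is a minimal sketch of one common approach: generating a signed SSO embed URL with the Looker Python SDK (looker-sdk) so a governed dashboard can be dropped into your own product. It is illustrative only: the dashboard ID, user details, and model name are hypothetical, and it assumes an embed-enabled Looker instance with API credentials configured via looker.ini or environment variables.

```python
import looker_sdk
from looker_sdk import models40 as models

# Assumes LOOKERSDK_BASE_URL / LOOKERSDK_CLIENT_ID / LOOKERSDK_CLIENT_SECRET
# (or a looker.ini file) are configured for an embed-enabled instance.
sdk = looker_sdk.init40()

# Create a signed SSO embed URL for a hypothetical customer-facing dashboard.
embed_url = sdk.create_sso_embed_url(
    body=models.EmbedSsoParams(
        target_url="https://mycompany.looker.com/embed/dashboards/42",  # hypothetical
        session_length=3600,
        external_user_id="customer-123",  # your application's user identifier
        permissions=["access_data", "see_looks", "see_user_dashboards"],
        models=["ecommerce"],  # hypothetical LookML model
        force_logout_login=True,
    )
)

# Place this URL in an iframe inside your application; it is short-lived
# and establishes the embed user's session when loaded.
print(embed_url.url)
```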
“Looker for Modelers: What’s New and What’s Next”

Adam Wilson, Product Manager at Looker, covered the latest upgrades and future plans for Looker’s semantic data model. This layer sits atop multiple sources of data and standardizes common metrics and definitions, so governed data can be fed into Looker’s built-in interactive business intelligence (BI) dashboards, connected to familiar tools such as Google Sheets, and surfaced in the other BI tools where users already work. We’re calling this the unified semantic model. Capabilities to look out for:

Take advantage of Persistent Derived Table (PDT) upgrades that improve the end-user experience. You can use incremental PDTs to capture data updates without rebuilding the whole table, meaning your users get fresh data on a more regular basis with a lower load on your data warehouse. And it’s now possible to validate PDT build status in development mode, giving you the visibility needed to determine when to push updates to production. Coming soon, you’ll be able to do an impact analysis on proposed changes with visualized dependencies between PDTs.

Reach users where they are with Connected Sheets and other BI tools. Coming soon, you’ll be able to explore Looker data in Google Sheets and share charts to Slides, too. And with Governed BI Connectors, Looker can act as a source of truth for users who are accustomed to interacting with data in Tableau, Power BI, and Google Data Studio. You can sign up to hear when the Connected Sheets and Looker integration is available, or separately to hear about preview availability for Governed BI Connectors.

Hackathon

Speaking of interesting new developments, here’s your fun surprise: a hackathon recap with a new chart you can use in your own analytics. The Looker developer community came together to create innovative Looker projects at this year’s JOIN hackathon, Hack@Home 2021. The event gave participants access to the latest Looker features and resources to create tools useful for all Looker developers. The Nearly Best Hack Winner demonstrated how easy it is to make custom visualizations by creating an animated bar chart race visualization that anyone can use. The Best Hack Winner showcased the power of the Looker extension framework with a Looker application that conveniently writes CSV data into Looker database connections.

You can still view all the keynotes, as well as the breakout sessions and learning deep dives, on demand on the JOIN@Home content hub. These will be available through the end of the month, so go soak up the learning while you can.
Source: Google Cloud Platform

Want multi-cluster Kubernetes without all the cost and overhead? Here’s how

Editor’s note: Today, we hear from Mengliao Wang, Senior Software Developer, Team Lead at Geotab, a leading provider of fleet management hardware and software solutions. Read on to hear how the company is expanding on its adoption of Google Cloud to deliver new services for its customers by leveraging Google Kubernetes Engine (GKE) multi-cluster features.

Geotab’s customers ask a lot of our platform: They use it to gain insights from vast amounts of telemetry data collected from their fleet vehicles. They rely on it to adhere to strict data privacy requirements. And, because our customers are located all over the world, they need the platform to address their data residency and other jurisdictional processing requirements, which require compute and storage to live within a specific geographic region. Meanwhile, as a managed service provider, we need a cost-efficient business model — that was certainly a driving factor for adopting containers and GKE. As we started architecting the deployment of multiple clusters to support our customers’ data residency requirements, we determined we also needed to explore approaches to reduce the total operational maintenance and costs of our multi-cluster environment.

To meet customers where they are, we moved forward with running GKE clusters in multiple Google Cloud Platform regions. At the same time, we recently began using GKE multi-cluster Services, which provides our customers with the security and low latency they need, while giving us cost savings and an easy-to-maintain solution. Read on to learn more about Geotab, our journey to Google Cloud and GKE, and, more recently, how we deployed multi-cluster Kubernetes using GKE multi-cluster Services.

The rise of connected fleet vehicles

“By 2024, 82% of all manufactured vehicles will be equipped with embedded telematics.” — Berg Insight

As a global leader in IoT and connected transportation, Geotab is advancing security, connecting commercial vehicles to the internet, and providing web-based analytics to help customers better manage their fleet vehicles. With over 2.5 million connected vehicles and billions of data points processed per day, we leverage data analytics and machine learning to support our customers in several ways. We help them improve productivity, optimize fleets by reducing fuel consumption, enhance driver safety, achieve strong compliance with regulatory changes, and meet sustainability goals. Geotab partners with Original Equipment Manufacturers (OEMs) to help expand customers’ fleet management capabilities through access to the Geotab platform.

Our journey to Google Cloud and GKE

We originally chose Google Cloud as our primary cloud provider because we found it to be the most stable of the cloud providers we tried, with the least amount of unscheduled downtime. End-to-end reliability is an important consideration for our customers’ safety and their confidence in Geotab’s driver-assistance features. Since getting started on our public cloud journey, we’ve leveraged Google Cloud to modernize different aspects of the Geotab platform. First, we embarked on a multi-milestone, multi-year initiative to modernize the Geotab Data Platform, adopting a container-based architecture using open source technologies; we continue to leverage Google Cloud services to launch innovative solutions that combine analytics and access to massive data volumes for better transportation planning decisions.
Today, the Geotab Data Platform is built entirely on GKE, with multiple services such as data ingestion, data digestion, data processing, monitoring and alerting, a management console, and several applications. We are now leveraging this modern platform to introduce new Geotab services to our customers.

Exploring multi-cluster Kubernetes

As discussed above, we recently began deploying our GKE clusters into multiple regions to meet our customers’ performance and data residency requirements. However, not every service that makes up the Geotab platform is created equal. For example, data digestion and data ingestion services are at the core of the data platform. Data digestion services are Application Programming Interfaces (APIs), machine learning models, and business intelligence (BI) tools that consume data from the data environment for various data analysis purposes, and are served directly to customers. Data ingestion services ingest billions of telematics data records per day from Geotab GO devices and are responsible for persisting them into our data environment.

But when looking at optimizing operating costs, we identified several services outside of the data platform that do not process sensitive customer information — our monitoring and alerting services are examples. Duplicating these services in multiple regions would increase infrastructure costs and add maintenance complexity and overhead.

We decided to deploy the services that do not process any customer data as shared services in a dedicated cluster. Not only does this lower the cost for resources, but it also makes the environment easier to manage from an operational perspective. However, this approach introduced two new challenges. First, services such as data ingestion and data digestion that run in each jurisdiction needed to expose their metrics outside of their cluster to make them available to the shared services (monitoring and alerting, management console) running on the shared cluster, which raised security concerns. Second, since metrics would not be passing within a cluster subnetwork, they would travel via the public network, resulting in higher latency as well as additional security concerns.

This is where GKE multi-cluster Services (MCS) came in, solving these concerns without introducing any new architectural components for us to configure and maintain. MCS is a cross-cluster service discovery and invocation mechanism built into GKE, and it extends the capabilities of the standard Kubernetes Service object. Services that are exported with MCS are discoverable and accessible across all clusters within a fleet of clusters via a virtual IP address, matching the behavior of a ClusterIP Service that is accessible within a single cluster. With MCS, we do not need to expose public endpoints, and all traffic is routed within the Google network (a minimal sketch of exporting a service this way appears below).

With MCS configured, we get the best of both worlds: services on the shared cluster and the regionally hosted clusters communicate as if they were all hosted in one cluster. Problem solved!

Reflecting on the journey

Our modernization journey on Google Cloud continues to pay dividends. During the first phase of our journey, we reaped the benefits of being able to scale up our systems with less downtime. With GKE features like MCS, we are able to reduce the time required to roll out new features to our global customers while addressing our business objectives to manage operating costs.
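Before wrapping up, here is the promised sketch of how a service gets exported with MCS, using the Kubernetes Python client to create the GKE ServiceExport object. The namespace and service name are hypothetical, and it assumes MCS is already enabled for a fleet of registered GKE clusters.

```python
from kubernetes import client, config

# Assumes the current kubectl context points at the exporting cluster,
# which is registered to a fleet with multi-cluster Services enabled.
config.load_kube_config()

api = client.CustomObjectsApi()

# Exporting a Service is done by creating a ServiceExport with the same
# name and namespace as the Service (here, a hypothetical metrics service).
service_export = {
    "apiVersion": "net.gke.io/v1",
    "kind": "ServiceExport",
    "metadata": {"name": "ingestion-metrics", "namespace": "telemetry"},
}

api.create_namespaced_custom_object(
    group="net.gke.io",
    version="v1",
    namespace="telemetry",
    plural="serviceexports",
    body=service_export,
)

# Other clusters in the fleet can then reach the service at the
# fleet-wide DNS name: ingestion-metrics.telemetry.svc.clusterset.local
```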
We look forward to continuing on our multi-cluster journey with Google Cloud and GKE. Are you interested in learning more about how GKE multi-cluster services can help with your Kubernetes multi-cluster challenges? Check out this guide to configuring multi-cluster services, or reach out to a Google Cloud expert — we’re eager to help!
Source: Google Cloud Platform

Instance tags now available in the Amazon EC2 Instance Metadata Service

You can now access your instance’s tags through the EC2 Instance Metadata Service. Tags let you categorize your AWS resources in different ways, for example by purpose, owner, or environment. This is helpful when you have many resources of the same type: a specific resource can be quickly identified by the tags assigned to it. Previously, you could access your instance tags only through the console or the describe-tags API.
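As a quick illustration, here is a minimal Python sketch of reading instance tags from the metadata service. It assumes the requests library, that the code runs on an EC2 instance, and that access to tags in instance metadata has been enabled for that instance; it uses IMDSv2 session tokens.

```python
import requests

IMDS = "http://169.254.169.254/latest"

# IMDSv2: fetch a short-lived session token first.
token = requests.put(
    f"{IMDS}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text
headers = {"X-aws-ec2-metadata-token": token}

# List the tag keys exposed via instance metadata
# (requires instance metadata tags to be enabled on the instance).
tag_keys = requests.get(
    f"{IMDS}/meta-data/tags/instance", headers=headers, timeout=2
).text.splitlines()

# Fetch each tag's value.
for key in tag_keys:
    value = requests.get(
        f"{IMDS}/meta-data/tags/instance/{key}", headers=headers, timeout=2
    ).text
    print(f"{key} = {value}")
```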
Source: aws.amazon.com

AWS Lambda now supports ES modules and top-level await for Node.js 14

AWS Lambda functions using the Node.js 14 runtime now support code packaged as ECMAScript modules, allowing Lambda customers to use a broader range of JavaScript packages in their Lambda functions. In addition, Lambda customers can now take advantage of top-level await, a Node.js 14 language feature. When used together with Provisioned Concurrency, this improves cold start performance for functions with asynchronous initialization tasks. For more information, see the blog post Using Node.js ES modules and top-level await in AWS Lambda.
Source: aws.amazon.com

Fine-grained access control now supported on existing Amazon OpenSearch Service domains

Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports enabling fine-grained access control on existing domains. Fine-grained access control adds several capabilities that give you more control over access to the data stored in your domain.
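For illustration, here is a minimal boto3 sketch of enabling fine-grained access control on an existing domain using an internal master user. The domain name and credentials are hypothetical, and it assumes the domain already meets the prerequisites (encryption at rest, node-to-node encryption, and HTTPS enforcement).

```python
import boto3

client = boto3.client("opensearch")

# Enable fine-grained access control on an existing (hypothetical) domain,
# using the internal user database with a master user.
response = client.update_domain_config(
    DomainName="analytics-logs",
    AdvancedSecurityOptions={
        "Enabled": True,
        "InternalUserDatabaseEnabled": True,
        "MasterUserOptions": {
            "MasterUserName": "admin",
            "MasterUserPassword": "REPLACE_WITH_A_STRONG_PASSWORD",
        },
    },
)

print(response["DomainConfig"]["AdvancedSecurityOptions"])
```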
Source: aws.amazon.com

Introducing 37 new resource types in the CloudFormation Registry

Since our last update in November 2021, the AWS CloudFormation Registry has added support for 37 new resource types, released between November and December 2021 (see the full list below). A resource type includes a schema (resource properties and handler permissions) as well as handlers that enable API interactions with the underlying AWS or third-party services. Customers can now configure, provision, and manage the lifecycle of these newly supported resources as part of their cloud infrastructure through CloudFormation, treating them as infrastructure as code. In addition, we are excited to announce that three new AWS services added CloudFormation support on launch day: Amazon CloudWatch Evidently, Amazon CloudWatch RUM, and AWS Resilience Hub. CloudFormation now supports 170 AWS services with over 830 resource types, as well as over 40 third-party resource types.
Source: aws.amazon.com

Amazon EC2 On-Demand Capacity Reservations now support cluster placement groups

Starting today, customers can use Amazon EC2 On-Demand Capacity Reservations to reserve capacity in cluster placement groups. Cluster placement groups let customers launch EC2 instances in logical groups within a high-bisection-bandwidth segment of the network, achieving low latency and high throughput between the instances in the cluster. They are beneficial for customers with workloads that require tightly coupled node-to-node communication, such as high performance computing (HPC) workloads or in-memory databases like SAP HANA. With On-Demand Capacity Reservations for cluster placement groups, customers can be confident that they have reserved capacity available when scaling compute resources within their cluster.
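A rough boto3 sketch of what this can look like follows. The placement group name, instance type, and Availability Zone are hypothetical, and the use of the PlacementGroupArn parameter reflects my reading of this feature announcement, so check the EC2 API reference before relying on it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group (hypothetical name).
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Look up the placement group's ARN.
pg = ec2.describe_placement_groups(GroupNames=["hpc-cluster-pg"])["PlacementGroups"][0]

# Reserve capacity targeted at the cluster placement group
# (PlacementGroupArn is assumed here to be the parameter added by this feature).
reservation = ec2.create_capacity_reservation(
    InstanceType="c5n.18xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=4,
    PlacementGroupArn=pg["GroupArn"],
)

print(reservation["CapacityReservation"]["CapacityReservationId"])
```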
Source: aws.amazon.com