What is Cloud IoT Core?

The ability to gain real-time insights from IoT data can redefine competitiveness for businesses. Intelligence allows connected devices and assets to interact efficiently with applications and with human beings in an intuitive, non-disruptive way. Once your IoT project is up and running, many devices will be producing lots of data, and you need an efficient, scalable, affordable way to both manage those devices and handle all that information.

Cloud IoT Core is a fully managed service for managing IoT devices. It supports registration, authentication, and authorization inside the Google Cloud resource hierarchy, stores device metadata in the cloud, and can send device configuration from other GCP or third-party services to devices.

Main components

The main components of Cloud IoT Core are the device manager and the protocol bridges.

The device manager registers devices with the service so you can monitor and configure them. It provides:
- Device identity management
- Support for configuring, updating, and controlling individual devices
- Role-level access control
- A console and APIs for device deployment and monitoring

Two protocol bridges (MQTT and HTTP) can be used by devices to connect to Google Cloud Platform for:
- Bi-directional messaging
- Automatic load balancing
- Global data access with Pub/Sub

How does Cloud IoT Core work?

Device telemetry data is forwarded to a Cloud Pub/Sub topic, which can then trigger Cloud Functions or other third-party apps that consume the data. You can also perform streaming analysis with Dataflow or custom analysis with your own subscribers. Cloud IoT Core supports direct device connections as well as gateway-based architectures. In both cases, the real-time device state and operational data are ingested into Cloud IoT Core, which also manages the keys and certificates at the edge.
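To make the MQTT bridge concrete, a device-side sketch in Python might look like the following. This is a hedged illustration, not official sample code: it assumes the paho-mqtt (1.x API) and PyJWT packages, and the project, region, registry, and device IDs are placeholders for your own registry setup.

```python
import datetime
import json

PROJECT, REGION = "my-project", "us-central1"   # placeholders
REGISTRY, DEVICE = "my-registry", "my-device"   # placeholders

def client_id(project, region, registry, device):
    # The MQTT bridge expects this long-form client ID.
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")

def event_topic(device):
    # Telemetry events are published to /devices/{device-id}/events.
    return f"/devices/{device}/events"

def device_jwt(project, private_key_pem):
    # The bridge authenticates devices with a short-lived JWT, signed
    # with the device's private key and passed as the MQTT password.
    import jwt  # PyJWT (assumed dependency)
    now = datetime.datetime.utcnow()
    claims = {"iat": now,
              "exp": now + datetime.timedelta(minutes=20),
              "aud": project}
    return jwt.encode(claims, private_key_pem, algorithm="RS256")

def publish_telemetry(private_key_pem, payload):
    import paho.mqtt.client as mqtt  # paho-mqtt 1.x (assumed dependency)
    client = mqtt.Client(client_id=client_id(PROJECT, REGION, REGISTRY, DEVICE))
    client.username_pw_set(username="unused",  # username is ignored by the bridge
                           password=device_jwt(PROJECT, private_key_pem))
    client.tls_set()  # the bridge requires TLS (mqtt.googleapis.com:8883)
    client.connect("mqtt.googleapis.com", 8883)
    client.publish(event_topic(DEVICE), json.dumps(payload), qos=1)
    client.disconnect()
```

Messages published this way surface on the Pub/Sub topic attached to the registry, where the downstream pipeline picks them up.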
From Pub/Sub the raw input is fed into Dataflow for transformation, and the cleaned output is written to Cloud Bigtable for real-time monitoring or to BigQuery for warehousing and machine learning. From BigQuery the data can be visualized in Looker or Data Studio, or used in Vertex AI to create machine learning models. Those models can be deployed at the edge using Edge Manager (in experimental phase). Device configuration updates or device commands can be triggered by Cloud Functions or Dataflow through Cloud IoT Core, which then updates the device.

Design principles of Cloud IoT Core

As a managed service to securely connect, manage, and ingest data from global device fleets, Cloud IoT Core is designed to be:
- Flexible, providing easy provisioning of device identities and enabling devices to access most of Google Cloud
- The industry leader in IoT scalability and performance
- Interoperable, with support for the most common industry-standard IoT protocols

Use cases

IoT use cases range across numerous industries. Some typical examples include:
- Asset tracking, visual inspection, and quality control in retail, automotive, industrial, supply chain, and logistics
- Remote monitoring and predictive maintenance in oil & gas, utilities, manufacturing, and transportation
- Connected homes and consumer technologies
- Vision intelligence in retail, security, manufacturing, and industrial sectors
- Smart living in commercial, residential, and smart spaces
- Smart factories with predictive maintenance and real-time plant-floor analytics

For a more in-depth look at Cloud IoT Core, check out the documentation. For more #GCPSketchnote, follow the GitHub repo.
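The "transformation" step in that pipeline is typically a small parsing function that turns a raw Pub/Sub message into a clean, typed row before it is loaded into Bigtable or BigQuery. A minimal sketch, assuming a JSON telemetry payload with illustrative field names (inside a Dataflow job this logic would live in a Beam DoFn):

```python
import json
from datetime import datetime, timezone

def clean_telemetry(raw_bytes):
    """Parse one raw Pub/Sub message into a flat row suitable for
    BigQuery or Bigtable. Field names are illustrative placeholders.
    Malformed records return None so the pipeline can filter them out
    (or route them to a dead-letter table)."""
    try:
        msg = json.loads(raw_bytes.decode("utf-8"))
        return {
            "device_id": str(msg["device_id"]),
            # normalize the epoch timestamp to an ISO-8601 UTC string
            "ts": datetime.fromtimestamp(msg["ts"], tz=timezone.utc).isoformat(),
            "temperature_c": float(msg["temperature_c"]),
        }
    except (KeyError, ValueError, UnicodeDecodeError):
        return None
```

Keeping the transform a pure function like this makes it easy to unit-test independently of Pub/Sub or Dataflow.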
For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev.
Source: Google Cloud Platform

Get in sync: Consistent Kubernetes with new Anthos Config Management features

From large digital-native powerhouses to midsized manufacturing firms, every company today is creating and deploying more software to more places more often. Anthos Config Management lets you set and enforce consistent configurations and policies for your Kubernetes resources—wherever you build and run them—and manage Google Cloud services the same way.

Today, as part of Anthos Config Management, we are introducing Config Controller, a hosted service to provision and orchestrate Google Cloud resources. This service offers an API endpoint that can provision, actuate, and orchestrate Google Cloud resources the same way it manages Kubernetes resources. You don't have to install or manage the components—or be an expert in Kubernetes resource management or GitOps—because Google Cloud manages them for you.

Today, we're also announcing that, in addition to its hybrid and multicloud use cases, Anthos Config Management is now available for Google Kubernetes Engine (GKE) as a standalone service. GKE customers can now take advantage of config and policy automation in Google Cloud at a low incremental per-cluster cost.

These announcements deliver a whole new approach to config and policy management—one that's descriptive or declarative, rather than procedural or imperative. Let's take a closer look.

Let Kubernetes automate your configs and policies

Development teams need stable and secure environments to build apps quickly and deploy them easily. Today, platform teams often scramble to provision and configure the necessary infrastructure components, apps, and cloud services the same way—in many different places—and keep them all up to date, patched, and secure. The struggle is real, and it's not new. Platform administrators have been hand-crafting and partially automating configuration with new infrastructure-as-code languages and tools for years. We can spin up new containerized dev environments in minutes, in the cloud and on-prem.
We can push code to production hundreds of times a day with automated CI/CD processes. So why do configurations drift and fall out of sync with production? Because it takes time and toil to develop a consistent and automated way to describe what we want, create what we need, and repair what we break.

The declarative Kubernetes Resource Model (KRM) reduces this toil with a consistent way to define and update resources: describe what you want and Kubernetes makes it happen. Anthos Config Management makes it even easier by adding pre-built, opinionated config and policy automations, such as creating a secure landing zone and provisioning a GKE cluster from a blueprint. Blueprints help platform teams configure both Kubernetes and Google Cloud services the same way every time.

Describe your intent with a single resource model

The Kubernetes API server includes controllers that make sure your container infrastructure state always matches the state you declare in YAML. For example, Kubernetes can ensure that a load balancer and service proxy are always created, connected to the right pods, and configured properly. But KRM can manage more than just container infrastructure. You can use KRM to deploy and manage resources such as cloud databases, storage, and networks, and it can also manage your custom-developed apps and services through custom resource definitions.

Create what you need from a single source of truth

With Anthos Config Management, you declare and set configurations once and forget them. You don't have to be an expert in KRM or GitOps-style configuration because the hosted Config Controller service takes care of it. Config Controller provisions infrastructure, apps, and cloud services; configures them to meet your desired intent; monitors them for configuration drift; and applies changes every time you push a new resource declaration to your Git repository. Config changes are as easy as a git push—and integrate easily with your development workflows.
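As an illustration of what a KRM declaration for a cloud resource looks like, here is a sketch of Config Connector manifests asking for a Cloud SQL instance and a database. Resource names, region, and tier are placeholders, and the field shapes follow the Config Connector SQLInstance/SQLDatabase CRDs as a rough guide rather than a verified manifest:

```yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: my-instance          # placeholder
spec:
  region: us-central1
  databaseVersion: POSTGRES_13
  settings:
    tier: db-f1-micro
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: my-database          # placeholder
spec:
  instanceRef:
    name: my-instance        # binds the database to the instance above
```

Pushing these files to the Git repository is all it takes: Config Sync applies them, and the controllers continuously reconcile the live resources against this declared intent.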
Anthos Config Management uses Config Sync to continuously reconcile the state of your registered clusters and resources—that means any GKE, Anthos, or other registered cluster—and makes sure unvetted changes are never pushed to live clusters. Anthos Config Management reduces the risk of dev or ops teams making changes outside the Git source of truth by requiring code reviews and rolling back any breaking changes to a known-good working state. In short, using Anthos Config Management both encourages and enforces best practices.

Repair what breaks for automated compliance

Anthos Config Management's Policy Controller makes it easier to create and enforce fully programmable policies across all connected clusters. Policies act as guardrails that prevent configuration changes from violating your custom security, operational, or compliance controls. For example, you can set policies to actively block non-compliant API requests, require every namespace to have a label, prevent pods from running privileged containers, restrict the types of storage volumes a container can mount, and more.

Policy Controller is based on the open source Open Policy Agent Gatekeeper project, augmented by Google Cloud with a ready-to-use library of pre-built policies for the most common security and compliance controls. Customers can establish a secure baseline easily, without deep expertise, and Anthos Config Management applies policies to a single cluster (e.g., GKE) or to a distributed set of Anthos clusters on-prem or in other cloud platforms. You can also add and audit your own custom policies: security-savvy experts create constraint templates that anyone can invoke in different dev or production environments without learning how to write or manage policy code.
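For example, the "require every namespace to have a label" guardrail can be expressed as a Gatekeeper constraint. This sketch assumes the K8sRequiredLabels template from the pre-built library; the constraint name and label key are placeholders:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner    # placeholder
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]  # apply the guardrail to namespaces
  parameters:
    labels:
      - key: "owner"          # every namespace must carry this label
```

Once applied, any namespace created or updated without an `owner` label is rejected (or reported in audit mode), without anyone having to write Rego policy code.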
The included audit functionality lets platform admins review all violations, simplifying compliance reviews.

Configure and control every cluster consistently

Config Controller, the hosted service that runs Config Connector, Config Sync, and Policy Controller for you, is available in Preview. Config Controller leverages Config Connector, which lets you manage Google Cloud resources the same way you manage other Kubernetes resources, with continuous monitoring and self-healing. For example, you can ask Config Connector to create a Cloud SQL instance and a database. Config Connector can manage more than 60 Google Cloud resources, including Bigtable, BigQuery, Pub/Sub, Spanner, Cloud Storage, and Cloud Load Balancing.

Once you've embraced a consistent resource model, using Anthos Config Management to enforce configuration and policy automatically for individual resources, take the next step with blueprints. A blueprint is a package of config and policy that documents an opinionated solution for deploying and managing multiple resources at once. Blueprints capture best practices and policy guardrails, package them together, and let you deploy them as a complete solution to any Kubernetes cluster using Config Controller. Use blueprints to manage multiple resources at once, or to create customized landing zones—compliant, properly configured, easily duplicated environments that meet your own best-practice guidelines and are properly networked and secured.

The Vienna Insurance Group uses Anthos Config Management in its Viesure Innovation Center, which it credits with improving its compliance posture: "Google's Landing Zones and Config Controller equipped us with an extensive set of tools to set up our Google Cloud infrastructure quickly and securely.
Their policy controllers are a powerful instrument for ensuring compliance for all our Google Cloud resources." —Rene Schakmann, Head of Technology at viesure innovation center GmbH

Get started today

Anthos Config Management on GKE is generally available today. If you're a GKE customer, you can now use Anthos Config Management at a low incremental cost. By making it available to GKE customers, and offering it as a hosted, managed service for everyone, we're making it easier than ever for you to leverage "KRM as a service" to simplify and secure Kubernetes resource management from the data center to the cloud.

To learn more about the technical details behind Anthos Config Management, check out this recent episode of the Kubernetes Podcast from Google with the TL for Policy Controller, Max Smythe.
Source: Google Cloud Platform

How Digitec Galaxus delivers personalized newsletters with reinforcement learning and Google Cloud

Digitec Galaxus AG is the biggest online retailer in Switzerland, operating two online stores: Digitec, Switzerland's online market leader for consumer electronics and media products, and Galaxus, the largest Swiss online shop, with a steadily growing range of consistently low-priced products for almost all daily needs. Known for its efficient, personalized shopping experiences, Digitec Galaxus clearly understands what it takes to deliver a platform that is interesting and relevant to customers every time they shop.

The problem: Personalizing decisions for every situation

Digitec Galaxus had already established an engine to help personalize experiences for shoppers when they reached out to Google Cloud. They had multiple recommendation systems in place and were also extensive early adopters of Recommendations AI, which already enabled them to offer personalized content in places like their homepage, product detail pages, and their newsletter. But those same systems sometimes made it difficult to understand how best to combine and optimize them to create the most personalized experiences for their shoppers. Their requirements were threefold:

- Personalization: They have over 12 recommenders they can display in the app, but they would like to contextualize this and choose different recommenders (which in turn select the items) for different users. They would also like to exploit existing trends as well as experiment with new ones.
- Latency: The solution should be architected so that the ranked list of recommenders can be retrieved with sub-50 ms latency.
- An easy-to-maintain, generalizable, modular end-to-end architecture: Digitec wanted the solution built on an easy-to-maintain, open source stack, complete with all the MLOps capabilities required to train and use contextual bandit models.
It was also important to them that it be built in a modular fashion so it can be adapted easily to other use cases they have in mind, such as recommendations on the homepage, Smartags, and more. To improve, they asked us to help them implement a machine learning (ML) contextual-bandit-based recommender system on Google Cloud, taking all the above factors into consideration, to take their personalization to the next level.

Contextual bandit algorithms are a simplified form of reinforcement learning that aid real-world decision making by factoring in additional information about the visitor (context) to learn what is most engaging for each individual. They also excel at exploiting trends that work well, as well as exploring new, untested trends that can yield potentially even better results. For instance, imagine you are personalizing a homepage image where you could show a comfy living-room couch or pet supplies. Without a contextual bandit algorithm, one of these images would be shown to someone at random, without considering information you may have observed about them during previous visits. Contextual bandits enable businesses to consider outside context, such as previously visited pages or other purchases, and then observe the final outcome (a click on the image) to help determine what works best.

Creating a personalization system with contextual bandits

While Digitec Galaxus heavily personalizes their website homepages, homepages are very sensitive and require more cross-team collaboration to update and change. Together with the Digitec Galaxus team, we decided to narrow the scope and focus on building a contextual bandit personalization system for the newsletter first. The Digitec Galaxus team has complete control over newsletter decisions, and testing various ML experiments on a newsletter would have less chance of adverse revenue impact than a website homepage.
The main goal was to architect a system that could be easily ported over to the homepage and other services offered by Digitec with minimal adaptations. It would also need to satisfy the functional and non-functional requirements of the homepage as well as other internal use cases.

Below is a diagram of how the newsletter's personalization recommendation system works. The system is given context features about the newsletter subscriber, such as their purchase history and demographics. (Features are sometimes referred to as variables or attributes, and can vary widely depending on what data is being analyzed.) The contextual bandit model trains recommendations using those context features and the 12 available recommenders (the potential actions). The model then calculates which action is most likely to increase the chance of reward (a user clicking in the newsletter) while minimizing the negative outcome (an unsubscribe). Distinguishing whether an interaction was a newsletter click or an unsubscribe enabled the system to optimize for increasing clicks while avoiding non-relevant, click-bait content. This enabled Digitec Galaxus to exploit popular trends while also exploring potentially better-performing trends.

How Google Cloud helps

The newsletter context-driven personalization system was built on Google Cloud architecture using the ML recommendation training and prediction solutions available within our ecosystem. Below is a diagram of the high-level architecture used. The architecture covers three phases of generating context-driven ML predictions:

ML Development: Designing and building the ML models and pipeline. Vertex Notebooks are used as data science environments for experimentation and prototyping, and to implement model training, scoring components, and pipelines. The source code is version controlled in GitHub.
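The select/reward/update loop described above can be sketched in a few lines. This is a deliberately simplified epsilon-greedy variant, not Digitec Galaxus's production model (which trains a richer model on Vertex AI): it keeps one reward estimate per (context, recommender) pair, mostly exploits the best-known recommender, and occasionally explores a random one.

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Toy contextual bandit. Arms stand in for the 12 recommenders;
    the context would be a bucketed user feature (e.g. a segment)."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon          # fraction of traffic used to explore
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)    # (context, arm) -> number of pulls
        self.values = defaultdict(float)  # (context, arm) -> mean reward

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)             # explore
        return max(self.arms,
                   key=lambda a: self.values[(context, a)])  # exploit

    def update(self, context, arm, reward):
        """reward: e.g. +1 for a newsletter click, -1 for an unsubscribe."""
        key = (context, arm)
        self.counts[key] += 1
        # incremental running-mean update of the reward estimate
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

The production system has the same shape: `select` ranks recommenders per subscriber, and observed clicks/unsubscribes feed back through `update` at the next training run.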
A continuous integration (CI) pipeline is set up to automatically run unit tests, build pipeline components, and store the container images in Cloud Container Registry.

ML Training: Large-scale training and storing of ML models. The training pipeline is executed on Vertex Pipelines. In essence, the pipeline trains the model using new training data extracted from BigQuery and produces a trained, validated contextual bandit model stored in the model registry; in our system, the model registry is a curated Cloud Storage bucket. The training pipeline uses Dataflow for large-scale data extraction, validation, processing, and model evaluation, and Vertex Training for large-scale distributed training of the model. AI Platform Pipelines also stores artifacts (the outputs produced by the various pipeline steps) in Cloud Storage, and information about these artifacts is stored in an ML metadata database in Cloud SQL. To learn more about how to build a continuous training pipeline, read the documentation guide.

ML Serving: Deploying new algorithms and experiments in production. The training pipeline uses batch prediction to generate many predictions at once using AI Platform Pipelines, allowing Digitec Galaxus to score large data sets. Once the predictions are produced, they are stored in Cloud Datastore for consumption. The pipeline uses the most recent contextual bandit model in the model registry to evaluate the inference dataset in BigQuery, produce a ranked list of the best newsletters for each user, and persist it in Datastore.
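Because the rankings are precomputed in batch, the serving side can be as small as a single HTTP function that looks a user's ranking up by key. A hedged sketch: the entity kind, field names, and query parameter below are illustrative placeholders, and the `google-cloud-datastore` client library is an assumed dependency.

```python
import json

KIND = "newsletter_ranking"  # illustrative Datastore kind

def ranking_response(entity):
    """Shape a stored ranking entity into the JSON payload the
    newsletter renderer consumes. Kept pure so it is easy to unit-test
    without a Datastore connection."""
    if entity is None:
        return {"recommenders": []}, 404
    return {"recommenders": list(entity["ranked_recommenders"])}, 200

def get_ranking(request):
    """Cloud Functions HTTP entry point, e.g. GET /?user_id=..."""
    from google.cloud import datastore  # assumed dependency, imported lazily
    client = datastore.Client()
    user_id = request.args.get("user_id", "")
    entity = client.get(client.key(KIND, user_id)) if user_id else None
    body, status = ranking_response(entity)
    return json.dumps(body), status, {"Content-Type": "application/json"}
```

A single key lookup against precomputed data is what makes the sub-50 ms latency requirement realistic, since no model runs on the request path.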
A Cloud Function is provided as a REST/HTTP endpoint to retrieve the precomputed predictions from Datastore. All components of the code and architecture are modular and easy to use, which means they can be adapted and tweaked for several other use cases within the company as well.

Better newsletter predictions for millions

The newsletter prediction system was first deployed in production in February, and Digitec Galaxus has been using it to personalize over 2 million newsletters a week for subscribers. The results have been impressive: 50% higher than our baseline. The collaboration is still ongoing to improve the results even more.

"Working at this level in direct exchange with Google's machine learning experts is a unique opportunity for us. The use of contextual bandits in the targeting of our recommendations enables us to pursue completely new approaches in personalization by also personalizing the delivery of the respective recommender to the user. We have already achieved good results in our newsletter in initial experiments and are now working on extending the approach to the entire newsletter by including more contextual data about the bandit's arms. Furthermore, as a next step, we intend to apply the system to our online store as well, in order to provide our users with an even more personalized experience. To build this scalable solution, we are using Google's open source tools such as TFX and TF Agents, as well as Google Cloud services such as Compute Engine, Cloud Machine Learning Engine, Kubernetes Engine, and Cloud Dataflow." —Christian Sager, Product Owner, Personalization, Digitec Galaxus

Since the existing architecture and system is dynamic, it will automatically adapt to new behaviors, trends, and users. As a result, Digitec Galaxus plans to reuse the same components and extend the existing system to improve the personalization of their homepage and other current use cases within the company.
Beyond clicks and user engagement, the system's flexibility also allows for future optimization of other criteria. It's a very exciting time and we can't wait to see what they build next!
Source: Google Cloud Platform

Amazon Aurora PostgreSQL supports the pg_partman extension for managing table partitioning based on time series or serial IDs in the AWS GovCloud (US) Regions

Amazon Aurora PostgreSQL-Compatible Edition supports the Partition Manager (pg_partman) extension in the AWS GovCloud (US) Regions. pg_partman is a PostgreSQL extension that lets you manage both time-based and serial-based table partition sets, including automatic management of partition creation and runtime maintenance. pg_partman works with native PostgreSQL partitioning, so users can benefit from significant performance improvements.
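As an illustration, setting up a daily time-based partition set with pg_partman looks roughly like this. Table, schema, and column names are placeholders, and the `create_parent` parameters follow the pg_partman 4.x interface; check the extension's documentation for the exact signature in your version:

```sql
-- Install the extension into its own schema
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman WITH SCHEMA partman;

-- A parent table using native PostgreSQL range partitioning on a timestamp
CREATE TABLE public.events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- Hand ongoing partition creation and maintenance over to pg_partman
SELECT partman.create_parent(
    p_parent_table => 'public.events',
    p_control      => 'created_at',
    p_type         => 'native',
    p_interval     => 'daily'
);
```

From here, pg_partman's background maintenance (run via its scheduler or a cron job calling `run_maintenance()`) keeps future daily partitions pre-created.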
Source: aws.amazon.com

Amazon Neptune announces support for the SPARQL 1.1 Graph Store HTTP Protocol

Amazon Neptune announces support for the SPARQL 1.1 Graph Store HTTP Protocol (GSP) for graphs that use the W3C's Resource Description Framework (RDF). Using GSP on SPARQL 1.1 endpoints, customers now have an efficient way to interact with complete named graphs within a graph store. This can streamline building graph applications with Amazon Neptune and tools that support the W3C-recommended GSP, such as Apache Jena.
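The Graph Store Protocol addresses a whole named graph directly through a `graph` query parameter rather than a SPARQL query. Sketched as raw HTTP against a Neptune endpoint (host, port, and graph URI are placeholders; Neptune exposes GSP under the SPARQL endpoint's `/sparql/gsp/` path):

```http
GET /sparql/gsp/?graph=http%3A%2F%2Fexample.com%2Fnamed-graph HTTP/1.1
Host: your-neptune-endpoint:8182
Accept: text/turtle
```

```http
PUT /sparql/gsp/?graph=http%3A%2F%2Fexample.com%2Fnamed-graph HTTP/1.1
Host: your-neptune-endpoint:8182
Content-Type: text/turtle

<http://example.com/s> <http://example.com/p> "o" .
```

GET retrieves the full graph in the requested RDF serialization, while PUT replaces its contents atomically; POST and DELETE similarly merge into or drop the named graph.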
Source: aws.amazon.com

Amazon SageMaker JumpStart introduces new computer vision models for image feature-vector extraction and object detection

Amazon SageMaker JumpStart helps you solve your machine learning problems quickly and easily with one-click access to popular model collections from TensorFlow Hub, PyTorch Hub, and Hugging Face (also known as "model zoos"), and to 16 end-to-end solutions that address common business problems such as demand forecasting, fraud detection, and document understanding.
Source: aws.amazon.com