Order button: Amazon pulls Dash Buttons from the market

The Dash Buttons are history: Amazon no longer offers the physical order button. The legal dispute over the buttons in Germany is still ongoing. Only the virtual Dash Buttons remain, and Amazon also intends to continue its automatic reordering program. (Amazon Dash, online shop)
Quelle: Golem

New device modeling experience in Azure IoT Central

On the Azure IoT Central team, we are constantly talking with our customers to understand how we can continue to provide more value. One of the top pieces of product feedback has been a request for a clearer device modeling experience that separates the device instance from the device template. Previously, viewing the device and editing the device template took place on the same page through an “Edit Template” button, which made it unclear whether a change applied only to that device or to all devices based on that template. Recently we’ve begun a flighted rollout of a new device modeling experience that begins to address this feedback directly.

For app builder roles, we have introduced a new “Device Templates” navigation tab that replaces the existing “Application Builder” tab, and we have updated how you view and edit your device templates. To edit a device template, visit the “Device Templates” tab. To view or interact with a device instance, you can still use the “Explorer” tab. We’re excited to get the first set of changes into your hands so that device templates and the device explorer can evolve independently of one another, in order to best support how our users interact with their devices. These changes both optimize the operator experience of viewing and interacting with devices and streamline the builder workflow of creating or modifying a template.

These changes are an important first step toward optimizing your device workflow for easier management and greater clarity. Please leave us feedback at Azure IoT Central UserVoice as we continue to invest in understanding and solving our customers’ needs.

To learn more, please visit our documentation, “Set up a device template.”
Quelle: Azure

Getting started with the Couchbase Autonomous Operator in Red Hat OpenShift 3.11

This is a guest post from Couchbase’s Sindhura Palakodety, Senior Technical Support Engineer. Couchbase is the first NoSQL vendor to have a generally available, production-certified operator for the Red Hat OpenShift Container Platform. The Couchbase Autonomous Operator enables enterprises to more quickly adopt the Couchbase Engagement Database in production to create and modernize their applications for […]
Quelle: OpenShift

Creating IoT applications with Azure Database for PostgreSQL

There are numerous IoT use cases across industries, with common categories like predictive maintenance, connected vehicles, anomaly detection, and asset monitoring. For example, in water treatment facilities in the state of California, IoT devices can be installed in water pumps to measure horsepower, flow rate, and electric usage. The events emitted from these devices are sent to an IoT hub every 30 seconds for aggregation and processing. A water treatment company could build a dashboard to monitor the pumps and set up notifications that alert the maintenance team when the event data crosses a certain threshold, for example when the flow rate is dangerously low, so that the pump can be repaired. This is a typical proactive maintenance IoT use case.

Azure IoT is a complete stack of IoT solutions: a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets. The common set of components in the Azure IoT core subsystem includes:

IoT devices that stream the events
Cloud gateway, where Azure IoT Hub is most often used to enable communication to and from devices and edge devices
Stream processing that ingests events from the devices and triggers actions based on the output of the analysis; a common workflow takes input telemetry encoded in Avro and returns output telemetry encoded in JSON for storage
Storage, usually a database used to store IoT event data for reporting and visualization purposes

Let’s take a look at how to implement an end-to-end Azure IoT solution that uses Azure Database for PostgreSQL to store IoT event data in the JSONB format. Using PostgreSQL as the NoSQL data store has its own advantages: strong native JSON processing, rich indexing capabilities, and the plv8 extension, which integrates the JavaScript V8 engine with SQL. Besides the managed-service capabilities and lower cost, one of the key advantages of Azure Database for PostgreSQL is its native integration with the Azure ecosystem, which enables modern applications with improved developer productivity.

In this implementation, we use Azure Database for PostgreSQL with the plv8 extension as a persistent layer for the IoT telemetry stream, serving storage, analytics, and reporting. The high-speed streaming data is first loaded into the PostgreSQL master server, which handles the ingestion, while read replicas are leveraged for reporting and downstream data processing to take data-driven actions. You can use Azure IoT Hub as the event processing hub and an Azure Function to trigger the processing steps, extracting what’s needed from the emitted events and storing it in Azure Database for PostgreSQL.

In this post, we’ll walk through the high-level implementation to get you started. Our GitHub repository has sample applications and a detailed QuickStart tutorial with step-by-step instructions for implementing the solution below. The QuickStart uses Node.js applications to send telemetry to the IoT Hub.

Step 1: Create an Azure IoT Hub and register a device with the Hub

In this implementation, IoT sensor simulators constantly emit temperature and humidity data to the cloud. The first step is to create an Azure IoT Hub in the Azure portal using these instructions. Next, register the device with the IoT Hub so that the hub can receive and process telemetry from registered devices.

In GitHub, you will find sample scripts to register the device using the CLI and export the IoT Hub service connection string.
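If you prefer to register the device programmatically instead of via the CLI, the Node.js service SDK exposes a Registry client. The following is a minimal sketch, not the repository’s script; it assumes the hub’s service connection string is in an IOTHUB_CONNECTION_STRING environment variable and uses a hypothetical device ID:

```javascript
// Minimal sketch: register a device using the azure-iothub service SDK
// (npm install azure-iothub). "mySimulatedDevice" is a hypothetical ID.
const iothub = require('azure-iothub');

const registry = iothub.Registry.fromConnectionString(
  process.env.IOTHUB_CONNECTION_STRING
);

registry.create({ deviceId: 'mySimulatedDevice' }, (err, device) => {
  if (err) {
    console.error('Device registration failed:', err.message);
    return;
  }
  // The returned device carries the generated keys used to build
  // the device-scoped connection string for the simulator.
  console.log('Registered device:', device.deviceId);
});
```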

Step 2: Create an Azure Database for PostgreSQL server and an IoT demo database to store the telemetry stream

Provision an Azure Database for PostgreSQL server of an appropriate size. You can use the Azure portal or the Azure CLI to provision it.

In the database, enable the plv8 extension and create a sample plv8 function that extracts the temperature value from the stored JSON documents at query time. A table with a JSONB column stores the IoT telemetry data. The script that creates the database and table and enables the plv8 extension is available in GitHub.
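As a rough sketch of what that setup does, the following Node.js snippet uses the node-postgres (pg) client to enable the extension, create a telemetry table with a JSONB column, and define a plv8 function that pulls the temperature out of a stored document. The connection string variable, table name, and function name are illustrative assumptions; the authoritative script lives in the GitHub repository:

```javascript
// Setup sketch using node-postgres (npm install pg). Table and
// function names are illustrative, not the ones from the QuickStart.
const { Client } = require('pg');

async function setup() {
  const client = new Client({ connectionString: process.env.PG_CONNECTION_STRING });
  await client.connect();

  // Enable plv8 (the extension must be allowed on the server).
  await client.query('CREATE EXTENSION IF NOT EXISTS plv8;');

  // A JSONB column holds the raw device payload.
  await client.query(`
    CREATE TABLE IF NOT EXISTS iot_events (
      id         bigserial PRIMARY KEY,
      payload    jsonb NOT NULL,
      created_at timestamptz NOT NULL DEFAULT now()
    );`);

  // plv8 passes jsonb arguments to JavaScript as objects, so the
  // function body can read the temperature property directly.
  await client.query(`
    CREATE OR REPLACE FUNCTION get_temperature(payload jsonb)
    RETURNS numeric AS $$
      return payload.temperature;
    $$ LANGUAGE plv8 IMMUTABLE;`);

  await client.end();
}

setup().catch(console.error);
```

With the function in place, a query such as SELECT get_temperature(payload) FROM iot_events WHERE get_temperature(payload) > 30; filters on the extracted value, and because the function is marked IMMUTABLE, an expression index on get_temperature(payload) can speed it up.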

Step 3: Create an Azure Function with an Event Hub trigger to extract messages and store them in PostgreSQL

Next, create a JavaScript Azure Function with Event Hub trigger bindings to the Azure IoT Hub created in Step 1, using the JavaScript index.js sample. The function is triggered for each incoming message in the IoT Hub stream; it extracts the JSON message and inserts the data into the PostgreSQL database created in Step 2.
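A stripped-down version of such a function might look like the sketch below. The binding name eventHubMessages (with cardinality "many" in function.json) and the iot_events table carry over from the assumptions above; the index.js in the repository is the reference implementation:

```javascript
// Sketch of an Event Hub-triggered Azure Function in JavaScript.
// Assumes an "eventHubMessages" binding with cardinality "many" and
// the iot_events table from Step 2.
const { Client } = require('pg');

module.exports = async function (context, eventHubMessages) {
  const client = new Client({ connectionString: process.env.PG_CONNECTION_STRING });
  await client.connect();

  try {
    for (const message of eventHubMessages) {
      // Each message is the JSON telemetry payload emitted by a device.
      await client.query(
        'INSERT INTO iot_events (payload) VALUES ($1::jsonb);',
        [JSON.stringify(message)]
      );
      context.log('Stored telemetry:', JSON.stringify(message));
    }
  } finally {
    await client.end();
  }
};
```

Opening a connection per invocation keeps the sketch short; in production you would typically hoist the client out of the handler or use a connection pool so connections are reused across invocations.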

Getting started by running the IoT solution end to end

We recommend that you try out this solution using the sample application in our GitHub repository. There you will find steps for running the Node.js application that simulates the generation of event data, creating an IoT Hub with device registration, sending the event data to the IoT Hub, deploying the Azure Function that extracts the data from the JSON messages, and inserting the data into Azure Database for PostgreSQL.
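For a flavor of the simulated device itself, here is a hedged sketch using the Azure IoT device SDK over MQTT. The device connection string comes from the registration in Step 1, and the one-second send interval is an illustrative choice; the repository sample may differ:

```javascript
// Simulated device sketch (npm install azure-iot-device azure-iot-device-mqtt).
// DEVICE_CONNECTION_STRING is the device-scoped string from Step 1.
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

const client = Client.fromConnectionString(
  process.env.DEVICE_CONNECTION_STRING,
  Mqtt
);

// Emit simulated temperature/humidity telemetry once per second.
setInterval(() => {
  const telemetry = {
    temperature: 20 + Math.random() * 15, // degrees Celsius
    humidity: 60 + Math.random() * 20,    // percent relative humidity
  };
  const message = new Message(JSON.stringify(telemetry));

  client.sendEvent(message, (err) => {
    if (err) {
      console.error('Send failed:', err.message);
    } else {
      console.log('Sent telemetry:', message.getData());
    }
  });
}, 1000);
```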

After implementing all the steps in GitHub, you will be able to query and analyze the data using reporting tools like Power BI, which let you build real-time dashboards.

We hope that you enjoy working with the latest features and functionality available in Azure Database for PostgreSQL. Be sure to share your feedback via UserVoice for PostgreSQL.

If you need any help or have questions, please check out the Azure Database for PostgreSQL documentation.

Acknowledgements

Special thanks to Qingqing Yuan, Bassu Hiremath, Parikshit Savjani, Anitah Cantele, and Rachel Agyemang for their contributions to this post.
Quelle: Azure

containerd Graduates Within the CNCF

We are happy to announce that as of today, containerd, an industry-standard runtime for building container solutions, graduates within the CNCF. The successful graduation demonstrates that containerd has achieved the maturity, stability, and community acceptance required for broad ecosystem adoption. containerd is already deployed in tens of millions of production systems today, making it the most widely adopted runtime and an essential upstream component of the Docker platform. containerd was donated to the CNCF as a top-level project because of its strong alignment with Kubernetes, gRPC, and Prometheus, and it is the fifth project to reach this tier. Built to address the needs of modern container platforms like Docker Enterprise and orchestration systems like Kubernetes, containerd ensures users have a consistent dev-to-ops experience.
From Docker’s initial announcement that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the last two years. The primary goal of Docker’s donation was to foster further innovation in the container ecosystem by providing a core container runtime that could be leveraged by container system vendors and orchestration projects such as Kubernetes, Swarm, etc. An important design principle for containerd was to have first-class support for Kubernetes without being exclusively tethered to it, opening the door for many use cases for containers such as developer desktop, CI/CD, single node deployments, edge and IoT.
For Docker, containerd is the runtime component of Docker Engine, which makes it available to mainstream developers without requiring them to change their workflow, whether they use it from a laptop with Docker Desktop, a production Kubernetes cluster with Docker Enterprise, a mainframe where traditional applications are modernized with containers, or edge devices in IoT scenarios. Regardless of which system they are using, developers and operators benefit from the application workflow portability that Docker Engine provides, enabling them to build and run containers using the same trusted codebase everywhere.
Community Contribution
Within both the Docker and Kubernetes communities, there has been a significant increase in contributions from independents and CNCF member companies including Docker, Google, Alibaba, NTT, IBM, Microsoft, AWS and ZTE. containerd’s focus on clean design and quality has attracted new contributions while the CNCF has provided a neutral home to instill confidence that contributions are accepted based on merit. The project has welcomed four new maintainers and eight reviewers since joining CNCF, allowing the project to scale as contributions have increased without compromising on quality or review time.
Evolution of containerd
The contributors and maintainers have been adding key functionality to containerd since the initial donation, which provided users a seamless container experience including transferring container images, container execution, and supervision. containerd 1.0 was released less than a year later, providing users with a supported low-level API along with cross-platform support, reliable resource management, and an easy-to-use client interface. It was followed by containerd 1.1, which built support for Kubernetes’ Container Runtime Interface (CRI) into containerd. As the user base expanded and the community grew, demand for a wider range of runtimes led containerd 1.2 to stabilize the low-level runtime API, enabling support for VM-based runtimes like Kata, Firecracker, and Hyper-V. The upcoming 1.3 release will bring a supported Windows runtime.
Looking to the future, we anticipate even wider adoption of containerd as the core container runtime for any architecture and infrastructure. Follow the project and get involved.

For More Information:

Contribute to the containerd project
Learn more about containerd
Learn more about Docker Engine

Quelle: https://blog.docker.com/feed/