Announcement: AWS Fargate price reduction of up to 50%

Effective January 7, 2019, we are reducing the price of AWS Fargate in all regions where Fargate is currently available, by 20% for vCPU and 65% for memory. Fargate is a compute engine for Amazon Elastic Container Service (ECS) that lets you run containers without having to manage servers or clusters. Visit the Fargate pricing details page or our blog for more information about the updated pricing.
Source: aws.amazon.com

Introducing Amazon DocumentDB (with MongoDB compatibility) – generally available

Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. Developers can use the same MongoDB application code, drivers, and tools to run, manage, and scale workloads on Amazon DocumentDB, gaining improved performance, scalability, and availability without having to manage the underlying infrastructure. Customers can migrate their MongoDB databases running on premises or on Amazon EC2 to Amazon DocumentDB with the AWS Database Migration Service (DMS) free of charge (for six months per instance) and with virtually no downtime. There are no upfront investments required to use Amazon DocumentDB, and customers pay only for the capacity they use.
Amazon DocumentDB is generally available, and you can use it in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland).
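As a sketch of what getting started looks like, a MongoDB client such as the mongo shell can connect to a cluster over TLS using the Amazon RDS CA bundle. The cluster endpoint, username, and password below are placeholders, not real values:

```shell
# Download the CA bundle that Amazon DocumentDB clusters use for TLS.
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

# Connect with the mongo shell; the endpoint and credentials are placeholders.
mongo --ssl \
  --host mycluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017 \
  --sslCAFile rds-combined-ca-bundle.pem \
  --username myuser \
  --password mypassword
```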
To learn more about Amazon DocumentDB, visit https://aws.amazon.com/documentdb.
Source: aws.amazon.com

Best practices for alerting on metrics with Azure Database for MariaDB monitoring

On December 4, 2018, Microsoft announced the general availability of Azure Database for MariaDB. This blog shares guidance and best practices for alerting on the most commonly monitored metrics for MariaDB.

Whether you are a developer, a database analyst, a site reliability engineer, or a DevOps professional at your company, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MariaDB server. There are various metrics available for you in Azure Database for MariaDB to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.
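For example, with the Azure CLI you can list which metrics a server exposes before wiring up alerts. The subscription ID, resource group, and server name below are placeholders:

```shell
# List the metric definitions available on an Azure Database for MariaDB server.
# The subscription ID, resource group, and server name are placeholders.
az monitor metrics list-definitions \
  --resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.DBforMariaDB/servers/mydemoserver" \
  --query "[].name.value" \
  --output tsv
```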

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid/public cloud. Here are some example best practices on how you can use monitoring data on your MariaDB server and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check

If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload.
If you think the load is expected, the active connection limit can be increased by upgrading the pricing tier or adding vCores. You can check the active connection limits for each SKU in our documentation, “Limitations in Azure Database for MariaDB.”
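As a sketch, assuming a tier whose connection limit is 100 (so 80 percent is 80 connections), an equivalent alert rule can be created with the Azure CLI. All names and resource IDs below are placeholders:

```shell
# Fire when active_connections averages >= 80 over a 30-minute window,
# evaluated every 5 minutes. All names and IDs are placeholders.
az monitor metrics alert create \
  --name mariadb-active-connections-high \
  --resource-group myrg \
  --scopes "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.DBforMariaDB/servers/mydemoserver" \
  --condition "avg active_connections >= 80" \
  --window-size 30m \
  --evaluation-frequency 5m \
  --description "Active connections at 80 percent of the limit for 30 minutes"
```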

Failed connections

Sample threshold (percentage or value): 10 failed connections in the last 30 minutes, checked every five minutes.

Things to check

If you see connection request failures over the last half hour, verify if this is expected by checking the logs for failure reasons.

If this is a user error, take the appropriate action. For example, if authentication failed, check your username and password.
If the error is SSL-related, check that the SSL settings and input parameters are properly configured.

Example: mysql -h mydemoserver.mariadb.database.azure.com -u mylogin@mydemoserver -p --ssl-ca=root.crt --ssl-verify-server-cert

CPU percent or memory percent

Sample threshold (percent or value): 100 percent for five minutes or 95 percent for more than two hours.

Things to check

If you have hit 100 percent CPU or memory usage, check your application telemetry or logs to understand the impact of the errors.
Review the number of active connections. Check for connection limits in our documentation, “Limitations in Azure Database for MariaDB.” If your application has exceeded the max connections or is reaching the limits, then consider scaling up compute.
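One quick way to review connection counts, sketched here with the mysql command-line client (the server and login names are placeholders):

```shell
# Compare the current connection count against the server's configured limit.
# The server and login names are placeholders; you are prompted for a password.
mysql -h mydemoserver.mariadb.database.azure.com \
      -u mylogin@mydemoserver -p \
      -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"
```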

IO percent

Sample threshold (percent or value): 90 percent usage for greater than or equal to 60 minutes.

Things to check

If you see that IOPS is at 90 percent for one hour or more, verify if this is expected based on the application workload.
If you expect a high load, then increase the IOPS limit by increasing storage. The storage-to-IOPS mapping is illustrated below for reference.

Storage

The storage you provision is the amount of storage capacity available to your Azure Database for MariaDB server. The storage is used for the database files, temporary files, transaction logs, and the MariaDB server logs. The total amount of storage you provision also defines the I/O capacity available to your server.

 
                        Basic                   General purpose         Memory optimized
Storage type            Azure Standard Storage  Azure Premium Storage   Azure Premium Storage
Storage size            5 GB to 1 TB            5 GB to 4 TB            5 GB to 4 TB
Storage increment size  1 GB                    1 GB                    1 GB
IOPS                    Variable                3 IOPS/GB               3 IOPS/GB
                                                (min 100, max 6000)     (min 100, max 6000)

Storage percent

Sample threshold (percent or value): 80 percent

Things to check

If your server is reaching its provisioned storage limit, it will soon run out of space and be set to read-only.
Monitor your usage, and provision more storage if needed so you can continue using the server without having to delete files or logs.
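Provisioning more storage can be scripted with the Azure CLI; note that storage can only be scaled up, not back down. The names below are placeholders, and the size is given in megabytes:

```shell
# Grow the server's provisioned storage to 512 GB (524288 MB), which also
# raises the IOPS ceiling at 3 IOPS/GB. Names are placeholders.
az mariadb server update \
  --resource-group myrg \
  --name mydemoserver \
  --storage-size 524288
```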

If you have tried everything and none of the monitoring tips mentioned above lead you to a resolution, please don't hesitate to contact Microsoft Azure Support.

Acknowledgments

Special thanks to Andrea Lam, Program Manager, Azure Database for MariaDB for her contributions to this blog.
Source: Azure

AWS Database Migration Service now supports Amazon DocumentDB (with MongoDB compatibility) as a target

AWS Database Migration Service (AWS DMS) now offers more functionality by supporting Amazon DocumentDB (with MongoDB compatibility) as a target. With DMS you can now perform live migrations to Amazon DocumentDB from MongoDB replica sets, sharded clusters, or any source supported by AWS DMS, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and Microsoft SQL Server databases, with minimal downtime.
For more information about migrating data to Amazon DocumentDB with DMS, see the documentation for the Amazon DocumentDB target. The Amazon DocumentDB target in DMS is available in all regions where Amazon DocumentDB is available. For the availability of AWS DMS and Amazon DocumentDB, see the AWS Region Table.
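As an illustration, a DocumentDB target endpoint can be registered with the AWS CLI before creating a replication task. All identifiers and credentials below are placeholders:

```shell
# Create a DMS target endpoint pointing at an Amazon DocumentDB cluster.
# The endpoint name, host, credentials, and database are placeholders.
aws dms create-endpoint \
  --endpoint-identifier docdb-target \
  --endpoint-type target \
  --engine-name docdb \
  --server-name mycluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com \
  --port 27017 \
  --username myuser \
  --password mypassword \
  --database-name mydb
```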

Source: aws.amazon.com

New year, newly available IoT Hub Device Provisioning Service features

We’re ringing in 2019 by announcing the general availability for the Azure IoT Hub Device Provisioning Service features we first released back in September 2018! The following features are all generally available to you today:

Symmetric key attestation support
Re-provisioning support
Enrollment-level allocation rules
Custom allocation logic

All features are available in all provisioning service regions and through the Azure portal, and the SDKs will support these new features by the end of January 2019 (with the exception of the Python SDK). Let’s talk a little more about each feature.

Symmetric key attestation

Symmetric keys are one of the easiest ways to start off using the provisioning service and provide an easy "Hello world" experience for those of you who want to get started with provisioning but haven’t yet decided on an authentication method. Furthermore, symmetric key enrollment groups provide a great way for legacy devices with limited existing security functionality to bootstrap to the cloud via Azure IoT. Check the docs to learn more about how to connect legacy devices.

Symmetric key support is available in two ways:

Individual enrollments, in which devices connect to the Device Provisioning Service just like they do in IoT Hub.
Enrollment groups, in which devices connect to the Device Provisioning Service using a symmetric key derived from a group key.

The documentation has more about how to use symmetric keys to verify a device's identity.
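For enrollment groups, the per-device key is derived as the HMAC-SHA256 of the device's registration ID, keyed with the group master key and base64-encoded. A sketch with openssl, where the group key and registration ID are illustrative dummy values rather than real credentials:

```shell
# Derive a per-device symmetric key from an enrollment-group key:
#   device key = base64( HMAC-SHA256( group key, registration ID ) )
# The group key and registration ID below are illustrative dummy values.
GROUP_KEY="dGVzdC1ncm91cC1rZXk="   # base64-encoded group master key (dummy)
REG_ID="my-device-01"              # the device's registration ID

KEY_HEX=$(echo -n "$GROUP_KEY" | base64 --decode | xxd -p -c 256)
DEVICE_KEY=$(echo -n "$REG_ID" | \
  openssl dgst -sha256 -mac HMAC -macopt hexkey:"$KEY_HEX" -binary | base64)
echo "$DEVICE_KEY"
```

The device then authenticates to the provisioning service with this derived key rather than the group key itself.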

Automated re-provisioning support

We added first-class support for device re-provisioning which allows devices to be reassigned to a different IoT solution sometime after the initial solution assignment. Re-provisioning support is available in two options:

Factory reset, in which the device twin data for the new IoT hub is populated from the enrollment list instead of the old IoT hub. This is common for factory reset scenarios as well as leased device scenarios.
Migration, in which device twin data is moved from the old IoT hub to the new IoT hub. This is common for scenarios in which a device is moving between geographies.

We’ve also taken steps to preserve backward compatibility for those who need it. Check the documentation, “IoT Hub Device reprovisioning concepts,” to learn the details. The documentation also has more on how to use re-provisioning.

Enrollment-level allocation rules

Customers need fine-grain control over how their devices are assigned to the proper IoT hub. For example, Contoso is a solution provider with two large multinational companies as customers. Each of Contoso’s customers is using Contoso devices across the globe in a geo-sharded setup. Contoso needs the ability to tell the provisioning service that customer A’s devices need to go to one set of hubs distributed geographically and that customer B’s devices need to go to another set of hubs distributed geographically. Enrollment-level allocation rules allow Contoso to do just that.

There are two pieces of functionality that light up:

Specifying allocation policy per enrollment gives finer-grain control.
Linked hub scoping allows the allocation policy to run over a subset of hubs.

This is available for both individual and group enrollments.
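A sketch of setting a per-enrollment allocation policy with the Azure CLI, assuming the azure-iot CLI extension is installed; the resource names, enrollment ID, and linked hub hostnames are all placeholders:

```shell
# Create an individual enrollment that uses geolatency allocation and is
# scoped to a subset of the linked hubs. All names are placeholders.
az iot dps enrollment create \
  --resource-group myrg \
  --dps-name mydps \
  --enrollment-id my-device-01 \
  --attestation-type symmetrickey \
  --allocation-policy geolatency \
  --iot-hubs "hub-us.azure-devices.net hub-eu.azure-devices.net"
```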

Custom allocation logic

With custom allocation logic, the Device Provisioning Service will trigger an Azure Function to determine where a device ought to go and what configuration should be applied to the device. Custom allocation logic is set at the enrollment level.

To sum things up with a limerick:

New features we announced last fall

Are ready for one and for all.

More flexibility

Makes provisioning easy

For devices from big to the small.
Source: Azure

Implement predictive analytics for manufacturing with Symphony Industrial AI

Technology allows manufacturers to generate more data than traditional systems and users can digest. Predictive analytics, enabled by big data and cloud technologies, can take advantage of this data and provide new and unique insights into the health of manufacturing equipment and processes. While most manufacturers understand the value of predictive analytics, many find it challenging to introduce into the line of business. Symphony Industrial AI has a mission: to bring the promise of Industrial IoT (IIoT) and artificial intelligence (AI) to reality by delivering real value to their customers through predictive operations solutions. Two solutions by Symphony are specially tailored to the process manufacturing sector (chemicals, refining, pulp and paper, metals and mining, and oil and gas).

There are two solutions offered by Symphony Industrial AI:

Asset 360 AI
Process 360 AI

The first focuses on existing machinery, and the second on common processes.

Problem: the complexity of data science

Manufacturers have deep knowledge of their manufacturing processes, but they typically lack the expertise of data scientists, who have a deep understanding of statistical modeling, a fundamental component of most predictive analytics applications. And even when predictive analytics is applied successfully, most deployments fail to surface the root causes, or contributing factors, of identified (predicted) issues so that users can take quick and decisive action on the new-found insight.

Solution: predictive analytics made easy

Symphony Industrial AI answers with a pre-built, template-driven approach that minimizes data scientist requirements and promotes rapid predictive analytics deployments. The solution features a data management platform for the process manufacturing sector. It provides real-time stream processing on time-series and related data for predictive analytics, leveraging cloud and big data technologies. The figure below shows an example of the solution’s dashboard.

Symphony Industrial AI’s solution speeds time-to-value through rapid deployment for minimized time and financial investment. Some of its features include:

Operations Data Lake (ODL): Pre-built integrations to existing systems of record (historians, EAM/CMMS, SCADA, and more).
Equipment and process template library: A library of equipment and process templates (pre-packaged analytics) that accelerate implementation and time-to-value.
AI/ML algorithms: Pre-packaged algorithms for failure/anomaly prediction.
Asset 360 AI and Process 360 AI: Pre-packaged solutions for asset performance intelligence and operations/process intelligence, respectively.

Two solutions: equipment models and process models

Predictive analytics solutions tend to focus on equipment-health scenarios, as the data is readily modeled. To ease implementation, Asset 360 AI deploys equipment models (also known as asset models) from a template library, which includes heat exchangers, pumps, compressors, and so forth.

Symphony Industrial AI’s second solution, Process 360 AI, helps users create predictive models of their processes. A process is defined at a high level as the items (such as chemicals, fuels, metals, and other intermediate and finished products) being produced through the equipment. Process template examples include an ammonia process, an ethylene process, an LNG process, and a polypropylene process. Process models help predict process upsets and trips, which equipment models alone may not be able to predict.

Benefits

Built with AI and machine learning (ML), Asset 360 AI and Process 360 AI integrate seamlessly with the equipment and devices customers already own. The solutions predict failures before they happen, resulting in several benefits:

A reduction in unplanned downtime and process trips.
A reduction in capital expenditure and asset maintenance costs.
Improvement in quality using gathered process and product data.
Improvement in safety and in tracking workforce effectiveness.

Microsoft technologies

Symphony Industrial AI’s solution is delivered as a SaaS model on Azure using the following services:

Azure IoT Hub
Azure Machine Learning

These services ensure the latest features of IoT and AI advances can be implemented. Additionally, Power BI gives users a rich surface to use for finding insights and monitoring processes.

For manufacturers looking for a way to introduce predictive analytics, Symphony Industrial AI offers two solutions that are easy to implement through a template-driven process. The template libraries include models for existing equipment and standard manufacturing flows. To find out more, go to Asset 360 AI or Process 360 AI and select Contact me.
Source: Azure