PaaS vs KaaS: What’s the difference, and when does it matter?

Earlier this month I had the pleasure of addressing the issue of Platform as a Service vs Kubernetes as a Service. We talked about the differences between the two models, and their relative strengths and weaknesses. If you’d like to see the full webinar, you can click this link, but there were several questions we didn’t get to, and I promised to recap all of them in a blog post.
Is it possible to have a PaaS that’s also a KaaS?
Yes.  While a PaaS is designed to provide applications for developers to use without having to worry about deploying them, a KaaS is designed to deploy Kubernetes clusters for developers.  There’s no reason that a PaaS can’t offer a Kubernetes cluster as an “application” to be deployed, though not all do.
Can you deploy a PaaS on Kubernetes?
Yes. At the end of the day, a PaaS is just an application, and it needs to be deployed somewhere.  If it’s already containerized, of course it can be deployed on Kubernetes. If not, it might take some additional work, but yes, it’s possible.
If the applications are written for AKS or EKS or PKS, will they be locked into the respective provider’s APIs?
Any time you’re writing to a specific API, you’re locked into that API.  If it’s the Kubernetes API, or the OpenStack API, or any other open source API, your application can then be used anywhere that API is available.  If, however, you write to an API that’s only available from a particular provider (such as AKS or EKS or PKS) then you’re locked into that provider.
Are there open source KaaS solutions out there? Or do most people resort to Ansible/kubeadm automation to standup the clusters? This may only be relevant to organizations that run K8s on prem.
In general, most people do resort to a single-cluster tool such as Kubeadm, but once you get past a couple of clusters, a full-blown KaaS solution is generally more convenient.  There are some open source solutions, such as Kubespray, KQueen and Gardener, but so far, none that have really captured the market.
Can you talk more about why you consider OpenShift to be more PaaS than KaaS?
Most of my experience with OpenShift has been with OpenShift 3, which is definitely a PaaS; it’s essentially a single Kubernetes cluster with an application catalog and a wrapper, oc, for OpenShift-specific commands.  A single tenant can deploy a “project”, which is architecturally just a namespace. (Which leads to the interesting side-effect that every project has to have a globally-unique name, but that’s just a side issue.) Into that project, OpenShift uses Operators to deploy applications of the user’s choice.
The important thing to note here is that the user has NOT been provisioned a Kubernetes cluster; they’re just squatting on the main cluster that is OpenShift.  So OpenShift 3 is definitely not a KaaS.
As another attendee of the webinar pointed out, OpenShift’s original motivations were to provide easy access to CI/CD and Software Defined Networking; Kubernetes was an afterthought.  (In fact early versions of OpenShift didn’t use it at all.)
I’ve been told that OpenShift 4 is more KaaS-like, which I assume means that it can deploy an independent Kubernetes cluster for you to use, but I’ve been unable to verify that through the documentation, and OpenShift Online still uses version 3.  (If someone has more information on this issue, I’d love to hear it.)
In terms of adoption, do we see more KaaS compared to PaaS?
That all depends on how you’re defining each. KaaS is definitely going to take off in the next few years, particularly as Edge Computing becomes more important and the need for deploying multiple clusters becomes impossible to ignore.  That said, however, many KaaSes also include application catalogs, so while the function of PaaS will continue to be important, it’s possible that stand-alone PaaSes themselves might begin to fall by the wayside.
Does KaaS provision nodes across a cluster or even multiple pods in a node?
Let’s get straight where KaaS fits in the “provisioning” world. KaaS provisions the actual Kubernetes cluster, and not individual pods.  For example, if I were using Mirantis KaaS, I might define 5 servers to be used, and then specify that I want 3 control nodes and 2 worker nodes, which would then be spread across those 5 machines.
Once the cluster itself had been provisioned, I could then deploy my pods on those Kubernetes nodes, but that’s independent of the KaaS.
In the PDF you list Pivotal’s PKS but not PCF. I thought PKS was still beta. Can you speak to where PCF fits in? It is clearly a (non-K8s) PaaS, but is there anything more to add?
By PCF I assume that you’re referring to Pivotal Platform, which is not so much PaaS as an umbrella project for multiple things, including Pivotal Container Service (the KaaS), Pivotal Application Service, and Pivotal Function Service. Like OpenShift, it appears to be focused more on the CI/CD process.
If you know, what do you think about the EIRINI CloudFoundry project? Integrating Application Runtime & K8s. Is it a real convergence between PaaS and KaaS?
Yes, it does give CF KaaS capabilities, in that it allows CF orchestration to be applied to Kubernetes containers as well as VMs. They created a “plugin” to their CF engine that calls the same things, for example, that might be called in a KaaS.
Does OpenShift support Kubespray?
It doesn’t appear to, and there’s no mention of it in the documentation.
In your opinion, what is more cost-effective, KaaS or PaaS?
Like most questions that involve cost and technology, the answer is “it depends”.  There are a number of different factors that matter, such as what you’re trying to accomplish, your infrastructure, and your use case.  (Contact us and we’ll be happy to help you take a look at your situation.)
 
The post PaaS vs KaaS: What’s the difference, and when does it matter? appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

How to Handle OpenShift Worker Nodes Resources in Overcommitted State

One of the benefits of adopting a system like OpenShift is that it facilitates burstable and scalable workloads. Horizontal application scaling involves adding or removing instances of an application to match demand. When OpenShift schedules a Pod, it’s important that the nodes have enough resources to actually run it. If a user schedules a large application (in the form of a Pod) on a node with limited resources, it is possible for the node to run out of memory or CPU resources and for things to stop working!
It’s also possible for applications to take up more resources than they should. This could be caused by a team spinning up more replicas than they need to artificially decrease latency or simply because of a configuration change that causes a program to go out of control and try to use 100% of the available CPU resources. Regardless of whether the issue is caused by a bad developer, bad code, or bad luck, what’s important is how a cluster administrator can manage and maintain control of the resources.
In this blog, let’s take a look at how you can solve these problems using best practices.
What does “overcommitment” mean in OpenShift?
In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. 
Overcommitment might be desirable in development environments where a tradeoff of guaranteed performance for capacity is acceptable. Therefore, in an overcommitted environment, it is important to properly configure your worker nodes to provide the best system behavior. With that in mind, let’s find out what needs to be enabled on the worker nodes in an overcommitted environment.
Prerequisites for the overcommitted worker nodes: 
The following prerequisites flow chart describes all the checks that should be performed on the worker nodes. Let’s go into the details one by one.

 1. Is the worker node ready for overcommitment? 
In OpenShift Container Platform, overcommitment is enabled by default, but it is always advisable to cross-check. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1, overriding the default operating system setting.
OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting vm.panic_on_oom parameter to 0. A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority.
You can view the current setting by running the following commands on your nodes:
$ oc debug node/<worker node>
Starting pod/<worker node>-debug …
If you don’t see a command prompt, try pressing enter.
sh-4.2# sysctl -a |grep commit
vm.overcommit_memory = 1
sh-4.2# sysctl -a |grep panic
vm.panic_on_oom = 0

If your worker node settings differ from the expected values, you can set them via the Machine Config Operator on RHCOS nodes, or on RHEL nodes with the command below.
$ sysctl -w vm.overcommit_memory=1
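For RHCOS workers managed by the Machine Config Operator, a minimal sketch of such a MachineConfig might look like the following; the object name and file path are illustrative assumptions, and the Ignition version should match your cluster release.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-overcommit-sysctls
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        # contents is the URL-encoded form of:
        #   vm.overcommit_memory=1
        #   vm.panic_on_oom=0
        - contents:
            source: data:,vm.overcommit_memory%3D1%0Avm.panic_on_oom%3D0%0A
          filesystem: root
          mode: 0644
          path: /etc/sysctl.d/10-overcommit.conf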
 2. Is the worker node enforcing CPU limits using CPU CFS quotas?
The Completely Fair Scheduler (CFS) is a process scheduler which was merged into the Linux Kernel 2.6.23 release (October 2007) and is the default scheduler. It handles CPU resource allocation for executing processes, and aims to maximize overall CPU utilization while also maximizing interactive performance.
By default, the kubelet uses CFS quota to enforce pod CPU limits. For example, when a user sets a CPU limit of 100 millicores for a pod, Kubernetes (via the kubelet on the node) specifies a CFS quota for CPU on the pod’s processes. The pod’s processes get throttled if they try to use more than the CPU limit.
When the node runs many CPU-bound pods, the workload can move to different CPU cores depending on whether the pod is throttled and which CPU cores are available at scheduling time.  Many workloads are not sensitive to this migration and thus work fine without any intervention.
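For reference, the snippet below shows how this kubelet flag appears in the older node-config style kubeletArguments; a KubeletConfig-based sketch for OpenShift 4 follows it.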
kubeletArguments:
 cpu-cfs-quota:
   - "true"
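On OpenShift 4, kubelet settings like this are typically managed through a KubeletConfig custom resource instead. Here is a minimal sketch, reusing the custom-kubelet: small-pods label introduced later in this post; the field name follows the upstream KubeletConfiguration, and true is already the default, so this is purely illustrative.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: enforce-cpu-cfs-quota
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    # Enforce pod CPU limits via CFS quota (the default behavior)
    cpuCFSQuota: true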
 3. Are enough resources reserved for system and kube processes per node?
To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by all underlying node components (such as kubelet, kube-proxy) and the remaining system components (such as sshd, NetworkManager) on the host.
CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings:

kube-reserved
Resources reserved for node components. Default is none.

system-reserved
Resources reserved for the remaining system components. Default is none.

 
If a flag is not set, it defaults to 0. If none of the flags are set, the allocated resource is set to the node’s capacity as it was before the introduction of allocatable resources.
The table below summarizes the recommended resources to be reserved per worker node, based on OpenShift version 4.1. Also note that this does not include the resources required to run any third-party CNI plugin, its operator, and so on.

You can set the reserved resources with the help of a MachineConfigPool and a KubeletConfig custom resource (CR), as shown in the example below.
Find the correct MachineConfigPool for your worker nodes and label it if that has not been done already.
$ oc describe machineconfigpool worker

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
 creationTimestamp: 2019-02-08T14:52:39Z
 generation: 1
 labels:
   custom-kubelet: small-pods

$ oc label machineconfigpool worker custom-kubelet=small-pods
Create a KubeletConfig as shown below and set the desired resources for system and kube processes.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
 name: set-allocatable 
spec:
 machineConfigPoolSelector:
   matchLabels:
     custom-kubelet: small-pods 
 kubeletConfig:
   systemReserved:
     cpu: 500m
     memory: 512Mi
   kubeReserved:
     cpu: 500m
     memory: 512Mi

 4. Is swap memory disabled on the worker node?
By default, OpenShift disables swap partitions on the node. A good practice in Kubernetes clusters is to disable swap on the cluster nodes in order to preserve quality of service (QoS) guarantees. Otherwise, physical resources on a node can be oversubscribed, affecting the resource guarantees the Kubernetes scheduler makes during pod placement.
For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed.
Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure, and in pods not receiving the memory they asked for in their scheduling requests. As a result, additional pods are placed on the node, further increasing memory pressure and ultimately increasing your risk of experiencing a system out of memory (OOM) event.
 5. Is QoS defined?
In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resources than are available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) class.
For each compute resource, a container is assigned to one of three QoS classes, listed here in decreasing order of priority.

Priority
Class
Name
Description


Guaranteed
If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed.

2
Burstable
If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable.


BestEffort
If requests and limits are not set for any of the resources, then the container is classified as BestEffort.

 
A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Numbers larger than one billion are reserved for critical pods that should not be preempted or evicted. For critical pods, two classes are defined. For example:

system-node-critical – This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node.
system-cluster-critical – This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances.
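
For comparison, below is a minimal sketch of a custom, non-critical PriorityClass; the name, value, and description are illustrative assumptions, and on older clusters the API group may be scheduling.k8s.io/v1beta1.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000            # must stay at or below one billion for non-critical workloads
globalDefault: false
description: "For pods that should be scheduled ahead of normal workloads."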

You can also use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources so that pods from lower QoS classes cannot use resources requested by pods in higher QoS classes. For example, a value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class, i.e. Guaranteed. Similarly, a value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to the requested memory.
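A minimal sketch of applying qos-reserved on OpenShift 4 through a KubeletConfig might look like this; the qosReserved field follows the upstream KubeletConfiguration and may require the alpha QOSReserved feature gate, depending on your version.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: qos-reserved-memory
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    # Reserve 100% of the memory requested by higher QoS classes so that
    # Burstable and BestEffort pods cannot consume it
    qosReserved:
      memory: "100%"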
 
Mechanisms to control the resources on the overcommitted worker nodes:
After executing the prerequisites on the worker nodes and cluster, it’s time to see what mechanisms are available on the Kubernetes side to control resources such as CPU, memory, ephemeral storage, and ingress and egress traffic.

 1. Limit Ranges:

A limit range, defined by a LimitRange object, enumerates compute resource constraints in a project at the pod, container, image, image stream, and persistent volume claim level, and specifies the amount of resources that a pod, container, image, image stream, or persistent volume claim can consume. All resource creation and modification requests are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, then the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, then the default value is applied to the resource.
Below is an example of a limit range definition.
apiVersion: "v1"
kind: "LimitRange"
metadata:
 name: "core-resource-limits"
spec:
 limits:
   - type: "Pod"
     max:
       cpu: "2"
       memory: "1Gi"
     min:
       cpu: "200m"
       memory: "6Mi"
   - type: "Container"
     max:
       cpu: "2"
       memory: "1Gi"
     min:
       cpu: "100m"
       memory: "4Mi"
     default:
       cpu: "300m"
       memory: "200Mi"
     defaultRequest:
       cpu: "200m"
       memory: "100Mi"
     maxLimitRequestRatio:
       cpu: "10"

 2. CPU Requests:
Each container in a pod can specify the amount of CPU it requests on a node. The scheduler uses CPU requests to find a node with an appropriate fit for a container. The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use. On the node, CPU requests map to kernel CFS shares to enforce this behavior.
 3. CPU Limits:
Each container in a pod can specify the amount of CPU it is limited to use on a node. CPU limits control the maximum amount of CPU that your container may use independent of contention on the node. If a container attempts to exceed the specified limit, the system will throttle the container. This allows the container to have a consistent level of service independent of the number of pods scheduled to the node.
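For illustration, a minimal pod spec that sets both a CPU request and a CPU limit might look like the following; the pod name and values are assumptions for the example.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
    - name: hello-openshift
      image: openshift/hello-openshift
      resources:
        requests:
          cpu: 200m      # used by the scheduler for placement; maps to CFS shares
        limits:
          cpu: "1"       # hard ceiling enforced by CFS quota (throttling)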
 4. Memory Requests:
By default, a container is able to consume as much memory on the node as possible. In order to improve placement of pods in the cluster, specify the amount of memory required for a container to run. The scheduler will then take available node memory capacity into account prior to binding your pod to a node. A container is still able to consume as much memory on the node as possible even when specifying a request.
 5. Memory Limits:
If you specify a memory limit, you can constrain the amount of memory the container can use. For example, if you specify a limit of 200Mi, a container will be limited to using that amount of memory on the node. If the container exceeds the specified memory limit, it will be terminated and potentially restarted dependent upon the container restart policy.
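A similar sketch for memory, with illustrative values:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
    - name: hello-openshift
      image: openshift/hello-openshift
      resources:
        requests:
          memory: 100Mi   # taken into account by the scheduler at placement time
        limits:
          memory: 200Mi   # exceeding this terminates the container (restart per policy)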
 6. Ephemeral Storage Requests:
By default, a container is able to consume as much local ephemeral storage on the node as is available. In order to improve placement of pods in the cluster, specify the amount of required local ephemeral storage for a container to run. The scheduler will then take available node local storage capacity into account prior to binding your pod to a node. A container is still able to consume as much local ephemeral storage on the node as possible even when specifying a request.
 7. Ephemeral Storage Limits:
If you specify an ephemeral storage limit, you can constrain the amount of ephemeral storage the container can use. For example, if you specify a limit of 2Gi, a container will be limited to using that amount of ephemeral storage on the node. If the container exceeds the specified ephemeral storage limit, the pod will be evicted from the node.
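And a corresponding sketch for local ephemeral storage, again with illustrative values:
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo
spec:
  containers:
    - name: hello-openshift
      image: openshift/hello-openshift
      resources:
        requests:
          ephemeral-storage: 1Gi   # considered by the scheduler at placement time
        limits:
          ephemeral-storage: 2Gi   # exceeding this leads to pod eviction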
 8. Pods per Core:
The podsPerCore parameter limits the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40.
 9. Max Pods per node:
The maxPods parameter limits the number of pods the node can run to a fixed value, regardless of the properties of the node. Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods. If you use both options, the lower of the two limits the number of pods on a node. In order to configure these parameters, label the MachineConfigPool.
$ oc label machineconfigpool worker custom-kubelet=small-pods

$ oc describe machineconfigpool worker

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
 creationTimestamp: 2019-02-08T14:52:39Z
 generation: 1
 labels:
   custom-kubelet: small-pods
Create the KubeletConfig (CR) as shown below.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
 name: set-max-pods 
spec:
 machineConfigPoolSelector:
   matchLabels:
     custom-kubelet: small-pods 
 kubeletConfig:
   podsPerCore: 10 
   maxPods: 250

 10. Limiting the bandwidth available to the Pods:
You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets. The limits you place on a pod do not affect the bandwidth of other pods. To limit the bandwidth of a pod, specify the data traffic rate using the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations, as shown below.
{
    "kind": "Pod",
    "spec": {
        "containers": [
            {
                "image": "openshift/hello-openshift",
                "name": "hello-openshift"
            }
        ]
    },
    "apiVersion": "v1",
    "metadata": {
        "name": "iperf-slow",
        "annotations": {
            "kubernetes.io/ingress-bandwidth": "10M",
            "kubernetes.io/egress-bandwidth": "10M"
        }
    }
}
Conclusion:
As you can see from this post, there are about 10 mechanisms available on the Kubernetes side that can be used very effectively to control resources on worker nodes in an overcommitted state, provided the prerequisites are applied in the first place. Precisely which mechanism to use depends entirely on the end user and the use case he or she is trying to solve.
The post How to Handle OpenShift Worker Nodes Resources in Overcommitted State appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Startup helps food companies reduce risk and maintenance costs with IBM Cloud solution

EcoPlant is helping food and beverage companies significantly improve energy use, optimize maintenance and save money. Our software as a service (SaaS) solution continually monitors and optimizes compressed air systems in near real time to help food and beverage makers, as well as companies in other industries, maintain and manage air compression systems.
Air compression systems are vital to the food and beverage industry. The systems are used every day to manufacture, shape, package and process food and beverage products. Air compression systems are also used to help clean manufacturing equipment.
The challenge for food and beverage makers is keeping air compression systems and their multiple sub-systems well maintained and running at optimal efficiency. Systems that run inefficiently can cost businesses millions in wasted energy and emit tons of carbon dioxide (CO2) into the atmosphere. They can also put food safety at risk. A single filter leak, for instance, can introduce a host of contaminants and microorganisms into food containers.
Our EcoPlant platform, a smart monitoring and control system solution built and powered by IBM Cloud technologies, offers a solution.
Accelerating platform deployment with IBM
As a young startup, we wanted to develop our platform quickly, but it was important to keep infrastructure costs down. We didn’t want to have to set up, configure and maintain our own servers. We wanted to focus on writing code and building our proactive engine logic and AI algorithms. We also needed advanced capabilities to aggregate and analyze the data we were capturing from compressors, along with security features and scalability.
We found all of this, and more, in the IBM Alpha Zone accelerator. During the 20-week program, we talked to IBM experts and received technical training and support. We also had access to IBM infrastructure, like IBM Cloud Functions, a functions as a service (FaaS) programming platform and a service of IBM Cloud. Using the built-in platform capabilities, like events and periodic execution, we built our advanced analytics engine. Best of all, we only paid for the time we used, not a penny more.
Some very talented software architects helped us develop the platform the right way. For instance, we chose the IBM Watson IoT Platform to process, secure and analyze our customers’ air compression systems data. And because the Watson IoT Platform is a service of the IBM Cloud, we also get the scalability and security capabilities we need as our business grows. Plus, telling customers we use Watson technology gives us credibility.
Bringing predictive maintenance to air compression systems
Our platform collects data from air compression systems in near real time using strategically placed sensors and smart devices called EcoBoxes. The EcoBoxes send the data to the Watson IoT Platform where it’s analyzed by the predictive, AI-powered algorithms of our advanced analytics engine. If it detects a problem with the air compression system, like a leak in a filter, it sends an alert to the operations manager so he or she can address the problem proactively.
But what’s unique about our predictive maintenance solution is that it can also dynamically control the air compression systems. So, when it detects a leak in a filter, for instance, it sends the operations manager a suggested plan to fix it, such as closing a problematic valve or compressor. If the manager agrees, the platform sends the plan to the EcoBox, which then runs it and closes the valve.
Improving facility maintenance is win-win for business and environment
Today, we have customers throughout Europe and we’re rapidly expanding into the US market from our Minnesota office.
Through predictive maintenance and by optimizing the efficiency of air compression systems, we’re helping the food and beverage industry prevent contamination. We’re also helping companies reduce energy consumption, energy waste and costs.
For instance, a global food and beverages provider in Israel cut its energy consumption by roughly 25 percent. By reducing energy use it saved a total of USD 85,000 in less than five months, and USD 170,000 annually. The plant also reduced its annual CO2 emissions by nearly 700 tons by using our platform.
Even hospitals and commercial buildings can realize these benefits by applying the technology to pumps and chillers. In fact, on average, industrial plants can realize up to 50 percent in energy savings.
It’s a win-win for businesses and the environment alike.
Learn more about the EcoPlant solution.
 
The post Startup helps food companies reduce risk and maintenance costs with IBM Cloud solution appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Extending the power of Azure AI to business users

Today, Alysa Taylor, Corporate Vice President of Business Applications and Industry, announced several new AI-driven insights applications for Microsoft Dynamics 365.

Powered by Azure AI, these tightly integrated AI capabilities will empower every employee in an organization to make AI real for their business today. Millions of developers and data scientists around the world are already using Azure AI to build innovative applications and machine learning models for their organizations. Now business users will also be able to directly harness the power of Azure AI in their line of business applications.

What is Azure AI?

Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What I find particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Xbox, HoloLens, Bing, and Office 365.

Azure AI helps organizations:

Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning.
Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service.
Build knowledge-mining solutions to make better use of untapped information in their content and documents using Azure Search.

Bringing the power of AI to Dynamics 365 and the Power Platform

The release of the new Dynamics 365 insights apps, powered by Azure AI, will enable Dynamics 365 users to apply AI in their line of business workflows. Specifically, they benefit from the following built-in Azure AI services:

Azure Machine Learning which powers personalized customer recommendations in Dynamics 365 Customer Insights, analyzes product telemetry in Dynamics 365 Product Insights, and predicts potential failures in business-critical equipment in Dynamics 365 Supply Chain Management.
Azure Cognitive Services and Azure Bot Service that enable natural interactions with customers across multiple touchpoints with Dynamics 365 Virtual Agent for Customer Service.
Azure Search which allows users to quickly find critical information in records such as accounts, contacts, and even in documents and attachments such as invoices and faxes in all Dynamics 365 insights apps.

Furthermore, since Dynamics 365 insights apps are built on top of Azure AI, business users can now work with their development teams using Azure AI to add custom AI capabilities to their Dynamics 365 apps.

The Power Platform, comprised of three services – Power BI, PowerApps, and Microsoft Flow, also benefits from Azure AI innovations. While each of these services is best-of-breed individually, their combination as the Power Platform is a game-changer for our customers.

Azure AI enables Power Platform users to uncover insights, develop AI applications, and automate workflows through low-code, point-and-click experiences. Azure Cognitive Services and Azure Machine Learning empower Power Platform users to:

Extract key phrases in documents, detect sentiment in content such as customer reviews, and build custom machine learning models in Power BI.
Build custom AI applications that can predict customer churn, automatically route customer requests, and simplify inventory management through advanced image processing with PowerApps.
Automate tedious tasks such as invoice processing with Microsoft Flow.

The tight integration between Azure AI, Dynamics 365, and the Power Platform will enable business users to collaborate effortlessly with data scientists and developers on a common AI platform that not only has industry leading AI capabilities but is also built on a strong foundation of trust. Microsoft is the only company that is truly democratizing AI for businesses today.

And we’re just getting started. You can expect even deeper integration and more great apps and experiences that are built on Azure AI as we continue this journey.

We’re excited to bring those to market and eager to tell you all about them!
Quelle: Azure