IBM and Volkswagen team for urban Mobility Advisor app

Traveling across town is getting easier.
SEAT, a Spain-based member of the Volkswagen Group, and IBM are collaborating to develop Mobility Advisor, an app that uses Watson artificial intelligence (AI) to help city residents more effectively navigate city congestion and make smarter transportation decisions.
“With its advanced cloud and AI technologies, IBM is helping us to innovate new approaches to mobility that will transform our business strategy while improving the lives of people living in urban areas,” said Jordi Caus, SEAT’s head of new urban mobility concepts.
The announcement came this week at Mobile World Congress, just a day after SEAT unveiled its new concept car, Minimó.
Through its connection to the IBM Cloud, the Mobility Advisor app will incorporate user preferences and dynamically adapt to changing weather forecasts, traffic reports and ongoing events to help people decide which mode of transit — cars, bicycles, scooters, public transportation — is best for their crosstown trips.
The app is under development and will run as a mobile app on 4G/5G networks using a conversational interface.
Learn more about the Mobility Advisor app from SEAT and IBM in the full story at Bloomberg.
The post IBM and Volkswagen team for urban Mobility Advisor app appeared first on Cloud computing news.
Source: Thoughts on Cloud

Getting started with the Couchbase Autonomous Operator in Red Hat OpenShift 3.11

This is a guest post from Couchbase’s Sindhura Palakodety, Senior Technical Support Engineer. Couchbase is the first NoSQL vendor to have a generally available, production-certified operator for the Red Hat OpenShift Container Platform. The Couchbase Autonomous Operator enables enterprises to more quickly adopt the Couchbase Engagement Database in production to create and modernize their applications for […]
The post Getting started with the Couchbase Autonomous Operator in Red Hat OpenShift 3.11 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Introduction to YAML: Deployments and other Kubernetes Objects Q&A

The post Introduction to YAML: Deployments and other Kubernetes Objects Q&A appeared first on Mirantis | Pure Play Open Cloud.
Recently we presented a webinar, Introduction to YAML: Deployments and other Kubernetes Objects. Here’s a look at some of the Q&As we covered (and some that we didn’t have time for).
Q: What tool compiles a YAML file?
A: YAML files are meant to be human-readable text, but they can be compiled into objects if necessary. For example, SaltStack compiles YAML into objects. In the case of Kubernetes, if YAML needs to be compiled it would likely be the Go code behind the component that’s utilizing it.
Q: In context to the YAML structure, can we write labels as labels: [app: nginx type: webserver]?
A: Almost — a mapping in flow style uses braces rather than brackets, so it would be labels: {app: nginx, type: webserver}. But yes, you can mix YAML block style with JSON-like flow style.
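As a quick illustration, these two hypothetical metadata fragments parse to exactly the same object:

```yaml
# Block style (indentation-based)
metadata:
  labels:
    app: nginx
    type: webserver
---
# Flow style (JSON-like); note that a mapping uses braces, not brackets
metadata:
  labels: {app: nginx, type: webserver}
```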
Q: Why do you have to provide an image name? If the container is up, can we just specify the name?
A: Kubernetes is based on Infrastructure as code, so you’re describing what you want to happen and letting Kubernetes figure out how to get there; if you were to reference the container itself, it would break that paradigm.
Q: What are all the possible Kinds, such as DaemonSet or CronJob?
A: Rather than me just listing them, it’s probably best that you check out the Kubernetes reference documentation.
Q: Can we use just JSON notation for creating Kubernetes objects, and not YAML?
A: You can; any JSON object is a valid YAML object.
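For example, this hypothetical Namespace manifest is plain JSON, and because JSON is a subset of YAML, kubectl will accept it unchanged:

```yaml
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {"name": "demo"}
}
```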
Q: Does “targetPort:” mean destination port?
A: OK, so I get this mixed up all the time. The port is the port of the request, and the targetPort is the port on the destination container.
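A hypothetical Service makes the distinction concrete: clients send requests to port, and the Service delivers them to targetPort on the selected containers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical example name
spec:
  selector:
    app: nginx
  ports:
    - port: 80         # the port clients send requests to
      targetPort: 8080 # the port on the destination container
```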
Q: What does Ingress mean in this context?
A: Ingress routes requests from outside the cluster to cluster services. You can then create rules that route traffic and create load balancing, name-based virtual hosting, and so on.
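As a sketch — using the extensions/v1beta1 schema current at the time and hypothetical names — an Ingress rule routing an external hostname to a cluster Service might look like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress           # hypothetical name
spec:
  rules:
    - host: app.example.com   # requests for this host...
      http:
        paths:
          - path: /
            backend:
              serviceName: web   # ...go to this Service
              servicePort: 80
```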
Q: Can we debug YAML to verify correct values before pushing to kubernetes?
A: Yes. In addition to using a YAML linter (such as http://www.yamllint.com/) to check the formatting and indentation, you can use the --dry-run parameter to “run” the command and see what would happen without actually persisting the changes.
Q: Can you please talk more about nested -s (dashes)?
A: I’m guessing that you’re asking about lists of lists. The answer is that you can have as many levels as you need, and you can have lists of lists, lists of maps, maps of lists, and maps of maps. The important thing is to make sure that you have the indentation straight, since YAML uses solely that to determine the structure.
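A small sketch of each combination, where indentation alone defines the nesting:

```yaml
matrix:              # a list of lists
  - [1, 2]
  - [3, 4]
containers:          # a list of maps
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
limits:              # a map of maps
  cpu:
    max: "2"
  memory:
    max: 1Gi
ports:               # a map of lists
  tcp: [80, 443]
```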
Q: In the volume section you mentioned awsstorage but not PersistentVolumeClaim. Can it work like this?
A: A volume has a certain type, which in this example was awsstorage. Once you’ve created a PersistentVolume, you can make a separate PersistentVolumeClaim to get access to that volume.
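A sketch of that two-step relationship, with hypothetical names and a hostPath volume for simplicity:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  hostPath:
    path: /tmp/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
```

A Pod then mounts the claim (pvc-demo) in a persistentVolumeClaim volume, rather than referencing the PersistentVolume directly.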
Q: Possibly, not for this session, but will be good if you give a brief overview of what Ingress and Egress are?
A: Strictly speaking, “Ingress” refers to requests coming into the cluster, and “Egress” refers to traffic flowing out of the cluster (as in “This way to the Egress”: http://www.ptbarnum.org/egress.html).
Q: Did you mean that you can have two network interfaces on the same container?
A: You can indeed have multiple network interfaces on the same container, but you may need a CNI plugin. For example, Virtlet uses CNI-Genie to provide not just multiple interfaces, but potentially multiple interfaces of different plugin types.
Q: Is there any object for multicluster kubernetes management?
A: Kubernetes objects are generally meant to be used within a single cluster, so in general multicluster management is handled outside of the cluster or through the application. A possible exception would be the Federation-related objects, which enable identities to be shared throughout the federated clusters.
Q: What will actually happen when you try to overwrite the value in this example:
backend: &stdbe
  serviceName: test
  servicePort: 80
- path: /realpath
  backend: *stdbe
    servicePort: 443
A: Written like this you’ll get an error, because an alias can’t simply be extended with extra keys — though there is a corrected form that does work.
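One way the correction typically looks — a sketch using a YAML merge key (<<) to pull in the anchored map and then override a single field; this is my reconstruction and may not match the original webinar’s code exactly:

```yaml
backend: &stdbe
  serviceName: test
  servicePort: 80
paths:
  - path: /realpath
    backend:
      <<: *stdbe          # merge in the anchored mapping...
      servicePort: 443    # ...then override just this key
```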
Q: What happens when I have a PersistentVolume of one size (let’s say 10G) and I have PersistentVolumeClaim of a different size (let’s say 2G)?
A: You’ll get a claim of 10G, because there’s a 1:1 relationship between a PersistentVolume and a PersistentVolumeClaim, meaning that you can’t have separate areas of the volume staked out by different claims. That said, multiple pods can reference the same PVC in order to use the same (full) volume.
Q: What is really the concept behind using running a container inside a Pod, and what is the key point behind Pod images?
A: A Pod is the smallest unit that can be managed by Kubernetes, basically providing a layer of abstraction above the container. A Pod can have a single container, which yes, makes it seem redundant, but it can also contain multiple containers, all of which are managed as a single chunk, which is where it begins to make more sense. As such, there’s no such thing as a “Pod image”; it’s a container image instantiated by the Pod.
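For instance, a hypothetical Pod with a main container and a sidecar, both scheduled, started, and deleted together as one unit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar   # hypothetical name
spec:
  containers:
    - name: web            # main application container
      image: nginx
    - name: sidecar        # runs alongside it on the same node
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
```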
Q: Do you have any other information for Kubernetes that would be useful to Solution Architects, more of an inch deep but mile wide approach?
A: Stay tuned.

OpenShift Commons Briefing: State of Open Source Security Report Review with Liran Tal (Snyk)

OpenShift Commons Briefing Summary In this briefing, Snyk’s Liran Tal shows the results of his company’s State of Open Source Security 2019 Report. Liran explains each step of the process, from development, to testing, to deployment, and follows the chains of responsibility across those domains. Who is responsible for the security of container images? […]
The post OpenShift Commons Briefing: State of Open Source Security Report Review with Liran Tal (Snyk) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Partner Reference Architectures

Red Hat’s Partners play a key role in developing customer relationships, understanding customer needs, and providing comprehensive joint solutions. As customers use Red Hat technologies to help solve increasingly complex business issues, partners provide reliable guidance, technical information, and even engineered integrations to assist customers in making sound technology decisions. For this post, the focus […]
The post OpenShift Partner Reference Architectures appeared first on Red Hat OpenShift Blog.
Source: OpenShift

How to remove the top 3 barriers to AI adoption in business automation

Many businesses are looking to automation to increase productivity, save costs and improve customer and employee experiences.
For example, imagine a large insurance company that processes millions of claims a year. Around 60 percent of its claims are automatically processed, but it’d like to automate 85 percent of claims and reduce error rates, reduce costs and increase topline revenue. To make these improvements, the company is considering artificial intelligence (AI) to extend its automation capabilities.
While AI technology has the potential to make automation truly intelligent, there are barriers to adopting AI across operations that could limit early success. Three barriers we see most often are:

Business people don’t know how and where AI can be best applied to their problems.
AI algorithms are often disconnected from daily business operations.
AI is difficult for business people to trust, control and monitor.

Introducing IBM Business Automation Intelligence with Watson
To help eliminate these barriers, IBM is designing a learning system to help business managers improve productivity and customer experiences using AI in their daily business operations. IBM Business Automation Intelligence with Watson is an automation capability for creating, managing and governing AI across the enterprise and applying it to operations using Watson. It will be able to access and act on the operational data generated by the IBM Automation Platform for Digital Business.
With Business Automation Intelligence, business leaders will be able to automate work from the mundane to the complex while measuring the impact of AI on business outcomes. Users will be able to apply AI to existing apps to capture the necessary data; run analytics at scale; and deliver continuous, AI-enabled operational improvements.
Overcoming AI barriers
Here’s how Business Automation Intelligence could address each barrier to AI adoption using the hypothetical insurance company example above:
1. Business people don’t know how and where AI can be best applied to their problems.
Business Automation Intelligence will enable business users to identify opportunities for automation by seeing where automation agents (bots that handle specific tasks or functions, with or without intelligence) could potentially have the most impact. Business Automation Intelligence will provide built-in analysis using process mining to find hotspots for automation.
For example, imagine Lisa, an employee at the insurance company. As the business owner of the claims processing system, she uses Business Automation Intelligence to analyze the claims processing operational data and finds her employees are spending a lot of time extracting information from claims documents and entering it into their claims processing system. Based on this data, she could prioritize automating this part of the workflow.
2. AI algorithms are often disconnected from daily business operations.
Business Automation Intelligence is designed to enable you to apply AI at scale to a wide range of styles of work, from the mundane clerical to complex knowledge work. Our goal is to help clients move past one-off AI experiments and use Business Automation Intelligence to methodically discover, create, manage, govern and apply AI to automated business operations across the enterprise, delivering continuous, AI-enabled operational improvements. It will do this with built-in connectivity to the IBM Automation Platform for Digital Business, as well as with several Watson capabilities.
For example, Lisa’s knowledge workers must analyze every claim that isn’t automatically processed and then manually route it to the right claims processor based on complexity. With Business Automation Intelligence, built-in machine learning evaluates the complexity of the claim and automatically routes it to the claims specialist with the appropriate level of experience and expertise.
3. AI is too hard for business people to trust, control and monitor.
Business Automation Intelligence will include work guardrails and performance monitoring so business leaders can control and manage the digital workforce initiatives based on business outcomes. Guardrails will use natural-language rules to define and control the conditions under which the automation operates. To monitor the performance of automation agents, prebuilt dashboards will be included with KPIs that the user defines.
For example, some insurance claims require specialized handling. Lisa sets up guardrails in Business Automation Intelligence that define the types of claims that get immediately routed to a specialist instead of being processed automatically. This helps as the company handles specific compliance situations. When these guardrails are embedded alongside the AI algorithm, claims can be managed more comprehensively, ensuring the AI tech is applied consistently.
Learn more about what Business Automation Intelligence can do, or request an invitation to the early access program.
The post How to remove the top 3 barriers to AI adoption in business automation appeared first on Cloud computing news.
Source: Thoughts on Cloud

SPF Private Clients uses AI virtual assistant to support first-time home buyers

In 2013, the UK government launched the Help to Buy initiative to support people struggling to own property.
It brought in an influx of mortgage applications to brokers such as SPF Private Clients. In response, we at SPF teamed with EscalateAI to create a portal and AI-powered virtual assistant based on IBM Cloud and Watson solutions to speed up qualification.
Making the most of new opportunities
Help to Buy brought new business and the opportunity to help more people buy homes to SPF Private Clients. However, the complexity of the applications left us with a huge administrative workload. Brokers had to inspect lots of information from each applicant, so it would sometimes take two days to evaluate a single application.
SPF Private Clients wanted a way to simplify and speed up approvals, so we could qualify more people for Help to Buy. We began looking for a technology partner to automate the process.
Speeding up the move from applicant to customer
We engaged EscalateAI to design a solution based on IBM Watson technology to speed up qualification for Help to Buy applications. Watson was an obvious choice, designed for use cases just like ours; we didn’t have to adapt the technology to fit our needs or worry about scaling it. Watson also had extra features we could turn on in the future to augment the solution.
We developed an AI-powered virtual assistant named Ava based on IBM Watson Assistant to provide relevant information to clients. EscalateAI combined IBM Watson Tone Analyzer with its technology to create a hybrid chatbot. Ava can respond automatically to a range of client queries, but if the solution detects a low level of confidence or tone, it automatically brings in a mortgage advisor to take over the interaction.
EscalateAI created a portal for Help to Buy applicants based on IBM Cloud Foundry. In the portal, potential clients can submit information, upload documents and interact with Ava, even outside of office hours, enabling around-the-clock service. It securely holds and manages client information in IBM Cloud Object Storage with IBM Compose for MongoDB.
Once information has been submitted through the portal, we use algorithms to provide immediate feedback on applicants’ viability for a mortgage. Visitors can get a quick mortgage indication in three minutes, a decision in principle in 15 minutes and a tailored mortgage recommendation in 30 minutes.
The solution assigns a traffic light rating to each lead. Green indicates leads qualified by Ava for action, amber those needing further information and red applications that require attention from an SPF Private Clients specialist team.
Making customers feel at home
Today, SPF Private Clients can process more Help to Buy leads at greater speed. Best of all, we achieved these improvements without expanding our team. Leads come in prequalified with the supporting documentation verified and uploaded in one place, enabling brokers to focus on more urgent client requests and drive up conversion rates.
We’re also improving the experience for clients. Applicants can receive instant responses to frequently asked questions from Ava at any time of the day and quickly find out answers to simple and complex mortgage inquiries. This gives clients peace of mind in a time that can be quite stressful.
The speedy, efficient service enabled by our portal and Ava has given us a strong head start on our competitors, but we won’t stop here. Based on its success, we plan to extend the solution to other departments, such as commercial finance, remortgages, insurance, wealth management and short-term financing.
The IBM and EscalateAI solution is helping us transform mortgage qualification and the client experience. That’s a win-win scenario we’re keen to replicate elsewhere in the business.
Read the case study for more details.
The post SPF Private Clients uses AI virtual assistant to support first-time home buyers appeared first on Cloud computing news.
Source: Thoughts on Cloud

Fine-Grained Policy Enforcement in OpenShift with Open Policy Agent

At a high level, the Kubernetes control plane executes a relatively simple control loop: requests to the API are parsed by the Master API service and, if accepted, are stored in etcd. A set of controllers and custom operators watch changes in etcd and take actions to converge the current state to the desired state. Inside […]
The post Fine-Grained Policy Enforcement in OpenShift with Open Policy Agent appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Commons Briefing: State of the Operators with Daniel Messer (Red Hat)

OpenShift Commons Briefing Summary In this briefing, Red Hat’s Daniel Messer gives an in-depth look at the state of Kubernetes Operators. He also delves into the Operator Framework, SDK,  Lifecycle Manager and the Operator Hub. Access the slides from this briefing: State of the Operators – Commons Briefing 02-19-2019 Join the Community at the Upcoming OpenShift […]
The post OpenShift Commons Briefing: State of the Operators with Daniel Messer (Red Hat) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift Commons Briefing: OpenShift 4.0 Release Update with Ali Mobrem

OpenShift Commons Briefing Summary In this briefing, Red Hat’s Ali Mobrem gives an in-depth look at the release plans for OpenShift 4.0, as well as a general overview of what will be changing in this platform update release. He discussed the use of Operators to deliver cluster management and automation to OpenShift 4.0 and the […]
The post OpenShift Commons Briefing: OpenShift 4.0 Release Update with Ali Mobrem appeared first on Red Hat OpenShift Blog.
Source: OpenShift