New Dialogflow Mega Agent for Contact Center AI increases intents by 10 times to 20,000

Contact centers are one of the most important ways that businesses interact with customers. But consistently providing great customer interactions across the range of potential conversations presents a number of complex challenges, challenges that businesses are increasingly turning to artificial intelligence (AI) and the cloud to help solve.

We recently announced the general availability of Contact Center AI, with Virtual Agent and Agent Assist to help businesses consistently deliver great customer experiences. To help improve your customer interactions even further, today we're announcing significant updates to Dialogflow, the core technology of Contact Center AI, including increasing the number of intents available to your virtual agents by 10 times, to 20,000. Dialogflow is an industry-leading platform for building chatbots and interactive voice response (IVR) systems, powering contact centers globally with natural, rich conversational experiences. Increasing the number of intents means more training phrases, actions, parameters, and responses to help your Virtual Agent interact with customers and resolve their issues more efficiently.

In addition to added intents, we've made some other updates to Dialogflow to help you deliver the experience that your customers expect, while making it easier than ever to scale your Contact Center implementation. Here's an overview of the updates we're sharing today:

Dialogflow Mega Agent (Beta): Get better customer conversations with up to 20,000 intents
Dialogflow Agent Validation (GA): Identify agent design errors in real time for faster deployment and higher-quality agents
Dialogflow Versions and Environments (GA): Create multiple versions of agents and publish them to different environments
Dialogflow Webhook Management API (GA): Create and manage your webhooks more quickly and easily

Let's take a closer look at each feature.

Mega Agent: Answer 10x more customer questions
When your customer says or writes something, Dialogflow captures the request and matches it to the best intent in the agent, so more intents lead to better customer conversations. A regular Dialogflow agent comes with a limit of 2,000 intents, which, based on public information, is already the most available in the market. With Dialogflow Mega Agent, now in beta, you can combine multiple Dialogflow agents into a single agent and expand your intent limit by 10 times, to 20,000 intents. With increased intents, customers can have more natural, seamless conversations, pivot between intents and questions when they want, and get their questions answered. This greatly increases scale and your ability to tackle more use cases, so you can better serve your customers' needs and solve their problems.

Dialogflow Mega Agent also makes it easier for developers to create and manage their Dialogflow experience. If you have multiple teams building an agent, each team can now be responsible for one sub-agent, reducing change conflicts and creating better governance across teams.

Companies are already using Dialogflow Mega Agent to provide a more seamless and integrated customer experience: "At KLM we are building multiple (chat)bot services using Dialogflow," said Joost Oremus, Head of Social Technology at KLM Royal Dutch Airlines. "As travel is a complex product, making sure that our customers are guided towards the right agent (both human agents and multiple automated agents) can be challenging. Our first trial experience with Mega Agent shows promising results in solving this challenge for us."

Agent Validation: Better conversations lead to better customer experiences
Frustrating interactions with your contact center are a sure way to lose customers. Yet an internal study showed that 80% of Dialogflow agents had easy-to-fix quality issues. Dialogflow's Agent Validation helps eliminate these negative interactions by helping designers identify errors, create high-quality agents, and improve customer experiences.

It does this by highlighting quality issues in the Dialogflow agent design, such as overlapping training phrases, wrong entity annotations, and other issues, and giving developers real-time updates on problems that can be corrected. Reducing errors leads to faster bot deployment and, ultimately, higher-quality Dialogflow agents in production.

Contact Center AI is designed to make implementation as easy as possible. The following two features simplify the deployment stage even further, so your developers can spend their time testing and iterating on products.

Versions & Environments: Create, test, and deploy your agent, all in one place
Versions and Environments, now GA, lets you create multiple versions of your agent and publish them to a variety of custom environments, such as testing, development, staging, and production. This means that developers can now test different agent versions, track changes, and manage the entire deployment process in the Dialogflow agent itself.

Webhook Management API: Reduce webhook response time and save developer resources
With the Webhook Management API, you can now create and manage webhooks programmatically, making it easier for enterprises to fulfill their queries. As Dialogflow processes and fulfills millions of queries daily via webhook, this new API, which exposes functionality previously limited to the Dialogflow console, will help enterprises speed up their agent design process.

A great customer experience builds loyalty and leads to repeat business. With these updates to Dialogflow, we aim to make developing a great customer experience easier than ever before (Dialogflow pricing is available here). You can access all of these features today through the Dialogflow console or API, and they are all available for your Contact Center AI integrations.
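To make the intent-matching flow concrete for developers, below is a minimal sketch of detecting an intent from user text with the google-cloud-dialogflow Python client. The project and session IDs are hypothetical placeholders, and this shows the standard detect-intent call rather than any Mega Agent-specific setup.

# Minimal sketch: send user text to a Dialogflow agent and inspect the
# matched intent. Assumes the google-cloud-dialogflow package is installed
# and application-default credentials are configured; PROJECT_ID and
# SESSION_ID below are hypothetical placeholders.
from google.cloud import dialogflow

PROJECT_ID = "my-gcp-project"  # placeholder
SESSION_ID = "demo-session-1"  # placeholder; any unique conversation ID

def detect_intent(text: str, language_code: str = "en-US") -> None:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, SESSION_ID)

    # Wrap the raw user utterance in the request types the API expects.
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )

    result = response.query_result
    # The matched intent is what the new limit applies to: a Mega Agent can
    # draw on up to 20,000 intents when matching a request.
    print("Matched intent:", result.intent.display_name)
    print("Confidence:", result.intent_detection_confidence)
    print("Response:", result.fulfillment_text)

if __name__ == "__main__":
    detect_intent("I want to change my flight")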
Source: Google Cloud Platform

The OpenShift Troubleshooting Workshop

Illustrated by: Mary Shakshober
The first workshop in our Customer Empathy Workshop series was held October 28, 2019 during the AI/ML (Artificial Intelligence and Machine Learning) OpenShift Commons event in San Francisco. We collaborated with 5 Red Hat OpenShift customers for 2 hours on the topic of troubleshooting. We learned about the challenges faced by operations and development teams in the field and together brainstormed ways to reduce blockers and increase efficiency for users. 
The open source spirit was very much alive in this workshop. We came together with customers to work as a team so that we can better understand their unique challenges with troubleshooting. Here are some highlights from the experience.
What we learned
Customers participated in a set of hands-on activities mirroring the initial steps of the design thinking process: empathize, define, ideate.
They were able to discover key problems, connect with similar users, and impact future solutions.

Empathize
For the first activity, participants were asked, “What words come to mind when you think of troubleshooting in OpenShift?” Users had a chance to reflect on past experiences and provide others with a way to discover what they were thinking, seeing, feeling, and doing. Participants wrote down a variety of words such as “complex,” “overloaded,” “tough,” and “painful but good.”

Next, participants shared more about their experiences by thinking about the question, “What went wrong the last time you had to troubleshoot, and why was it a problem?” In this phase, they worked individually and wrote one answer per sticky note to describe the setbacks in their troubleshooting experiences. 
In small teams, we discussed the pain points and noted similarities between users by grouping the sticky notes into common buckets. Here are the common themes that emerged, along with paraphrased responses from customers noting why these are pain points: 

Installation challenges: “It’s hard to get started setting up new features.” 
Dependencies: “I have trouble tracing dependencies when they are not automatic. Can I resolve issues with a parent resource and expect the related resources to be updated as well?”
Logging and tracing: “It is difficult to differentiate which log is needed, and accessing the right logs to find what I’m looking for can be difficult.”
Root cause analysis: “I am struggling to obtain the original cause of an issue to know where to focus on a resolution.”
Vague errors: “Errors (or alerts) are not specific enough and often do not provide next steps or suggested actions.”
Steep learning curve: “There are lots of new users with a lack of knowledge on Kubernetes. It can be overwhelming, and we need help learning more through the UI.”
Autoscaling: “It is difficult to set up and especially complex for new users to know how to use cluster, machine, and pod autoscaling.”
Deployment and network issues: “I have issues where a service will not start due to a deployment, but it’s unclear why. Network policies, firewalls, and certificate security issues often crop up for my team.” 
Config changes: “I often have problems with pod or container configs. It’s also easy to get configuration drift with RBAC management.”

Define
After identifying common pain points, each group was asked to select one pain point and convert it into a problem statement. Here are the problem statements the teams created:

How might we make it easier and faster to access the right logs?
How might we improve our root cause analysis?
How might we manage, secure, and audit app/infra config changes?

Ideate
The ideation part of the workshop encouraged participants to start brainstorming possible solutions to the various challenges that had been shared. We used the "Yes, and" method to encourage participants to work together and build on the suggestions and ideas of others. Individuals offered solutions to the problem statement by shouting, "Yes, and," then explaining their idea.
Problem statement: How might we make it easier and faster to access the right logs?

Group similar errors and notifications together.
Include date ranges and error codes.
Make alerts customizable.
Filter errors based on user type and privileges.
Show a pop-up with cause and solution.
Provide links from notifications to logs and application logs.
Bring users to the right place in the logs.
Only surface relevant parts of the log for errors and warnings.
Pull docs into the log view. 
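Several of these ideas can be roughed out against the Kubernetes API today. As a hedged illustration (not an existing OpenShift feature), the following Python sketch pulls a pod's recent logs and surfaces only error and warning lines; the pod and namespace names are placeholders.

# Rough sketch of "only surface relevant parts of the log": fetch a pod's
# recent logs through the Kubernetes API and keep only error/warning lines.
# Assumes the `kubernetes` Python client and a valid kubeconfig; POD and
# NAMESPACE are placeholders.
from kubernetes import client, config

POD = "myapp-7c9d6d5b4-x2k8q"  # placeholder
NAMESPACE = "demo"             # placeholder

config.load_kube_config()
v1 = client.CoreV1Api()

# Fetch the last hour of logs for the pod.
logs = v1.read_namespaced_pod_log(name=POD, namespace=NAMESPACE, since_seconds=3600)

for line in logs.splitlines():
    # Naive relevance filter; a real tool would use log levels or structured logs.
    if any(tag in line.lower() for tag in ("error", "warn", "fatal")):
        print(line)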

Problem statement: How might we improve our root cause analysis?

Use machine learning to recommend a solution.
Always include the pod ID in the error messages.
Add a tool to correlate the logs with the error.
Show what has changed since last time (try to determine the cause).
Capture non-persistent state information during a crash.
When users do resolve issues, provide a way to add comments so that the next time the problem arises there is already a reference and knowledge base.
Automate the resolution.
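As a similarly hedged sketch of the "correlate the logs with the error" idea, the snippet below lists the Kubernetes events recorded for a specific pod (failed probes, restarts, OOM kills), which often point at a root cause faster than raw logs alone; the names are placeholders.

# Sketch: list the Kubernetes events attached to one pod as root-cause
# context, printing the pod identity alongside each event (per the ideas
# above). Requires the `kubernetes` client and a kubeconfig; POD and
# NAMESPACE are placeholders.
from kubernetes import client, config

POD = "myapp-7c9d6d5b4-x2k8q"  # placeholder
NAMESPACE = "demo"             # placeholder

config.load_kube_config()
v1 = client.CoreV1Api()

events = v1.list_namespaced_event(
    namespace=NAMESPACE,
    field_selector=f"involvedObject.name={POD}",
)
for ev in events.items:
    print(f"[{ev.type}] {POD}: {ev.reason} - {ev.message}")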

Problem statement: How might we manage, secure, and audit app/infra config changes?

Visualize changes through the GUI and CLI.
Roll back config state.
Have Git manage config changes.
Provide a comparison tool.
Show why people made changes, and allow comments. 
Secure configs with RBAC or add a security analysis tool.
Track config changes versus application versus environment.
Set up policy for the config changes.
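To make the "comparison tool" idea concrete, here is a small illustrative Python sketch that produces a unified diff between two exported versions of a resource's YAML, for example captured before and after a change or pulled from two Git revisions; the file names are placeholders.

# Illustrative config "comparison tool": unified diff between two exported
# versions of a resource definition. File names are placeholders.
import difflib
from pathlib import Path

old = Path("deployment-v1.yaml").read_text().splitlines(keepends=True)
new = Path("deployment-v2.yaml").read_text().splitlines(keepends=True)

diff = difflib.unified_diff(
    old, new, fromfile="deployment-v1.yaml", tofile="deployment-v2.yaml"
)
print("".join(diff), end="")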

To finish up, each group presented the problem and solutions to the room. Participants were given a set of stickers to vote for their top ideas. By taking part in the prioritization, they had an opportunity to impact the direction of the product and help the OpenShift product management team with the difficult job of prioritizing upcoming features.
Below are the highest-voted solutions.

What’s next
The ideas generated by customers at this troubleshooting workshop will help shape future designs for OpenShift. Using this foundation, the Red Hat user experience and product management teams will work through the next phases of the design thinking process to design, prototype, and test. 
Customer validation is equally as important during these phases, so we need your help. Sign up to be notified about research participation opportunities or provide feedback on your experience by filling out this brief survey.
Stay tuned for upcoming workshops! Future events will be posted on the OpenShift Commons calendar. If you have general feedback or questions, reach out to our team by email.
 
Source: OpenShift

Phasa-35: BAE Systems tests solar drone

The first launch after 20 months: the British companies BAE Systems and Prismatic have developed a drone that flies on solar power. It is intended for long-duration missions, for example as an airborne communications station. (Aviation, Technology)
Source: Golem