IBM Voice Gateway: New features revolutionize your cognitive call center

Improving customer support is a never-ending job. You must continually listen to your customers and pivot your business to adapt to their needs.
One way to improve customer service is to use cognitive virtual agents. The use of these agents has been growing in the online space for a few years. And with the arrival of the IBM Voice Gateway, you can bring those agents to your call centers.
We recently introduced IBM Voice Gateway, a cognitive call center solution that signals where technology is heading: cognitive services. Voice Gateway enhances your call center operations by connecting Watson services to act as a self-service agent handling calls instead of a live contact center agent. Voice Gateway also uses IBM Watson to assist contact center agents in real time. This is artificial intelligence in action, delivered through cognitive capabilities. Essentially, Voice Gateway and Watson services create a cognitive interactive voice response (IVR) system, improving customer support and helping reduce the strain on live agents during peak hours.
As part of our continuous delivery model, we’re constantly improving both the Watson services and IBM Voice Gateway’s capabilities. Our recent 1.0.0.2 release added the following capabilities:

Support for configuring a multi-tenant Voice Gateway environment, so that you can host multiple phone numbers and have them connect to different Watson services—all through the same Voice Gateway deployment
Enhancements to the Voice Gateway API, including action tags, which you can use to trigger a single action or a sequence of actions in the Voice Gateway from the conversation service
Support for Watson Virtual Agent. You can use this agent instead of the conversation service when creating self-service agents to provide automated service to customers. Watson Virtual Agent lets you get to market faster and learn more about how customers are using your cognitive agents
Additional resiliency through the ability to configure whether to disconnect calls when transfers fail, or to let the conversation dialog decide on next steps, such as routing the call to a new destination

For additional details, you can read about the latest features here.
Clients can expect broad benefits from Voice Gateway. From improved telephone-based customer service to lower costs and deployment flexibility, Voice Gateway with Watson services brings next-generation cognitive automation into your business.
Interested in learning more? Check out the demo. And if you’re ready to start integrating and building a revolutionary call center solution, contact us for details.
The post IBM Voice Gateway: New features revolutionize your cognitive call center appeared first on Cloud computing news.
Source: Thoughts on Cloud

The Weather Company teams with Twitter to stream eclipse across US

The last time a total solar eclipse crossed the United States was 8 June 1918. That makes the one that’s taking place Monday, 21 August, a once-in-a-lifetime occurrence. The Weather Channel’s digital properties and Twitter are going to make the most of it.
“This eclipse is a once-in-a-hundred-year event, and we’re going to party like it’s New Year’s Eve,” said Neil Katz, head of global content and editor-in-chief for The Weather Company, an IBM Business. “This eclipse is a celestial phenomenon and cultural moment that can’t be missed, and we couldn’t imagine a better partner than Twitter to celebrate this with.”
Starting at noon Eastern time, interactive, live coverage of “Chasing Eclipse 2017” will begin, spanning cities across the US, including Stanley, ID; Carbondale, IL; St. Joseph, MO; Alliance, NE; Hopkinsville, KY; McMinnville, OR; Belton, SC; Nashville, TN; and Casper, WY. The coverage will be available via Twitter, the Weather Channel app and weather.com.
In addition to high-resolution and aerial drone footage of the celestial event itself, the stream will also include live updates from viewing parties, interactive social segments and even live coverage of an eclipse-themed wedding.
The Weather Company uses IBM Cloud technology and the Watson Internet of Things (IoT) platform to deliver approximately 25 billion forecasts each day.
For more about The Weather Company and Twitter’s live stream, see the full press release.
 
The post The Weather Company teams with Twitter to stream eclipse across US appeared first on Cloud computing news.
Source: Thoughts on Cloud

Security Management Operations

This article positions key features employed by Red Hat CloudForms to secure the wealth of management operations it provides as a Cloud Management Platform (CMP).
 

CloudForms Providers
Red Hat CloudForms connects to providers: management endpoints that supply CloudForms with inventory, metrics, events and automation capabilities. Red Hat CloudForms transforms these provider capabilities into business-aligned Service Management, Compliance and Optimization.
 

 
When connecting to a provider, Red Hat CloudForms uses a set of credentials or token/key access.
 

Accessing CloudForms Providers
The level of access to a provider is determined by the use cases you wish to fulfill. This implies that access could be restricted to the minimum requirements, which is good security practice.
However, the features that CloudForms provides as a Cloud Management Platform typically require access well beyond that minimum threshold; in practice, an administrator account is typically used, though not strictly mandated, for most providers.
Each provider differs in the capabilities it offers for automation, metrics stored, inventory discovered or events collected. There is therefore no solution on the provider side other than to configure a custom set of privileges per provider platform and maintain it as Red Hat CloudForms covers new use cases. This is a significant undertaking, as follows:
Day 1: The service account is configured with least privilege for the only provider connected to CloudForms. All works OK.
Day 2: A new use case is to be covered by CloudForms; this requires updating the service account to include any new privileges needed to meet the new use case.
This process has to be repeated every time Red Hat CloudForms is given new use cases to cover in the environment.
Accessing a provider with a highly privileged account is of concern for the following reasons:

Extends the attack surface of the platform to encompass the management platform too.

This is a valid concern. Without the management platform, you could argue that the provider platform has fewer entry points. The issue with this argument is that the provider platform itself should be questioned on its ability to granularly apply role-based access rules to every operation performed from its tooling. This is where CloudForms adds security value to the provider platform, ensuring compliance and governance are adhered to, a capability not covered by any of the provider endpoints that Red Hat CloudForms integrates with.

The management platform has destructive capabilities that it can perform on a provider platform.

The capability to remove objects from a provider platform or add new ones to it forms a major role in most Cloud Management use cases.
Example – Quarantine virtual machines that are exposed to Heartbleed.
This use case requires change rights in a provider platform. It also addresses a primary security concern.
In both cases one has to balance the need for automation and compliance against security requirements. This is why, when addressing security requirements, it’s important to know that the answer is not always a simple yes or no. Often there is a need to defer a requirement to a dependent secondary service such as authentication, or to mitigate it through the use of a solution rather than a single feature.
 

Secure Operations Management
The balance proposed in this article is to give the management platform full rights to the provider platform, but in turn secure the management platform to meet the security requirements for connecting to the environment. After all, every environment should have a pattern for onboarding management services, as connecting to an endpoint is a base capability of a management platform.
We discuss the features that CloudForms employs under two headings: Deferring and Mitigating.
 

Deferring
The following features in CloudForms would be considered deferrable; by that we mean secondary services should meet the security requirement.
Red Hat CloudForms supports the following authentication solutions:

IPA/AD Trust Authentication – Active Directory (AD) Trust Authentication in Red Hat CloudForms is supported with External Authentication to IPA.

This provides IPA users access to the Appliance Administrative UI and the REST API with their AD credentials, via the Kerberos authentication protocol.

LDAP/LDAPS – This is the most common option and is native to CloudForms. LDAP/LDAPS allows for access to Microsoft Active Directory and other directories. The groups are fetched and matched in CloudForms, giving pass-through access to the CloudForms Role Based Access model and tenant spaces.
Single Sign-On – Implemented with Keycloak and using SAML v2.0. The SAML implementation has been tested with Keycloak but is implemented generically using Apache’s mod_auth_mellon module and should work with other SAML Identity Providers.

The current implementation only secures the Appliance’s Web administrative UI with SAML.

Two Factor Authentication – CloudForms external authentication provides 2-Factor Authentication via IPA. This provides IPA Users access to the Appliance Administrative UI and the REST API using their IPA Password followed by a One-Time-Password.

 
Apache Web Server
CloudForms uses the Apache web server to present the API and UIs. SSL encryption is fully supported for all access.
 
Red Hat Enterprise Linux
Red Hat CloudForms is available as a virtual appliance with a base operating system of Red Hat Enterprise Linux. Direct access to the operating system is governed as in any secure implementation of Red Hat Enterprise Linux.
 
Firewall
The firewall configuration for CloudForms by default is the minimum viable for full product function.
 
Security Technical Implementation Guide – (STIG)
CloudForms can be configured and locked down to be compliant with STIG requirements.
 
Security Content Automation Protocol – (SCAP)
CloudForms can be configured and locked down to be compliant against SCAP vulnerability checks.
 

Mitigating
We mitigate the use of administrator access to a provider platform because CloudForms can secure its user and API entry points against misuse or attack using the following features:
 
Auto Logout
Users are automatically logged out of Red Hat CloudForms after 5 minutes of inactivity. This is configurable.
 
Role Based Access
CloudForms employs Role Based Access Control. Within CloudForms, an administrator can define new custom roles or use out-of-the-box ones to finely control which areas of the user interface a role may access and at what level of access, such as view, modify, execute or delete.
The following is the role tree that defines the object areas within the CloudForms UI and the level of access.
 

 
The example shows the out of the box “Limited Self Service User” role having access to view Templates, VMs and VMs & Template accordions.
Another example could be the out of the box “Container Operator” role, that grants only access to view dashboards and topologies under container providers.
 

 
With Role Based Access you can define roles to meet the security requirements of the persona executing the use case. If you wish to allow access to Container Operators, do so using a role template similar to the above. This allows CloudForms to connect to the container provider platform using administrator rights, while the users accessing the same provider platform through CloudForms are restricted to their use case/persona. No user can perform a duty in the CloudForms UI unless their Role allows it; this is configured by a CloudForms Administrator as part of the implementation of the cloud management platform.
 
Tenancy
Red Hat CloudForms implements a tenancy model for defining your organization structure. This allows for group-level access to the resources available in the tenant space. No tenant can access another tenant’s resources without the rights to do so. Tenancy is configured by a CloudForms Administrator as part of the implementation of the cloud management platform.
The default tenant model out of the box is as follows:
 

 
The tenant “My Company” is allowed access to all resources that the group list shows in the graphic.
 

 
Shown is the tenant “Line of Business”, which has only the “Standard User” group assigned. This group has access only to resources defined by the CloudForms administrator. The group belongs to a Role as described in the previous section.
Tenants also support child tenants and projects, either of which inherits from the parent and can have its own group assignments. Tenants, child tenants and projects should accommodate most organizational structures.
 
Quota
You will want to protect your provider platforms from over-provisioning, so CloudForms introduces quota for the provider platforms it supports. Tenants, projects, groups or individual users can be assigned quota limits. For example:
 

 
The example shows that the “Line of Business” tenant has 100 virtual CPUs allocated. Currently nothing has been provisioned, but as new VMs are provisioned into a provider for users in this tenant, the “In Use” count will increase:
 

 
The graphic now shows that 50 virtual CPUs have been consumed out of the 100 allocated, so 50 remain.
Having quota protects the provider platforms from being over-provisioned. The provider platform administrators can adjust the quotas in CloudForms to meet environmental limits or business needs, such as project budget control.
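The arithmetic behind quota enforcement is straightforward. The following sketch uses the tenant values from the example above; the function names are ours for illustration, not part of the product:

```python
# Minimal sketch of per-tenant quota arithmetic (illustrative only, not
# CloudForms source code).

def remaining_quota(allocated: int, in_use: int) -> int:
    """Return how much of an allocated quota is still available."""
    return max(allocated - in_use, 0)

def request_allowed(allocated: int, in_use: int, requested: int) -> bool:
    """A provisioning request is allowed only if it fits in the remainder."""
    return requested <= remaining_quota(allocated, in_use)

# The "Line of Business" tenant: 100 vCPUs allocated, 50 in use.
print(remaining_quota(100, 50))      # 50 vCPUs remain
print(request_allowed(100, 50, 60))  # False: would exceed the allocation
```

A request for 60 more vCPUs would be rejected, protecting the underlying provider from being asked for resources beyond the tenant's budget.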
 
Reporting
CloudForms has extensive reporting capabilities. The reporting engine can access any of the event, inventory, metric or request history stored in the Virtual Management Database. Generated reports can be automatically emailed to alert users to issues or to the status of the provider, their virtual machines or CloudForms itself. For example, provider platform administrators could ask for a daily report detailing the users defined in CloudForms and the actions they performed.
 

 
The graphic shows the request history as a sample report that can be saved or emailed. This can alert provider platform administrators to issues with the platform or to user activity.
 
Dashboards
CloudForms users are presented dashboards of information pertaining to their resources in their tenant space.
The default out-of-the-box dashboard mixes user and operator information in a single view.
 

 
You may wish to restrict what users see; for example, there is little need for a user to view the capacity and utilization of the underlying hypervisor hosts.
 

 
The graphic shows a new dashboard with only the “Guest OS Information” and “Vendor and Guest OS Chart” widgets. The result is:
 

 
Having the ability to tailor dashboards to your business needs is also a security requirement, ensuring that the management platform presents the right level of information to varying user groups.
 
Smart State
CloudForms is unique as a Cloud Management Platform in that it can scan the internal file system of virtual machines and instances. The technology is called Smart State and can return Users, Groups, Processes, Packages, Applications, Registry Keys, Files and File Contents. The capability has varying support for both Windows and Linux file systems.
 

 
Users and groups identified can be used to see whether virtual machines, including CloudForms itself, have had their operating system account databases adjusted.
 

 
This example shows the packages found along with the files identified. You can configure CloudForms to scan for certain files and collect their contents for further conditional processing. Knowing the packages and package versions allows you to identify vulnerabilities or misconfiguration of server roles.
 

 
This example shows how the contents of a file can be retrieved by CloudForms. Having the contents available inside CloudForms allows for further reporting of misconfigurations or non-compliance. This feature can be used to identify if CloudForms itself is drifting in configuration from the requirements mandated by the provider platform.
 
Compliance
CloudForms can check the compliance of any virtual machine or instance.
 

 

 
The example shown is for a virtual machine where a compliance policy has been defined and run for SELinux enforcement. To check this, the previously discussed Smart State technology was used to collect the contents of the selinux.cfg file on the file system and conditionally process its contents for the desired setting. CloudForms can have the provider platform’s requirements defined as policies within its “Control” capability and applied to itself, alerting on non-compliance via email or any other means available through Automate or Ansible.
 
Container Smart State
CloudForms also offers the same compliance checks for container images, using the Smart State detail collected for them, as it does for virtual machines.
 

 

 
The example shows the packages collected and also an additional compliance feature of SCAP results.
 

 
The SCAP results show what vulnerabilities exist in container images. This is of real value to the provider platform: without CloudForms performing this duty, the container provider platform is exposed to running non-compliant container images.
 
Automate
The Automate area of the product allows automation to be defined and executed. This area is governed by the Role Based Access system and can be enabled or disabled per role and group. Only CloudForms administrators and automation engineers should have access to this area.
 
REST API
CloudForms can be accessed over a REST API, which is secured using the same authentication subsystem as the CloudForms user interfaces. This means that when you authenticate with the REST API, the visibility you have and the actions you can perform are limited by the role-based access model defined for your tenant, project or group space.
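As a hedged sketch (the hostname, username and password below are placeholders, not taken from the article), a REST API client authenticates with the same credentials as the UI. Here we build the HTTP Basic Authorization header such a client would send; the results returned by the API are filtered by the same RBAC model that governs the UI:

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build the HTTP Basic Authorization header for a REST API request."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}", "Accept": "application/json"}

headers = basic_auth_header("operator1", "secret")

# With the `requests` library installed, a restricted user could then query
# the appliance (placeholder hostname); only resources their role and tenant
# allow would be returned:
# requests.get("https://cloudforms.example.com/api/vms", headers=headers)
```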

Virtual Management Database (VMDB)
CloudForms stores all requests, metrics, events and inventory information within its VMDB. This means you can track or trace any action performed by CloudForms, or on the provider by other tooling. You can use CloudForms to audit the provider platform, report on unusual activity, and meet regulatory requirements for audit data retention.
 
CloudForms Log Files
CloudForms logs all user actions within CloudForms: who, from where, when and what they did. This means you can track or trace any action performed by users. The log files can be picked up by any log analysis tool to identify non-compliance, or you could define CloudForms policies to do this using Smart State and CloudForms Control.
 

Summary
“Users logging into CloudForms is NOT the same as CloudForms connecting to a Provider platform with Administrative rights”
This means that just because CloudForms is connected to a provider platform using an admin account does not mean the users logging into CloudForms have the same rights. They can, but we advise that CloudForms is implemented as a Cloud Management Platform, utilizing its RBAC model and its many authentication integration points.
Source: CloudForms

Why base capabilities are no longer enough for operations management

The business objectives of an IT or network operations team have not changed substantially for years or even decades. Measures like mean time to repair (MTTR) or budget use frequently can be reduced to time, money and quality of service. Fundamentally, IT and network teams must maximize the availability of high-quality services while minimizing the cost of doing so.
The demand for services supported by larger, more sophisticated infrastructure has increased steadily, even if the objectives have not changed. Disciplines such as fault or event management have emerged and matured, leading to a set of key capabilities that are table stakes for a credible solution. More complex infrastructure requires a solution that can:

Consume events from highly heterogeneous environments
Minimize the amount of noise that is presented to the people or processes tasked with responding to events
Integrate with other operations support systems, folding applicable context into the process of event management and resolution
Help pinpoint the probable causes of events
Scale and grow as the business and attendant infrastructure grows
Help automate responses to events
Drive efficiency improvements in operations

The new operations management playing field
I talked about what has stayed the same for IT and networks teams. So what’s changed? Pretty much everything else.
Businesses are increasingly driven by the demand for continuous delivery of cloud-scale applications and service capabilities. Companies are employing key enabling technologies and architectural patterns including virtualization, containerization and microservices. And they’re relying on newer methodologies and practices, such as agile software development and DevOps. Many new services and applications sit atop and leverage backend systems that have been developed and updated over years. Some IBM clients are enabling their users with new, rapidly evolving systems of engagement—like mobile—by taking advantage of hybrid cloud.
Two things have driven the emergence of highly instrumented monitored environments: a renewed focus on the user’s experience of a business service or application, and extremely high expectations for availability. Faults and events are reported from the bottom of the technology stack to the top in traditional, cloud and hybrid environments.
While large portions of the industries we serve have begun to standardize on mechanisms for communicating management data—such as RESTful HTTP interfaces—the payload formats remain heterogeneous and relate to additional layers of infrastructure with complex patterns of dependency.
In summary, apps and services are becoming more complex, dynamic, business critical and talkative. And the companies that build them have much higher expectations for availability and time to market.
So, how are the DevOps managers and developers tasked with managing these environments going to be successful? IT and network environments might reach a point where successful operations cannot be achieved with human cognition alone.
In the next blog in this series, I’ll talk about how event analytics in Netcool Operations Insight helps with the challenges that operations management face.
To learn more, register for our webinar on the value predictive insight brings to IT operations. Check out the earlier posts in our IBM Operations Analytics series. And stay tuned for additional key learnings from our colleagues in coming weeks.
The post Why base capabilities are no longer enough for operations management appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mirantis Launches Industry-First Course for Certified Kubernetes Administrator Exam

SUNNYVALE, Calif., Aug. 16, 2017 (GLOBE NEWSWIRE) — Mirantis today announced Kubernetes and Docker Bootcamp II, the first course available for Kubernetes users to train for the Certified Kubernetes Administrator exam (CKA). The CKA exam, announced in June by the Cloud Native Computing Foundation (CNCF), is still in Beta and expected to launch in September 2017.

The Kubernetes and Docker Bootcamp II (KD200) course guides students in detail through the topics covered by the exam, and helps ensure they are fully prepared to successfully achieve their CKA certification. This advanced Docker and Kubernetes course is a continuation of Kubernetes and Docker Bootcamp (KD100), the first vendor-agnostic Kubernetes and Docker training, announced in December 2016. KD200 is designed for deployment engineers and cloud administrators who want to acquire complete knowledge in using Kubernetes for deploying and managing containerized applications, and when combined with KD100, is the most comprehensive Kubernetes training available on the market today.

“Mirantis has provided training on open source software for years, offering career growth opportunities and giving businesses peace of mind that they are getting properly trained engineers,” said Lee Xie, senior director of Education Services, Mirantis. “This course gives Kubernetes users a chance to gain essential hands-on experience and expert guidance before taking the CKA exam.”

As one of the fastest-growing open source projects, Kubernetes use is expected to explode as companies increasingly evolve towards cloud-native software development. This course and certification ensures enterprises feel more secure when hiring a certified partner or developer. Cloud computing skills have progressed from being niche to mainstream as the world’s most in-demand skill set. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

Mirantis has been a leader in open source training for 6 years, training more than 15,000 cloud professionals, many of those employed with Fortune 500 companies.

The first KD200 class from Mirantis will take place on October 3, 2017 in Sunnyvale and virtually, and is currently available at an introductory price.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

The post Mirantis Launches Industry-First Course for Certified Kubernetes Administrator Exam appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Introducing opstools-ansible


Ansible

Ansible is an agentless, declarative configuration management tool. It can be used to install and configure packages on a wide variety of targets. Targets are defined in an inventory file, and Ansible applies predefined actions to them. Actions are defined as playbooks, or sometimes roles, in the form of YAML files. Details of Ansible can be found here.
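For illustration (the host group and package names here are placeholders, not taken from opstools-ansible), a minimal playbook maps a host group to a list of tasks:

```yaml
# Illustrative playbook: install and start a package on the "logging" group.
- hosts: logging
  become: true
  tasks:
    - name: Install the fluentd package
      package:
        name: fluentd
        state: present
    - name: Ensure the fluentd service is running
      service:
        name: fluentd
        state: started
```

Running `ansible-playbook` against an inventory containing a `logging` group would apply both tasks to every host in that group.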

Opstools-ansible

The opstools-ansible project, hosted on GitHub, uses Ansible to configure an environment that provides opstools support, namely centralized logging and analysis, availability monitoring, and performance monitoring.

One prerequisite for running opstools-ansible is that the servers must be running CentOS 7 or RHEL 7 (or a compatible distribution).

Inventory file

These servers are defined in the inventory file. The reference structure defines three high-level host groups:

am_hosts
pm_hosts
logging_host

There are lower-level host groups as well, but the documentation states that they are not tested.
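A minimal inventory using these three groups might look like the following (the hostnames are placeholders):

```ini
[am_hosts]
monitor1.example.com

[pm_hosts]
metrics1.example.com

[logging_host]
logs1.example.com
```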

Configuration File

Once the inventory file is defined, Ansible configuration files can be used to tailor the deployment to individual needs. The README.rst file for opstools-ansible suggests the following as an example:

fluentd_use_ssl: true
fluentd_shared_key: secret
fluentd_ca_cert: |
  -----BEGIN CERTIFICATE-----
  -----END CERTIFICATE-----
fluentd_private_key: |
  -----BEGIN RSA PRIVATE KEY-----
  -----END RSA PRIVATE KEY-----

If there is no Ansible configuration file to tune the system, the default settings/options are applied.

Playbooks and roles

The playbook specifies which packages Ansible installs for the opstools environment. The packages installed are:

ElasticSearch
Fluentd
Kibana
Redis
RabbitMQ
Sensu
Uchiwa
CollectD
Grafana

Besides the above packages, the opstools-ansible playbook also applies these additional roles:

Firewall – this role manages the firewall rules for the servers.
Prereqs – this role checks and installs all dependency packages, such as python-netaddr or libselinux-python, required for a successful installation of opstools.
Repos – this is a collection of roles for configuring additional package repositories.
Chrony – this role installs and configures the NTP client to keep time synchronized across the servers.

opstools environment

Once these are done, we can simply run the following command to create the opstools environment:

ansible-playbook playbook.yml -e @config.yml

TripleO Integration

TripleO (OpenStack on OpenStack) has the concepts of an Undercloud and an Overcloud:

Undercloud: for deployment, configuration and management of OpenStack nodes.
Overcloud: the actual OpenStack cluster that is consumed by users.

Red Hat has an in-depth blog post on TripleO, and OpenStack has a document on contributing to and installing TripleO.

When opstools is installed on the TripleO Undercloud, the OpenStack instances running on the Overcloud can be configured to run the opstools services when deployed. For example:

openstack overcloud deploy … \
  -e /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml \
  -e params.yaml

There are only three steps to integrate opstools with TripleO using opstools-ansible. Details of the steps can be found here.

Use opstools-ansible to create the opstools environment at the Undercloud.
Create the params.yaml for TripleO to point to the Sensu and Fluentd agents on the opstools hosts.
Deploy with the “openstack overcloud deploy …” command.
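As a hedged sketch of step 2, params.yaml points the Overcloud at the Sensu and Fluentd services on the opstools hosts. The parameter names below follow the monitoring and logging environment files in tripleo-heat-templates, and the addresses and credentials are placeholders:

```yaml
parameter_defaults:
  # Availability monitoring: where the Sensu clients should report
  MonitoringRabbitHost: 192.0.2.10
  MonitoringRabbitPort: 5672
  MonitoringRabbitUserName: sensu
  MonitoringRabbitPassword: secret
  # Centralized logging: the Fluentd aggregator(s) on the opstools hosts
  LoggingServers:
    - host: 192.0.2.10
      port: 24224
```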

Source: RDO

OpenWhisk on IBM Bluemix powers the “Internet of Garbage”

The “Internet of Garbage” doesn’t refer to all the ridiculous and inane things one might find on social media. It’s literally about garbage trucks with sensors.
GreenQ has installed sensors on trucks to gather real-time data to optimize the waste collection process. When a waste bin is picked up, the sensors on the truck measure the amount of garbage inside the container and monitor the time and location of the pick-up.
There’s a cloud-based system that collects, analyzes and displays the real-time data and analytics. GreenQ calls it the Internet of Garbage.

The Internet of Garbage foundation
As the company grew, the amount of data and the need for computing power grew.
GreenQ participated in the IBM Alpha Zone accelerator program, which is a 20-week professional and deep immersion program for developing solutions for the enterprise market. The program, run out of the IBM Israel office, aims to create long-term technology partnerships between IBM and the participants.
GreenQ migrated to the scalable IBM Bluemix infrastructure, the heart of which is OpenWhisk, an on-demand, serverless platform.
Working with OpenWhisk on Bluemix will enable GreenQ to add more capabilities to its system in the future, such as Watson for cognitive computing and visual recognition.

Improved service and route planning
GreenQ helps its customers do the same job at lower cost, or use the same budget to provide a better quality of service. Data is shared over a web-based dashboard or a mobile app to provide better insight into waste collection, whether that’s when the truck is arriving or how garbage is being separated.
Customers can see all the trucks on a live map. They can see the routes and what’s collected. They will receive automatic, online notifications if there are problems on the route, enabling them to give better service to the residents. For example, if a particular resident produces less trash, their waste management provider may choose to offer a lower rate on waste collection fees.
Additionally, alternate routes can be calculated that will help trash trucks avoid adding to traffic congestion in the morning or interrupting bus service to schools.
Analytics for optimization
Clients can analyze data about a particular bin, a neighborhood or the entire city. They can decide whether they want to optimize the routes to save mileage or prevent traffic congestion, give better service to residents or reduce emissions. GreenQ uses the Internet of Garbage to pull together data about these things, calculate them and offer recommendations for how to optimize the waste collection.
It might be a matter of different routing, different scheduling, different waste bin mapping or different trucks. GreenQ has started testing the integration of Watson visual recognition technology into the implementation. A camera on the truck takes a picture of the waste bin during collection to get better, more accurate real-time information about what is happening during the waste pickup process. This is expected to be in production in the near future.
Read the case study for more information.
The post OpenWhisk on IBM Bluemix powers the “Internet of Garbage” appeared first on Cloud computing news.
Source: Thoughts on Cloud