Steve Harvey fans to embark on a new digital experience built on IBM Cloud

Steve Harvey is an entertainment legend. You might be one of the millions who tune into the radio and TV shows he hosts. Maybe you’re a fan of his legendary stand-up comedy specials. Or maybe you’ve read one of his best-selling books, or heard about his growing business and philanthropic efforts.
So where can fans find Steve? Look to a new digital experience powered by a mobile app and a brand new website built on the IBM Cloud.
Steve Harvey World Group (SHWG), the business behind the world-famous entertainer, just announced they are building a new digital experience on IBM Cloud for Steve Harvey fans. The experience spans digital and mobile, delivering new ways to engage hundreds of millions of fans with exclusive video and more from Steve.
The new digital experience will be the central hub connecting all of Steve Harvey’s companies. And it’s serious business. The digital experience leverages IBM Cloud’s infrastructure as a service, content delivery network, and microservices, including API Connect, Analytics and Mobile Foundation.
SHWG will use the experience to gather business intelligence through data-driven insights that will help identify new revenue opportunities and potential partnerships.
For more details on what SHWG is building on IBM Cloud, check out the announcement. Want to learn how IBM can deliver business intelligence to your company? Explore IBM Cloud.
The post Steve Harvey fans to embark on a new digital experience built on IBM Cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Distributed TensorFlow and the hidden layers of engineering work

By Brad Svee, Staff Cloud Solutions Architect

With all the buzz around Machine Learning as of late, it’s no surprise that companies are starting to experiment with their own ML models, and a lot of them are choosing TensorFlow. Because TensorFlow is open source, you can run it locally to quickly create prototypes and deploy fail-fast experiments that help you get your proof-of-concept working at a small scale. Then, when you’re ready, you can take TensorFlow, your data, and the same code and push it up into Google Cloud to take advantage of multiple CPUs, GPUs or soon even some TPUs.

When you get to the point where you’re ready to take your ML work to the next level, you will have to make some choices about how to set up your infrastructure. In general, many of these choices will affect how much time you spend on operational engineering work versus ML engineering work. To help, we’ve published a pair of solution tutorials to show you how you can create and run a distributed TensorFlow cluster on Google Compute Engine and run the same code to train the same model on Google Cloud Machine Learning Engine. The solutions train a model on the MNIST dataset, which isn’t necessarily the most exciting example to work with, but does allow us to emphasize the engineering aspects of the solutions.

We’ve already talked about the open-source nature of TensorFlow, which allows you to run it on your laptop, on a server in your private data center, or even on a Raspberry Pi. TensorFlow can also run in a distributed cluster, allowing you to divide your training workloads across multiple machines, which can save you a significant amount of time waiting for results. The first solution shows you how to set up a group of Compute Engine instances running TensorFlow, as in Figure 1, by creating a reusable custom image and executing an initialization script with Cloud Shell. There are quite a few steps involved in creating the environment and getting it to function properly. Even though they aren’t complex steps, they are operational engineering steps, and they will take time away from your actual ML development.

Figure 1. A distributed TensorFlow cluster on Google Compute Engine.
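In a setup like Figure 1, every instance must be started with the same picture of the cluster topology. As a minimal sketch (the instance names, port, and helper function below are illustrative assumptions, not taken from the tutorial), here is how you might build the cluster-spec dictionary that `tf.train.ClusterSpec` consumes:

```python
def make_cluster_spec(master_host, worker_hosts, ps_hosts, port=2222):
    """Build the cluster-spec dict accepted by tf.train.ClusterSpec.

    Each job name maps to a list of "host:port" addresses. Every
    instance in the cluster is launched with this same spec plus its
    own job name and task index.
    """
    return {
        "master": ["%s:%d" % (master_host, port)],
        "worker": ["%s:%d" % (h, port) for h in worker_hosts],
        "ps": ["%s:%d" % (h, port) for h in ps_hosts],
    }

# Example: one master, two workers, and two parameter servers.
spec = make_cluster_spec("tf-master",
                         ["tf-worker-0", "tf-worker-1"],
                         ["tf-ps-0", "tf-ps-1"])

# On each instance you would then start a TensorFlow server, e.g.:
#   server = tf.train.Server(spec, job_name="worker", task_index=0)
```

The operational burden the tutorial describes comes from repeating this bootstrapping (plus image creation, networking, and teardown) for every instance in the cluster.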

The second solution uses the same code with Cloud ML Engine, and with one command you’ll automatically provision the compute resources needed to train your model. This solution also delves into some of the general details of neural networks and distributed training. It also gives you a chance to try out TensorBoard to visualize your training and resulting model as seen in Figure 2. The time you save provisioning compute resources can be spent analyzing your ML work more deeply.

Figure 2. Visualizing the training result with TensorBoard.

Regardless of how you train your model, the whole point is to use it to make predictions. Traditionally, this is where the most engineering work has to be done. If you want to build a web service to run your predictions, at a minimum you’ll have to provision, configure, and secure web servers, load balancers, and monitoring agents, and create some kind of versioning process. In both of these solutions, you’ll use the Cloud ML Engine prediction service to offload all of those operational tasks, hosting your model in a reliable, scalable, and secure environment. Once you set up your model for predictions, you’ll quickly spin up a Cloud Datalab instance and download a simple notebook to execute and test the predictions. In this notebook you’ll draw a number with your mouse or trackpad, as in Figure 3, which will be converted to an image matrix matching the MNIST data format. The notebook will send your image to your new prediction API and tell you which number it detected, as in Figure 4.

Figure 3. Drawing a digit in the Cloud Datalab notebook.

Figure 4. The prediction returned for the drawn digit.
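Under the hood, the notebook’s call to the prediction service is a single JSON request against the Cloud ML Engine v1 REST API. A sketch of what that request looks like (the project name, model name, and instance field name are placeholders for illustration):

```python
def make_predict_request(project, model, instances):
    """Build the resource name and request body for an online
    prediction call to the Cloud ML Engine v1 API."""
    name = "projects/{}/models/{}".format(project, model)
    body = {"instances": instances}
    return name, body

# A 28x28 MNIST drawing flattened to 784 floats in [0, 1].
pixels = [0.0] * 784
name, body = make_predict_request("my-project", "mnist_model",
                                  [{"image": pixels}])

# The actual call needs credentials, so it is shown here but not run:
#   from googleapiclient import discovery
#   service = discovery.build("ml", "v1")
#   response = service.projects().predict(name=name, body=body).execute()
#   print(response["predictions"])
```

Everything behind that endpoint, including the serving infrastructure, scaling, and versioning, is what the prediction service manages for you.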

This brings up one last, critical point about the engineering effort required to host your model for predictions. It isn’t deeply expanded upon in these solutions, but it’s something that Cloud ML Engine and Cloud Dataflow can easily address for you. When working with pre-built machine learning models on standard datasets, it’s easy to lose track of the fact that model training, deployment, and prediction usually sit at the end of a series of data pipelines. In the real world, it’s unlikely that your datasets will be pristine and collected specifically for the purpose of learning from the data.

Rather, you’ll usually have to preprocess the data before you can feed it into your TensorFlow model. Common preprocessing steps include de-duplication, scaling/transforming data values, creating vocabularies, and handling unusual situations. The TensorFlow model is then trained on the clean, processed data.
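To make those steps concrete, here is a minimal sketch of de-duplication, min-max scaling, and vocabulary handling (the record fields and helper names are invented for illustration, not part of the tutorials):

```python
def build_vocabulary(values):
    """Map each distinct categorical value to a stable integer id."""
    return {v: i for i, v in enumerate(sorted(set(values)))}

def preprocess(records, vocab, lo, hi):
    """De-duplicate records, scale the numeric field to [0, 1], and
    replace the categorical field with its vocabulary id."""
    seen, out = set(), []
    for rec in records:
        key = (rec["amount"], rec["category"])
        if key in seen:          # de-duplication
            continue
        seen.add(key)
        out.append({
            # min-max scaling into [0, 1]
            "amount": (rec["amount"] - lo) / float(hi - lo),
            # unknown categories fall into an extra out-of-vocabulary id
            "category": vocab.get(rec["category"], len(vocab)),
        })
    return out

records = [{"amount": 5, "category": "food"},
           {"amount": 5, "category": "food"},    # duplicate row
           {"amount": 10, "category": "travel"}]
vocab = build_vocabulary(r["category"] for r in records)
clean = preprocess(records, vocab, lo=0, hi=10)
```

The model never sees raw records; it trains only on the output of functions like these.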

At prediction time, your model will receive the same kind of raw data from the client. Yet it was trained on de-duplicated, transformed, cleaned-up data with specific vocabulary mappings. Because your prediction infrastructure might not be written in Python, there is significant engineering work in building libraries that carry out these tasks with exacting consistency in whatever language or system you use. Often, inconsistency creeps in between how the preprocessing is done before training and how it’s done before prediction, and even the smallest inconsistency can make your predictions behave poorly or unexpectedly. By using Cloud Dataflow to do the preprocessing and Cloud ML Engine to carry out the predictions, you can minimize or completely avoid this additional engineering work, because Cloud Dataflow can apply the same preprocessing transformation code to both historical data during training and real-time data during prediction.
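The essential idea is to write the transformation once, persist whatever was fitted on historical data, and reference the identical code from both phases. A conceptual sketch (field names, the parameter file, and the Beam snippet in the comment are illustrative assumptions):

```python
import json

def transform(record, params):
    """The single preprocessing function shared by training and
    prediction. `params` holds everything fitted on historical data
    (scaling bounds, vocabulary), so both phases apply identical logic."""
    lo, hi = params["lo"], params["hi"]
    return {
        "amount": (record["amount"] - lo) / float(hi - lo),
        "category": params["vocab"].get(record["category"],
                                        len(params["vocab"])),
    }

# Training time: fit params on historical data, save them with the model.
params = {"lo": 0, "hi": 10, "vocab": {"food": 0, "travel": 1}}
with open("transform_params.json", "w") as f:
    json.dump(params, f)

# Prediction time: reload the exact same params and the same function,
# so there is no way for training and serving to drift apart.
with open("transform_params.json") as f:
    params = json.load(f)
instance = transform({"amount": 7, "category": "food"}, params)

# In a Cloud Dataflow pipeline, the same function would appear as, e.g.:
#   p | beam.Map(transform, params)   # identical in batch and streaming
```

Because the serving path imports the same `transform` rather than reimplementing it, the consistency problem described above disappears by construction.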

Summary 
Developing new machine learning models is getting easier as TensorFlow adds new APIs and abstraction layers, and you can run it wherever you want. Cloud Machine Learning Engine is powered by TensorFlow, so you aren’t locked into a proprietary managed service, and we’ll even show you how to build your own TensorFlow cluster on Compute Engine if you want. But we think you might want to spend less time on the engineering work needed to set up your training and prediction environments, and more time tuning, analyzing, and perfecting your model. With Cloud Machine Learning Engine, Cloud Datalab, and Cloud Dataflow you can optimize your time: offload the operational engineering work to us, quickly analyze and visualize your data, and build preprocessing pipelines that are reusable for training and prediction.
Quelle: Google Cloud Platform

Nvidia: No Volta-based GeForce cards in 2017

According to Nvidia CEO Jensen Huang, there will be no new graphics cards based on the Volta architecture this year. For now, gamers will have to make do with the Pascal-based GeForce models, partly because AMD’s Vega is only somewhat competitive. (Nvidia Volta, graphics hardware)
Quelle: Golem

IBM Voice Gateway: New features revolutionize your cognitive call center

Improving customer support is a never-ending job. You must continually listen to your customers and pivot your business to adapt to their needs.
One way to improve customer service is to use cognitive virtual agents. The use of these agents has been growing in the online space for a few years. And with the arrival of the IBM Voice Gateway, you can bring those agents to your call centers.
We recently introduced IBM Voice Gateway, a cognitive call center solution that signals where technology is heading: cognitive services. Voice Gateway enhances your call center operations by connecting Watson services to act as a self-service agent that handles calls instead of a live contact center agent. Voice Gateway also uses IBM Watson to assist contact center agents in real time. This is artificial intelligence in action, delivered through cognitive capabilities. Essentially, Voice Gateway and the Watson services create a cognitive interactive voice response (IVR) system, improving customer support and helping reduce the strain on live agents during peak hours.
As part of our continuous delivery model we’re constantly improving both the Watson services and IBM Voice Gateway’s capabilities. Our recent 1.0.0.2 release added the following capabilities:

Support for configuring a multi-tenant Voice Gateway environment, so that you can host multiple phone numbers and have them connecting to different Watson services—all through the same Voice Gateway deployment
Enhancements to the Voice Gateway API, including action tags which you can use to trigger a single action or sequence of actions in the Voice Gateway from the conversation service
Support for Watson Virtual Agent. You can use this agent instead of the conversation service when creating self-service agents that provide automated service to customers. Watson Virtual Agent lets you get to market faster and learn more about how your cognitive agents are being used by your customers
Additional resiliency through the ability to configure whether to disconnect calls when transfers fail or to let the conversation dialog decide on next steps, such as routing callers to a new destination

For additional details, you can read about the latest features here.
Clients can expect broad benefits from Voice Gateway. From improved telephone-based customer service to lower costs and deployment flexibility, Voice Gateway with Watson services brings next-generation cognitive automation into your business.
Interested in learning more? Check out the demo. And if you’re ready to start integrating and building a revolutionary call center solution contact us for details.
The post IBM Voice Gateway: New features revolutionize your cognitive call center appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Facebook's Challenge Of A Gag Order Over Search Warrants Will Get A Public Hearing

An appeals court in Washington, DC, has scheduled a public hearing next month for arguments on Facebook’s challenge to an order blocking the company from alerting users about search warrants for account information.

The gag order is sealed, as is most information about the case. The issue came to light earlier this summer, after the District of Columbia Court of Appeals issued an order with limited details seeking input from outside groups on the dispute. Tech companies, civil liberties groups, and consumer advocacy organizations filed public briefs in June supporting Facebook’s challenge and arguing that users should have a right to challenge search warrants for their information.

In an Aug. 14 order obtained by BuzzFeed News, the court alerted lawyers that it had scheduled arguments for Sept. 14. The court said the hearing would be public, and that it planned to live-stream video of arguments through the court’s website because the case “raises an issue of public interest.”

With much of the case still under seal, the court reminded lawyers to be careful about avoiding any mention of confidential or privileged information. The court also denied a request by the Electronic Frontier Foundation, one of the groups that filed a brief, to participate.

Federal prosecutors served search warrants on Facebook for three account records over a three-month period, seeking “all contents of communications, identifying information, and other records,” according to the public notice the court allowed Facebook to send out to interested groups.

A lower court judge signed off on a nondisclosure order that barred Facebook from notifying account users before complying with the warrants, which Facebook is challenging on First Amendment grounds.

The Electronic Frontier Foundation suggested in its earlier brief that, based on what little information is publicly known about the search warrants and their timing, the case likely relates to the mass arrests in Washington during President Trump's inauguration. More than 200 people were charged with rioting and property destruction, and the bulk of those cases are pending, with trials set for the fall and throughout 2018.

“Reading the tea leaves of an appellate panel is often futile but we hope the court will quickly dispose of the Trump Administration's absurd argument that its pursuit of the January 20 protesters is secret in any sense. The fact that the argument will be public encourages us that the court is going to take the First Amendment seriously,” Nate Cardozo, a lawyer with the Electronic Frontier Foundation, said in an email to BuzzFeed News on Friday.

The briefs filed by Facebook and the US attorney’s office in Washington are sealed, and lawyers in the case have previously declined to comment on whether the search warrants relate to the Jan. 20 arrests.

Quelle: BuzzFeed