Connected driving: Who betrayed us? Car data

Many industries and companies are interested in the data generated by connected cars. The proposals for storing and accessing that data, however, are still nebulous, and could cause vehicle owners serious problems. An analysis by Friedhelm Greis (Autonomous driving, data retention)
Source: Golem

Physics: Watching quanta make the jump

Quantum jumps are never large and cannot be predicted. Researchers have nevertheless managed to reliably observe the process once it had begun, and they were even able to reverse it. Error correction in quantum computers is expected to work exactly this way in the future. By Frank Wunderlich-Pfeiffer (Quantum physics, Internet)
Source: Golem

Helping enterprises in India transform their businesses in the cloud

In the last year, there’s been an upward trend in cloud adoption in India. In fact, NASSCOM finds that cloud spending in India is estimated to grow at 30% per annum to cross the US$7 billion mark by 2022. In my conversations with customers, discussions have evolved beyond cost savings and efficiencies. While those are still very relevant reasons for adopting cloud technologies, Indian enterprises are looking to Google Cloud to help them drive digital transformation, identify new revenue-generating business models, reach previously untapped consumer markets, and build customer loyalty through greater insight and personalization.

To help more enterprises in India take advantage of the cloud, today we’re kicking off our Google Cloud Summit in Mumbai, and next week we take the show on the road to customers in New Delhi and Bangalore. More of a community gathering than a conference, our Cloud Summits are where conversations start, partnerships form and problems are solved, and where customers convene to learn from their peers and experts about how the cloud is transforming business. It’s also our opportunity to better understand the needs of Indian businesses, and to get inspired by our customers’ success stories. Here are a few highlights.

Tata Steel: Mining data and maximizing its power

Tata Steel is a great example of an established enterprise from a traditional industry that is modernizing and embracing cloud computing. With an ambition to be a leader in manufacturing in India and a digital-first organization by 2022, Tata Steel believes smart analytics is key to enhancing operational efficiency and gaining business advantage. To organize data from siloed systems across the organization and make it easily accessible to all employees, Tata Steel is using Cloud Search and plans to scale it to more than one million documents and 28 disparate enterprise content sources, including enterprise resource planning (ERP) and SharePoint. In fact, Tata Steel is one of the first Indian enterprises to harness the power of Cloud Search to meet some of the most aggressive ingestion demands, with indexing durations reduced from weeks to seconds.

They are also leveraging Google Cloud Platform (GCP) services like Google Cloud Storage and BigQuery to build their data lake and enterprise data warehouse so they can take advantage of advanced analytics and machine learning. Managed services such as AI Platform further enable Tata Steel to manage end-to-end AI/ML workflows within the GCP console. This complements their existing on-premises reporting and analytics tools, and brings data management to the forefront of everything they do, from forecasting market demand to predictive equipment maintenance.

“Digital is not just a goal, it’s become a way of life. We are digitizing everything from the deployment of factory vehicles to improving material throughput to marketing and sales. As a result, we have petabytes of structured and unstructured data that is not only waiting to be mined, but that we can generate intelligence from to create opportunities across our multiple lines of business using GCP,” said Sarajit Jha, Chief Business Transformation & Digital Solutions at Tata Steel.

Helping L&T Financial Services reach customers in rural communities

In rural communities, quick access to financial services can make a tremendous difference to livelihoods. L&T Financial Services provides farm-equipment finance, micro loans and two-wheeler finance to consumers across rural India, backed by a strong digital and analytics platform. Their digital loan-approval app, which runs on GCP, makes it significantly faster and easier for people to apply for financial assistance to purchase important things such as farming equipment and two-wheelers. It also helps rural women entrepreneurs get quicker access to funds for their businesses through micro loans.

L&T Financial found G Suite to be a far better collaborative tool to help staff work together efficiently. Employees can interact with each other in real time using Hangouts Meet, and information sharing is more seamless and secure through Drive. BigQuery also helps L&T Financial Services generate behavior scorecards to track the credit quality of its micro-loan customers.

“Cloud is the technology that enables us to achieve scale and reach. Today there are countless data points available about rural consumers which enable us to personalize our products to serve them better. With access to faster compute power, we can also on-board consumers more efficiently. Our rural businesses have clocked a disbursement CAGR of 60% over the past three years,” said Sunil Prabhune, Chief Executive-Rural Finance, and Group Head-Digital, IT and Analytics, L&T Financial Services.

Creating conversational connections for Digitate’s customers

Digitate, a venture of TCS (Tata Consultancy Services), has integrated Dialogflow into its flagship brand ignio, an award-winning artificial intelligence platform for driving IT operations, workload operations and ERP operations for diverse enterprises. This integration is the next step in ignio’s product development journey, and will enable users to chat or talk with ignio to detect issues, triage problems, resolve them and even predict system behavior.

“ignio combines its unique self-healing AIOps capabilities for enterprise IT and business operations with Dialogflow’s AI/ML-based, easy-to-use, natural and rich conversational capabilities to create an unparalleled, intuitive and feature-rich experience for our customers,” says Akhilesh Tripathi, Head of Digitate.

Indian enterprises going G Suite

The base of Indian enterprises making the switch to G Suite to streamline their productivity and collaboration also continues to grow. Sharechat, BookMyShow, Hero MotorCorp, DB Corp and Royal Enfield are now able to move faster within their organizations, using intelligent, cloud-based apps to transform the way they work.

A hybrid and multi-cloud future in India

Customers want and deserve choice and flexibility, and openness continues to be a major differentiator for Google Cloud. Since we announced Anthos, our hybrid, multi-cloud solution, at Next ‘19, customer feedback has been overwhelmingly positive. That’s because Anthos embraces open standards, and lets customers run their applications, unmodified, on existing on-prem hardware investments or in the public cloud.

IDC predicts that by 2023, 55% of India 500 organizations will have a multi-cloud management strategy that includes integrated tools across public and private clouds (IDC FutureScape: Worldwide Cloud 2019 Predictions, India Implications, #AP43922319). So when we hold our flagship Cloud Summits in India in 2020, I look forward to sharing more success stories of Indian enterprises that have taken the next step in their digital transformation journey.
Source: Google Cloud Platform

Using Text Analytics in call centers

Azure Cognitive Services provides Text Analytics APIs that simplify extracting information from text data using natural language processing and machine learning. These APIs wrap pre-built language processing capabilities such as sentiment analysis, key phrase extraction, entity recognition, and language detection.

Using Text Analytics, businesses can draw deeper insights from interactions with their customers. These insights can be used to create management reports, automate business processes, perform competitive analysis, and more. One rich source of such insights is recorded customer service calls, which can provide the data needed to:

Measure and improve customer satisfaction
Track call center and agent performance
Look into performance of various service areas

In this blog, we will look at how we can gain insights from these recorded customer calls using Azure Cognitive Services.

Using a combination of these services, such as Text Analytics and Speech APIs, we can extract information from the content of customer and agent conversations. We can then visualize the results and look for trends and patterns.

The sequence is as follows:

Using Azure Speech APIs, we can convert the recorded calls to text. With the text transcriptions in hand, we can then run Text Analytics APIs to gain more insight into the content of the conversations.
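As a minimal sketch of the second step, the v3.0 Text Analytics REST endpoints accept a JSON body with a `documents` array. The helper below builds that body from transcribed conversation turns; the turn data, speaker labels, and function name are illustrative, and the actual HTTP call (with the resource endpoint and `Ocp-Apim-Subscription-Key` header) is omitted:

```python
import json

def build_sentiment_request(turns):
    """Build a Text Analytics v3.0-style request body from transcribed
    (speaker, text) conversation turns. One document per turn."""
    return {
        "documents": [
            {"id": str(i), "language": "en", "text": text}
            for i, (_speaker, text) in enumerate(turns, start=1)
        ]
    }

# Illustrative transcript of one call
turns = [
    ("agent", "Thank you for calling. How can I help?"),
    ("customer", "My order arrived damaged and I'm quite upset."),
]
print(json.dumps(build_sentiment_request(turns), indent=2))
```

The same body shape can be POSTed to the key phrase and entity recognition endpoints, so one transcript structure serves all three analyses.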
The sentiment analysis API provides information on the overall sentiment of the text in three categories: positive, neutral, and negative. At each turn of the conversation between the agent and customer, we can:

See how the customer sentiment is improving, staying the same, or declining.
Evaluate calls and agents for their effectiveness in handling customer complaints at different times.
See when an agent is consistently able to turn negative conversations into positive ones, or vice versa, and identify opportunities for training.
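The turn-by-turn analysis above can be reduced to a simple trend signal. This is a hypothetical helper, not part of the Text Analytics API: it assumes each turn has already been labeled by the sentiment API and compares the first and last labels:

```python
# Ordering assumption over the three sentiment labels the API returns
RANK = {"negative": 0, "neutral": 1, "positive": 2}

def sentiment_trend(labels):
    """Classify a call as improving/declining/steady from per-turn
    sentiment labels ("positive", "neutral", "negative")."""
    first, last = RANK[labels[0]], RANK[labels[-1]]
    if last > first:
        return "improving"
    if last < first:
        return "declining"
    return "steady"

print(sentiment_trend(["negative", "neutral", "positive"]))  # improving
```

Aggregating this signal per agent over many calls gives the training-opportunity view described above.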

Using the key phrase extraction API, we can extract the key phrases in the conversation. Combined with the detected sentiment, this data can be used to assign categories to the key phrases raised during the call. With this data in hand, we can:

See which phrases carry negative or positive sentiment.
Evaluate shifts in sentiment over time or during product and service announcements.
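Joining the two API outputs can be sketched as a simple grouping step. The tuple shapes below are illustrative, not SDK types; they assume each turn carries its sentiment label and extracted key phrases:

```python
from collections import defaultdict

def phrases_by_sentiment(turns):
    """turns: (sentiment_label, key_phrases) pairs per conversation turn,
    as produced by the sentiment and key phrase extraction APIs."""
    grouped = defaultdict(set)
    for sentiment, phrases in turns:
        grouped[sentiment].update(phrases)
    # Sort for stable reporting
    return {label: sorted(phrases) for label, phrases in grouped.items()}

# Illustrative per-turn results
calls = [
    ("negative", ["late delivery", "refund"]),
    ("positive", ["fast response"]),
    ("negative", ["refund", "damaged box"]),
]
print(phrases_by_sentiment(calls)["negative"])
# ['damaged box', 'late delivery', 'refund']
```

Re-running this grouping over time windows (for example, before and after a product announcement) surfaces the sentiment shifts mentioned above.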

Using the entity recognition API, we can extract entities such as person, organization, location, date time, and more. We can use this data, for example, to:

Tie the call sentiment to specific events such as product launches or store openings in an area.
Use customer mentions of competitors for competitive intelligence and analysis.
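A competitive-intelligence pass over the recognized entities can be sketched as a counter over organization mentions. The `"Organization"` category string follows the Text Analytics entity categories, but treat it, the tuple shape, and the company names as assumptions for illustration:

```python
from collections import Counter

def competitor_mentions(turn_entities, competitors):
    """turn_entities: per-turn lists of (entity_text, category) pairs
    as returned by entity recognition; competitors: known names."""
    hits = Counter()
    for entities in turn_entities:
        for text, category in entities:
            if category == "Organization" and text in competitors:
                hits[text] += 1
    return hits

# Illustrative recognition results across two turns
turns = [
    [("Contoso", "Organization"), ("Seattle", "Location")],
    [("Fabrikam", "Organization"), ("Contoso", "Organization")],
]
print(competitor_mentions(turns, {"Contoso", "Fabrikam"}))
```

The same pattern works for locations and date-time entities when tying sentiment to store openings or launch events.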

Lastly, Power BI can help visualize the insights and communicate the patterns and trends to drive action.

Using Azure Cognitive Services Text Analytics, we can gain deeper insights into customer interactions, going beyond simple customer surveys into the content of the conversations themselves.

A sample code implementation of the above workflow can be found on GitHub.
Source: Azure

Competing with supercomputers: HPC in the cloud becomes reality

Migrating applications to the cloud usually requires significant planning, but some apps, such as data-intensive, tightly coupled high-performance computing (HPC) apps, pose a particular challenge. HPC data growth used to be bound by compute capabilities, but data now comes from many more sources, including sensors, cameras and instruments. That data growth is outpacing corresponding improvements in computer processing, network throughput, and storage performance. This spells trouble for the AI and ML algorithms that need to keep up and process this data to derive insights. This data growth means that many traditional on-premises HPC data centers have to start migrating at least some of the application load into the cloud.

With all that in mind, Google Cloud and DDN have developed an in-cloud file system suitable for HPC: DDN’s EXAScaler, a parallel file system designed to handle high-concurrency access patterns to shared data sets. Such I/O patterns are typical of tightly coupled HPC applications such as computational simulations in the areas of oil and gas exploration, astrophysics, national defense, and finance. DDN provides expertise in deploying data-at-scale solutions for the most challenging data-intensive applications, and Google provides expertise in delivering global-scale data center solutions. EXAScaler is DDN’s branded Lustre product. Lustre is an open-source file system with more than 15 years of demonstrated stability and performance in commercial and research environments at the largest scales. Lustre performs well, but tuning it for maximum efficiency can be challenging. Google and DDN recently used IO-500, the top HPC storage benchmark, to demonstrate the joint system’s ease of use.

Competing in the HPC storage challenge

IO-500 is an emerging HPC storage benchmark that seeks to capture the full picture of an HPC storage system by calculating a score based on a wide range of storage characteristics, rather than capturing one narrow performance dimension (such as sequential read performance). Designed to give users realistic performance expectations and to help administrators select their next HPC storage system, the IO-500 competition releases two ranked lists per year: one in November in the U.S. at Supercomputing, the premier HPC conference, and one in June at ISC, the European equivalent. We were excited to join this competition for the first time to show how easy it is to use this joint solution.

Our top-performing configuration achieved #8 on the IO-500, and was limited solely by our allocated resources (next time we’ll request more). This success shows that anyone can now deploy a high-end storage system, not just those with a large budget and extensive expertise.

The key benefits of using EXAScaler really came through during the benchmarking process itself:

Fast deployment: The entire Lustre file system, including the VMs and storage it used, was deployed in minutes and shut down immediately afterwards. This is in contrast with on-premises HPC deployments, where it can take weeks just to deploy the hardware alone.

Resource efficiency: EXAScaler could generally saturate Google Cloud block storage IOPS and bandwidth limits, which is a testament to Lustre efficiency, EXAScaler tuning, and Google Cloud capabilities.

Easy configuration: We submitted three separate configurations to evaluate the effect of changing the number of clients, storage servers, and metadata servers. Deploying each configuration required only a few changes in the deployment script. In fact, this flexibility made it harder to narrow down our configuration choices, since we were only limited by our allotted resources and our imagination of how many different ways Lustre can be deployed.

Integrated monitoring: We found that combining client and storage server monitoring using Google Stackdriver with the native DDN EXAScaler monitoring tools allowed for quick diagnosis and resolution of performance bottlenecks. Further, these tools allowed us to identify opportunities to reduce the cost of the system (such as reducing the number of allocated storage server vCPUs).

Predictable performance: While the IO-500 benchmark only requires the results of a single run, we found benchmark performance extremely consistent between runs.

Finally, Lustre on Google Cloud is the only pay-as-you-go storage system on the list, so the actual cost of the system is simply the per-second cost of running the benchmark (approximately one hour). It would be very interesting if the IO-500 included the cost of each system, as the balance between price and performance is in many cases more important for users than raw performance.

[Figure: IO-500 write bandwidth performance with three different configurations]

Tips for running HPC in the cloud

Benchmarks can be a useful way to get to know the overall market and narrow down your HPC storage choices. However, it’s also important to know what to watch out for when you’re moving important workloads to the cloud, particularly if your business relies on them.

If you’re considering moving traditional HPC workloads to the cloud, first identify a set of applications to migrate and their sustained performance requirements to create economically efficient hybrid solutions. Executing HPC workloads in the cloud can simply be a lift and shift of HPC software, but it’s also a chance for you to tailor your infrastructure to the needs of the workload. Compute can be scaled up when it’s needed, and scaled down when it’s not. Workloads that need GPUs can provision them, and those that do not can simply customize the compute and memory that is required. Storage can also be allocated when in use and either stopped or de-allocated when idle.

The goal when you’re running HPC in the cloud should be to focus simply on what applications need, and leave the execution to Google Cloud. For example, one common storage deployment issue is that a single parallel file system is supposed to handle workloads with conflicting requirements, such as high metadata/IOPS performance and high bandwidth. Optimizing for metadata/IOPS performance requires more expensive SSDs, which is unnecessary for workloads that simply need high bandwidth. Also, running both types of workloads at the same time can drastically increase the runtime of both due to the mixing of I/O requests. A better way is to customize a parallel file system for each workload type, which decreases overall workload runtime by reducing I/O contention and can even reduce cost by making better use of the provisioned storage devices.

While Lustre is proven to scale in traditional on-premises HPC infrastructures, its ability to adapt and deliver the benefits of the cloud has so far not been demonstrated, which is why DDN focuses on adapting, deploying and running Lustre at extreme scales in the cloud. Lustre in the cloud should continue to deliver extreme scale and performance, but be easy to manage and not blow up your storage budget.

Using Google Cloud and DDN together can create the right balance of compute and storage for each workload. For active hot data, the Lustre file system brings high performance. For inactive cold data, Google Cloud Storage can be used as an extreme-scale archive that delivers high availability with multi-regional, dual-regional, and regional classes, and low cost with standard, nearline, and coldline classes. Combining Lustre with Google Cloud Storage means that hot data can be fast and cold data can be stored cheaply and brought to bear quickly when needed.

Compute and store data differently in Google Cloud

If you’re running HPC workloads on-premises, EXAScaler on Google Cloud can help model on-premises Lustre deployments in advance to ensure they are provisioned appropriately before the hardware is ordered and deployed. Due to the complexity of workloads, it can be hard to know the best economic blend of capacity, performance, bandwidth, IOPS, metadata and more before you deploy. Prototyping different configurations quickly and cheaply can ensure a good experience from the start.

What’s next for cloud HPC

Try the Google Cloud and DDN Lustre solution from the GCP Marketplace. Keep an eye out for exciting new features as we upgrade to EXAScaler in the coming months. For more tips on running HPC apps in Google Cloud, watch our talk from Google Cloud Next ’19. You can also learn more about DDN’s other products for data migration to GCP and workload analysis to fine-tune your deployment.
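The hot/cold tiering described in this article can be automated with Cloud Storage Object Lifecycle Management, which transitions objects between storage classes as they age. A minimal sketch, with the age thresholds and bucket name as illustrative assumptions:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365}
      }
    ]
  }
}
```

Saved as lifecycle.json, this can be applied with `gsutil lifecycle set lifecycle.json gs://my-archive-bucket`, so cold simulation results drift to cheaper classes without manual intervention.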
Source: Google Cloud Platform