Azure Stream Analytics now supports MATCH_RECOGNIZE

MATCH_RECOGNIZE in Azure Stream Analytics significantly reduces the complexity and cost of building, modifying, and maintaining queries that match sequences of events for alerts or further data computation.

What is Azure Stream Analytics?

Azure Stream Analytics is a fully managed, serverless PaaS offering on Azure that enables customers to analyze and process fast-moving streams of data and deliver real-time insights for mission-critical scenarios. Developers can use a simple SQL language, extensible with custom code, to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.

Traditional way to incorporate pattern matching in stream processing

Many customers use Azure Stream Analytics to continuously monitor massive amounts of data, detecting sequences of events and deriving alerts or aggregating data from those events. This, in essence, is pattern matching.

For pattern matching, customers have traditionally relied on multiple joins, each detecting a single event of interest. These joins are combined to find a sequence of events, compute results, or create alerts. Developing queries this way is complex, error prone, and difficult to maintain and debug. There are also limitations when trying to express more complex patterns such as Kleene star, Kleene plus, or wildcards.

To address these issues and improve the customer experience, Azure Stream Analytics provides a MATCH_RECOGNIZE clause to define patterns and compute values from the matched events. The MATCH_RECOGNIZE clause increases user productivity because it is easy to read, write, and maintain.

Typical scenario for MATCH_RECOGNIZE

Event matching is an important aspect of data stream processing. The ability to express and search for patterns in a data stream enables users to create simple yet powerful algorithms that trigger alerts or compute values when a specific sequence of events is found.

An example scenario is a food preparation facility with multiple cookers, each with its own temperature monitor. A shutdown operation for a specific cooker needs to be generated if its temperature doubles within five minutes. In that case, the cooker must be shut down because its temperature is rising too rapidly and could either burn the food or create a fire hazard.

Query
SELECT * INTO ShutDown FROM Temperature
MATCH_RECOGNIZE (
    LIMIT DURATION (minute, 5)
    PARTITION BY cookerId
    AFTER MATCH SKIP TO NEXT ROW
    MEASURES
        1 AS shouldShutDown
    PATTERN (temperature1 temperature2)
    DEFINE
        temperature1 AS temperature1.temp > 0,
        temperature2 AS temperature2.temp > 2 * MAX(temperature1.temp)
) AS T

In the example above, MATCH_RECOGNIZE defines a limit duration of five minutes, the measures to output when a match is found, the pattern to match, and how each pattern variable is defined. Once a match is found, an event containing the MEASURES values is output into ShutDown. Matches are partitioned by cookerId, so each cooker is evaluated independently of the others.
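
To make the match concrete, consider a hypothetical stream of readings for one cooker (the timestamps and temperatures below are invented for illustration, not taken from the original post):

12:00:00  cookerId=7  temp=100
12:03:00  cookerId=7  temp=210

The first event binds to temperature1 (100 > 0) and the second to temperature2 (210 > 2 * 100), both inside the five-minute limit duration, so a single event with shouldShutDown = 1 is written to ShutDown for cooker 7.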

MATCH_RECOGNIZE brings an easier way to express pattern matching, decreases the time spent writing and maintaining pattern matching queries, and enables richer scenarios that were practically impossible to write or debug before.

Get started with Azure Stream Analytics

Azure Stream Analytics enables the processing of fast-moving streams of data from IoT devices, applications, clickstreams, and other data streams in real-time. To get started, refer to the Azure Stream Analytics documentation.
Source: Azure

Overcoming language difficulties with AI and Azure services

Ever hear the Abbott and Costello routine, “Who’s on First?” It’s a masterpiece of American English humor. But what if we translated it into another language? With a word-by-word translation, most of what English speakers laugh at would be lost. Such is the problem of machine translation (translation by computer algorithm). If a business depends on words to have an impact on the user, then translation services need to be seriously evaluated for accuracy and effect. This is how Lionbridge approaches the entire world of language translation. Now, they can harness the capabilities of artificial intelligence (AI) to ensure their translations reach a higher bar.

The Azure platform offers a wealth of services for partners to enhance, extend and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Efficient partners for communication in life sciences

For those who deal in healthcare or life sciences, language should not be a barrier to finding the right information. The world of research and reporting is not limited to a few human languages. Life science organizations need to be able to find data from anywhere in the world. And for that, a translation service is needed that preserves not just the facts, but the effect of the original data. This is the goal of Lionbridge, a Microsoft partner dedicated to efficient translation.

In addition to localization, Lionbridge also guards against other dangers related to document handling. For example, insufficient information may be provided to obtain a patient’s informed consent, or a patient’s data may be disclosed by mistake. The penalties for any privacy violation can be steep. Having a third party whose sole business is to govern the documentation provides additional security against data mishandling.

The company can’t do this work on its own; it stresses a collaborative partnership approach to achieve the needed results. That begins with fluency in human languages as well as in the technical domains. From their literature:

“Our team partners with yours to turn sensitive, complex, and frequently-changing content into words that resonate with every end user—from regulatory boards to care providers to patients—around the world. Our clients include pharmaceutical, medical device, medical publishing, and healthcare companies as well as Contract Research Organizations (CROs). Each demands strict attention to detail, expert understanding of nuanced requirements, and the utmost care for the end user.”

It comes as no surprise that Lionbridge depends on a host of skilled, professional translators—10,000 translators across 350 languages.

Specialized solutions

Because of these highly specialized service needs, the company operates as a consultant. After a meeting and an evaluation of existing documentation and workflows, it delivers a new workflow that includes technical services built on Azure. The company also creates a secure document exchange portal for managing translation into 350+ languages. The portal integrates with advanced workflow automation and AI-powered translation. This language technology enables far greater speed and volume with increasing efficiency, opening up new languages, markets, and constituents for customers.

Lionbridge’s portal and translation management system have the appropriate controls in place to support a HIPAA-compliant workflow and are supported by globally distributed “Centers of Excellence.” The staff of these centers ensure adherence to ISO standards and are trained in handling sensitive content, including personal health information (PHI).

The graphic shows the processes involved in creating a translation project. The project must first be defined. It is then handed off to Lionbridge through their “Freeway Platform.” From there, it undergoes the translation process, with quality checks. The customer can track progress and results on a dashboard until the project is deemed complete.

Azure services used in solution

Azure App Service is used as a compute resource to host applications and is valued for its automated scaling and proactive monitoring.
Azure SQL Database is appreciated for its automated backup, geo-replication, and failover features.
Azure Service Fabric supports the need for a microservices oriented platform.
Azure Storage (mostly blobs) is used in many applications, including for CDN purposes, to allow users to access application content in many parts of the world with high speed.
Azure Cognitive Services is used by some applications to provide AI capabilities (a sketch of one such call follows this list).
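
The post doesn’t specify which Cognitive Services APIs are in use. Purely as a hedged illustration of one plausible call, here is a minimal Python sketch against the Azure Translator REST API (the key, region, and target language are hypothetical placeholders):

import json
import uuid

import requests

# Hypothetical placeholders; supply your own Translator resource values.
key = "YOUR_TRANSLATOR_KEY"
region = "westeurope"
endpoint = "https://api.cognitive.microsofttranslator.com/translate"

params = {"api-version": "3.0", "to": "de"}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"text": "The patient must give informed consent."}]

# POST the text and print the translated result.
response = requests.post(endpoint, params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=2))

A real localization pipeline would layer human review on top of a raw call like this, which is exactly the gap Lionbridge’s translators fill.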

Next steps

To find out more, go to the Lionbridge offering on the Azure Marketplace and click Contact me.

To learn more about other healthcare solutions, go to the Azure for health page.
Source: Azure

Introducing NVv4 Azure Virtual Machines for GPU visualization workloads

Azure offers a wide variety of virtual machine (VM) sizes tailored to meet diverse customer needs. Our NV size family has been optimized for GPU-powered visualization workloads, such as CAD, gaming, and simulation. Today, our customers use these VMs to power remote visualization services and virtual desktops in the cloud. While our existing NV size VMs work great for graphics-heavy visualization workloads, a common piece of customer feedback is that entry-level desktops in the cloud need only a fraction of a GPU’s resources. Currently, the smallest GPU VM size comes with one full GPU and more vCPU/RAM than a knowledge worker desktop requires in the cloud. For some customers, this is not a cost-effective configuration for entry-level scenarios.

Announcing NVv4 Azure Virtual Machines, based on AMD EPYC 7002 processors and virtualized Radeon MI25 GPUs.

The new NVv4 virtual machine series will be available for preview in the fall. NVv4 offers unprecedented GPU resourcing flexibility, giving customers more choice than ever before. Customers can select from VMs with a whole GPU all the way down to 1/8th of a GPU. This makes entry-level and low-intensity GPU workloads more cost-effective than ever before, while still giving customers the option to scale up to powerful full-GPU processing power.

NVv4 Virtual Machines support up to 32 vCPUs, 112 GB of RAM, and 16 GB of GPU memory.

Size               | vCPU | Memory | GPU memory | Azure network
Standard_NV4as_v4  | 4    | 14 GB  | 2 GB       | 50 Gbps
Standard_NV8as_v4  | 8    | 28 GB  | 4 GB       | 50 Gbps
Standard_NV16as_v4 | 16   | 56 GB  | 8 GB       | 50 Gbps
Standard_NV32as_v4 | 32   | 112 GB | 16 GB      | 50 Gbps

With our hardware-based GPU virtualization solution built on top of AMD MxGPU and industry standard SR-IOV technology, customers can securely run workloads on virtual GPUs with dedicated GPU frame buffer. The new NVv4 Virtual Machines will also support Azure Premium SSD disks. NVv4 will have simultaneous multithreading (SMT) enabled for applications that can take advantage of additional vCPUs.

For customers looking to utilize GPU-powered VMs as part of a desktop as a service (DaaS) offering, Windows Virtual Desktop provides a comprehensive desktop and application virtualization service running in Azure. The new NVv4-series Virtual Machines will be supported by Windows Virtual Desktop, as well as by Azure Batch for cloud-native batch processing.

Remote display applications and protocols are key to a good end-user experience with VDI/DaaS in the cloud. The new virtual machine series will work with Windows Remote Desktop (RDP) 10, Teradici PCoIP, and HDX 3D Pro. The AMD Radeon GPUs support DirectX 9 through 12, OpenGL 4.6, and Vulkan 1.1.

Customers can sign up for NVv4 access today by filling out this form. NVv4 Virtual Machines will initially be available later this year in the South Central US and West Europe Azure regions and will be available in additional regions soon thereafter.
Source: Azure

Introducing the new HBv2 Azure Virtual Machines for high-performance computing

Announcing the second-generation HB-series Azure Virtual Machines for high-performance computing (HPC). HBv2 Virtual Machines are designed to deliver leadership-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads.

HBv2 Virtual Machines feature 120 AMD EPYC™ 7002-series CPU cores, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 Virtual Machines provide up to 350 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and three times faster than what most HPC customers have in their datacenters today.

Size             | CPU cores | Memory | Memory per CPU core | Local SSD | RDMA network | Azure network
Standard_HB120rs | 120       | 480 GB | 4 GB                | 1.6 TB    | 200 Gbps     | 40 Gbps

‘r’ denotes support for RDMA. ‘s’ denotes support for Premium SSD disks.

Each HBv2 virtual machine (VM) also features up to 4 teraFLOPS of double-precision performance and up to 8 teraFLOPS of single-precision performance. This is a fourfold increase over our first generation of HB-series Virtual Machines, and it substantially improves performance for applications demanding the fastest memory and leadership-class compute density.

(Figure: preliminary HBv2 benchmark results across several common HPC applications and domains.)

To drive optimal at-scale message passing interface (MPI) performance, HBv2 Virtual Machines feature 200 Gb/s HDR InfiniBand from our technology partners at Mellanox. The InfiniBand fabric backing HBv2 Virtual Machines is a non-blocking fat tree with a low-diameter design for consistent, ultra-low latencies. Customers can use standard Mellanox/OFED drivers just as they would in a bare-metal environment. HBv2 Virtual Machines officially support RDMA verbs and hence all InfiniBand-based MPIs, such as OpenMPI, MVAPICH2, Platform MPI, and Intel MPI. Customers can also leverage hardware offload of MPI collectives to realize additional performance and efficiency gains for commercially licensed applications.

Across a single virtual machine scale set, customers can run a single MPI job on HBv2 Virtual Machines at up to 36,000 cores. For our largest customers, HBv2 Virtual Machines support up to 80,000 cores for single jobs.
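
The post doesn’t include sample job code. Purely as a hedged illustration of the kind of MPI job these VMs run, here is a trivial mpi4py program (mpi4py is one Python binding that sits on top of MPI implementations like those listed above):

from mpi4py import MPI

# Each rank identifies itself; a stand-in for real solver work.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# A toy collective: sum the rank IDs across every process.
total = comm.allreduce(rank, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks running; sum of ranks = {total}")

Launched with, for example, mpirun -np 120 python hello_mpi.py, it would occupy all 120 cores of a single HBv2 VM; scaling to thousands of cores is then a matter of hostfile and scheduler configuration.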

Customers can also maximize the Ethernet interface of HBv2 Virtual Machines by using SR-IOV-based accelerated networking in Azure, which yields up to 40 Gb/s of bandwidth with consistent, low latencies.

Finally, the new H-series Virtual Machines feature local NVMe SSDs to deliver ultra-fast temporary storage for the full range of file sizes and I/O patterns. Using modern burst-buffer technologies like BeeGFS BeeOND, the new H-series Virtual Machines can deliver more than 900 GB/sec of peak injection I/O performance across a single virtual machine scale set. The new H-series Virtual Machines will also support Azure Premium SSD disks.

Customers can accelerate their HBv2 deployments with a variety of resources optimized and pre-configured by the Azure HPC team. Our pre-built HPC image for CentOS is tuned for optimal performance and bundles key HPC tools such as MPI libraries, compilers, and more. The AzureHPC Project helps customers deploy an end-to-end Azure HPC environment reliably and quickly, and it includes deployment scripts for setting up building blocks for networking, compute, schedulers, and storage, along with a growing list of tutorials for running the HPC applications themselves.

For customers familiar with HPC schedulers and who would like to use these with HBv2 Virtual Machines, Azure CycleCloud is the simplest way to orchestrate autoscaling clusters. Azure CycleCloud supports schedulers such as Slurm, PBSPro, LSF, GridEngine, and HTCondor, and enables hybrid deployments for customers wishing to pair HBv2 Virtual Machines with their existing on-premises clusters. The new H-series Virtual Machines will also be supported by Azure Batch for cloud-native batch processing. HBv2 Virtual Machines will be available to all Azure platform partners.

Customers can sign up for HBv2 access today by filling out this form. HBv2 Virtual Machines will initially be available in the South Central US and West Europe Azure regions, with availability in additional regions soon thereafter.
Source: Azure

Portworx Enterprise Operator on Red Hat OpenShift

This is a guest blog by Vick Kelkar from Portworx. Vick is a product person at Portworx. With over 15 years in the technology and software industry, Vick focuses on developing new data infrastructure products for platforms like PCF, PKS, Docker, and Kubernetes. Kubernetes adoption was initially powered by stateless applications. As the project […]
Source: OpenShift

The age of Deployed AI is here: See how Google Cloud customers transform their businesses with AI

AI continues to transform industries across the globe, and business decision makers of all kinds are taking notice. But there’s a problem: although 80% of today’s enterprises recognize that AI is critical to their future, only 14% succeed in harnessing it (source). In other words, a gap remains between the potential of AI and its ease of deployment.

At Google Cloud AI, closing this gap is perhaps my most important responsibility. After years of breakthroughs, AI has stabilized with the emergence of sophisticated tools, best practices, and a rapidly growing community of builders. Finally, we’ve answered the question of what AI is. Now it’s time to ask what it can do—for you, and your business. We call it the era of Deployed AI.

At the heart of Deployed AI is a vision for transforming your business. It’s a clear objective that can bring an entire organization together, aligning teams and engaging stakeholders. This diverse range of perspectives means a deeper understanding of risks and benefits, as well as the cost of change—both practical and emotional. Once deployed, success should be measurable with clear, objective metrics. This encourages an ongoing cycle of refinement, allowing you to continually optimize results while reinforcing trust with your users.

Now, let’s see how some of the world’s biggest brands are using Deployed AI to make their goals possible.

Unilever: Personal Marketing at a Global Scale

Take Unilever, for instance. They’re among the biggest consumer goods companies in the world, with 400 brands ranging from Dove personal care products to Ben & Jerry’s ice cream. Despite their remarkable reach, however, Unilever is a company built on social consciousness and authenticity. So when it came time to align around a Deployed AI strategy, they asked an audacious question, to put it mildly: what if it were possible to maintain true, one-to-one relationships with each of their customers?

…all one billion of them?

In any other era, this would have been impossible. But in the age of Deployed AI, trade-offs that were once inescapable are becoming win-wins—including the age-old tension between global reach and a personal touch. Using a broad range of consumer insights alongside Google Cloud AI tools such as translation, visual analytics, and natural language processing, Unilever is generating insights faster than ever before and gaining an entirely new understanding of what their customers care about.

For example, the Cloud Vision API made it possible to understand the content of photos submitted by Closeup toothpaste customers for a campaign spanning South Asia. In addition, the Natural Language API revealed audience sentiment by analyzing social media comments referencing the campaign. Together, these insights continually reshaped the campaign, giving millions of users a genuine sense of participation.

The vision paid off, and the numbers speak for themselves. The campaign reached nearly 500 million people across multiple continents, generating measurably positive uplift in brand engagement and consideration in the process. And it demonstrated Unilever’s commitment to each customer’s experience, even at a global scale.

Unilever’s story demonstrates the power of a tech-savvy company putting the best of Cloud AI to use. But what about companies just beginning their journey with technologies like machine learning?

Deployed AI is about bridging the expertise gap, which is why we’re continually investing in a technology stack that takes the risk out of an AI strategy—sparing you the complexity of implementation and letting you focus on how state-of-the-art tools can solve the problems that matter most to you.

Meredith Corporation: Individualized Curation, Available on Tap

Meredith Corporation is one of the biggest names in media, with brands including People, InStyle and Martha Stewart Living, and an audience of over 175 million readers in the United States. They also own recipe website AllRecipes, where they perfected a manual review process that made content easy to classify and target based on reader preferences. Although highly effective, it was slow, costly, and took years to implement—making it near impossible to replicate for their more than 40 other brands.

This challenge gave Meredith a clear objective for their Deployed AI strategy: build an automated solution with the insight of a manual curation process. Lacking the in-house expertise to do it themselves, however, they turned to Google Cloud’s AutoML Natural Language, which made it easy to generate a custom, ready-to-use content classifier with data they already had. The results weren’t just immediate, but transformative: from day one, the model matched—and sometimes exceeded—the best work of their content review team.

Now, Cloud AutoML has replicated what took Meredith years to build manually, transforming the rest of their properties in a matter of months. The result is tailored content and a consistently elevated experience, impacting tens of millions of readers in a time frame impossible without AI.

Making Trust a Fundamental Part of Every AI Deployment

Finally, there’s the question of trust. Users have never been more socially conscious, and that’s encouraging all of us to make our technology more fair, more accessible, and more secure. AI presents special challenges, however, with bias, privacy and transparency among the most pressing examples.

These are complex problems that demand a mixture of technical, social and institutional solutions. A long journey lies ahead, but it’s one we look forward to sharing with you. We took our first step last year, with the publication of the principles that guide our work in this field. Since then, we’ve greatly expanded our efforts to solve problems and share what we’ve learned in the form of ever-evolving Responsible AI Practices.

At Google Cloud, we’re committed to giving you more than technology. We’re sharing everything we’ve learned, along with the tools, insights and practices you need to deploy it responsibly.

Conclusion

Deployed AI is putting the power to transform your business within reach. It’s about identifying business problems that defy traditional solutions, crafting specialized AIs from Cloud AI components to solve them in ways never before possible, and establishing the results as a new standard across the enterprise. And its focus on real-world metrics means success can be continually measured and optimized in an ongoing cycle of improvement. Over time, this means an ever-growing value, user trust and business impact.

Deployed AI is what’s possible when technologies like machine learning mature, and it’s why we believe the shift from the breakthroughs of AI to the applications of AI will be the most exciting chapter in its history. At Google Cloud, we’re creating a foundation that everyone can build on, regardless of industry or expertise. Let’s create something incredible together.
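
Returning to the Unilever example: the post credits the Natural Language API with surfacing audience sentiment from social media comments. Purely as a hedged illustration (not Unilever’s actual pipeline), a minimal Python call of that kind might look like this:

from google.cloud import language_v1

# Hypothetical comment; Unilever's real inputs and pipeline are not public.
comment = "Loved the Closeup campaign, the photos were amazing!"

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content=comment,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# The score ranges from -1.0 (negative) to 1.0 (positive).
sentiment = client.analyze_sentiment(
    request={"document": document}
).document_sentiment
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")

Aggregating scores like these across millions of comments is what lets a campaign team track sentiment as a campaign evolves.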
Source: Google Cloud Platform

Streaming data from Cloud Storage into BigQuery using Cloud Functions

Using Cloud Storage from Google Cloud Platform (GCP) helps you securely store your data and integrate storage into your apps. For real-time analysis of Cloud Storage objects, you can use GCP’s BigQuery. There are a number of options for streaming data from a Cloud Storage bucket into a BigQuery table. We’ve put together a new solution guide to help you stream data quickly into BigQuery from Cloud Storage. In this post, we’ll discuss how to continually copy newly created objects in Cloud Storage to BigQuery using Cloud Functions.

Using Cloud Functions lets you automate the process of copying objects to BigQuery for quick analysis, giving you near real-time access to data uploaded to Cloud Storage. This means you can get better information faster, and respond more quickly to events happening in your business.

Cloud Functions is GCP’s event-driven, serverless compute platform, which provides automatic scaling, high availability, and fault tolerance with no servers to provision, manage, update, or patch. Streaming data using Cloud Functions lets you connect and extend other GCP services while paying only when your app is running.

Note that it’s also possible to stream data into BigQuery using Cloud Dataflow. Cloud Dataflow uses the Apache Beam framework, which provides windowing and session analysis primitives, as well as an ecosystem of source and sink connectors in Java, Python, and other languages. However, if you’re not fluent in the Apache Beam API and you’re ingesting files without windowing or complex transformations, such as streaming small files directly into tables, Cloud Functions is a simple and effective option. You can also use both: Cloud Dataflow (Apache Beam) for complex ETL and large data sets, and Cloud Functions for small files and simpler transformations.

How this Cloud Functions solution works

The following architecture diagram illustrates the components and flow of a streaming pipeline created with Cloud Functions. This pipeline assumes that you’re uploading JSON files into Cloud Storage, so you’ll have to make minor changes to support other file formats.

In Step 1, JSON files are uploaded to Cloud Storage. Every time a new file is added to the FILES_SOURCE bucket, the streaming Cloud Function is triggered (Step 2). This function parses the data, streams inserts into BigQuery (Step 3), logs the ingestion status into Cloud Firestore for deduping (Step 4), and publishes a message to one of two Cloud Pub/Sub topics: streaming_success_topic if everything went right, or streaming_error_topic if any issue occurred (Step 5). Finally, either the streaming_error or the streaming_success Cloud Function moves the JSON file from the source bucket to either the FILES_ERROR bucket or the FILES_SUCCESS bucket (Step 6).

Using Cloud Functions means this architecture is not only simple but also flexible and powerful. Beyond lightweight scaling up and down to fit your file-uploading capacity, Cloud Functions also allows you to implement custom functionality, such as the use of Cloud Firestore.

Get started by visiting the solutions page tutorial, where you can get detailed information on how to implement this type of architecture. And try GCP to explore further.
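
The post points to the full tutorial rather than inlining code. As a minimal, hedged sketch of the core streaming function only (the dataset and table names below are hypothetical, and the Firestore dedupe and Pub/Sub steps are omitted), it might look something like this in Python:

import json

from google.cloud import bigquery, storage

BQ = bigquery.Client()
CS = storage.Client()

# Hypothetical destination; the tutorial wires this up via configuration.
TABLE_ID = "my-project.my_dataset.my_table"

def streaming(data, context):
    """Background Cloud Function triggered by a Cloud Storage finalize event.

    Parses the newly uploaded JSON file and streams it into BigQuery.
    """
    blob = CS.bucket(data["bucket"]).blob(data["name"])
    row = json.loads(blob.download_as_string())

    # insert_rows_json performs a BigQuery streaming insert.
    errors = BQ.insert_rows_json(TABLE_ID, [row])
    if errors:
        # The full solution would publish to streaming_error_topic here.
        raise RuntimeError(f"BigQuery insert failed: {errors}")

Deployed with, for example, gcloud functions deploy streaming --trigger-resource FILES_SOURCE --trigger-event google.storage.object.finalize, the function runs on every new upload to the source bucket.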
Source: Google Cloud Platform

See Docker Enterprise 3.0 in Action in Our Upcoming Webinar Series

Docker Enterprise 3.0 represents a significant milestone for the industry-leading enterprise container platform. It is the only end-to-end solution for Kubernetes and modern applications that spans from the desktop to the cloud. With Docker Enterprise 3.0, organizations can build, share, and run modern applications of any language or technology stack, on their choice of infrastructure and operating system.

To showcase all of the capabilities of the platform and highlight what is new in this release, we invite you to join our 5-part webinar series exploring the technologies that make up Docker Enterprise 3.0. You’ll see several demos of the platform and gain a better understanding of how Docker can help your organization deliver high-velocity innovation while providing the choice and security you need. We designed the webinar series both for those new to containers and Kubernetes and for those who are just here to learn more about what’s new. We’re excited to share what we’ve been working on.

Sign Up for the Series

Here’s an overview of what we’ll be covering in each session.

Part 1: Content Management

Tuesday, August 13, 2019 @ 11am PDT / 2pm EDT

This webinar will cover the important aspects of tracking the provenance of your container images and securing them.

Part 2: Security

Wednesday, August 14, 2019 – 11am PDT / 2pm EDT

Learn how Docker Enterprise uses a multi-layered approach to security in delivering a secure software supply chain. 

Part 3: Docker Applications

Thursday, August 15, 2019 @ 11am PDT / 2pm EDT

Find out how you can accelerate developer productivity with the use of application templates and Docker Applications – a new way to define multi-service applications based on the CNAB specification.

Part 4: Operations Management

Tuesday, August 20, 2019 – 11am PDT / 2pm EDT

Discover how Docker Enterprise makes Day 1 and Day 2 operations simple and repeatable.

Part 5: Docker Kubernetes Service

Wednesday, August 21, 2019 – 11am PDT / 2pm EDT

See how Docker Kubernetes Service (DKS) makes Kubernetes easy to use and more secure for the entire organization.

To learn more about Docker Enterprise 3.0:

Register for the webinar series
Test drive Docker Enterprise 3.0 with a free, hosted trial

Source: https://blog.docker.com/feed/