Preview: Live transcription with Azure Media Services

Azure Media Services provides a platform with which you can broadcast live events. You can use our APIs to ingest, transcode, and dynamically package and encrypt your live video feeds for delivery via industry-standard protocols like HTTP Live Streaming (HLS) and MPEG-DASH. You can also use our APIs to integrate with CDNs and deliver to millions of concurrent viewers. Customers are using this platform for scenarios ranging from multi-day sporting events and entire seasons of professional sports, to webinars and town-hall meetings.

Live transcription is a new preview feature in our v3 APIs that lets you enhance the streams delivered to your viewers with machine-generated text transcribed from the spoken words in the audio feed. You can enable this feature for any type of Live Event that you create in our service, including pass-through Live Events, where you configure a live encoder upstream to generate and push a multi-bitrate live feed into the service (visualized in the diagram below).

Figure 1. Schematic diagram for live transcription

When a live contribution feed is sent to the service, the service extracts the audio signal, decodes it, and calls the Azure Cognitive Services speech-to-text API to get the speech transcribed. The resultant text is then packaged into formats suitable for delivery via streaming protocols. For the HTTP Live Streaming (HLS) protocol with media packaged into MPEG Transport Stream (TS) fragments, the text is packaged into WebVTT fragments. For delivery via MPEG-DASH or HLS with CMAF, the text is wrapped in IMSC1.1-compatible TTML and then packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments.
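
To enable the feature, transcription is specified as part of the Live Event definition when the event is created. The snippet below is a minimal, hypothetical sketch of that call against the ARM REST API from Python; the api-version, the transcriptions property and language code, and the resource names are assumptions drawn from the preview documentation rather than a definitive recipe.

```python
# Hypothetical sketch: create a pass-through Live Event with live transcription
# enabled, using the Azure Media Services v3 ARM REST API. The api-version,
# resource names, and token handling are placeholder assumptions based on the
# preview documentation.
import requests

subscription = "<subscription-id>"
resource_group = "<resource-group>"
account = "<media-services-account>"
live_event = "myLiveEvent"
token = "<azure-ad-bearer-token>"  # e.g. obtained with the azure-identity library

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Media/mediaservices/{account}"
    f"/liveEvents/{live_event}?api-version=2019-05-01-preview"
)

body = {
    "location": "West US 2",  # the region where the preview is available
    "properties": {
        "input": {"streamingProtocol": "RTMP"},    # contribution feed protocol
        "encoding": {"encodingType": "None"},      # pass-through (no cloud encoding)
        "transcriptions": [{"language": "en-US"}], # turn on live transcription
    },
},

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["properties"].get("resourceState"))
```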

You can use Azure Media Player (version 2.3.3 or newer) to play the video and display the text on a wide variety of browsers and devices. You can also play back the streams in the native iOS player. If you’re building an app for Android devices, NexPlayer has verified playback of the transcriptions; you can contact them to request a demo.

Figure 2. Display of live transcription on Azure Media Player

The live transcription feature is now available in preview in the West US 2 region. Read the full article to learn how to get started with this preview feature.
Source: Azure

Exploring container security: Day one Kubernetes decisions

Congratulations! You’ve decided to go with Google Kubernetes Engine (GKE) as your managed container orchestration platform. Your first order of business is to familiarize yourself with Kubernetes architecture, functionality, and security principles. Then, as you get ready to install and configure your Kubernetes environment (on so-called day one), here are some security questions to ask yourself, to help guide your thinking:

How will you structure your Kubernetes environment?
What is your identity provider service and source of truth for users and permissions?
How will you manage and restrict changes to your environment and deployments?
Are there GKE features that you want to use that can only be enabled at cluster-creation time?

Ask these questions before you begin designing your production cluster, and take them seriously, as it’ll be difficult to change your answers after the fact.

Structuring your environment

As soon as you decide on Kubernetes, you face a big decision: how should you structure your Kubernetes environment? By environment, we mean your workloads and their corresponding clusters and namespaces, and by structure we mean which workload goes in which cluster, and how namespaces map to teams. The answer, not surprisingly, depends on who’s managing that environment.

If you have an infrastructure team to manage Kubernetes (lucky you!), you’ll want to limit the number of clusters to make it easier to manage configurations, updates, and consistency. A reasonable approach is to have separate clusters for production, test, and development. Separate clusters also make sense for sensitive or regulated workloads that have substantially different levels of trust. For example, you may want to use controls in production that would be disruptive in a development environment. If a given control doesn’t apply broadly to all your workloads, or would slow down some development teams, segment those workloads into separate clusters and give each dev team or service its own namespace within a cluster.

If there’s no central infrastructure team managing Kubernetes, and it’s more “every team for itself,” then each team will typically run its own cluster. This means more work and responsibility for each team in enforcing minimum standards, but also much more control over which security measures it implements, including upgrades.

Setting up permissions

Most organizations use an existing identity provider, such as Google Identity or Microsoft Active Directory, consistently across the environment, including for workloads running in GKE. This allows you to manage users and permissions in a single place, avoiding potential mistakes like accidentally over-granting permissions, or forgetting to update permissions as users’ roles and responsibilities change.

What permissions should each user or group have in your Kubernetes environment? How you set up your permission model is strongly tied to how you segmented your workloads. If multiple teams share a cluster, you’ll need to use Role-Based Access Control (RBAC) to give each team permissions in its own namespace (some services automate this, providing a self-service way for a team to create and get permissions for its namespace). Thankfully, RBAC is built into Kubernetes, which makes it easier to ensure consistency across multiple clusters, including across providers. Here is an overview of access control in Google Cloud.
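
As a concrete illustration, the sketch below grants a team namespace-scoped permissions with the official Kubernetes Python client. It is a minimal, hypothetical example: the namespace, the Google group, and the rule set are assumptions chosen for illustration, and in practice the same Role and RoleBinding objects are typically applied as YAML manifests through your CI/CD pipeline.

```python
# Hypothetical sketch: give one team edit-style access that is scoped to its own
# namespace, using the official Kubernetes Python client. The namespace, group
# name, and rule set are illustrative assumptions, not a prescribed policy.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
rbac = client.RbacAuthorizationV1Api()

namespace = "team-checkout"

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "checkout-dev", "namespace": namespace},
    "rules": [{
        # Allow managing common workload resources inside this namespace only.
        "apiGroups": ["", "apps", "batch"],
        "resources": ["pods", "services", "configmaps", "deployments", "jobs"],
        "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"],
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "checkout-dev-binding", "namespace": namespace},
    # With Google Groups for GKE, the subject can be a Google group.
    "subjects": [{
        "kind": "Group",
        "name": "checkout-devs@example.com",
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "Role",
        "name": "checkout-dev",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

rbac.create_namespaced_role(namespace=namespace, body=role)
rbac.create_namespaced_role_binding(namespace=namespace, body=binding)
```
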
Deploying to your Kubernetes environment

In some organizations, developers are allowed to deploy directly to production clusters. We don’t recommend this. Giving developers direct access to a cluster is fine in test and dev environments, but for production you want a more tightly controlled continuous delivery pipeline. With this in place, you can set up steps to run tests, ensure that images meet your policies, scan for vulnerabilities, and, finally, deploy your images. And yes, you really should set up these pipelines on day one; it’s hard to convince developers who have always deployed to production to stop doing so later on.

Having a centralized CI/CD pipeline in place lets you put additional controls on which images can be deployed. The first step is to consolidate your container images into a single registry such as Container Registry, typically one per environment. Users can check images into a test registry and, once tests pass and the images are promoted to the production registry, push them to production.

We also recommend that you only allow service accounts (not people) to deploy images to production and make changes to cluster configurations. This lets you audit service account usage as part of a well-defined CI/CD pipeline. You can still give someone access if necessary, but in general it’s best to follow the principle of least privilege when granting service account permissions, and to ensure that all administrative actions are logged and audited.

Features to turn on from day one

A common day-one misstep is failing to enable, at cluster-creation time, certain security features that you might need down the road, because you’ll then have to migrate your cluster once it’s up and running to turn them on. Some GKE security features aren’t turned on by default and can’t be enabled on an existing cluster, for example private clusters and Google Groups for GKE. Rather than trying to make a cluster you’ve used to experiment with these different features production-ready, a better plan is to create a test cluster, make sure its features work as intended, resolve issues, and only then create a real cluster with your desired configurations.
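
As one example of a create-time-only setting, the sketch below creates a cluster with private nodes using the google-cloud-container client library. It is a hedged illustration under assumed names: the project, location, node count, and CIDR block are placeholders, and most teams would do the same thing with gcloud or an infrastructure-as-code tool rather than ad hoc Python.

```python
# Hypothetical sketch: create a GKE cluster with private nodes enabled at
# creation time, using the google-cloud-container client library. The project,
# location, node count, and CIDR ranges are placeholder assumptions.
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="prod-cluster",
    initial_node_count=3,
    # Private clusters must be VPC-native (alias IP ranges).
    ip_allocation_policy=container_v1.IPAllocationPolicy(use_ip_aliases=True),
    private_cluster_config=container_v1.PrivateClusterConfig(
        enable_private_nodes=True,       # nodes get internal IP addresses only
        enable_private_endpoint=False,   # keep a public control-plane endpoint
        master_ipv4_cidr_block="172.16.0.0/28",
    ),
)

operation = gke.create_cluster(
    parent="projects/my-project/locations/us-central1-a",
    cluster=cluster,
)
print(operation.status)
```
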
As you can see, there’s a lot to keep in mind when setting up GKE. Keep up to date with the latest advice and day-two to-dos with the GKE hardening guide.
Source: Google Cloud Platform

Black Friday Deals on Docker + Kubernetes Courses

In honor of Black Friday, America’s favorite shopping holiday, we’ve rounded up the best deals on Docker + Kubernetes learning materials from Docker Captains. Docker Captain is a distinction that Docker awards to select members of the community who are both experts in their field and committed to sharing their Docker knowledge with others. 
Books:

Learn Docker in a Month of Lunches, Elton Stoneman (Save 40% with the code webdoc40).

Docker in Action, Second Edition (2019), Jeff Nickoloff (Save 50% with the code tsdocker).

Manning Publications is also offering half off when you spend $50 this week.

Nigel Poulton’s The Kubernetes Book and Docker Deep Dive ebook bundle is $7 (for both!) through December 1st.

Self-Paced Online Courses:

All of Bret Fisher’s courses are $9.99 through Friday, November 29th. Choose from Docker Mastery, Kubernetes Mastery, Swarm Mastery, and Docker for Node.js.

Elton Stoneman has a wealth of courses on Pluralsight, from Handling Data and Stateful Applications in Docker to Modernizing .Net Framework Apps with Docker. Get 40% off an annual or premium subscription through Friday, November 29th.

Nick Janetakis’s Dive into Docker and Build Web Applications with Flask and Docker courses will be 50% off (no code needed) through December 2nd.

Nigel Poulton’s Kubernetes 101 course is $9.99 with the code K8S101 through Dec 1st.
His 20 Pluralsight courses, including Docker and Kubernetes: The Big Picture, Docker Deep Dive, Getting Started with Kubernetes, and more, are also 40% off with an annual or premium subscription through the 29th.

Docker + Kubernetes in French:

Luc Juggery creates Docker + Kubernetes courses in French. Both his Introduction to Kubernetes and The Docker Platform courses are $9.99 through Friday, November 29th.

This Thanksgiving, level up your skills at a great price, or check out all of the Docker Captains’ educational resources.
Source: https://blog.docker.com/feed/

How Cloud AI is shaping the future of retail—online and in-store

Technology has played a key role in retail for decades, from early innovations like barcode scanning and digital point-of-sale devices to the global frontier of modern logistics. Through it all, however, the fundamentals remain the same: retailers generate huge quantities of data, face unpredictable environments, and need to continually adapt to the ever-evolving needs of the customer. Throw in the chaos of Black Friday and Cyber Monday, and you’ve got one of the most complex enterprise challenges in the world.

It’s also a challenge tailor-made for AI: a technology that thrives on big data, adapts to change fluidly, and can deliver personalized experiences at scale. With the holiday rush upon us, let’s take a look at how two Cloud AI customers—3PM for online shoppers and Tulip for in-store—are helping make retail more efficient, more personal, and more trustworthy.

Tulip is helping brands across the world bring the flexibility and personalization of e-commerce to their in-store experiences. Online, 3PM continuously tracks millions of sellers across a range of e-commerce marketplaces, helping to turn the tide against predatory practices like counterfeit products and trademark infringement.

3PM: Safeguarding online marketplaces at a global scale

Trust is the foundation of every retail experience, and that’s especially true online. With the proliferation of online marketplaces like Amazon, eBay, and Walmart.com, however, trademarks, copyrighted content, and other brand assets are often spread across too many places to be effectively monitored.

Particularly disconcerting is the fast-growing world of counterfeit products. It’s not just knock-off sneakers and handbags, either. Fraudulent supplements, prescription drugs, and even baby food are readily available online, presented in convincing detail intended to fool customers, and they can pose a danger to consumer health. Small merchants and global brands alike have found it difficult to contain counterfeiting, largely due to its decentralized nature. This calls for a solution that lies outside the marketplaces themselves.

3PM Solutions saw an opportunity to help. By combining the power of advanced analytics with data at a global scale, 3PM’s suite of tools can detect counterfeit goods automatically, monitor a brand’s reputation over time, and help the brand understand its customers more deeply.

But getting such an ambitious vision off the ground presented some significant technical challenges for 3PM. Online marketplaces routinely change the format and structure of their listings, quickly confounding hand-written rules and filters. To make matters worse, the content within those listings is notoriously unreliable. For example, counterfeiters often intentionally misspell brand and product names to keep their goods under the radar. It’s a level of complexity that calls for a particularly flexible solution that’s capable of ingesting massive quantities of data while also evolving as the nature of that data changes.

These challenges prompted 3PM to migrate to Google Cloud Platform, bringing the company’s data and infrastructure—and, more importantly, a state-of-the-art AI toolkit—into a single environment.

Google Cloud’s flexibility helped 3PM implement a creative, agile development process. The company’s developers designed a TensorFlow-based image classifier and trained it on billions of examples, forming the basis of a self-serve tool that lets brands accurately detect improper use of product photography, logos, and other trademarks. 
They built custom machine-learning models to intelligently analyze product listings. These models can look past the basics like image and title to incorporate a wide range of data points, detecting subtle features correlated with fraud that rule-based systems—not to mention humans—would miss. 3PM even used the Cloud Translate API to transcend language barriers automatically.

Tulip: Bringing digital personalization to the in-store experience

Of course, brick-and-mortar remains fundamental to the identity of countless brands, with 80% of all sales still taking place in physical stores. Nevertheless, the speed, flexibility, and extreme personalization of e-commerce are influencing customer expectations everywhere—even when shopping in person—and retailers are scrambling to keep up.

Tulip helps retailers meet these demands with a suite of powerful mobile apps that gives retail workers the power of the digital world anywhere in their store, whether they’re looking up products, managing customer information, checking out shoppers, or communicating with customers. Tulip helps physical stores establish deeper relationships with their patrons based on their preferences, behaviors, and purchases—just as they would online—and it’s changing the way global brands do business.

A major challenge in any retail application is forecasting. Whether it’s an unexpected fashion craze or an annual event like Black Friday, retail’s surges and lulls can make traditional allocation of compute resources extremely challenging. “Because we had to scale for peak demand, we had to buy capacity up front, which sat idle much of the time when sales demand was lower,” explains Jeff Woods, director of software for infrastructure at Tulip. “It became difficult and expensive. We were constantly asking the vendor to waive arbitrary limits. We had to use massive instances, and it was difficult to scale down.”

After migrating to Google Cloud, Tulip could deploy on an infrastructure capable of scaling to any size at a moment’s notice—and only pay for what they used. In the process, they also gained access to some of the world’s most advanced machine learning technologies. Now, with their data, infrastructure, and AI tools in one place, the stage was set for Tulip to build an entirely new level of intelligence into their solutions.

Tulip’s solutions use a set of custom TensorFlow models running on AI Platform to identify customer insights and sales opportunities based on data from a customer’s in-store mobile applications. This drives recommendations on when to connect with customers and how to engage them with highly personal and relevant communications. Tulip’s solution is a textbook example of what makes Deployed AI so powerful: using previously unseen patterns in large quantities of data to solve a clearly defined business challenge, all at the speed of retail. “Every day, Tulip collects millions of data points from customer interactions across its channels,” says Ali Asaria, Tulip’s founder and CEO. “By integrating Google machine learning and big data products into our core platform, we can now use that data to provide intelligent insights and recommendations to retail associates.”

Conclusion

Just a few years ago, AI seemed too expensive and complex for companies like 3PM and Tulip. In both cases, however, moving to Google Cloud has demonstrated this technology’s affordability, interoperability, and ease of use. And the results have been transformative.

Whether the crowds are in stores or online, companies like Tulip and 3PM are demonstrating the power—and sometimes, the necessity—of using AI to make every retail interaction safer and more engaging. It’s another example of Deployed AI in action: using state-of-the-art technology to overcome age-old business challenges.
Source: Google Cloud Platform