Announcing Face Redaction for Azure Media Analytics

Azure Media Redactor is part of Azure Media Analytics and offers scalable redaction in the cloud. This Media Processor (MP) performs anonymization by blurring the faces of selected individuals, and is ideal for public safety and news media scenarios. The use of body-worn cameras in policing and public spaces is becoming increasingly commonplace, which places a larger burden on these departments when videos are requested for disclosure through Freedom of Information or Public Records acts. Responding to these requests takes time and money, as the faces of minors or bystanders must be blurred out. Manually redacting just a few minutes of footage from a video with multiple faces can take hours. This service reduces that labor-intensive task to just a few simple touchups.

Azure Media Analytics

Azure Media Analytics is a collection of speech and vision services offered with enterprise-scale compliance, security, and global reach. For the other Media Analytics processors offered by Azure, see Milan Gada's blog post Introducing Azure Media Analytics. You can access these features in our new Azure portal, through our APIs with the presets below, or using the free Azure Media Services Explorer tool. Redaction will be a free public preview for a limited time, and will be available in all public datacenters starting around mid-September. China and US Gov datacenters will be included in the GA release.

Face Redaction

Facial redaction works by detecting faces in every frame of video and tracking the face object both forwards and backwards in time, so that the same individual can be blurred from other angles as well. Redaction is still a difficult problem for computers to solve, and accuracy is not at the level of a real person; expect false positives and false negatives, especially with difficult video such as low-light or high-movement scenes. Since automated redaction may not be 100% accurate, we provide a couple of ways to modify the final output.
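Tracking of this kind is commonly implemented by linking per-frame detections whose bounding boxes overlap strongly. The Python sketch below is purely illustrative (a greedy intersection-over-union matcher, not the actual algorithm used by the Media Processor); it also shows why a face that is briefly lost can come back with a fresh ID:

```python
# Hypothetical sketch of face-track linking via greedy IoU matching.
# Boxes are (x, y, width, height), as in the annotation format later in
# this post. This is NOT the actual Azure Media Redactor algorithm.

def iou(a, b):
    """Intersection-over-union of two (x, y, width, height) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def link_tracks(frames, threshold=0.3):
    """Assign a track ID to each detection by greedily matching it to the
    best-overlapping detection from the previous frame. A detection with
    no good match starts a new track (a simplification: two detections
    may claim the same previous ID)."""
    next_id = 1
    prev = []      # (track_id, box) pairs from the previous frame
    tracks = []
    for boxes in frames:
        current = []
        for box in boxes:
            best_id, best_score = None, threshold
            for tid, pbox in prev:
                score = iou(box, pbox)
                if score > best_score:
                    best_id, best_score = tid, score
            if best_id is None:  # no match: new individual, or a lost track reappearing
                best_id, next_id = next_id, next_id + 1
            current.append((best_id, box))
        tracks.append(current)
        prev = current
    return tracks

# Two faces that move slightly between frames keep their IDs.
frames = [
    [(0.30, 0.03, 0.15, 0.32), (0.56, 0.08, 0.15, 0.35)],
    [(0.31, 0.04, 0.15, 0.32), (0.57, 0.08, 0.15, 0.35)],
]
for frame in link_tracks(frames):
    print([tid for tid, _ in frame])
```

Because this matcher only looks one frame back, losing a face for even a single frame splits its track into two IDs, which mirrors the ID resets described later in this post.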
In addition to a fully automatic mode, there is a two-pass workflow that allows the selection/de-selection of found faces via a list of IDs, as well as arbitrary per-frame adjustments using a metadata file in JSON format. The workflow is split into 'Analyze' and 'Redact' modes, plus a single-pass 'Combined' mode that runs both in one job.

Combined mode

This mode produces a redacted mp4 automatically, without any manual input.

Media Processor name: "Azure Media Redactor"

Input asset: foo.bar (video in WMV, MOV, or MP4 format)
Input config: job configuration preset {'version':'1.0', 'options': {'Mode':'combined'}}
Output asset: foo_redacted.mp4 (video with blurring applied)

Analyze mode

The "analyze" pass of the two-pass workflow takes a video input and produces a JSON file of face locations, plus jpg images of each detected face.

Input asset: foo.bar (video in WMV, MOV, or MP4 format)
Input config: job configuration preset {'version':'1.0', 'options': {'Mode':'analyze'}}
Output asset: foo_annotations.json (annotation data of face locations in JSON format; this can be edited by the user to modify the blurring bounding boxes, see the sample below)
Output asset: foo_thumb%06d.jpg [foo_thumb000001.jpg, foo_thumb000002.jpg] (a cropped jpg of each detected face, where the number indicates the labelId of the face)

Output example:

{
  "version": 1,
  "timescale": 50,
  "offset": 0,
  "framerate": 25.0,
  "width": 1280,
  "height": 720,
  "fragments": [
    {
      "start": 0,
      "duration": 2,
      "interval": 2,
      "events": [
        [
          {
            "id": 1,
            "x": 0.306415737,
            "y": 0.03199235,
            "width": 0.15357475,
            "height": 0.322126418
          },
          {
            "id": 2,
            "x": 0.5625317,
            "y": 0.0868245438,
            "width": 0.149155334,
            "height": 0.355517566
          }
        ]
      ]
    },
    … truncated

Redact mode

The second pass of the workflow takes a larger number of inputs that must be combined into a single asset: a list of IDs to blur, the original video, and the annotations JSON. This mode uses the annotations to apply blurring to the input video.

Input asset: foo.bar (video in WMV, MOV, or MP4 format; the same video as in step 1)
Input asset: foo_annotations.json (the annotations metadata file from phase one, with optional modifications)
Input asset: foo_IDList.txt (optional; a newline-separated list of face IDs to redact; if left blank, all faces are blurred)
Input config: job configuration preset {'version':'1.0', 'options': {'Mode':'redact'}}
Output asset: foo_redacted.mp4 (video with blurring applied based on the annotations)

Example output

This is the output from an IDList with one ID selected.

Understanding the annotations

The Redaction MP provides high-precision face location detection and tracking that can detect up to 64 human faces in a video frame. Frontal faces provide the best results, while side faces and small faces (less than or equal to 24×24 pixels) are challenging.
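Because the Redact pass is driven entirely by this annotation JSON, the file can be inspected or edited with a few lines of code. The Python sketch below is hypothetical (the field names come from the sample above; the helper name and the inlined sample are ours): it scales the normalized coordinates to pixel boxes and filters by a set of face IDs, much as an IDList does.

```python
import json

# Abbreviated copy of the sample annotation output shown above,
# inlined here for illustration.
ANNOTATIONS = """
{
  "version": 1, "timescale": 50, "offset": 0, "framerate": 25.0,
  "width": 1280, "height": 720,
  "fragments": [
    {"start": 0, "duration": 2, "interval": 2,
     "events": [[
       {"id": 1, "x": 0.306415737, "y": 0.03199235,
        "width": 0.15357475, "height": 0.322126418},
       {"id": 2, "x": 0.5625317, "y": 0.0868245438,
        "width": 0.149155334, "height": 0.355517566}
     ]]}
  ]
}
"""

def faces_to_redact(doc, id_list=None):
    """Yield (face_id, pixel_box) for every event, keeping only the IDs
    in id_list (all faces when id_list is None). The normalized x, y,
    width, height values are scaled to pixels using the frame size."""
    w, h = doc["width"], doc["height"]
    for fragment in doc["fragments"]:
        for frame_events in fragment["events"]:
            for face in frame_events:
                if id_list is None or face["id"] in id_list:
                    yield face["id"], (round(face["x"] * w),
                                       round(face["y"] * h),
                                       round(face["width"] * w),
                                       round(face["height"] * h))

doc = json.loads(ANNOTATIONS)
# Mimic an IDList.txt that selects only face 2:
print(list(faces_to_redact(doc, id_list={2})))
```

Editing the x/y/width/height values in the same way and feeding the modified JSON back in as the annotations input is how per-frame adjustments to the blurring bounding boxes are made.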
The detected and tracked faces are returned with coordinates indicating the location of each face, as well as a face ID number indicating the tracking of that individual. Face ID numbers may reset when the frontal face is lost or overlapped in the frame, resulting in some individuals being assigned multiple IDs. For detailed explanations of each attribute, visit the Face Detector blog.

Getting started

To use this service, simply create a Media Services account within your Azure subscription and use our REST API/SDKs or the Azure Media Services Explorer (v3.44.0.0 or higher). For sample code, check out our documentation page, replacing the presets with the ones above and using the Media Processor name "Azure Media Redactor".

Contact us

Keep up with the Azure Media Services blog to hear more updates on the Face Detection Media Processor and the Media Analytics initiative! Send your feedback and feature requests to our UserVoice page. If you have any questions about any of the Media Analytics products, send an email to amsanalytics@microsoft.com.
Quelle: Azure

LinkedIn Co-Founder Says He'll Pay To See Trump’s Tax Returns

BuzzFeed News / Getty Images

LinkedIn co-founder Reid Hoffman would like to see Donald Trump's tax returns, and he's willing to pay up to $5 million for the opportunity.

On Monday, Hoffman pledged his support for a Crowdpac.com crowdfunding campaign aimed at pressuring Trump into releasing his tax returns. The campaign was started by a US military veteran named Pete Kiernan, who says he’ll donate the cash — almost $5,000 so far — to 10 veterans affairs groups if Trump releases his tax returns.

“Trump claims to love veterans,” reads Kiernan’s Crowdpac.com page, “and so we’re asking him to put his money where his mouth is.”

If Kiernan succeeds and Trump releases his tax returns, Hoffman — whose net worth increased by $800 million in a single day this year when Microsoft acquired LinkedIn — says he will quintuple the sum raised. So, if the Crowdpac campaign meets its goal of $25,000, and if Trump makes public documents that he’s so far vehemently insisted on keeping private, Hoffman would donate $125,000; the more money the campaign raises, the more Hoffman will donate, with a cap at $5 million.

In a post on Medium published Monday, Hoffman noted that $5 million is the same amount that Trump himself pledged to donate to charity during the 2012 election if President Obama agreed to his request to release college records and passport documents.

“Given Trump's vocal support of veterans, I imagine he will recognize the great good that can come from Kiernan's proposal,” Hoffman writes. “But taking Trump's own 2012 offer to President Obama into account, I'd like to assist Kiernan in his campaign.”

It’s worth noting that Hoffman was an early investor in Crowdpac, which bills itself as a crowdfunding platform designed for political campaigns. Hoffman is a partner at Greylock Partners, but participated in Crowdpac’s $6 million Series A in early 2016 as an independent investor.

It’s also worth noting that this is not the first Crowdpac campaign Hoffman has publicly involved himself in — or even the first this month. Last week, Hoffman announced that he would donate $25,000 to a campaign to recall Judge Aaron Persky, who presided over the Stanford sexual assault case. As of this writing, that campaign has amassed $40,000 toward its $250,000 goal.

Though Hoffman, as an investor, stands to profit from Crowdpac's success, some Silicon Valley luminaries see his investment in progressive political causes as a worthy use of his wealth.

The deadline for the Crowdpac campaign is Oct. 19, the date of the final presidential debate.

Quelle: BuzzFeed

Running Powershell on Google Cloud SDK

Posted by Mete Atamel, Developer Advocate

It’s exciting to see so many options for .NET developers to manage their cloud resources on Google Cloud Platform. Apart from the usual Google Cloud Console, there’s Cloud Tools for Visual Studio, and the subject of this post: Cloud Tools for PowerShell.

PowerShell is a command-line shell and associated scripting language built on the .NET Framework. It’s the default task automation and configuration management tool used in the Windows world. A PowerShell cmdlet is a lightweight command invoked within PowerShell.

Cloud Tools for PowerShell is a collection of cmdlets for accessing and manipulating GCP resources. It’s currently in beta and allows access to Google Compute Engine, Google Cloud Storage, Google Cloud SQL and Google Cloud DNS, with more to come! For other services, you can still use the gcloud command line tool inside Google Cloud SDK Shell.

Installation

PowerShell cmdlets come as part of the Cloud SDK for Windows installation, so make sure that you’ve checked the PowerShell option when installing Cloud SDK.

If you want to add PowerShell cmdlets into an existing Cloud SDK installation, you’ll need to do a little more work.

First, you need to install cmdlets using gcloud:

$ gcloud components install powershell

Second, you need to register cmdlets with your PowerShell environment. This is done by running a script named AppendPsModulePath.ps1 (provided by Cloud SDK) in PowerShell. Depending on whether Cloud SDK was installed per user or for all users, you can find this script either in

%AppData%\..\Local\Google\Cloud SDK\google-cloud-sdk\platform\GoogleCloudPowerShell

or

C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\GoogleCloudPowerShell

Authentication
As with any other Google Cloud APIs, you need to be authenticated before you can use cmdlets. Here’s the gcloud command to do that:

$ gcloud auth login

PowerShell cmdlets

Once authenticated, you’re ready to use GCP cmdlets within PowerShell. For example, the Get-GceInstance cmdlet lists the properties of a Compute Engine instance, and the New-GcsBucket cmdlet creates a Google Cloud Storage bucket.

Here are some of the tasks you can perform with PowerShell cmdlets against a Google Compute Engine instance:

Create a Compute Engine VM instance.
Start, stop and restart an instance.
Add or remove a firewall rule.
Create a disk snapshot.

Some of the tasks you can perform against Google Cloud Storage are:

Create a storage bucket.
List all the buckets in the project.
List the contents of a bucket.
Get, or delete an item in a bucket.

We also have guides on how to administer Google Cloud SQL instances and how to configure the DNS settings for a domain using Cloud DNS.

The full list of Cloud Storage cmdlets can be found here.

Summary
With Cloud Tools for PowerShell, .NET developers can now script and automate their Compute Engine, Cloud Storage, Cloud SQL and Cloud DNS resources using PowerShell. Got questions? Let us know. Bugs? Report them here. Want to contribute? Great! Care to be part of a UX study? Click here! We’re ramping up our efforts for Windows developers, and would love to hear from you about the direction you want us to take.

Quelle: Google Cloud Platform

Docker at Tech Field Day 2016

Save the date! This coming Thursday, Docker is excited to host the delegates of Cloud Field Day at our headquarters for a deep dive into the Docker platform. Cloud Field Day is part of the Tech Field Day series, events that bring together technology companies and IT thought leaders to talk shop and share insights.
Cloud Field Day will be live and in person at Docker HQ, but anyone can join in via the live stream. Docker will be featured at 1pm on Thursday, Sept 15th. Join us by visiting the Cloud Field Day event page.
Cloud Field Day is just one in a series of Tech Field Day sessions coordinated by IT industry veterans Stephen Foskett and Tom Hollingsworth. Learn more about the whole Tech Field Day series here.
 
ICYMI: Our very own Mike Coleman spoke at the Tech Field Day Express at VMworld. In this one-hour session, Mike walked a group of vExperts through an introduction to containers, what the new end-to-end application workflow looks like, and an overview of Docker 1.12 with built-in orchestration.
 
Intro to Docker

Build, Ship, Run with Docker

Docker 1.12 with built-in orchestration

See you online!

Join us for the Cloud Field Day live stream
Read the ebook: Docker for the Virtualization Admin
Learn more about Docker Datacenter
Try Docker Datacenter free for 30 days

The post Docker at Tech Field Day 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Instagram Rolls Out Custom And Default Keyword Filtering To Combat Harassment

Instagram

Today Instagram is rolling out two keyword filters to help limit abuse and harassment on its platform.

The tools, a custom keyword filter and a default keyword filter, allow you to filter out comments with either your own specific words (if you don't want to see comments with “asshole” in them, you add the word “asshole” to the custom filter) or comments with any of hundreds of offensive words Instagram built into its “default” filter.

Keyword filtering is becoming a popular anti-harassment tool. Twitter is working on a similar keyword filtering tool, according to a Bloomberg report. And Gab.ai, a new social network growing fast among conservatives, uses keyword filtering and muting as its only content moderation tools.

Filtering is lauded by some, since it puts editorial decisions in the hands of users, and not the platforms. However, for Instagram, the filters are an addition to its current terms of service — which explicitly prohibit hateful, discriminatory and other objectionable forms of content — and will not replace them.

The new filters, which are already available to verified users, roll out globally today.

Here's Instagram CEO Kevin Systrom's full blog post announcing the change:

When Mike and I first created Instagram, we wanted it to be a welcoming community where people could share their lives. Images have the ability to inspire and bring out the best in us, whether they are funny, sad or beautiful. Over the past five years, I've watched in wonder as this community has grown to 500 million, with stories from every corner of the world. With this growth, we want to work diligently to maintain what has kept Instagram positive and safe, especially in the comments on your photos and videos.

The beauty of the Instagram community is the diversity of its members. All different types of people — from diverse backgrounds, races, genders, sexual orientations, abilities and more — call Instagram home, but sometimes the comments on their posts can be unkind. To empower each individual, we need to promote a culture where everyone feels safe to be themselves without criticism or harassment. It's not only my personal wish to do this, I believe it's also our responsibility as a company. So, today, we're taking the next step to ensure Instagram remains a positive place to express yourself.

The first feature we’re introducing is a keyword moderation tool that anyone can use. Now, when you tap the gear icon on your profile, you'll find a new Comments tool. This feature lets you list words you consider offensive or inappropriate. Comments with these words will be hidden from your posts. You can choose your own list of words or use default words we've provided. This is in addition to the tools we've already developed such as swiping to delete comments, reporting inappropriate comments and blocking accounts.

We know tools aren't the only solution for this complex problem, but together, we can work towards keeping Instagram a safe place for self-expression. My commitment to you is that we will keep building features that safeguard the community and maintain what makes Instagram a positive and creative place for everyone.

Quelle: <a href="Instagram Rolls Out Custom And Default Keyword Filtering To Combat Harassment“>BuzzFeed

Thoughts on Red Hat OpenStack Platform and certification of Tesora Database as a Service Platform

When I think about open source software, Red Hat is the first name that comes to mind. At Tesora, we’ve been working to make our Database as a Service Platform available to Red Hat OpenStack Platform users, and now it is a Red Hat certified solution. Officially collaborating with Red Hat in the context of OpenStack, one of the fastest growing open source projects ever, is a tremendous opportunity.
This week, we announced that Red Hat has certified the Tesora Database as a Service (DBaaS) Platform on Red Hat OpenStack Platform. Mutual customers can operate database as a service with 15 different database types knowing that they have been extensively tested in the Red Hat environment. They also have the confidence of knowing that their database software is running on Red Hat Enterprise Linux (RHEL) in an environment that is supported by Red Hat.

This announcement is a great milestone in our relationship with Red Hat. Tesora has been collaborating with Red Hat folks in the OpenStack community since we launched Tesora in early 2014. Last year, we were excited to have Red Hat come on board as an investor in our company. We feel that this announcement is great news for our joint customers since it enhances the combined solution.
We also share a common philosophy with Red Hat in that the work we do on OpenStack Trove is done “upstream first”. This is evidenced by the fact that even as a relatively small startup, Tesora has become not just the number 1 contributor to the Trove project, but also one of the top 25 contributors in all of OpenStack, as measured by Stackalytics.
At the same time, Tesora is focused on providing the best DBaaS software in the industry while working with Red Hat and others for the underlying infrastructure and various database vendors, such as Oracle, IBM, MongoDB, and DataStax for the core database technology.
To make this possible, we run a robust set of integration testing across all of these database technologies in both single instance and clustered configurations. This enables them to be easily deployed in a Red Hat OpenStack cloud without requiring extensive, database-specific knowledge.
Of course, all of this great collaboration and technical innovation would be useless without a drive toward customer success. While we already have some users operating the Tesora platform to deliver DBaaS in Red Hat-based environments, we expect even greater interest now that we are a Red Hat certified solution. We certainly hope that Red Hat OpenStack Platform users looking for a simple way to offer databases on demand to their users will consider giving the solution, certified for Red Hat OpenStack Platform, a try.
Quelle: RedHat Stack