OpenStack Developer Mailing List Digest May 20-26

SuccessBot Says

clarkb [1]: infra added City Cloud to the pool of test nodes.
pabelanger [2]: opensuse-422-infracloud-chocolate-8977043 launched by nodepool.
All: [3]

etcd 3.x as a Base Service

A devstack review [4] adds a new etcd3 service.
Two options to enable the DLM use case with Tooz (for eventlet-based services) [5][6].
Full thread: [7]
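
For context, here is a minimal sketch of the DLM use case through Tooz with the etcd3 backend; the endpoint, member id, and lock name are placeholders, not taken from the reviews:

    # Distributed lock via Tooz backed by etcd3 (sketch; names are placeholders).
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'etcd3://localhost:2379', b'my-service-member-1')
    coordinator.start(start_heart=True)

    lock = coordinator.get_lock(b'resource-42')
    with lock:  # blocks until the lock is acquired cluster-wide
        pass    # work that must not run concurrently across services

    coordinator.stop()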

Do We Want to be Publishing Binary Container Images?

During the Forum, there was a discussion on collaboration between the various teams building or consuming container images.
The goal is to decide how to publish images from the various teams to Docker Hub or other container registries.
The community has refrained from publishing binary packages in other formats such as debs and RPMs, instead leaving it to downstream consumers to build production packages.
This would require more tracking of upstream issues (bugs, CVEs, etc.) to ensure the images are updated as needed.

Given our security and stable team resources, this might not be a good idea at this time.

Kolla is interested in doing this for daily builds. Everything is licensed under the ASL (Apache Software License), which gives no guarantees.

Even if you mark something as not for production use, people still use it; take the recent user survey, which showed DevStack being used in production.
Kolla today publishes build instructions. Every release, they manually provide built containers.
Built containers would run through our CI gate, so others don’t have to have a local CI build pipeline.

Things we publish to PyPI are different from this proposal:

The formats published to PyPI are a source format (sdist) and a developer-friendly but production-ready format (wheel).
Most of our services are not packaged and published to PyPI. The libraries are, to make them easy to consume in our CI.
The artifacts on PyPI contain references to their dependencies; the dependencies are not built into the packages themselves.

Iteration on the infra-spec review for publishing to Docker Hub has started [8].
Full thread: [9]

RFC Cross Project Request ID Tracking

In the logging Forum session, it was brought up how much effort operators have to put into reconstructing flows for things like server boot when they go wrong.

Jumping from service to service, the request-id is reset to something new.
Being able to query Elasticsearch for the same request-id across communication between services would be useful.

There is a concern about trusting the request-id on the wire, because it’s coming from a random user.

We have a new concept of “service users”, which are a set of higher-privilege services that we use to wrap user requests.

Basic idea is:

Services will optionally take an inbound X-OpenStack-Request-ID, which we’ll strictly validate against the req-$uuid format.

They will continue to generate one as well.
When the context is built, we’ll check whether the service user was involved, and if not, reset the request-id to the locally generated one.
Both request-ids will be logged.

Python clients and callers will need to be augmented to pass the request-id in on requests.
Servers will opt into calling other services this way.
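
A minimal sketch of what the strict validation could look like (the helper name is hypothetical):

    # Strictly validate an inbound X-OpenStack-Request-ID header (sketch).
    import re

    _REQUEST_ID_RE = re.compile(
        r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}'
        r'-[0-9a-f]{4}-[0-9a-f]{12}$')

    def inbound_request_id(header_value):
        # Return the inbound request-id if well formed, else None so the
        # caller falls back to the locally generated request-id.
        if header_value and _REQUEST_ID_RE.match(header_value):
            return header_value
        return None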

The Oslo spec for this has been merged [10].
Full thread: [11]

Can We Stop Global Requirements Update? (Cont.)

Gnocchi has gate issues with Babel this time. Julien plans to remove all Oslo dependencies over the next few months.
The project Cotyledon was presented a few summits ago as an alternative to oslo.service that gets rid of eventlet. The library lives under the Telemetry umbrella for now.

The project doesn’t live under Oslo, to encourage the greater Python ecosystem to adopt and help maintain it.

Octavia is also using Cotyledon.
Full thread: [12]
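
For illustration, a minimal Cotyledon service, sketched from the library’s documented API (the class name and loop are made up); it fills the role oslo.service’s workers play, without depending on eventlet:

    # Minimal Cotyledon service (sketch): two worker processes running a
    # periodic loop, no eventlet involved.
    import time

    import cotyledon

    class PeriodicWorker(cotyledon.Service):
        def __init__(self, worker_id):
            super(PeriodicWorker, self).__init__(worker_id)
            self._running = True

        def run(self):
            while self._running:
                time.sleep(1)  # a real service would do periodic work here

        def terminate(self):
            self._running = False  # called for graceful shutdown

    if __name__ == '__main__':
        manager = cotyledon.ServiceManager()
        manager.add(PeriodicWorker, workers=2)  # forks two worker processes
        manager.run()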

Revised PostgreSQL Deprecation Patch for Governance

In the Forum session we agreed to the following:

Explicitly warn in operator-facing documentation that PostgreSQL is less supported than MySQL.
SUSE is in the process of investigating migration from PostgreSQL to Galera for future versions of its OpenStack products.
The TC governance patch has been updated [13].

Current sticking points:

Whether the operator community is already largely in one camp or not.
Whether the future items listed as harder are important enough to justify a strict trade-off here.
Whether it’s OK for the proposal to have a firm lean in tone, even though its set of concrete actions is pretty reversible and doesn’t commit to future removal of PostgreSQL.

What has been raised as being hard to handle through an abstraction layer like SQLAlchemy:

OpenStack services taking a more active role in managing the DBMS.

See Active or passive role with our database layer summary below for this discussion.

The ability to have zero-downtime upgrades for services such as Keystone.

Expand/contract with code carefully dancing around the existence of two schema concepts simultaneously (e.g. Nova and Neutron).
This shouldn’t be a problem, because we use alembic or sqlalchemy-migrate to abstract away the ALTER TABLE differences.
Expand/contract using server-side triggers to reconcile the two schemas. This is more difficult because no such abstraction layer exists in SQLAlchemy, though it could be feasible to build one specific to OpenStack (see the Alembic sketch after this list).

Consistent UTF-8 4- and 5-byte support in our APIs

Unicode itself only needs 4 bytes, and 4 bytes is as much as any database supports right now. This problem was solved by SQLAlchemy well before Python 3 existed.

The requirement that PostgreSQL libraries be compiled, just for new users trying to run unit tests.

New developers who aren’t concerned with PostgreSQL don’t have to run these tests.
All the way up to Kilo, OpenStack used the native python-MySQL driver, which required compiling.
This is OpenStack. We are the glue to thousands of C-compiled libraries and packages.

Consistency around case sensitivity collation.

MySQL defaults to case-insensitive collation.
PostgreSQL has almost no support for case-insensitive collation.
SQLAlchemy supports things like ilike().
The String datatype in SQLAlchemy guarantees case-insensitivity.
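
For illustration, here is a hypothetical Alembic “expand” migration: additive-only, so services written against the old schema and the new schema can run side by side during a rolling upgrade (the table and column names are invented):

    # Hypothetical Alembic "expand" migration: purely additive, so services
    # running the old schema keep working while new code uses the new column.
    from alembic import op
    import sqlalchemy as sa

    # revision identifiers, used by Alembic.
    revision = '000000000001'
    down_revision = None

    def upgrade():
        # Expand phase: add the new column without touching the old one.
        op.add_column('instances',
                      sa.Column('hostname_v2', sa.String(255), nullable=True))

    def downgrade():
        # The later "contract" phase would drop the old column once nothing
        # reads it; downgrade here simply reverses the expand.
        op.drop_column('instances', 'hostname_v2')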

Top concerns that remain:

A1) Do not surprise users late by having them only find out they are on the less traveled path once they are deeply committed. It’s fine for users to choose that path, as long as they are informed they are going to need to be more self-reliant.
A2) Do not prevent features like zero-downtime upgrades in Keystone from making forward progress with a MySQL-only solution.

Orthogonal concerns:

B1) PostgreSQL was chosen by people in the past, maybe more than we realized; those are real users we don’t want to throw under the bus. Wholesale deletion is off the table. There’s no clear path off of it, and we’re missing data on who’s on it.
B2) The upstream code isn’t so irreparably changed (e.g. by deleting the SQLAlchemy layer) that it’s no longer possible to have alternative database backends.

The current proposal [13] addresses A1 and B1.
Full thread: [14]

[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-05-24.log.html
[2] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-05-24.log.html
[3] – https://wiki.openstack.org/wiki/Successes
[4] – https://review.openstack.org/#/c/445432/
[5] – https://review.openstack.org/#/c/466098/
[6] – https://review.openstack.org/#/c/466109/
[7] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#117370
[8] – https://review.openstack.org/447524
[9] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
[10] – https://review.openstack.org/#/c/464746/
[11] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116619
[12] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116736
[13] – https://review.openstack.org/#/c/427880/
[14] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116642
Source: openstack.org

Trolls Are Targeting Indian Restaurants With A Create-Your-Own Fake News Site

Shrina Begum, the owner of Karri Twist, and the fake news story that's ruining her business.

Laura Gallant / BuzzFeed

Shrina Begum couldn’t understand why people were calling her Indian restaurant to accuse it of selling human meat. The calls started on May 11, and by the next day Begum says she and her staff had answered hundreds of them.

“Both of our phone lines went off and people started screaming, ‘Why are you selling human meat?’” she told BuzzFeed News.

Business at Karri Twist, her restaurant in London, soon dropped by half. Begum had to reduce hours for some staff, and she feared the business might not survive the false rumor. “During one of the calls, [my employee] managed to calm a person down to find out where they’d seen this, and they were like, it’s been sent to them via Facebook. I just couldn’t believe it whatsoever.”

Begum eventually tracked down the origin of the false rumor: A website called Channel23news.com had published a story claiming that her restaurant, Karri Twist, was caught selling human meat and that its owner had been arrested. The completely fake report, replete with spelling mistakes and the wrong name of the owner, featured a picture of Karri Twist and said nine bodies had been found on the premises in the freezer.

The story looked like any other news report when shared on Facebook, and it quickly spread on the site, as well as on Twitter and WhatsApp. People who clicked on the link were brought to a page with the story, and beside it was text that read, “You've Been Pranked! Now Create A Story & Trick Your Friends!” Channel23News.com’s homepage is in fact a form that enables anyone to create a fake news story, add an image, and instantly share it on Facebook.

Channel23News.com

Thanks to a fake article someone had created on the site, an Indian restaurant that has been in business since 1957 was in danger of closing.

“I had planned to do some renovation work — which we had saved for — and now I’d had to cut some staff hours because on the weekend I basically had nobody in,” Begum said.

A search of Channel23News.com’s archives also found that Begum’s restaurant was one of at least six Indian restaurants targeted with fake stories claiming they served human meat. Five of the stories used almost the exact same text as the original hoax about Karri Twist.

Channel23news.com isn’t an isolated make-your-own-fake-news site. Using domain registration records, BuzzFeed News identified two separate networks that together own at least 30 nearly identical “prank” news sites and that published more than 3,000 fake articles in six languages over the past 12 months. They’re also generating significant engagement on Facebook: The sites collectively earned more than 13 million shares, reactions, and comments on the social network in the last 12 months.

Some of the sites’ biggest viral hits of the past year in English include fake stories about a Popeyes manager being arrested for “dipping chicken in cocaine-based flour to increase business” (over 429,000 Facebook engagements), Beyoncé giving birth to twin boys (141,000 engagements), the FBI announcing it found evidence of collusion between the Trump campaign and Russia (38,000 engagements), two great white sharks being found near St. Louis (201,000 engagements), and President Obama passing a law that requires grandparents to care for their grandchildren each weekend (515,000 engagements).

Begum is also by no means the first business owner or organization to scramble to deal with the aftermath of a fake story generated on one of these sites. The mayor of Annapolis, Maryland, was the subject of a fake story claiming he had made racist statements, and a park in Colorado was targeted with rumors that it was closing on June 1. “The post was shared thousands of times, so now officials are doing damage control to stop the rumor from spreading any further,” according to a local news report.

Police in Middlesbrough, UK, recently spent time looking into false rumors about a high school after teens there began creating and spreading hoaxes about each other and at least one teacher using one of the sites.

“I think people are using it to bully people,” one unnamed mother told a newspaper. She added, “My worry is people will not realise it is fake and something bad will happen to my son.”

Meanwhile, officials in Joplin, Missouri, also had to deal with a spate of false stories created about the area on Channel22News.com, a sister site of the one that hosted the hoax about Begum’s restaurant.

A Facebook spokesperson told BuzzFeed News it will continue to roll out programs and product updates to make it harder for spammers and fake-news creators to make money from its platform.

“A huge motivation for the spammers who trade in false news is their own profit — and we’ve recently launched new updates to disrupt their financial incentives and curb the spread of this type of material,” they said. “There’s more work to do, and people should know we remain absolutely committed to it.”

Recent fake stories published on Channel23News.com.

Channel23News.com

The owner of Channel23News.com and at least 18 other sites like it is listed in domain registration records as Korry Scherer. He’s a 25-year-old based in Milwaukee who told BuzzFeed News he prefers to go by the name Korry Tye. In a phone interview he said he’s spent the past five years figuring out ways to make money from the internet. He started by using MySpace pages to advertise products, then eventually shifted his focus to Facebook. At the beginning of this year, Tye decided to launch his first so-called prank news site.

“I just thought it could be something that might do well and would be fun and user-driven and take off on its own,” he said.

The first site's success led him to launch more. He now owns 19 prank news websites with domains such as Channel23News.com, Channel22News.com, and Channel45News.com. Since February they’ve published at least 724 fake news stories, generating a total of more than 2.5 million shares, reactions, and comments on Facebook.

Tye says for the most part “people make pranks about their schools or their coworkers.”

“There’s times that people abuse the platform, like all platforms get abused, and at that point people reach out to me and I have things removed right away,” he said. “It’s not meant for people to slander people’s names or bully people or do disrespectful things that could negatively affect someone’s life or ruin their day — that’s not cool.” (Tye did not respond to a subsequent email noting that the story about Begum’s restaurant was still online nearly three weeks after being published.)

He acknowledged that on Facebook the prank stories from his sites look like any other news article. But Tye said most people will click on the stories they’re inclined to believe.

“By the time they actually go check it out they’re gonna realize it’s all in fun,” he said. “Not everyone is as savvy as everyone else on the internet, but it’s pretty much there before your eyes.”

He says the vast majority of stories posted on his prank sites are created by users, though in the early days he sometimes posted fake news stories gathered from other sites to try to raise awareness for his. Hoaxes from elsewhere continue to be copied and uploaded to his sites. The fake story “Man accused of ejaculating in his boss’ coffee everyday for 4 years” was first published on World News Daily Report and appeared on Channel34News.com a few days later. (Tye also owns other sites that often publish viral hoaxes that originated elsewhere.)

“Initially I never really set out trying to mess with fake news,” he said. “This prank site for the most part is people making stories that affect them and their friends … I definitely took advantage of online hoaxes and viral hoaxes over the years, I can’t deny that. It’s a way to make money.”

Popular fake stories from two of Nicolas Gouriou's sites.

Media Vibes

Though he’s quickly built up a large network of make-your-own-fake-news sites, Tye isn’t the originator of what he calls the “prank news” concept. That may be Nicolas Gouriou, a man based in Belgium who owns at least 11 prank news sites that publish in English, Spanish, French, German, Portuguese, and Italian. The oldest of his sites has been online since at least March of 2015. Gouriou did not respond to multiple emails from BuzzFeed News requesting an interview, or to a list of questions.

Both men’s sites feature similar forms for uploading a fake news story, as well as instructions that are almost word for word. One difference is that Gouriou’s sites feature a disclaimer: “Any bullying, racist, homophobic or pornographic jokes are prohibited. Do not hesitate to report any inappropriate content by contacting us via the Contact Form.”

In spite of the warning, Gouriou’s sites have been the subject of critical news stories in several countries where he offers language-specific versions. A website run by El Pais, one of the largest newspapers in Spain, published a story about the Spanish-language hoax site 12minutos.com. It noted that the site is a source of political hoaxes, and one fake story even caused a real journalist to ask a politician about it. France TV has examined Gouriou’s French-language fake news operation, and BuzzFeed Germany recently published a story to warn people about his German-language hoax site.

Gouriou’s operation generates significant engagement on Facebook. Using data from Buzzsumo, BuzzFeed News found more than 2,300 stories published on his 11 sites in the past 12 months alone. Together they generated more than 10.5 million shares, reactions, and comments on Facebook. Those same stories generated more than 22,000 shares on Twitter during the same time period.

These sites continue to see strong engagement on Facebook in spite of the social network’s efforts to crack down on what it calls “false news” and clickbait. Based on his experience with Facebook, Tye said he thinks his sites' success probably won’t last.

“Facebook does a lot of stuff to combat anything that’s doing well in the world, period,” he said. “As quick as it does good, Facebook damages the reach and affects the way it propagates.”

He said the reason for this is partly the company’s crackdown on fake news, and partly that he believes Facebook diminishes the organic reach of content in order to push publishers to pay to promote their content.

“Facebook’s changed a lot and made it hard on a lot of people, but at the same time they created an opportunity and a space for people like me and others to make a ton of money, and it’s life-changing in some cases,” he said. “It might not be as sweet as it used to be, but it’s still great.”

Tye said he’d be happy to follow whatever rules Facebook has for his pages and sites, but he’s been unable to speak with anyone from the company about it. “I aim to, and would like to, establish more of a working relationship with Facebook,” Tye said. “I have a healthy budget to spend with them.”

A photo of the original Indian restaurant opened by Shrina Begum's father in 1957.

Laura Gallant / BuzzFeed

His complaint about not being able to reach Facebook was also echoed by Begum, the restaurant owner whose business suffered after a hoax on his sites.

“I was really angry because I had no way of getting in touch with Facebook — no way whatsoever to tell them that they need to do something to take this down or stop it from spreading,” she said.

She suggested the company create a hotline that people being affected by fake news or scams can call. “They make literally billions and billions of dollars globally, and the cost of this would be small.”

Today, a little more than two weeks since the story first went viral, Begum says her business is still suffering and she continues to receive angry phone calls accusing her of selling human meat.

“It’s been a very, very slow process of recovery, and at the moment my year-on-year sales are completely shot to pieces, it's really terrible,” she said. “People are still believing this story — it's still being propagated.

“For people, it's like one screenshot they’re passing on to each other,” she continued. “It’s a couple of clicks and they don’t think anything more of it, but the human cost is horrible. I'm not sleeping or eating because of this — I don’t know what I'm going to do.”

Source: BuzzFeed

Oregon region (us-west1) adds third zone, Cloud SQL, and Regional Managed Instance Groups

By Dave Stiver, Product Manager, Google Cloud Platform

Last summer we launched the Oregon region (us-west1) with two zones and a number of Google Cloud Platform (GCP) services. The region quickly became popular with developers looking to place applications close to users along the west coast of North America.

Today we’re opening a third zone in Oregon (us-west1-c) and adding two services: Cloud SQL and Regional Managed Instance Groups (MIGs). Cloud SQL is a fully managed service supporting relational PostgreSQL (beta) and MySQL databases in the cloud. Regional MIGs make it easy to improve application availability by spreading virtual machine instances across three zones.

All three zones in Oregon (us-west1) contain the following services:

Compute Engine
Container Engine
Dataflow
Dataproc
Datalab

As with all GCP zones, the following services are available to support compute workloads:

In addition to Oregon, we’ll soon be opening new regions in North America in Montreal and California. Our locations page provides the latest updates to GCP regions, zones and the services available in each. Give us a shout to request early access to new regions and help us prioritize what we build for you next.

Source: Google Cloud Platform

Uber Fires Engineer Accused Of Stealing Self-Driving Car Secrets From Google

Anthony Levandowski

Afp / AFP / Getty Images

Uber has fired Anthony Levandowski, the engineer at the center of a self-driving lawsuit from Alphabet's autonomous vehicle unit Waymo, an Uber spokesperson confirmed.

Levandowski's termination, which is effective immediately, was earlier reported by The New York Times.

Levandowski's dismissal comes amid a bitter trade secrets lawsuit from Waymo, where he worked before departing to start his own self-driving truck company called Otto, which Uber acquired last year. Waymo alleges Levandowski downloaded thousands of files related to its self-driving program before departing, and that Uber is now benefitting from that information. Levandowski has pleaded the Fifth Amendment and for months was not complying with the company's investigation into Waymo's claims. Uber has maintained in court documents and hearings that Waymo's information has not crossed into its systems.

Uber first demoted Levandowski on April 27, citing the need to remove him from leadership over work involving LiDAR – the technology at issue in the lawsuit – pending a trial. (LiDAR, which stands for light detection and ranging, is a laser system that helps self-driving cars see.) Uber then installed Eric Meyhofer as its self-driving program's leader. With Levandowski now out of the company, his direct reports will also fall under Meyhofer. On May 15, US District Judge William Alsup told Uber that it had no excuse to “pull any punches” in forcing Levandowski to comply with a legal investigation into Waymo's claims that he stole its trade secrets.

The ride-hail company took the court's directive to heart. Earlier this month, legal filings showed that the ride-hail giant threatened to fire Levandowski if he did not cooperate with an investigation into allegations that he stole trade secrets from Alphabet's Waymo, his former employer. An Uber spokesperson said the company for months pressed Levandowski to comply with its internal investigation into the allegations, and set a deadline the engineer failed to meet.

This is a developing story. Check back for updates.

Source: BuzzFeed

Announcing the preview of Azure’s Largest Disk sizes

At Build Conference, we announced the addition of new Azure Disk sizes, which provide up to 4 TB of disk space. These new sizes allow you to perform up to 250 MBps of storage throughput and 7,500 IOPS. The details of the announcement are captured in the Build session here. We introduced two new disk sizes, P40 (2 TB) and P50 (4 TB), for managed and unmanaged Premium Disks, and S40 (2 TB) and S50 (4 TB) for Standard Managed Disks. For Standard unmanaged disks, you can create disks with a maximum size of 4095 GB. These new sizes are available now in our West US Central region using Azure PowerShell and CLI through ARM. We will continue to expand availability to more regions around the world and roll out Azure Portal support in the coming month. Along with that, we will release new versions of the Azure tools to support uploading VHDs larger than 1 TB.

New Disk Sizes Details

The table below provides more details on the exact capabilities of the new disk sizes:

                    P40             P50             S40             S50
    Disk Size       2048 GB         4095 GB         2048 GB         4095 GB
    Disk IOPS       7,500 IOPS      7,500 IOPS      Up to 500 IOPS  Up to 500 IOPS
    Disk Bandwidth  250 MBps        250 MBps        Up to 60 MBps   Up to 60 MBps
Source: Azure

Getting Started with the Video Indexer API

Earlier this month at BUILD 2017, we announced the public preview of Video Indexer as part of Microsoft Cognitive Services. Video Indexer enables customers with digital video and audio content to automatically extract metadata and use it to build intelligent, innovative applications. You can quickly sign up for Video Indexer from https://vi.microsoft.com/ and try the service out for free during our preview period.

On top of using the portal, developers can easily build custom apps using the Video Indexer API. In this blog, I will walk you through an example of using the Video Indexer API to do a search on a keyword, phrase, or detected person’s name across all public videos in your account as well as sample videos and then to get the deep insights from one of the videos in the search results.

Getting Access to the Video Indexer API

To get started with the Video Indexer API, you must sign in using a Microsoft, Google, or Azure Active Directory account. Once signed in with your preferred account, you can easily subscribe to our free preview of the Video Indexer API. The following steps will walk you through the process of registering for access.

To subscribe to the API, go to the Products tab and click Free Preview. On the next page, click the Subscribe button. You should now have access to the API. If you find that you do not have access, contact visupport@microsoft.com.

After getting access, you can then return to the Products tab, and go to the Video Indexer APIs – Production link.


You should now see the Video Indexer API documentation page. On the left side of the page, you will see a list of several action options. Each action page contains information about that request including which parameters are optional and which ones are required. You can test any of these by clicking Try it, setting the appropriate parameters, and then clicking Send.


To use an external tool like Postman to test the API, you will need to download the Video Indexer APIs – Production Swagger .json file. You can do this by selecting the API definition download button on the top right of the page and choosing Open API to get the Swagger .json file. Save the file somewhere locally on your machine to use in the next section.

Here, I will demonstrate how to use Postman to test the API. To follow along, you can download and install Postman here. Launch Postman and click Import in the top left.


Navigate to and choose the Video Indexer APIs – Production Swagger .json file that you previously downloaded and saved locally.


You should now see the API actions under Collections.


To submit calls to any of the actions, you need a key that is specific to your subscription. You can find it by going back to the Video Indexer APIs – Production page and clicking Try it. On the next page, scroll down to the Headers section and find where it says Ocp-Apim-Subscription-Key. You can see your key by clicking the eye icon.


Copy both the name of the key (Ocp-Apim-Subscription-Key) and the key itself because you will need both for Postman.

Running a Search Call Across Videos

Going back to Postman, go to the action call that you want to test out. In this case, I will start with search, which is going to be a typical user interaction with the API. In particular, I’m showing a search on a keyword across all public videos in your account as well as sample videos. Search is an HTTP GET call with the request URL https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns/Search


Go to Headers and enter in the name of the key (Ocp-Apim-Subscription-Key) where it says key and the key itself where it says value. You can set a Header Preset on Postman using the key and value to prevent having to type them in every time. It saves time and is easy to set up, so it is definitely worth doing!


To set the parameters of the action, click Params and set the values for the parameters you wish to set. Remove the unchanged parameters by hovering over the right corner of each parameter and selecting the x.

In this search example, I’m setting the privacy to “Public”, language to “English”, textScope to “Transcript”, and searchInPublicAccount to “true”. I am also clearing out all the parameters that I have not changed. For query, enter in the word that you would like to search for across the videos. In this example, let’s search for the keyword “Azure”.


Upon selecting Send, you will get a JSON response with the results of the search.


The JSON response of the search contains a results section that gives back the videos that contain your query term and the relevant time ranges from each resulting video. The section is an array in which each element is a resulting video along with its basic information, social likes, number of views, and search matches with start times.
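
For reference, the same search can be issued outside Postman. Here is a sketch using Python’s requests library; the subscription key is a placeholder, and the response keys follow the outline shown below:

    # Search request equivalent to the Postman call above (sketch).
    import requests

    SEARCH_URL = ('https://videobreakdown.azure-api.net'
                  '/Breakdowns/Api/Partner/Breakdowns/Search')

    headers = {'Ocp-Apim-Subscription-Key': '<your-subscription-key>'}
    params = {
        'privacy': 'Public',
        'language': 'English',
        'textScope': 'Transcript',
        'searchInPublicAccount': 'true',
        'query': 'Azure',
    }

    resp = requests.get(SEARCH_URL, headers=headers, params=params)
    resp.raise_for_status()
    for video in resp.json()['results']:
        starts = [m['startTime'] for m in video['searchMatches']]
        print(video['id'], video['name'], starts)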

Below is an outline of some of what to expect in the JSON response for search.

JSON Response for Search

Results (Array – each element has the following info)

Basic video and user information

accountId
id
name
description
userName
createTime
privacyMode
state

social

likes
views

searchMatches (Array – each element has the following info)

startTime
type (tells the user whether the match is from the audio-based transcript or from OCR)
text

You should also test this out by uploading and processing a few of your own videos in your account. You can do this on the Video Indexer Preview site if you are logged in, or you can use the Upload HTTP POST call from the Video Indexer API in Postman. For your search request, set searchInPublicAccount to “false” to only search through the videos on your account. Set the query to a keyword that is more relevant to your videos, and set privacy to either “Public” or “Private” based on the settings of your video.

Next, I will show how to take the results of a search and get the expanded insights of the video.

Running a Breakdown Call on a Video

Take the id of the first result of your search.


Now go to the breakdown action. Breakdown is the HTTP GET call with the request URL https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns/:id

You will need to put in your subscription key name and key again. If you have a preset set up with the key, you will just need to select it.

Click the Params button and enter in the id from the search result for the id parameter. Set the language of the breakdown to “English” and click Send.

You should now see the JSON response for the breakdown request.
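
For reference, here is the same breakdown call as a Python sketch, continuing from the search example above (the headers and resp objects are reused, and the key remains a placeholder):

    # Breakdown request for one video (sketch; reuses headers and resp from
    # the search example above).
    breakdown_url = ('https://videobreakdown.azure-api.net'
                     '/Breakdowns/Api/Partner/Breakdowns/{id}')

    video_id = resp.json()['results'][0]['id']  # id of the first search result
    r = requests.get(breakdown_url.format(id=video_id),
                     headers=headers,
                     params={'language': 'English'})
    r.raise_for_status()
    data = r.json()
    print(list(data['summarizedInsights'].keys()))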


The JSON response of the breakdown contains general information on the video and the account that uploaded it in addition to three sections called summarizedInsights, breakdowns, and social.

The summarizedInsights section holds information on the distinct faces, topics, and audio effects in the video, as well as the different time ranges in which each appears. In addition, the section provides information on positive, negative, and neutral sentiments throughout the video, as well as the time ranges for each.

The breakdowns section serves as a more expansive version of the summarizedInsights. Here, you will find transcript blocks, categories of audio effects, and information to allow for content moderation. The breakdowns section also provides more details on topics, faces, and voice participants of the video.

The transcriptBlocks section within breakdowns serves as a timeline of the video. You will find information on lines, OCRs, faces, etc. for each time block. The social section provides data on likes and number of views.

Below is an outline of some of what to expect in the JSON response for breakdown.

JSON Response for Breakdown

Basic video and user information

This section has extensive information on the name, owner, id, etc. of the video.

summarizedInsights

faces
topics
sentiments
audio effects

breakdowns

general information on video

accountID
id
state
processingProgress
externalURL

insights

transcriptBlocks
topics
faces
contentModeration
audioEffectsCategories

social

likes
views

You can get more information on the JSON response for breakdown here. You have the data and are now well on your way to having more insights on your videos and the opportunity to further innovate. Try a few more examples with your own content!

For more details, please take a look at the Video Indexer Documentation. Follow us on Twitter @Video_Indexer to get the latest news on Video Indexer.

If you have any questions or need help, contact us at visupport@microsoft.com.
Source: Azure