Make A Chipotle Order And We'll Tell You Which Man To Unfollow On Twitter

July 27 is the official Unfollow A Man Day. For those who aren’t sure who to unfollow, we’ll help!

What is #UnfollowAMan? It's a special day when you take a moment to think about your Twitter feed and whom you follow. Are some of them people whose tweets really drive you nuts and make you miserable? Are some people clogging up your feed because they tweet 80 times a day? Is there someone whom you followed years ago after meeting at a party, and don't really care about, but keep following only out of politeness?

Now's the day to smash that unfollow button and make your life on Twitter a little less miserable.

Why only unfollow men? Well, you already know the answer to this, right? Twitter is only as good as those whom you follow. The mix of voices we listen to on Twitter is meaningful, and if you have a feed that has way more male voices than female, you're probably not being exposed to a good balance of ideas and thoughts. During this last election cycle, we talked a lot about our “bubbles” on social media, and how they're not always a good thing. Change it up a little and see what your Twitter experience feels like.

Isn't this sexist? First of all, no. Second of all, OK, so 2014 was a different time when straightforward misandrist jokes played a little better on Twitter (think the popularity of “ban all men” etc…). In 2017, I think we're all so overwhelmed with what feels like a much larger culture war that jokes about banning men are kinda meh.

But the goal of #UnfollowAMan Day is perhaps more important than ever: if Twitter feels like a drag, try changing up your feed — unfollow someone!

What about [long list of exceptions]? You're a grown-up, you can figure this out, c'mon.

I'm a man. Can I participate? Yes!!! It's for EVERYONE!

Is this sponsored by Chipotle? Definitely not, and probably they'll be kind of annoyed to be associated with this. If you're some chud who thinks a fast casual taco joint supports reverse sexism, I don't know what to tell you other than, like, log off or whatever.

Source: BuzzFeed

Training a neural network to play Hangman without a dictionary

Authors: Mary Wahl, Shaheen Gauher, Fidan Boylu Uz, Katherine Zhao

Summary

We used reinforcement learning and CNTK to train a neural network to guess hidden words in a game of Hangman. Our trained model does not rely on a reference dictionary: it takes as input a variable-length, partially-obscured word (consisting of blank spaces and any correctly-guessed letters) and a binary vector indicating which letters have already been guessed. In the Git repository associated with this post, we provide sample code for training the neural network and deploying it in an Azure Web App for gameplay.

Motivation

In the classic children's game of Hangman, a player's objective is to identify a hidden word of which only the number of letters is originally known. In each round, the player guesses a letter of the alphabet: if the letter is present in the word, all instances of the letter are revealed; otherwise, one of the hangman's body parts is drawn in on a gibbet. The game ends in a win if the word is entirely revealed by correct guesses, and ends in loss if the hangman's body is completely revealed instead. To assist the player, a visible record of all letters guessed so far is typically maintained.

A common Hangman strategy is to compare the partially-revealed word against all of the words in a player’s vocabulary. If a unique match is found, the player simply guesses the remaining letters; if there are multiple matches, the player can guess a letter that distinguishes between the possible words while minimizing the expected number of incorrect guesses. Such a strategy can be implemented algorithmically (without machine learning) using a pre-compiled reference dictionary as the vocabulary. Unfortunately, this approach will likely give suboptimal guesses or fail outright if the hidden word is not in the player’s vocabulary. This issue occurs commonly in practice, since children selecting hidden words often choose proper nouns or commit spelling errors that would not be present in a reference dictionary.
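As a concrete illustration of the dictionary-based strategy, here is a minimal Python sketch; the pattern format, vocabulary argument, and scoring rule are our own simplifications, not code from this project:

```python
import re
from collections import Counter

def dictionary_guess(pattern, guessed, vocabulary):
    """Pick a letter for a partially revealed word, e.g. pattern 'h_ngm_n'.

    pattern     -- revealed word, with '_' for each unknown letter
    guessed     -- set of letters guessed so far
    vocabulary  -- iterable of candidate words (the reference dictionary)
    """
    # A blank can be any letter not yet guessed (a guessed letter would
    # already be revealed everywhere); revealed letters must match exactly.
    blank = '[^' + ''.join(sorted(guessed)) + ']' if guessed else '.'
    regex = re.compile('^' + pattern.replace('_', blank) + '$')
    candidates = [w for w in vocabulary if regex.match(w)]
    if not candidates:
        return None  # hidden word not in the vocabulary: the strategy fails
    # Guess the unguessed letter present in the most candidates, which
    # minimizes the chance that this guess is wrong.
    counts = Counter(c for w in candidates for c in set(w) if c not in guessed)
    return counts.most_common(1)[0][0]
```

Called as `dictionary_guess('h_ngm_n', {'g', 'h', 'm', 'n'}, vocabulary)`, this returns the unguessed letter that survives in the most matching words, and it returns None in exactly the failure case described above: a hidden word outside the vocabulary.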

An alternative strategy robust to such issues is to make guesses based on the frequencies of letters and letter combinations in the target language. For an English-language game, such strategies might include beginning with vowel guesses, guessing the letter U when a Q has already been revealed, recognizing that some letters or n-grams are more common than others, etc. Because of the wide array of learnable patterns and our own a priori uncertainty of which would be most useful in practice, we decided to train a neural network to learn appropriate rules for guessing hidden words without relying on a reference dictionary.
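Such frequency heuristics are easy to hard-code individually; as a point of contrast with the learned approach, here is a minimal hand-written sketch (our own illustration, not project code) combining a Q-implies-U rule with a fixed letter-frequency order:

```python
# English letters in rough order of decreasing frequency (a common ordering).
FREQUENCY_ORDER = 'etaoinshrdlcumwfgypbvkjxqz'

def frequency_guess(revealed, guessed):
    """Guess by simple language statistics rather than a dictionary.

    revealed -- the partially revealed word, e.g. 'q___'
    guessed  -- set of letters already guessed
    """
    if 'q' in revealed and 'u' not in guessed:
        return 'u'  # in English, 'q' is almost always followed by 'u'
    # Otherwise take the most frequent English letter not yet tried.
    return next(c for c in FREQUENCY_ORDER if c not in guessed)
```

A neural network can, in principle, learn this rule and the n-gram patterns mentioned above jointly, without us having to enumerate them by hand.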

Model Design and Training

Our model has two main inputs: a partially-obscured hidden word, and a binary vector indicating which letters have already been guessed. To accommodate the variable length of hidden words in Hangman, the partially-obscured word (with “blanks” representing any letters in the word that have not yet been guessed) is fed into a Long Short Term Memory (LSTM) recurrent network, from which only the final output is retained. The LSTM’s output is spliced together with the binary vector indicating previous guesses, and the combined input is fed into a single dense layer with 26 output nodes that represent the network’s possible guesses, the letters A-Z. The model’s output “guess” is the letter whose node has the largest value for the given input.
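A schematic version of this architecture in CNTK's Python API might look like the following; the one-hot encoding (27 symbols, including one for the blank), the 128-unit hidden dimension, and the variable names are illustrative assumptions on our part, not necessarily the dimensions used in the repository:

```python
import cntk as C

NUM_SYMBOLS = 27   # assumption: one-hot over letters a-z plus a "blank" symbol
NUM_LETTERS = 26   # one output node / previous-guess slot per letter a-z
HIDDEN_DIM = 128   # assumption: LSTM hidden size

# Variable-length sequence for the partially-obscured word...
obscured_word = C.sequence.input_variable(NUM_SYMBOLS)
# ...and a fixed-length binary vector of previous guesses.
previous_guesses = C.input_variable(NUM_LETTERS)

# Run the LSTM over the word and keep only its final output.
lstm_out = C.layers.Recurrence(C.layers.LSTM(HIDDEN_DIM))(obscured_word)
final_state = C.sequence.last(lstm_out)

# Splice the LSTM summary together with the previous-guess vector, then
# map to 26 scores; the guess is the letter with the largest score.
letter_scores = C.layers.Dense(NUM_LETTERS)(C.splice(final_state, previous_guesses))
```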

We created a wrapper class called HangmanPlayer to train this model using reinforcement learning. The hidden word and model are provided when an instance of HangmanPlayer is created. In the first round, HangmanPlayer queries the model with an appropriately-sized series of blanks (since no letters have been revealed yet in the hidden word) and an all-zero vector of previous guesses. HangmanPlayer stores the input it provided to the model, as well as the model’s guess and feedback on the guess’s quality. Based on the guess, HangmanPlayer updates the input (to reveal any correctly-guessed letters and indicate which letter has been guessed), then queries the model again… and so forth until the game of Hangman ends. Finally, HangmanPlayer uses the input, output, and feedback it stored to further train the model. Training continues when a new game of Hangman is created with the next hidden word in the training set (drawn from Princeton’s WordNet).
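In outline, that gameplay and bookkeeping loop looks roughly like the sketch below; the method name `best_letter`, the reward values, and the six-guess limit are schematic assumptions, not the repository's actual HangmanPlayer implementation:

```python
class HangmanPlayer:
    """Schematic self-play wrapper; names and rewards are illustrative."""

    MAX_WRONG_GUESSES = 6  # assumption: one wrong guess per hangman body part

    def __init__(self, hidden_word, model):
        self.word = hidden_word
        self.model = model
        self.history = []  # (input, guess, reward) triples kept for training

    def play(self):
        revealed = ['_'] * len(self.word)
        guessed = [0] * 26
        wrong = 0
        while '_' in revealed and wrong < self.MAX_WRONG_GUESSES:
            state = (list(revealed), list(guessed))
            guess = self.model.best_letter(revealed, guessed)  # hypothetical call
            guessed[ord(guess) - ord('a')] = 1
            if guess in self.word:
                for i, c in enumerate(self.word):
                    if c == guess:
                        revealed[i] = guess  # reveal every instance of the letter
                reward = 1.0
            else:
                wrong += 1
                reward = -1.0
            self.history.append((state, guess, reward))
        return self.history  # used afterwards to further train the model
```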

Operationalization

Instructions and sample files in our Git repository demonstrate how to create an Azure Web App to operationalize the trained CNTK model for gameplay. This Flask web app is heavily based on Ilia Karmanov’s template for deploying CNTK models using Python 3. The human user visiting the Web App selects their own hidden word – which they never reveal directly – and provides feedback to the model after each guess until the game terminates in either a win or a loss.
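As a rough idea of what the app's scoring route could look like, here is a minimal Flask sketch; the `/guess` route, the JSON field names, and the `model_guess` helper are hypothetical placeholders, not the repository's actual code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/guess', methods=['POST'])
def guess():
    """Return the model's next letter for the current game state.

    Expects JSON like {"revealed": "h_ngm_n", "guessed": ["g", "h", "m", "n"]}.
    """
    state = request.get_json()
    # model_guess is a hypothetical helper wrapping the trained CNTK model.
    letter = model_guess(state['revealed'], state['guessed'])
    return jsonify({'guess': letter})
```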

For more information on this project, including sample code and instructions for reproducing the work, please see the Azure Hangman Git repository.
Source: Azure

Amazon Inspector adds event triggers to automatically run assessments

We are excited to announce the launch of Amazon Inspector Assessment Events. Through an integration with Amazon CloudWatch Events, customers can now create events that automatically trigger Amazon Inspector assessments to run against their environments. Within Amazon CloudWatch Events, you can create event rules that target your Amazon Inspector assessment templates; when the event occurs, Amazon Inspector is automatically notified to run the specified assessment.
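For example, a scheduled rule that runs an assessment weekly might be set up with boto3 along the following lines; the rule name, schedule, assessment template ARN, and IAM role ARN are placeholders you would substitute with your own values:

```python
import boto3

events = boto3.client('events')

# Create (or update) a rule that fires once a week.
events.put_rule(
    Name='weekly-inspector-assessment',  # placeholder rule name
    ScheduleExpression='rate(7 days)',
    State='ENABLED',
)

# Point the rule at an Amazon Inspector assessment template; the role
# must allow CloudWatch Events to start the assessment run.
events.put_targets(
    Rule='weekly-inspector-assessment',
    Targets=[{
        'Id': 'inspector-assessment-template',
        'Arn': 'arn:aws:inspector:us-east-1:123456789012:target/0-xxxxxxxx/template/0-xxxxxxxx',  # placeholder
        'RoleArn': 'arn:aws:iam::123456789012:role/CloudWatchEventsInspectorRole',  # placeholder
    }],
)
```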
Source: aws.amazon.com

App Service Environment v2 release announcement

We are happy to announce an upgrade to the App Service Environment. The App Service Environment (ASE) is a powerful feature offering of the Azure App Service that provides network isolation and improved scale capabilities. It is essentially a deployment of the Azure App Service into a subnet of a customer’s Azure Virtual Network (VNet). While the feature gave customers the network control and isolation they were looking for, it was not as “PaaS-like” as the rest of the App Service. We took that feedback to heart, and for ASEv2 we focused on making the user experience the same as in the multi-tenant App Service while still providing the benefits an ASE offers. To keep things clear, I will use the abbreviation ASEv2 for the new App Service Environment and ASEv1 for the initial version.

App Service Plan based scaling

The App Service Plan (ASP) is the scaling container that all of your apps live in: when you scale the ASP, you scale all of the apps in it. This is true in the multi-tenant App Service as well as in an ASE, which means that to create an app you must either choose an existing ASP or create a new one. In ASEv1, creating an ASP meant picking an ASE as your location and then selecting a worker pool; if the worker pool you wanted to deploy into didn’t have enough capacity, you had to add more workers to it before you could create your ASP.

With ASEv2, when you create an ASP you still select the ASE as your location, but instead of picking a worker pool you use the pricing cards just as you do outside of an ASE. There are no more worker pools to manage: when you create or scale your ASP, we automatically add the needed workers. To distinguish ASPs in an ASE from those in the multi-tenant service, we created a new pricing plan named Isolated. Picking an Isolated pricing plan during ASP creation means you want the associated ASP created in an ASEv2. If you already have an ASEv2, you simply pick the ASE as the location and the worker size you wish to use.

ASE creation

One of the other things that limited ASE adoption was feature visibility: many customers did not even know the ASE feature existed. To create an ASE, you had to find the ASE creation flow, which was completely separate from app creation. In ASEv1, customers needed to add workers to their worker pools before they could create ASPs. Now that workers are added automatically when ASPs are created or scaled, we are able to place the ASEv2 creation experience squarely in the ASP creation flow.

To create a new ASEv2 during the ASP creation experience, choose a location that is not an ASE and select one of the new Isolated SKU cards. The ASE creation UI is then displayed, enabling you to create a brand-new ASEv2 in either a new or a pre-existing VNet.

Additional benefits

Thanks to the changes made in the system architecture, ASEv2 has a few additional benefits over ASEv1. In ASEv1 the maximum default scale was 50 workers; in ASEv2 it is now 100, which means you can host up to 100 ASP instances in an ASEv2. That can be anything from 100 instances of a single ASP to 100 individual ASPs, or any combination in between.

The ASEv2 also now uses Dv2-based dedicated workers, which have faster CPUs, twice the memory per core, and SSDs. The new ASE dedicated worker sizes are 1 core/3.5 GB, 2 cores/7 GB, and 4 cores/14 GB. The end result is that 1 core on ASEv2 performs better than 2 cores on ASEv1.

To learn more about ASEv2, you can start with the Introduction to the App Service Environment. For a list of ASE-related documents, you can also look at the App Service Documentation.
Source: Azure