Bernie Sanders And Elizabeth Warren Want To Restore Internet Privacy Rules

Scott Eisen / Getty Images

Just days after President Trump dealt a final blow to Obama-era internet privacy rules, Senate Democrats are trying to bring them back to life.

Sen. Ed Markey introduced legislation Thursday that would prohibit internet service providers like Charter and Comcast from selling personal information about their customers, like web browsing history and app usage data, without first getting those customers' permission. Backed by 10 of his Democratic colleagues — including Senators Elizabeth Warren, Bernie Sanders, and Al Franken — the bill would re-establish landmark privacy rules passed in the final months of the Obama administration.

But Sen. Markey and his co-sponsors face a determined Republican majority that fiercely opposes such rules and just voted to overturn them.

Last week, the House of Representatives moved to repeal the ISP privacy rules largely along party lines, in a 215–205 vote. And in the Senate in March, all 50 votes in favor of stripping the rules were cast by Republicans.

Despite the recent loss in Congress, Democratic lawmakers and consumer advocates believe that the fight over internet privacy has mobilized voters, highlighting previously obscured privacy practices that nonetheless affect every American who uses the internet.

The new proposal, like the internet privacy rules passed by the Federal Communications Commission last year, would make it harder for ISPs to collect and sell their customers' information. A key provision would designate web browsing history as “sensitive” information, meaning that internet providers would first need to get your permission before they could share or sell that data.

The Obama-era FCC rules were scheduled to take effect later this year. But with the Congressional repeal and Trump's signature, the rules were scrapped and never kicked in. Now, your ISP can do some pretty invasive things with your private data.

“Thanks to Congressional Republicans, corporations, not consumers, are in control of sensitive information about Americans’ health, finances, and children,” Sen. Markey said in a statement to BuzzFeed News. “This legislation will put the rules back on the books to protect consumers from abusive invasions of their privacy. Americans should not have to forgo their fundamental right to privacy just because their homes and phones are connected to the internet.”

For their part, Senate Republicans saw the privacy rules as onerous regulations that unfairly targeted ISPs. Other internet companies, such as Facebook and Google, didn't have the same restrictions placed on them limiting what they could do with customer data. Sen. Jeff Flake, who led the Senate's efforts to repeal the rules, has argued for a more “light touch” approach from the government. “What we need with the internet is uniform rules, and not to regulate part of the internet one way and another part of the internet another way,” he said after the Senate vote in March.

Shortly after Congress voted, Comcast, Verizon, and AT&T each defended their commitments to privacy, and claimed in posts on their corporate websites that little would change following the repeal of the ISP privacy rules. But policy experts and privacy advocates were quick to reject their assurances.

On Wednesday, Sen. Markey sent letters to AT&T, Comcast, Charter, Verizon, Sprint, T-Mobile, and CenturyLink asking them to share details on their data collection and privacy practices. Among the 16 questions listed, Sen. Markey asked whether they get consent before collecting their customers' browsing history and whether they had changed their privacy policies since Trump signed the repeal. Sen. Markey asked that they respond by May 1.

Quelle: BuzzFeed

Mirantis Cloud Platform: Stop wandering in the desert

There's no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we're not immune to it.
In fact, some would say that we're part of that turmoil. Well, we are in the middle of a sea change in how we handle cloud deployments, moving from a model in which we focused on deploying OpenStack to one in which we focus on achieving outcomes for our customers.
And then there's the fact that we are changing the architecture of our technology.
It's true. Over the past few months, we have been moving from Mirantis OpenStack to Mirantis Cloud Platform (MCP), but there's no need to panic. While it may seem a little scary, we're not moving away from OpenStack – rather, we are growing up and tackling the bigger picture, not just a part of it. In early installations with marquee customers, we've seen MCP provide a tremendous advantage in deployment and scale-out time. In just a few days, we will publicly launch MCP, and you will have our first visible signpost leading you out of the desert. We still have lots of work to do, but we're convinced this is the right path for our industry to take, and we're making great progress in that direction.
Where we started
To understand what's going on here, it helps to have a firm grasp of where we started.
When I started here at Mirantis four years ago, we had one product, Mirantis Fuel, and it had one purpose: deploy OpenStack. Back then that was no easy feat. Even with a tool like Fuel, it could be a herculean task taking many days and lots of calls to people who knew more than I did.
Over the intervening years, we came to realize that we needed to take a bigger hand in OpenStack itself, and we produced Mirantis OpenStack, a set of hardened OpenStack packages.  We also came to realize that deployment was only the beginning of the process; customers needed Lifecycle Management.
The Big Tent
And so Fuel grew. And grew. And grew. Finally, Fuel became so big that we felt we needed to involve the community even more than we already had, and we submitted Fuel to the Big Tent.
There Fuel has thrived; it does an awesome job of deploying OpenStack and a decent job at lifecycle management.
But it's not enough.
Basically, when you come right down to it, OpenStack is nothing more than a big, complicated, distributed application. Sure, it's a big, complicated, distributed application that deploys a cloud platform, but it's still a big, complicated, distributed application.
And let's face it: deploying and managing big, complicated, distributed applications is a solved problem.
The Mirantis Cloud Platform architecture
So let's look at what this means in practice. The most important thing to understand is that where Mirantis OpenStack was focused on deployment, MCP is focused on the operations tasks you need to worry about after that deployment. MCP means:

A single cloud that runs VMs, containers, and bare metal with rich Software Defined Networking (SDN) and Software Defined Storage (SDS) functionality
Flexible deployment and simplified operations and lifecycle management through a new DevOps tool called DriveTrain
Operations Support Services in the form of enhanced StackLight software, which also provides continuous monitoring to ensure compliance with strict availability SLAs

OK, so that's a little less confusing than the diagram, but there's still a lot of “sales” speak in there.
Let's get down to the nitty-gritty of what MCP means.
What Mirantis Cloud Platform really means
Let's look at each of those things individually and see why it matters.
A multi-platform cloud
There was a time when you would have separate environments for each type of computing you wanted to do. High-performance workloads ran on bare metal, virtual machines ran on OpenStack, and containers (if you were using them at all) ran on their own dedicated clusters.
In the last few years, bare metal was brought into OpenStack, so that you could manage your physical machines the same way you managed your virtual ones.
Now Mirantis Cloud Platform brings in the last remaining piece. Your Kubernetes cluster is part of your cloud, enabling you to easily manage your container-based applications in the same environment and with the same tools as your traditional cloud resources.
All of this is made possible by the inclusion of powerful SDN and SDS components. Software Defined Networking for OpenStack is handled by OpenContrail, providing the benefits of commercial-grade networking without the lock-in, with Calico stepping in for the container environment. Storage takes the form of powerful open source Ceph clusters, which are used by both OpenStack and container applications.
These components enable MCP to provide an environment where all of these pieces work together seamlessly, so your cloud can be so much more than just OpenStack.
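
To make the “one cloud, many workload types” idea concrete, here's a minimal sketch of what day-to-day interaction can look like (illustrative only; the exact CLIs, endpoints, and credentials of an MCP deployment are assumptions on our part). Each layer is driven by its standard open source tooling against the same underlying cloud:

# Virtual machines, networks, and bare metal managed through OpenStack
openstack server list

# Containerized applications managed through Kubernetes
kubectl get pods --all-namespaces

# The shared Ceph storage backing both environments
ceph status
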
Knowing what's happening under the covers
With all of these pieces, you need to know what's happening – and what might happen next. To that end, Mirantis Cloud Platform includes an updated version of StackLight, which gives you a comprehensive view of how each component of your cloud is performing; if an application on a particular VM acts up, you can isolate the problem before it brings down the entire node.
What's more, the StackLight Operations Support System analyzes the voluminous information it gets from your OpenStack cloud and can often let you know there's trouble – before it causes problems.
All of this enables you to ensure uptime for your users – and compliance with SLAs.
Finally solving the operations dilemma
Perhaps the biggest change, however, is in the form of DriveTrain. DriveTrain is a combination of various open source projects, such as Gerrit and Jenkins for CI/CD and Salt for configuration management, enabling a powerful, flexible way for you to both deploy and manage your cloud.
Because let's face it: the job of running a private cloud doesn't end when you've spun up the cloud – it's just begun.
Upgrading OpenStack has always been a nightmare, but DriveTrain is designed so that your cloud infrastructure software can always be up-to-date. Here's how it works:
Mirantis continually monitors changes to OpenStack and other relevant projects, testing them extensively and making sure that no errors get introduced, in a process called “hardening”. Once we decide these changes are ready for general use, we release them into the DriveTrain CI/CD infrastructure.
Once changes hit the CI/CD infrastructure, you pull them down into a staging environment and decide when you're ready to push them to production.
In other words, no more holding your breath every six months – or worse, running cloud software that's a year old.
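
As a rough sketch of what that staged promotion can look like from the operator's side (the Salt targeting expressions and the "cluster" grain below are illustrative assumptions, not MCP specifics), the model boils down to applying the updated states to staging first and to production only once you're satisfied:

# Confirm the staging nodes are reachable
salt -C 'G@cluster:staging' test.ping

# Apply the newly released, hardened states to staging
salt -C 'G@cluster:staging' state.apply

# Promote the same change to production when you're ready
salt -C 'G@cluster:production' state.apply
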
Where do you want to go?
OpenStack started with great promise, but in the last few years it's become clear that the private cloud world is more than just one solution; it's time for everyone – and that includes us here at Mirantis – to step up and embrace a future that includes virtual machines, bare metal and containers, but in a way that makes both technological and business sense.
Because at the end of the day, it's all about outcomes; if your cloud doesn't do what you want, or if you can't manage it, or if you can't keep it up to date, you need something better. We've been working hard at making MCP the solution that gets you where you want to be. Let us know how we can help get you there.
Quelle: Mirantis

Boosting Helm with AppController

Helm is emerging as a standard for Kubernetes application packaging. While researching it, we discovered that its orchestration component could be improved, and we did just that by injecting AppController right into the Helm orchestration engine. Check out our video from KubeCon EU to get insights into the advanced orchestration capabilities that AppController aims to introduce in Helm.

Quelle: Mirantis

2017 trend lines: When DevOps and hybrid collide

Author’s preface: Site Reliability Engineering (SRE) is a job function that brings DevOps into infrastructure in a powerful way. Talking about hybrid infrastructure and DevOps without mentioning SRE would be a major omission. For further reading about SRE thinking, explore this series of blog posts.
What happens when DevOps methods meet hybrid environments? Following are some emerging trends and my commentary on each.
There are two major casualties as the pace of innovation in IT continues to accelerate: manual processes (non-DevOps) and tightly-coupled software stacks (non-hybrid).
We are changing some things much too quickly for developers and operators to keep up using processes that require human intervention in routine activities like integrated testing or deployment. Furthermore, monolithic platforms—our traditional “duck-and-cover” protection from pace of change—are less attractive for numerous reasons, including slower pace, vendor lock-in and lack of choice.
The necessary complexity of hybrid development can make it harder to build robust, portable DevOps automation.
Necessary complexity? Yes, that’s 2017 in a nutshell. Traditionally, people consider hybrid to mean operating in split infrastructures such as on-premises and cloud simultaneously. But the challenge of splitting operations is about much more than running two or more infrastructures. The reality of hybrid is that there are variations throughout the IT stack that force users to deal with hybrid issues in their code, even without straddling clouds.
The pace of innovation guarantees that we will be constantly in a hybrid operations mode.
Robust, portable DevOps automation? We not only need it; it will absolutely be required. DevOps is generally presented as a cultural and process transformation. But the consequence of that transformation is the need to automate processes to improve system performance.
Modern software development is built using massive collections of reusable modules and services—many open source—that developers carefully assemble into working applications. Since each component has its own release cycle and dependency graph, we must continuously integrate (CI) our applications to make sure that they continue to operate correctly.
It’s foolish to manage your application stack as a manual integration problem. You must automate it.
A critical defense against integration challenges is to constantly patch and update your software. The faster and smaller your deployments, the more protection you have from the inevitable changes, both external and internal, that cause issues. The practice of continuous deployment (CD) ensures that your development and operations teams can respond quickly—or better yet, automatically—to the inevitable issues in creating software. This means that your applications are more resilient and your teams collaborate on development-to-production pipelines.
The DevOps drive for CI and CD creates robust applications.
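To make that concrete, here is a minimal sketch of the kind of automated build-test-deploy step being described (the image name, deployment name, and GIT_COMMIT variable are hypothetical; your CI tool, registry, and target platform will differ):

#!/bin/sh
set -e  # stop the pipeline on the first failure

# 1. Run the test suite on every change
make test

# 2. Build and publish an immutable artifact tagged by commit
docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
docker push registry.example.com/myapp:"$GIT_COMMIT"

# 3. Roll the new version out and let the platform converge
kubectl set image deployment/myapp myapp=registry.example.com/myapp:"$GIT_COMMIT"

The point is less the specific tools than the shape: every change flows through the same automated path, so responding to an upstream change is a rebuild, not a fire drill.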
Hybrid creates a unique set of challenges for DevOps practices, given the exploding complexity of choice. Change makes automation a moving target. I’ve been watching teams’ CI/CD pipelines grow from simple linear flows into Rube Goldberg machines that test multiple operating systems on multiple infrastructures.
These are not gratuitous tests. Teams have a very real need to manage their applications across a rapidly changing IT landscape. This creates a dilemma as they have to invest more and more time chasing issues created by hybrid requirements. But those requirements are not going away because they have commercial ROI and technical rationale.
Variation created by hybrid creates challenges for automation.
We must find a way to contain the work demanded by hybrid, but we cannot simply ignore the hybrid imperative. The clear answer for 2017 is to find good abstractions that protect teams from differences introduced by hybrid. The most popular abstraction, containers, is already revolutionizing workload portability by hiding many infrastructure details and providing the small delivery units desired by CI/CD pipelines.
We must invest in abstractions to help with hybrid DevOps because complexity is increasing.
We’ve clearly learned that DevOps automation pays back returns in agility and performance. Originally, small-batch, lean thinking was counter-intuitive. Now it’s time to make similar investments in hybrid automation so that we can leverage the most innovation available in IT today.
If you like these ideas, please subscribe to my blog RobHirschfeld.com where I explore SRE Ops, DevOps and open hybrid infrastructure challenges.
Quelle: Thoughts on Cloud

Using xEvents to monitor Azure Analysis Services

In addition to providing BI queries at the speed of thought and a user-friendly BI semantic model, Azure Analysis Services supports many manageability features. One such feature is a rich set of Extended Events (xEvents), which can be used for scenarios ranging from troubleshooting and diagnostics to in-depth auditing and usage analysis.

You can use SQL Server Management Studio (SSMS) to configure xEvents for Azure Analysis Services. Today, you can only configure Azure Analysis Services to log to a stream or ring buffer, not to a file. In some cases, you may want to log events for offline analysis or retain them for historical purposes. We have provided an example of using the Tabular Object Model (TOM) APIs to create an xEvents session and log the data to disk, as well as a richer sample that traces to a database using a Windows service. The xEvents Logging for Azure Analysis Services sample and the ASTrace sample are available on GitHub at https://github.com/Microsoft/Analysis-Services.

The easiest way to use this sample is to use SSMS to configure streaming xEvents and see which events you would like to log. First, create an xEvents session in SSMS. Then pick which events you'd like to record and set the data mode to streaming. Run some queries or perform other operations, then look at the xEvents in the “Watch Live Data” view on the trace session in SSMS to verify the data. If these are the events you want, you can script the session out to a file.

The sample program takes the TMSL script file to define the events it will record. You can then run the sample program to create a new session, and it will trace these events to a file. Be sure to install the latest Azure Analysis Services client libraries to ensure you have support for integrated authentication.

Let us know how it works for you, or check out the other Azure Analysis Services samples on GitHub!
Quelle: Azure

RBAC Support in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

One of the highlights of the Kubernetes 1.6 release is the RBAC authorizer feature moving to beta. RBAC, Role-Based Access Control, is an authorization mechanism for managing permissions around Kubernetes resources. RBAC allows configuration of flexible authorization policies that can be updated without cluster restarts. The focus of this post is to highlight some of the interesting new capabilities and best practices.

RBAC vs ABAC
Currently there are several authorization mechanisms available for use with Kubernetes. Authorizers are the mechanisms that decide who is permitted to make what changes to the cluster using the Kubernetes API. This affects things like kubectl, system components, and also certain applications that run in the cluster and manipulate its state, like Jenkins with the Kubernetes plugin, or Helm, which runs in the cluster and uses the Kubernetes API to install applications. Of the available authorization mechanisms, ABAC and RBAC are the ones local to a Kubernetes cluster that allow configurable permissions policies.

ABAC, Attribute-Based Access Control, is a powerful concept. However, as implemented in Kubernetes, ABAC is difficult to manage and understand. It requires ssh and root filesystem access on the master VM of the cluster to make authorization policy changes, and for permission changes to take effect the cluster API server must be restarted.

RBAC permission policies are configured using kubectl or the Kubernetes API directly. Users can be authorized to make authorization policy changes using RBAC itself, making it possible to delegate resource management without giving away ssh access to the cluster master. RBAC policies also map easily to the resources and operations used in the Kubernetes API. Based on where the Kubernetes community is focusing its development efforts, RBAC should be preferred over ABAC going forward.

Basic Concepts
There are a few basic ideas behind RBAC that are foundational to understanding it. At its core, RBAC is a way of granting users granular access to Kubernetes API resources. The connection between users and resources is defined in RBAC using two kinds of objects.

Roles
A Role is a collection of permissions. For example, a role could be defined to include read permission on pods and list permission for pods. A ClusterRole is just like a Role, but can be used anywhere in the cluster.

Role Bindings
A RoleBinding maps a Role to a user or set of users, granting that Role’s permissions to those users for resources in that namespace. A ClusterRoleBinding allows users to be granted a ClusterRole for authorization across the entire cluster.

Additionally, there are cluster roles and cluster role bindings to consider. They function like roles and role bindings except that they have wider scope. The exact differences, and how cluster roles and cluster role bindings interact with roles and role bindings, are covered in the Kubernetes documentation.
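
To make these two objects concrete, here is a minimal sketch (not from the original post; the namespace, role name, and user “jane” are hypothetical) that creates a Role allowing read access to pods and binds it to a single user:

cat <<EOF | kubectl create -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader           # hypothetical role name
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                 # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

A user bound this way can get, list, and watch pods in the default namespace, but nothing else; broader access would require additional roles or a ClusterRoleBinding.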
RBAC in Kubernetes
RBAC is now deeply integrated into Kubernetes and used by the system components to grant the permissions necessary for them to function. System roles are typically prefixed with system: so they can be easily recognized.

➜ kubectl get clusterroles --namespace=kube-system
NAME                                          KIND
admin                                         ClusterRole.v1beta1.rbac.authorization.k8s.io
cluster-admin                                 ClusterRole.v1beta1.rbac.authorization.k8s.io
edit                                          ClusterRole.v1beta1.rbac.authorization.k8s.io
kubelet-api-admin                             ClusterRole.v1beta1.rbac.authorization.k8s.io
system:auth-delegator                         ClusterRole.v1beta1.rbac.authorization.k8s.io
system:basic-user                             ClusterRole.v1beta1.rbac.authorization.k8s.io
system:controller:attachdetach-controller     ClusterRole.v1beta1.rbac.authorization.k8s.io
system:controller:certificate-controller      ClusterRole.v1beta1.rbac.authorization.k8s.io
…

The RBAC system roles have been expanded to cover the necessary permissions for running a Kubernetes cluster with RBAC only.

During the permission translation from ABAC to RBAC, some of the permissions that were enabled by default in many deployments of ABAC-authorized clusters were identified as unnecessarily broad and were scoped down in RBAC. The area most likely to impact workloads on a cluster is the permissions available to service accounts. With the permissive ABAC configuration, requests from a pod using the pod-mounted token to authenticate to the API server have broad authorization. As a concrete example, the curl command at the end of this sequence will return a JSON-formatted result when ABAC is enabled and an error when only RBAC is enabled.

➜ kubectl run nginx --image=nginx:latest
➜ kubectl exec -it $(kubectl get pods -o jsonpath='{.items[0].metadata.name}') bash
➜ apt-get update && apt-get install -y curl
➜ curl -ik -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes/api/v1/namespaces/default/pods

Any applications you run in your Kubernetes cluster that interact with the Kubernetes API have the potential to be affected by the permissions changes when transitioning from ABAC to RBAC.

To smooth the transition from ABAC to RBAC, you can create Kubernetes 1.6 clusters with both the ABAC and RBAC authorizers enabled. When both are enabled, authorization for a resource is granted if either authorization policy grants access. However, under that configuration the most permissive authorizer wins, and it will not be possible to use RBAC to fully control permissions.

At this point, RBAC is complete enough that ABAC support should be considered deprecated going forward. It will remain in Kubernetes for the foreseeable future, but development attention is focused on RBAC.

Two different talks at the Google Cloud Next conference touched on RBAC-related changes in Kubernetes 1.6; jump to the relevant parts here and here. For more detailed information about using RBAC in Kubernetes 1.6, read the full RBAC documentation.

Get Involved
If you’d like to contribute or simply help provide feedback and drive the roadmap, join our community. If you’re specifically interested in security and RBAC-related conversation, participate through one of these channels:

Chat with us on the Kubernetes Slack sig-auth channel
Join the biweekly SIG-Auth meetings on Wednesdays at 11:00 AM PT

Thanks for your support and contributions.
Read more in-depth posts on what’s new in Kubernetes 1.6 here.

– Jacob Simpson, Greg Castle & CJ Cullen, Software Engineers at Google

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Get involved with the Kubernetes project on GitHub
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Download Kubernetes
Quelle: kubernetes

Amazon QuickSight now supports KPI charts, Export to CSV and AD Connector

You can now create charts to track Key Performance Indicators (KPIs), export data from analyses as CSV files, define custom ranges when importing Microsoft Excel spreadsheets, and create aggregate filters for SPICE (Super-fast, Parallel, In-memory Calculation Engine) data sets. In Amazon QuickSight Enterprise Edition, we’ve added support for AD Connector, providing another option for on-premises Active Directory connectivity. For more information, see the latest AWS Big Data Blog post on QuickSight.
Quelle: aws.amazon.com

Is This Dog An Ad?

Welcome to “Is This an Ad?” — a column in which we take a celebrity’s social media post about a brand or product and find out if they’re getting paid to post about it or what. Because even though the FTC recently came out with rules on this, it’s not always clear. Send a tip for ambiguous tweets or ’grams to katie@buzzfeed.com.

THE CASE:

Meet Agador. He is an extremely Good Boy. Yes, you are, Agador, you are such a good boy.

Agador is a maltipoo with an amazing coat. So fluffy! So soft! I love him.

But Agador does some strange things… like, posing with products and tagging the brand.

Like this one with a DELIGHTFULLY chubby baby wearing Honest Company diapers:

Instagram: @poochofnyc

And this with Budweiser:

Instagram: @poochofnyc

So, are these ads?

THE EVIDENCE:

Agador is owned by Allan Monteron of New York City. Monteron and his partner also run an account for Agador’s brother Fred. The level of quality of the photos — they’re shot on a real camera instead of a phone — and the styling, props and clothing are all very commercial-friendly. They look like ads.

It’s not unheard of for celebrity dogs to do stuff. Marnie, the dog with a delightfully waggly tongue, brings in enough revenue through a handful of different streams that her owner was able to quit her day job and focus on Marnie full time. Jiff, the fluffy Pomeranian star, does Instagram ads. And of course felines like Grumpy Cat and Lil’ Bub have been monetizing cuteness since way back in the early 2010s.

Plus, Budweiser has used dogs in its ad campaigns before, right? Remember a little guy named Spuds MacKenzie? They even brought him back recently!


But on the other hand, big brands like Budweiser don’t typically do this kind of lowkey advertising on Instagram. And while Agador is on his way to viral dogdom, he’s not exactly so insanely popular that you’d imagine Starbucks doing vaguely slippery ads with him.

THE VERDICT:

Improbably, neither of these are ads, Monteron confirmed. He said that Agador HAS done other ads, and that those are clearly marked. And they are!

Note the sponsored tag in this one:

Instagram: @poochofnyc

Sometimes, a viral dog tagging a diaper company in an Instagram is JUST a viral dog tagging a diaper company pro bono. It's also kind of a good strategy if you're hoping to increase your follows. “We tag these major brands in the hopes that they appreciate how we use their products and creativity to provide content that will spark interest to their followers, and repost our content,” explained Monteron.

People enjoy genuinely interacting with brands on the internet — you may not get it (I don’t) — but hey. Some people like pineapple on pizza; I don’t judge.

Quelle: BuzzFeed

Cloud Translation API adds more languages and Neural Machine Translation enters GA

By Apoorv Saxena, Product Manager, Cloud Machine Learning

For many years now Google has been successfully offering language translation to its users in 50+ languages. To bring this technology to businesses, Google Cloud introduced Cloud Translation API in 2011.

Since then, we’ve continuously invested in the API by improving service scalability and expanding it to cover 100+ languages today. As a result, the Cloud Translation API has been widely adopted and deployed in scaled production environments by thousands of customers in the travel, finance and gaming verticals.

As part of Google’s continued investment in machine translation, we recently announced the beta launch of our Google Neural Machine Translation system (GNMT) that uses state-of-the-art training techniques and runs on TPUs to achieve some of the largest improvements for machine translation of the past decade. We had over 1,000 customers sign up to test the API and provide us valuable feedback. For example, Grani VR Studio uses the high accuracy and low latency offered by neural machine translation to build interactive VR/AR experiences in different languages.

Today we’re pleased to announce the general availability of the neural machine translation system to all our customers under the Standard Edition. The Premium Edition beta is now closed for new sign-ups and will re-open in the coming months as we roll out new features.

Here’s what you get with Neural Machine Translation:

Access to the highest-quality translation model, reducing translation errors by 55%-85% on several generally available language pairs
Support for seven new languages: English to and from Russian, Hindi, Vietnamese, Polish, Arabic, Hebrew and Thai. This is in addition to eight existing languages (English to and from Chinese, French, German, Japanese, Korean, Portuguese, Spanish and Turkish)
More languages in coming weeks. Please visit this page to keep track of new language support.
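
For developers, invoking the neural model is a single REST call to the Cloud Translation API. Below is a hedged sketch (YOUR_API_KEY is a placeholder, and the English-to-Russian pair is just one of the newly supported combinations); the optional model parameter requests neural machine translation where it is available:

curl "https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY&q=Hello%20world&source=en&target=ru&model=nmt"

The response is a small JSON document whose data.translations[0].translatedText field contains the translated string; omit the model parameter to let the service pick the default model for that language pair.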

Standard Edition customers paying the list online price can access the neural translation system at no additional charge. As part of the announcement, we’re also offering offline discounted pricing for usage of more than one billion characters per month. Please visit our pricing page for more information.

We look forward to working with you as we continue to invest in bringing the best of Google technology to serve your translation needs.

Quelle: Google Cloud Platform