Gwyneth Paltrow’s Goop Just Got Slammed For Deceptive Advertising

Gwyneth Paltrow’s Goop routinely draws criticism for its promotion of crystals, supplements, vaginal jade eggs, and all manner of other health products. Now, a consumer watchdog group says that Goop can’t back up many of its promises about improving health — and it wants regulators to investigate what it says is deceptive advertising.

On Tuesday, Truth in Advertising said that it had catalogued more than 50 instances of the e-commerce startup claiming that its products — along with outside products it promotes on its blog and in its newsletter — could treat, cure, prevent, alleviate, or reduce the risk of ailments such as infertility, depression, psoriasis, anxiety, and even cancer. “The problem is that the company does not possess the competent and reliable scientific evidence required by law to make such claims,” the advocacy group wrote in a blog post.

Truth in Advertising says Goop made these explicit and indirect claims both on its website and at its inaugural wellness summit in June.

The group, which has previously slammed the Kardashians for their allegedly deceptive Instagram ads, brought its Goop complaints to two California district attorneys who are part of a state task force that prosecutes matters related to product safety and food, drug, and medical device labeling. Goop, founded as a newsletter by the Academy Award-winning actress in 2008, is headquartered in Los Angeles. It has raised $20 million in venture capital.

Here are a few examples of Goop’s deceptive marketing, as alleged and captured by Truth in Advertising.

This crystal “eases period cramps, tempers PMS, regulates menstrual cycles, treats infertility.”

Truth in Advertising / Via truthinadvertising.org

One post promoted walking barefoot, a.k.a. “earthing”: “Several people in our community,” including Paltrow, “swear by earthing — also called grounding — for everything from inflammation and arthritis to insomnia and depression.”

Truth in Advertising / Via truthinadvertising.org

Rose extract for panic attacks, among other things.

“Cooling and moistening, it's used in traditional Chinese medicine to combat yin-deficient heat — some of the manifestations being restlessness, insomnia, hot flashes, or hyperactive states; it’s even been used traditionally to help stop panic attacks.”

Truth in Advertising / Via truthinadvertising.org

Truth in Advertising also condemned Goop for allegedly saying body stickers reduce inflammation, vitamin D3 guards against autoimmune diseases and cancer, a hair treatment treats anxiety and depression, and vaginal eggs increase hormonal balance and prevent the uterus from slipping.

The group said that it told Goop about what it saw as problematic health claims on Aug. 11, warning that if Goop didn’t fix its language by Aug. 18, it would alert regulators. The day before the deadline, it said, it also provided a list of web pages with unsubstantiated claims. “Despite being handed this information, Goop to date has only made limited changes to its marketing,” the group wrote Tuesday.

A Goop spokesperson told BuzzFeed News in a statement, “Goop is dedicated to introducing unique products and offerings and encouraging constructive conversation surrounding new ideas. We are receptive to feedback and consistently seek to improve the quality of the products and information referenced on our site.”

The spokesperson said that the company “responded promptly and in good faith to the initial outreach from representatives of TINA and hoped to engage with them to address their concerns. Unfortunately, they provided limited information and made threats under arbitrary deadlines which were not reasonable under the circumstances.

“Nevertheless, while we believe that TINA’s description of our interactions is misleading and their claims unsubstantiated and unfounded, we will continue to evaluate our products and our content and make those improvements that we believe are reasonable and necessary in the interests of our community of users,” the spokesperson added.

This isn’t the first time Goop has faced criticism of its marketing. In August 2016, it said it would voluntarily stop making certain claims about its Moon Juice dietary supplements, such as “brain dust” and “action dust.” That came after an investigative unit of the advertising industry said Goop was required to verify its claims that the products could improve customers’ energy, stamina, thinking ability, and capacity for stress.

But Goop has started to respond to some of its critics. In July, it fired back at Jen Gunter, a San Francisco obstetrician-gynecologist who rails against many of Goop’s health claims — the kinds Truth in Advertising is now taking to task.

LINK: This Doctor Says Gwyneth Paltrow’s Goop Promotes Bullshit. Goop Just Clapped Back.

LINK: Advocacy Group Files FTC Complaint Over Kardashians’ Instagram Ads

Quelle: <a href="Gwyneth Paltrow’s Goop Just Got Slammed For Deceptive Advertising“>BuzzFeed

Security and Compliance in Azure Stack

Security posture and compliance validation roadmap for Azure Stack

Security considerations and compliance regulations are important drivers for people who choose to control their infrastructure with private/hybrid clouds while using IaaS and PaaS technologies to modernize their applications. Azure Stack was designed for these scenarios, and as a result, security and compliance are areas of major investment for Azure Stack.

Before we started implementing, we asked our customers what they expect from security in a solution like Azure Stack. Not surprisingly, the majority of the people we talked to told us that they would strongly favor a solution that comes already hardened, with security features specifically designed and validated for that solution. Additionally, people said that compliance paperwork is a top frustration.

We listened.

This post will walk you through the security posture of the Azure Stack infrastructure and how it addresses the feedback we received. It also describes the work that we’ve done to accelerate the compliance certification process and reduce paperwork overhead.

Security posture

The security posture of Azure Stack is designed based on two principles:

Assume breach.
Hardened by default.

Assume breach is the modern approach to security, where the focus extends beyond trying to prevent an intrusion to detecting and containing a breach. In other words, we not only included measures to prevent a breach, but we also focused on solid detection capabilities. We built the system so that if one component gets compromised, it does not directly result in the entire system being taken over.

Because of the elevated set of permissions associated with it, the administrator role is typically the most frequently attacked. Following the assume-breach principle, we created a predefined, constrained administration experience, so that if an admin credential is compromised, the attacker can only perform the actions the system was designed to expose, instead of having unrestricted access to every component in the infrastructure. Doubling down on the same concept, we removed all customer-facing domain admin accounts in Azure Stack. If you want to further break down the admin role, Azure Stack offers very fine-grained, role-based access control (RBAC), allowing for complete control of the capabilities exposed to each role.

The Azure Stack infrastructure is completely sealed from both a permission point of view (there is no account to log into it¹) and a network point of view, making it very hard for unauthorized users to get in. Network access control lists (ACLs) are applied at multiple levels of the infrastructure, blocking all unauthorized incoming traffic and all unnecessary communications between infrastructure components. The ACL policy is very simple: block everything unless it is necessary.

To boost detection capabilities, we enabled security and audit logging on each infrastructure component, and we centrally store the logs in a storage account. These logs offer great visibility into the infrastructure, and security solutions (e.g., Security Information and Event Management systems) can be attached to monitor for intrusion.
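To make this concrete, here is a minimal sketch of how a script (or a SIEM ingestion job) might enumerate those centralized audit logs, assuming the Azure Storage Blobs SDK for Python; the account URL and container name are hypothetical placeholders, since the actual log layout depends on the deployment.

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical account and container names: the real storage account and
# container layout depend on how the deployment stores its logs.
service = BlobServiceClient(
    account_url="https://auditlogstore.blob.core.windows.net",
    credential="<storage-account-key>",
)
container = service.get_container_client("infrastructure-audit-logs")

# Enumerate log blobs so a SIEM or a simple script can pick up new entries.
for blob in container.list_blobs():
    print(blob.name, blob.last_modified)
```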

Since we define the hardware and software configuration of Azure Stack, we were able to harden the infrastructure by design (the hardened-by-default principle). In addition to following industry best practices like the Microsoft SDL, Azure Stack comes with encryption at rest for both infrastructure and tenant data, encryption in transit with TLS 1.2 for the infrastructure network, Kerberos-based authentication of infrastructure components, a military-grade OS security baseline (based on the DISA STIG), automated rotation of internal secrets, disabled legacy protocols (such as NTLM and SMBv1), Secure Boot, UEFI, and TPM 2.0. Additionally, we enabled several Windows Server 2016 security features like Credential Guard (credential protection against pass-the-hash attacks), Device Guard (software whitelisting), and Windows Defender (antimalware).

There is no security posture without a solid, continuous servicing process. For this reason, in Azure Stack, we strongly invested in an orchestration engine that can apply patches and updates seamlessly across the entire infrastructure.

Thanks to the tight partnership with the Azure Stack OEM partners, we were also able to extend the same security posture to the OEM-specific components, like the Hardware Lifecycle Host and the software running on top of it. This ensures that Azure Stack has a uniform, solid security posture across the entire infrastructure, on top of which customers can build and secure their application workloads.

To add an additional layer of due diligence, we brought in two external vendors to perform extended penetration testing of the entire integrated system, including the OEM-specific components.

We also understand that our customers will want to protect their Azure Stack deployments with additional 3rd party security software. We are working with some of the major players in the industry to make sure that their software can easily interoperate with Azure Stack. If there is specific security software that you want Azure Stack to work with, please fill out this survey.

Regulatory compliance

Customers told us that compliance paperwork is a major frustration. To alleviate that, Azure Stack is going through a formal assessment with a 3PAO (3rd-Party Assessor Organization). The outcome of this effort will be documentation on how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. Customers will be able to use this documentation to jump-start their certification process. For the first round of assessments, we are targeting the following standards: PCI-DSS and the CSA Cloud Control Matrix. The former addresses the payment card industry while the latter gives comprehensive mapping across multiple standards.

Concurrently, we have also started the process to certify Azure Stack for Common Criteria. Given the length of the process, this certification will be completed sometime early next year.

It is important to clarify that Microsoft will not certify Azure Stack for those standards, except for Common Criteria, because several controls within those standards are the customer’s responsibility, that is, people- and process-related controls. Instead, Microsoft is formally validating that Azure Stack meets the applicable controls. As a result of this validation, Microsoft, via the 3PAO, will produce pre-compiled documentation that explains how Azure Stack meets the applicable controls.

In the coming months, Azure Stack will continue to expand the portfolio of standards to validate against. The decision about which standard to prioritize will be based on customer demand. To express your preference about which standard Azure Stack should prioritize, please fill out this survey.

More information

Are you attending Microsoft Ignite this year in Orlando? Do not miss the session on Azure Stack security and compliance with our chief architect Jeffrey Snover and me:

BRK3089 – Microsoft Azure Stack security and compliance

Lastly, the Azure Stack team is extremely customer focused, and we are always looking for new customers to talk to. If you are passionate about hybrid cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.

¹ Except for some predefined privileged operations exposed via a PowerShell JEA (Just Enough Administration) endpoint.
Source: Azure

Ford Is Trying To Figure Out How To Not Get Newspaper’d

Ford CEO Jim Hackett speaks in San Francisco

Kelly Sullivan / Getty Images

Ford knows it needs to change. So much so that when it recently held a symposium on the future of urban transportation in San Francisco, the automaker didn't display a single car.

The setup may have seemed odd, but these are strange times for Ford. Ride-hailing, and the autonomous vehicles that once seemed like science fiction, are now real forces with the potential to do to Ford and its peers what the internet did to newspapers. Just as the internet made it unnecessary to pay for a paper subscription, ride-hailing services offer people a convenient and affordable alternative to car ownership. So Ford must develop a new playbook, and sometimes that means leaving the car at home.

“We get to author a lot of new technology, technology that my father never would have imagined possible,” Ford CEO Jim Hackett told the crowd at the symposium. “I am for the new technologies.” Ford named Hackett, a turnaround specialist, its CEO in May after the company’s stock dropped 40% over a three-year period.

Many experts believe the “new technologies” may well sever Ford’s relationship with its customers. In a future world where self-driving cars can be summoned via smartphone, they say, the prospect of owning one’s own car, and being responsible for its upkeep, will be far less appealing than it is today. Eventually, the Wall Street Journal wrote, car ownership may become the equivalent of owning a horse: “a rare luxury.”

The end of car ownership likely wouldn’t be a fatal blow for Ford, since someone still has to make the cars, and tech companies like Apple have struggled to make their own. But it very well could make it more difficult for the company to sell its highly profitable, status-symbol vehicles, and it would shift economic power into the hands of ride-hailing networks like Uber and Lyft, which can operate with any style of car, as long as it meets a basic standard. “If you look at any value chain, where ultimately are you deriving value? It’s at the end,” Ford’s City Solutions VP John Kwant told BuzzFeed News. “That revenue is paying for everything else previous to that, whether it’s the manufacturing, the design, the parts, the supply.”

If ride-hailing becomes that “end,” or the predominant way people access cars, Ford’s core business would be threatened. And the ride-hailing startups that have become Ford’s rivals are salivating at the possibility. “We know where the passengers are and where the demand is going,” Lyft president John Zimmer told BuzzFeed News last December, explaining why he thinks Lyft, not companies like Ford, will be the “end.” Keenly aware of this challenge, longtime Ford competitor General Motors invested $500 million in Lyft last year.

Ford isn’t standing still. Last September, the company paid more than $65 million to acquire the shuttle service Chariot — “an extension of our value chain,” Kwant said — which picks up and drops off riders in four US cities, competing with Lyft and Uber. Ford also announced plans to invest $1 billion in Argo AI, an autonomous driving technology company. And if you walk the streets of San Francisco, you’ll see hundreds of “Ford GoBikes” lining the streets, a bike share similar to Citi Bike in New York, except that Ford’s version is not simply a branding exercise. The company is attempting to understand how people get around in cities, where people are likely to phase out car ownership before those in suburban and rural areas do.

Ford doesn’t appear ready to write off its traditional business of selling cars to consumers, either. Hackett, for instance, described robot cars as “agents” that might be able to go out into the world and complete tasks on our behalf (food pickup, anyone?), rather than staying parked without a human behind the wheel. “It can go somewhere and do something, it can be your delegate,” he said. That model sounds like one where ownership could persist, and Kwant said there’s no reason privately owned vehicles and shared models can’t operate alongside each other, with higher-end vehicles still serving the shared model, like Uber Black today.

Though the symposium in San Francisco is a sign Ford knows where the future is heading, the hard part will be making the moves necessary to compete in that future world. In 2011, Ford executive chairman Bill Ford delivered a TED talk outlining his vision for the automotive future. In it, he described an app where you push a button to summon a car that takes you where you need to go. Ford never launched that app, but Uber raised its Series A funding the same year. And six years later, Uber’s valuation is nearly $70 billion, compared to Ford’s $43 billion market cap.

Newspapers once faced a similar situation. In 2011, they saw online readers exceed print for the first time, yet they continued to devote the majority of their resources to the “paper,” which was still bringing in the cash — much like cars meant for private ownership are for Ford today. By 2012, Pew found newspapers knew they needed to change, but entrenched newspaper culture stood in the way, holding back companies that sought to prepare for the future. The newspaper companies sank, with a few notable exceptions. Ford faces a similar battle.

“It’s a journey and it’s not an easy one,” Kwant said. At the very least though, Ford showed up in the backyard of its would-be disruptors, trying to find the answers to the questions facing its business. “To ignore the fact that in urban settings fewer and fewer people are going to own vehicles,” Kwant said, “is to ignore reality.”

Quelle: <a href="Ford Is Trying To Figure Out How To Not Get Newspaper’d“>BuzzFeed

NFV & Carrier SDN

The post NFV & Carrier SDN appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Azure Log Analytics – Container Monitoring Solution general availability, CNCF Landscape

Docker containers are an emerging technology that helps developers and devops teams with easy provisioning and continuous delivery in modern infrastructure. As containers can be ubiquitous in an environment, monitoring is essential. We’ve developed a monitoring solution which provides deep insights into containers, supporting the Kubernetes, Docker Swarm, Mesos DC/OS, and Service Fabric container orchestrators on multiple OS platforms. We are excited to announce the general availability of the Container Monitoring management solution on Azure Log Analytics, available in the Azure Marketplace today.

"Every community contribution helps DC/OS become a better platform for running modern applications, and the addition of Azure Log Analytics Container Monitoring Solution into DC/OS Universe is a meaningful contribution, indeed," said Ravi Yadav, Technical Partnership Lead at Mesosphere. "DC/OS users are running a lot of Docker containers, and having the option to manage them with a tool like Azure Log Analytics Container Monitoring Solution will result in a richer user experience."

Microsoft recently joined the Cloud Native Computing Foundation (CNCF), and we continue to invest in open source projects. Azure Log Analytics is now part of the CNCF Landscape under the Monitoring category.

With this solution, you can:

See information about all container hosts in a single location
Know which containers are running, what image they’re running, and where they’re running
See an audit trail for actions on containers
Troubleshoot by viewing and searching centralized logs without remote login to the Docker hosts
Find containers that may be “noisy neighbors” and consuming excess resources on a host
View centralized CPU, memory, storage, and network usage and performance information for containers

 

We’ve added new features that provide better insights into your Kubernetes cluster and make it easier to narrow down container issues. You can now use search filters on your own custom pod labels and Kubernetes cluster hierarchies, and container process information lets you quickly see process status for deeper health analysis. These features are currently Linux-only; additional Windows features are coming soon.

New features available as part of the general availability include:

Kubernetes cluster awareness with at-a-glance hierarchy inventory from Kubernetes cluster to pods
New Kubernetes events
Custom pod label capture with custom, complex search filters
Container process information
Container Node Inventory including storage, network, orchestration type, and Docker version

For more information about how to use Container Monitoring solution, as well as the insights you can gather, see Containers solution in Log Analytics.
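As a quick illustration of the kind of programmatic access this enables, the sketch below runs a container query against a workspace through the Log Analytics query REST API from Python. This is a minimal sketch under stated assumptions: the workspace ID and bearer token are placeholders you must supply yourself, and the `ContainerInventory` table is the one the solution documentation describes; adjust the query to your own schema.

```python
import requests

# Placeholders: supply your own workspace ID and an Azure AD bearer token
# with Log Analytics read permissions.
WORKSPACE_ID = "<workspace-id>"
TOKEN = "<aad-bearer-token>"

# Count running containers per image, using the solution's ContainerInventory table.
query = "ContainerInventory | where ContainerState == 'Running' | summarize count() by Image"

resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": query},
)
resp.raise_for_status()

# The API returns tables of columns and rows.
for table in resp.json()["tables"]:
    for row in table["rows"]:
        print(row)
```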

Learn more by reading previous blogs on Azure Log Analytics Container Monitoring.

How do I try this?

You can get a free subscription for Microsoft Azure so that you can test the Container Monitoring solution features.

How can I give you guys feedback?

There are a few different routes to give feedback:

UserVoice: Post ideas for new Azure Log Analytics features to work on. Visit the UserVoice page.
Forums: Visit the Azure Log Analytics Forums.
Email: Tell us whatever is on your mind by emailing us at OMScontainers@microsoft.com.

We plan on enhancing monitoring capabilities for containers. If you have feedback or questions, please feel free to contact us!
Source: Azure

Announcing new DockerCon Europe tracks, sessions, speakers and website!

The DockerCon Europe website has a fresh look and new sessions added. The DockerCon Review Committee is still working through announcing final sessions in each breakout track, but below is an overview of the tracks and content you’ll find this year in Copenhagen. To view abstracts in more detail, check out the Agenda Page.
In case you missed it, we have two summits happening on Thursday, October 19th. The Moby Summit is a hands-on collaborative event for advanced container users who are actively maintaining, contributing, or generally interested in the design and development of the Moby Project and its components. The Enterprise Summit is a full-day event for enterprise IT practitioners who want to learn how they can embrace the journey to hybrid IT and implement a new strategy to help fund their modernization efforts.

We have an excellent lineup of speakers in store for you and are excited to share the agenda below. We hope that these sessions inspire you to register for DockerCon Europe.
Using Docker
Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.

Creating Effective Images by Abby Fuller, AWS
Docker?!?! But I’m a SysAdmin by Mike Coleman, Docker
Modernizing .NET Apps by Elton Stoneman, Docker & Iris Classon, Konstrukt
Modernizing Java Apps by Arun Gupta, AWS
Road to Docker Production: What You Need to Know and Decide by Bret Fisher, Independent Consultant
Tips and Tricks of the Docker Captains by Adrian Mouat, Container Solutions
Learning Docker From Square One by Chloe Condon, CodeFresh
Practical Design Patterns in Docker Networking by Mark Church, Docker

Docker Advanced
Docker Advanced sessions provide a deeper dive into Docker tooling, implementation and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join Docker Advanced for best practices from the Docker team.

Sessions to be announced soon!

Use Case
Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, ship and run distributed applications. These sessions are heavy on business value, ROI and production implementation advice and learnings.

Back To The Future: Containerize Legacy Applications by Brandon Royal, Docker
Using Docker For Product Testing and CI at Splunk by Mike Dickey, Splunk & Harish Jayakumar, Docker
Shipping and Shifting ~100 Apps with Docker by Sune Keller, Alm Brand
How Docker Helps Open Doors At Assa Abloy by Jan Hëdstrom, Assa Abloy & Patrick van der Bleek, Docker

Black Belt
Black Belt talks are code and demo heavy and light on the slides. Experts in the Docker ecosystem cover deeply technical topics by diving way down deep. Container connoisseurs, prepare to learn and be delighted.

What Have Syscalls Done For You Lately? by Liz Rice, Aqua Security
A Deeper Dive Into Docker Overlay Networks by Laurent Bernaille, D2SI
Container-relevant Upstream Kernel Developments by Tycho Andersen, Docker
The Truth Behind Serverless by Erica Windisch, IOpipe

Edge [NEW!]
The Edge track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier.  

Take Control Of Your Maps With Docker by Petr Pridal, Klokan Technologies GmbH
Panel: Modern App Security Requires Containers moderated by Sean Michael Kerner
Skynet vs Planet of The Apes, Duel! by Adrien Blind, Societe Generale
How to Secure the Journey to Microservices – Fraud Management at Arvato by Tobias Gurtzick, Arvato

Transform [NEW!]
The transform track focuses on the impact that change has on organizations, individuals and communities. Filled with inspiration, insights and new perspectives, these stories will leave you energized and equipped to drive innovation.

Learn Fast, Fail Fast, Deliver Fast: The ModSquad Way by Tim Tyler, Metlife
The Value of Diverse Experiences by Nandhini Santhanam, Docker
We Need To Talk: How Communication Helps Code by Lauri Apple, Zalando
My Journey To Go by Ashley McNamara, Microsoft
A Strong Belief, Loosely Held: Bringing Empathy to IT by Nirmal Mehta, Booz Allen Hamilton

Community Theater
Located in the main conference hall, the Community Theater will feature lightning talks and cool hacks from the Docker community and ecosystem.

Looking Under The Hood: containerD by Scott Coulton, Puppet
From Zero to Serverless in 60 Seconds, Anywhere by Alex Ellis, ADP
Deploying Software Containers on Heterogeneous IoT Devices by Daniel Bruzual, Aalto University
Android Meets Docker by Jing Li, Viacom
Cluster Symphony by Anton Weiss, Otomato Software
Containerizing Hardware Accelerated Applications by Chelsea Mafrica
Empowering Docker with Linked Data Principles by Riccardo Tommasini, Politecnico di Milano
Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose by Damien Duportal, CloudBees
Experience the Swarm API in Virtual Reality by Leigh Capili, Beatport
Repainting the Past with Distributed Machine Learning and Docker by Finnian Anderson, Student & Oli Callaghan, Student

Ecosystem
The Ecosystem track showcases work done by sponsoring partners at DockerCon. Ecosystem sessions include a diverse range of topics and the opportunity to learn more about the variety of solutions available in the Docker ecosystem.

Sessions to be announced soon!

We hope you can join us in Copenhagen for an amazing event! Tickets have sold out each year, so make sure to register soon!

The post Announcing new DockerCon Europe tracks, sessions, speakers and website! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Azure Monitor: Enhanced capabilities for routing logs and metrics

Today we are pleased to announce a new set of capabilities within Azure Monitor for routing your Azure resource diagnostic logs and metrics to storage accounts, Event Hubs namespaces, or Log Analytics workspaces. You can now create multiple resource diagnostic settings per resource, enabling you to route different permutations of log categories and metrics to different destinations (in public preview) and route your metrics and logs to a destination in a different subscription. In this post, we’ll walk through these new capabilities and how you can start using them today.

Creating multiple diagnostic settings

A resource diagnostic setting is a rule on an individual Azure resource that determines which logs and metrics, among those available for that resource type, are to be collected, and where that data will be sent. Resource diagnostic settings support three destinations, or “data sinks,” for monitoring data:

Storage accounts for archival
Log Analytics workspaces for search and analytics
Event Hubs namespaces for integration with 3rd party tools or custom solutions

Previously, only one diagnostic setting could be set per resource. We heard feedback that this was too restrictive in two ways:

It limited you to sending monitoring data to only one instance of each destination. For example, you could only send the logs and metrics for a particular Application Gateway to one storage account. If you have two independent teams for security and monitoring that each wanted to consume this data, this limited your ability to offer that data separately to both teams.
It required that you route the same permutation of log categories and metrics to all destinations. For example, it was impossible to route a particular Batch Account’s service logs into Log Analytics while sending that same account’s metrics into a storage account.

Today, we are introducing the public preview of the ability to create multiple diagnostic settings on a single resource, removing both restrictions above. Let’s take a quick look at how you can set this up in the Azure Portal. Navigate to the Monitor blade, and click on “Diagnostic Settings.”

You’ll notice we’ve renamed this section “Diagnostic Settings” from “Diagnostic Logs” to better reflect the ability to route both log and metric data from a resource. In this blade you’ll see a list of resources that support diagnostic settings. Clicking on one will show you a list of all settings on that resource.

If none exist, you will be prompted to create one.

Clicking “turn on diagnostics” will present the familiar blade for setting a diagnostic setting, but now you will see that a field for “name” has been added. Give your setting a name to differentiate between multiple settings on the same resource.

Click “save.” Returning to the previous blade, you will see the created setting, and you can add an additional setting.

Adding more diagnostic settings will add them to this list. Note that you can have a maximum of three diagnostic settings per resource.

You can also do this using the REST API or in an ARM template. PowerShell support is coming soon. Note that routing data to an additional destination of the same type will incur a service fee per our billing information.
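For those automating this, here is a minimal sketch of creating one named diagnostic setting with the `azure-mgmt-monitor` Python SDK; the resource IDs, category names, and setting name below are hypothetical placeholders, and the exact model shape can vary across SDK versions, so treat this as an outline rather than a definitive implementation.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

credential = DefaultAzureCredential()
client = MonitorManagementClient(credential, "<subscription-id>")

# Hypothetical resource and destination IDs. Note the storage account may live
# in a different subscription, provided you have write access to it.
resource_uri = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<gw>"
storage_account_id = "/subscriptions/<other-sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Each named setting can route a different permutation of logs and metrics.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_uri,
    name="archive-for-security-team",
    parameters={
        "storage_account_id": storage_account_id,
        "logs": [{"category": "ApplicationGatewayAccessLog", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```

Creating a second setting with a different name and a different destination (say, a Log Analytics workspace for the monitoring team) gives each team its own copy of the data, as described above.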

Writing monitoring data across subscriptions

Previously, you could only route metrics and log categories for a resource to a storage account, Event Hubs namespace, or Log Analytics workspace within the same subscription as the resource emitting data. For companies with centralized monitoring teams responsible for keeping track of many subscriptions, we heard that maintaining a destination resource per subscription was tedious, requiring knowledge of the unique storage account (or Event Hubs namespace or workspace) for each subscription. Now you can configure a diagnostic setting to send monitoring data to a destination in a different subscription, provided that your user account has appropriate write access to that destination resource.

Note that authentication is done within a particular Azure Active Directory tenant, so monitoring data can only be routed to a destination within the same tenant as the resource emitting data.

These new capabilities are rolling out to all public Azure regions beginning today. Try them out and let us know your feedback through our UserVoice channel or in the comments below.
Source: Azure

Introducing Network Service Tiers: Your cloud network, your way

By Prajakta Joshi, Product Manager, Cloud Networking

We’re excited to announce Network Service Tiers Alpha. Google Cloud Platform (GCP) now offers a tiered cloud network. We let you optimize for performance by choosing Premium Tier, which uses Google’s global network with unparalleled quality of service, or optimize for cost, using the new Standard Tier, an attractively-priced network with performance comparable to that of other leading public clouds.

“Over the last 18 years, we built the world’s largest network, which by some accounts delivers 25-30% of all internet traffic,” said Urs Hölzle, SVP Technical Infrastructure, Google. “You enjoy the same infrastructure with Premium Tier. But for some use cases, you may prefer a cheaper, lower-performance alternative. With Network Service Tiers, you can choose the network that’s right for you, for each application.”

Power of Premium Tier

If you use Google Cloud today, then you already use the powerful Premium Tier.

Premium Tier delivers traffic over Google’s well-provisioned, low latency, highly reliable global network. This network consists of an extensive global private fiber network with over 100 points of presence (POPs) across the globe. By this measure, Google’s network is the largest of any public cloud provider.

In Premium Tier, inbound traffic from your end user to your application in Google Cloud enters Google’s private, high performance network at the POP closest to your end user, and GCP delivers this traffic to your application over its private network.

Outbound and inbound traffic delivery (Premium Tier)

Similarly, GCP delivers outbound traffic from your application to end users on Google’s network and exits at the POP closest to them, wherever the end users are across the globe. Thus, most of this traffic reaches its destination with a single hop to the end user’s ISP, so it enjoys minimum congestion and maximum performance.

We architected the Google network to be highly redundant, to ensure high availability for your applications. There are at least three independent paths (N+2 redundancy) between any two locations on the Google network, helping ensure that traffic continues to flow between these two locations even in the event of a disruption. As a result, with Premium Tier, your traffic is unaffected by a single fiber cut. In many situations, traffic can flow to and from your application without interruption even with two simultaneous fiber cuts.

GCP customers use Global Load Balancing, another Premium Tier feature, extensively. You not only get the management simplicity of a single anycast IPv4 or IPv6 Virtual IP (VIP), but can also expand seamlessly across regions, and overflow or fail over to other regions.

With Premium Tier, you use the same network that delivers Google’s Search, Gmail, YouTube, and other services as well as the services of customers such as The Home Depot, Spotify and Evernote.

“75% of homedepot.com is now served out of Google Cloud. From the get-go, we wanted to run across multiple regions for high availability. Google’s global network is one of the strongest features for choosing Google Cloud.”   

— Ravi Yeddula, Senior Director Platform Architecture & Application Development, The Home Depot.

Introducing Standard Tier 

Our new Standard Tier delivers network quality comparable to that of other major public clouds, at a lower price than our Premium Tier.

Why is Standard Tier less expensive? Because we deliver your outbound traffic from GCP to the internet over transit (ISP) networks instead of Google’s network.

Outbound and inbound traffic delivery (Standard Tier)

Similarly, we deliver your inbound traffic, from end user to GCP, on Google’s network only within the region where your GCP destination resides. If your user traffic originates from a different region, their traffic will first travel over transit (ISP) network(s) until it reaches the region of the GCP destination.

Standard Tier provides lower network performance and availability compared to Premium Tier. Since we deliver your outbound and inbound traffic on Google’s network only on the short hop between GCP and the POP closest to it, the performance, availability and redundancy characteristics of Standard Tier depend on the transit provider(s) carrying your traffic. Your traffic may experience congestion or outages more frequently relative to Premium Tier, but at a level comparable to other major public clouds.

We also provide only regional network services in Standard Tier, such as the new regional Cloud Load Balancing service. In this tier, your Load Balancing Virtual IP (VIP) is regional, similar to other public cloud offerings, which adds management complexity compared to Premium Tier Global Load Balancing if you require a multi-region deployment.

Compare performance of tiers

We commissioned Cedexis, an internet performance monitoring and optimization tools company, to take preliminary performance measurements for both Network Service Tiers. As expected, Premium Tier delivers higher throughput and lower latency than Standard Tier. You can view the live dashboards at www.cedexis.com/google-reports/ under the “Network Tiers” section. Cedexis also details their testing methodology on their website.

The Cedexis graph below shows throughput for Premium and Standard Tier HTTP Load Balancing traffic at the 50th percentile. Standard (blue line) throughput is 3,223 kbps, while Premium (green line) is 5,401 kbps, making Premium throughput roughly 1.7x that of Standard.
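As a quick arithmetic check on that ratio, using the two figures quoted above:

```python
standard_kbps = 3223  # Standard Tier throughput, 50th percentile
premium_kbps = 5401   # Premium Tier throughput, 50th percentile

print(round(premium_kbps / standard_kbps, 2))  # 1.68, i.e. roughly 1.7x
```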

In general, Premium Tier displays considerably higher throughput, at every percentile, than Standard Tier.

Compare pricing for tiers 

We’re introducing new pricing for Premium and Standard Tiers. You can review detailed pricing for both tiers here. This pricing will take effect when Network Service Tiers become Generally Available (GA). While in alpha and beta, existing internet egress pricing applies.

With the new Network Tiers pricing (effective at GA), outbound traffic (GCP to internet) is priced 24-33% lower in Standard Tier than in Premium Tier for North America and Europe. Standard Tier is less expensive than internet egress options offered by other major public cloud providers (based on typical published prices for July, 2017). Inbound traffic remains free for both Premium and Standard Tiers. We’ll also change our current destination-based pricing for Premium Tier to be based on both source and destination of traffic since the cost of network traffic varies with the distance your traffic travels over Google’s network. In contrast, Standard Tier traffic will be source-based since it does not travel much over Google’s network.

Choose the right tier 

Here’s a decision tree to help you choose the tier that best fits your requirements.
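The tree itself is a diagram, but the logic it captures follows from the trade-offs described above. Here is that logic as a small illustrative Python sketch; the criteria names are our paraphrase of the guidance in the preceding sections, not an official API:

```python
def choose_network_tier(needs_global_load_balancing: bool,
                        needs_multi_region: bool,
                        performance_sensitive: bool) -> str:
    """Illustrative paraphrase of the tier trade-offs described above."""
    # Premium Tier: Google's global network, global anycast load balancing,
    # and the highest performance and availability.
    if needs_global_load_balancing or needs_multi_region or performance_sensitive:
        return "Premium Tier"
    # Standard Tier: regional services and transit-ISP delivery at a lower price.
    return "Standard Tier"

print(choose_network_tier(False, False, False))  # -> Standard Tier
```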

Configure the tier for your application(s) 

One size does not fit all, and your applications in Google Cloud often have differing availability, performance, footprint and cost requirements. Configure the tier at the resource-level (per Instance, Instance template, Load balancer) if you want granular control or at the overarching project-level if you want to use the same tier across all resources.

Try Network Service Tiers today 

“Cloud customers want choices in service levels and cost. Matching the right service to the right business requirements provides the alignment needed by customers. Google is the first public cloud provider to recognize that in the alpha release of Network Service Tiers. Premium Tier caters to those who need assured quality, and Standard Tier to those who need lower costs or have limited need for global networking.”  

— Dan Conde, Analyst at ESG

Learn more by visiting Network Service Tiers website, and give Network Service Tiers a spin by signing up for alpha. We look forward to your feedback!
Source: Google Cloud Platform