Internet Music Has Been Graced By A New Meme: Soundcloud Vs Bandcamp

Giphy

If you listen to music on the internet, you've probably heard of Soundcloud and Bandcamp, two sites that let you stream music. They're different from music streaming services like Spotify, Tidal, and Apple Music because they cater to artists trying to build up their audiences rather than established players, though famous artists do use them on occasion.

Soundcloud is a site where any musician can post their music and where people can listen for free, comment on it, and repost it on their own feeds. Chance the Rapper famously used it to make his latest album available for free after it was an Apple Music exclusive for two weeks, and Kanye West posted tracks from his most recent album there. It's common for DJs to use the platform to post their remixes.

The Soundcloud homepage

Bandcamp is similar — artists can post their music and people can listen for free — but the site has a feature that allows people to buy albums under a "pay-what-you-want" price tag or donate to artists they listen to. The site's community focuses on independent artists. It's more common to see entire albums on Bandcamp than on Soundcloud, which favors singles.

The Bandcamp homepage

The stereotypes of the two sites' communities are that Soundcloud fans are much more into hip hop and more mainstream music, and Bandcamp users are more likely to listen to twee indie music. The Fader describes the types of music popular on each platform: "emo-rap from Soundcloud, and the lo-fi releases of artists like Frankie Cosmos on Bandcamp." Now, the powers of the internet have mined Soundcloud and Bandcamp's musical differences and turned them into meme gold.

The band Robots With Rayguns joined in.

h/t to the Fader

Source: BuzzFeed

Parents Will Get Refunds For Amazon Purchases Their Kids Made

AP/Eric Risberg

In a major victory for parents, Amazon agreed to refund as much as $70 million to users whose children made unauthorized in-app game purchases between November 2011 and May 2016, the Federal Trade Commission announced on Tuesday.

According to the FTC, "Amazon offers many children's apps in its appstore for download to mobile devices such as the Kindle Fire… Amazon's setup allowed children playing these kids' games to spend unlimited amounts of money to pay for virtual items within the apps such as 'coins,' 'stars,' and 'acorns' without parental involvement."

The refunds may not mark the end of the issue, however. The FTC agreed to drop its appeal requesting an injunction that would have banned Amazon from continuing this practice in the future. Currently, any in-app purchase over $20 requires a parental control password or PIN, according to Amazon's site.

Amazon declined to comment for this story.

Parents who realized their kids had accrued a hefty Amazon bill faced an uphill battle to get a refund. Amazon's in-app charges are final and non-refundable.

One consumer's six-year-old, who couldn't read, simply "click[ed] a lot of buttons at random" on her Kindle and racked up several unauthorized charges, according to the 2014 complaint. Another consumer's daughter amassed a $358.42 bill in unauthorized charges.

“This case demonstrates what should be a bedrock principle for all companies — you must get customers’ consent before you charge them,” Thomas Pahl, acting director of the FTC’s Bureau of Consumer Protection, said in a statement. “Consumers affected by Amazon’s practices can now be compensated for charges they didn’t expect or authorize.”

The agency's action against Amazon follows two similar cases against Apple and Google, which had allowed children to make in-app purchases without their parents' consent. Apple and Google were both ordered to offer millions in refunds to consumers for the charges.

Amazon will be leading the refund operations, and details on the program will be announced shortly, the FTC said.


Source: BuzzFeed

A Top Operations Executive Is Leaving Instacart

Patrick T. Fallon / Getty Images

A little over a year after he was hired to oversee Instacart's contractor workforce and customer service team, senior vice president of operations Mike Swartz is leaving the company, BuzzFeed News has confirmed.

Prior to joining Instacart, Swartz had a decade-long career in operations at Amazon; he has also served as an advisor to Flipkart and Warby Parker.

The news comes on the heels of a $400 million funding round for Instacart, which is now valued at $3.4 billion.

But in other ways, it's been a tough year for Instacart, marked by an increasingly tense relationship with its workforce. "Shoppers" who work in-store are Instacart employees, but shoppers who buy and deliver customers' groceries are contract workers.

Instacart first cut delivery worker wages last March. Then, in September, the company announced it was replacing tips with a service fee that would be collected by the company and distributed to workers. After workers revolted, Instacart agreed to keep tips on the platform — but it made the service fee, which doesn't go directly to workers, the default option.

Instacart failed to explain the difference between the fee and tips to customers; as a result, the delivery workers saw their earnings slide. In October, Instacart CEO Apoorva Mehta told BuzzFeed News that shoppers would have to earn less for the company to continue to grow.

In December, a group of workers filed a class action lawsuit against the company. That lawsuit was settled last month for $4.6 million. In the settlement, Instacart promised to better differentiate between tips and the service fee in the future, though how it plans to do that is unclear. Meanwhile, Instacart shoppers tell BuzzFeed News that the rates they earn per delivery have continued to fall.

In a statement about Swartz, Instacart said, "He's been a great asset to our team in the last year, and we wish him the best in his future endeavors." Swartz did not immediately return a request for comment from BuzzFeed News.

At the time of his hiring, Swartz told Recode, “When you think about changing traffic patterns, product availability, the weather, customer preferences, you realize how complex the demand modeling can be.”

Source: BuzzFeed

Configuring Private DNS Zones and Upstream Nameservers in Kubernetes

Editor's note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.6.

Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal ".corp" domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We're pleased to announce that, in Kubernetes 1.6, kube-dns adds support for configurable private DNS zones (often called "stub domains") and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.

Default lookup flow

Kubernetes currently supports two DNS policies, specified on a per-pod basis using the dnsPolicy flag: "Default" and "ClusterFirst". If dnsPolicy is not explicitly specified, then "ClusterFirst" is used:

- If dnsPolicy is set to "Default", then the name resolution configuration is inherited from the node the pods run on. Note: this feature cannot be used in conjunction with dnsPolicy: "Default".
- If dnsPolicy is set to "ClusterFirst", then DNS queries will be sent to the kube-dns service. Queries for domains rooted in the configured cluster domain suffix (any address ending in ".cluster.local") will be answered by the kube-dns service. All other queries (for example, www.kubernetes.io) will be forwarded to the upstream nameserver inherited from the node.

Before this feature, it was common to introduce stub domains by replacing the upstream DNS with a custom resolver. However, this caused the custom resolver itself to become a critical path for DNS resolution, where issues with scalability and availability could cause the cluster to lose DNS functionality.
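To make that decision flow concrete, here is a small Python sketch of the suffix-matching logic: cluster-suffix names go to kube-dns, stub-domain names go to their configured resolver, and everything else goes to the upstream nameservers. This is an illustrative model, not kube-dns's actual implementation; the suffixes and server addresses are taken from the example configuration used in this post.

```python
# Illustrative model of kube-dns's suffix-based routing decision.
# Not the real implementation; values mirror this post's examples.

CLUSTER_SUFFIX = ".cluster.local"
STUB_DOMAINS = {".acme.local": ["1.2.3.4"]}       # stub domain -> resolver IPs
UPSTREAM = ["8.8.8.8", "8.8.4.4"]                 # upstream nameservers

def pick_nameservers(name: str) -> list:
    """Return the nameserver(s) that would answer a query for `name`."""
    if name.endswith(CLUSTER_SUFFIX):
        return ["kube-dns"]
    for suffix, servers in STUB_DOMAINS.items():
        if name.endswith(suffix):
            return servers
    return UPSTREAM

print(pick_nameservers("kubernetes.default.svc.cluster.local"))  # ['kube-dns']
print(pick_nameservers("foo.acme.local"))                        # ['1.2.3.4']
print(pick_nameservers("widget.com"))                            # ['8.8.8.8', '8.8.4.4']
```

In a real cluster the table of stub domains and upstreams comes from the kube-dns ConfigMap, described next.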
This feature allows the user to introduce custom resolution without taking over the entire resolution path.

Customizing the DNS Flow

Beginning in Kubernetes 1.6, cluster administrators can specify custom stub domains and upstream nameservers by providing a ConfigMap for kube-dns. For example, the configuration below inserts a single stub domain and two upstream nameservers. As specified, DNS requests with the ".acme.local" suffix will be forwarded to a DNS server listening at 1.2.3.4. Additionally, Google Public DNS will serve upstream queries. See ConfigMap Configuration Notes at the end of this section for a few notes about the data format.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

With dnsPolicy set to "ClusterFirst", a DNS query is first sent to the DNS caching layer in kube-dns. From there, the suffix of the request is examined and the query is forwarded to the appropriate DNS server. Names with the cluster suffix (e.g., ".cluster.local") are sent to kube-dns. Names with the stub domain suffix (e.g., ".acme.local") are sent to the configured custom resolver. Finally, requests that do not match any of those suffixes are forwarded to the upstream DNS.

Below is a table of example domain names and the destination of the queries for those domain names:

Domain name                           | Server answering the query
kubernetes.default.svc.cluster.local  | kube-dns
foo.acme.local                        | custom DNS (1.2.3.4)
widget.com                            | upstream DNS (one of 8.8.8.8, 8.8.4.4)

ConfigMap Configuration Notes

stubDomains (optional)
- Format: a JSON map using a DNS suffix key (e.g., "acme.local") and a value consisting of a JSON array of DNS IPs.
- Note: The target nameserver may itself be a Kubernetes service.
For instance, you can run your own copy of dnsmasq to export custom DNS names into the cluster DNS namespace.

upstreamNameservers (optional)
- Format: a JSON array of DNS IPs.
- Note: If specified, these values replace the nameservers taken by default from the node's /etc/resolv.conf.
- Limits: a maximum of three upstream nameservers can be specified.

Example 1: Adding a Consul DNS Stub Domain

In this example, the user has a Consul DNS service discovery system they wish to integrate with kube-dns. The Consul domain server is located at 10.150.0.1, and all Consul names have the suffix ".consul.local". To configure Kubernetes, the cluster administrator simply creates a ConfigMap object with a stubDomains entry mapping "consul.local" to ["10.150.0.1"]. Note: in this example, the cluster administrator did not wish to override the node's upstream nameservers, so they didn't need to specify the optional upstreamNameservers field.

Example 2: Replacing the Upstream Nameservers

In this example, the cluster administrator wants to explicitly force all non-cluster DNS lookups to go through their own nameserver at 172.16.0.1. Again, this is easy to accomplish; they just need to create a ConfigMap with the upstreamNameservers field specifying the desired nameserver.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["172.16.0.1"]

Get involved

If you'd like to contribute or simply help provide feedback and drive the roadmap, join our community. Specifically for network-related conversations, participate through one of these channels:

- Chat with us on the Kubernetes Slack network channel
- Join our Special Interest Group, SIG-Network, which meets on Tuesdays at 14:00 PT

Thanks for your support and contributions.
Read more in-depth posts on what's new in Kubernetes 1.6 here.

– Bowei Du, Software Engineer, and Matthew DeLio, Product Manager, Google

- Post questions (or answer questions) on Stack Overflow
- Join the community portal for advocates on K8sPort
- Get involved with the Kubernetes project on GitHub
- Follow us on Twitter @Kubernetesio for the latest updates
- Connect with the community on Slack
- Download Kubernetes
Source: kubernetes

Let’s Meet At OpenStack Summit In Boston!

The post Let's Meet At OpenStack Summit In Boston! appeared first on Mirantis | Pure Play Open Cloud.

 
The citizens of Cloud City are suffering — Mirantis is here to help!
 
We're planning to have a super time at the summit, and hope that you can join us in the fight against vendor lock-in. Come to booth C1 to power up on the latest technology and our revolutionary Mirantis Cloud Platform.

If you'd like to talk with our team at the summit, simply contact us and we'll schedule a meeting.

REQUEST A MEETING

 
Free Mirantis Training @ Summit
Take advantage of our special training offers to power up your skills while you're at the Summit! Mirantis Training will be offering an Accelerated Bootcamp session before the big event. Our courses will be conveniently held within walking distance of the Hynes Convention Center.

Additionally, we're offering a discounted Professional-level Certification exam and a free Kubernetes training, both held during the Summit.

 
Mirantis Presentations
Here's where you can find us during the summit…
 
MONDAY MAY 8

Monday, 12:05pm-12:15pm
Level: Intermediate
Turbo Charged VNFs at 40 gbit/s. Approaches to deliver fast, low latency networking using OpenStack.
(Gregory Elkinbard, Mirantis; Nuage)

Monday, 3:40pm-4:20pm
Level: Intermediate
Project Update – Documentation
(Olga Gusarenko, Mirantis)

Monday, 4:40pm-5:20pm
Level: Intermediate
Cinder Stands Alone
(Ivan Kolodyazhny, Mirantis)

Monday, 5:30pm-6:10pm
Level: Intermediate
m1.Boaty.McBoatface: The joys of flavor planning by popular vote
(Craig Anderson, Mirantis)

 

TUESDAY MAY 9

Tuesday, 2:00pm-2:40pm
Level: Intermediate
Proactive support and Customer care
(Anton Tarasov, Mirantis)

Tuesday, 2:30pm-2:40pm
Level: Advanced
OpenStack, Kubernetes and SaltStack for complete deployment automation
(Aleš Komárek and Thomas Lichtenstein, Mirantis)

Tuesday, 2:50pm-3:30pm
Level: Intermediate
OpenStack Journey: from containers to functions
(Ihor Dvoretskyi, Mirantis; Iron.io, BlueBox)

Tuesday, 4:40pm-5:20pm
Level: Advanced
Point and Click ->CI/CD: Real world look at better OpenStack deployment, sustainability, upgrades!
(Bruce Mathews and Ryan Day, Mirantis; AT&T)

Tuesday, 5:05pm-5:45pm
Level: Intermediate
Workload Onboarding and Lifecycle Management with Heat
(Florin Stingaciu and Lance Haig, Mirantis)

 

WEDNESDAY MAY 10

Wednesday, 9:50am-10:30am
Level: Intermediate
Project Update – Neutron
(Kevin Benton, Mirantis)

Wednesday, 11:00am-11:40am
Level: Intermediate
Project Update – Nova
(Jay Pipes, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Kuryr-Kubernetes: The seamless path to adding Pods to your datacenter networking
(Ilya Chukhnakov, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
OpenStack: pushing to 5000 nodes and beyond
(Dina Belova and Georgy Okrokvertskhov, Mirantis)

Wednesday, 4:30pm-5:10pm
Level: Intermediate
Project Update – Rally
(Andrey Kurilin, Mirantis)

 

THURSDAY MAY 11

Thursday, 9:50am-10:30am
Level: Intermediate
OSprofiler: evaluating OpenStack
(Dina Belova, Mirantis; VMware)

Thursday, 11:00am-11:40am
Level: Intermediate
Scheduler Wars: A New Hope
(Jay Pipes, Mirantis)

Thursday, 11:30am-11:40am
Level: Beginner
Saving one cloud at a time with tenant care
(Bryan Langston, Mirantis; Comcast)

Thursday, 3:10pm-3:50pm
Level: Advanced
Behind the Scenes with Placement and Resource Tracking in Nova
(Jay Pipes, Mirantis)

Thursday, 5:00pm-5:40pm
Level: Intermediate
Terraforming OpenStack Landscape
(Mykyta Gubenko, Mirantis)

 

Notable Presentations By The Community
 
TUESDAY MAY 9

Tuesday, 11:15am-11:55am
Level: Intermediate
AT&T Container Strategy and OpenStack's role in it
(AT&T)

Tuesday, 11:45am-11:55am
Level: Intermediate
AT&T Cloud Evolution: Virtual to Container based (CI/CD)^2
(AT&T)

WEDNESDAY MAY 10

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Event Correlation & Life Cycle Management – How will they coexist in the NFV world?
(Cox Communications)

Wednesday, 5:20pm-6:00pm
Level: Intermediate
Nova Scheduler: Optimizing, Configuring and Deploying NFV VNF's on OpenStack
(Wind River)

THURSDAY MAY 11

Thursday, 9:00am-9:40am
Level: Intermediate
ChatOpsing Your Production Openstack Cloud
(Adobe)

Thursday, 11:00am-11:10am
Level: Intermediate
OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
(Ericsson)

Thursday, 1:30pm-2:10pm
Level: Beginner
Participating in translation makes you an internationalized OpenStacker & developer
(Deutsche Telekom AG)

Thursday, 5:00pm-5:40pm
Level: Beginner
Future of Cloud Networking and Policy Automation
(Cox Communications)

The post Let's Meet At OpenStack Summit In Boston! appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Steve Hardy: OpenStack TripleO in Ocata, from the OpenStack PTG in Atlanta

Steve Hardy talks about TripleO in the Ocata release, at the Openstack PTG in Atlanta.

Steve: My name is Steve Hardy. I work primarily on the TripleO project, which is an OpenStack deployment project. What makes TripleO interesting is that it uses OpenStack components primarily in order to deploy a production OpenStack cloud. It uses OpenStack Ironic to do bare metal provisioning. It uses Heat orchestration in order to drive the configuration workflow. And we also recently started using Mistral, which is an OpenStack workflow component.

So it’s kind of different from some of the other deployment initiatives. And it’s a nice feedback loop where we’re making use of the OpenStack services in the deployment story, as well as in the deployed cloud.

This last couple of cycles we've been working towards more composability. That basically means allowing operators more flexibility with service placement, and also allowing them to define groups of nodes in a more flexible way, so that you could either specify different configurations – perhaps you have multiple types of hardware for different compute configurations for Nova, or perhaps you want to scale services into particular groups of clusters for particular services.

It's basically about giving operators more choice and flexibility in how they deploy their architecture.

Rich: Upgrades have long been a pain point. I understand there’s some improvement in this cycle there as well?

Steve: Yes. Having delivered composable services and composable roles for the Newton OpenStack release, the next big challenge was upgrades: once operators have the flexibility to deploy services on arbitrary nodes in their OpenStack environment, you need some way to upgrade, and you can't necessarily make assumptions about which service is running on which group of nodes. So we've implemented a new feature called composable upgrades. That uses some Heat functionality combined with Ansible tasks, in order to allow very flexible, dynamic definition of what upgrade actions need to take place when you're upgrading some specific group of nodes within your environment. That's part of the new Ocata release. It's hopefully going to provide a better upgrade experience for end-to-end upgrades of all the OpenStack services that TripleO supports.

Rich: It was a very short cycle. Did you get done what you wanted to get done, or are things pushed off to Pike now?

Steve: I think there's a few remaining improvements around operator-driven upgrades, which we'll be looking at during the Pike cycle. It certainly has been a bit of a challenge with the short development timeframe during Ocata. But the architecture has landed, and we've got composable upgrade support for all the services in Heat upstream, so I feel like we've done what we set out to do in this cycle, and there will be further improvements around operator-driven upgrade workflow and also containerization during the Pike timeframe.

Rich: This week we're at the PTG. Have you already had your team meetings, or are they still to come?

Steve: The TripleO team meetings start tomorrow, which is Wednesday. The previous two days have mostly been cross-project discussion. Some of which related to collaborations which may impact TripleO features, some of which was very interesting. But the TripleO schedule starts tomorrow – Wednesday and Thursday. We’ve got a fairly packed agenda, which is going to focus around – primarily the next steps for upgrades, containerization, and ways that we can potentially collaborate more closely with some of the other deployment projects within the OpenStack community.

Rich: Is Kolla something that TripleO uses to deploy, or is that completely unrelated?

Steve: The two projects are collaborating. Kolla provides a number of components, one of which is container definitions for the OpenStack services themselves, and the containerized TripleO architecture actually consumes those. There are some other pieces which are different between the two projects. We use Heat to orchestrate container deployment, and there’s an emphasis on Ansible and Kubernetes on the Kolla side, where we’re having discussions around future collaboration.

There's a session planned on our agenda for a meeting between the Kolla Kubernetes folks and TripleO folks to figure out if there's long-term collaboration there. But at the moment there's good collaboration around the container definitions and we just orchestrate deploying those containers.

We’ll see what happens in the next couple of days of sessions, and getting on with the work we have planned for Pike.

Rich: Thank you very much.
Source: RDO