How Microsoft builds massively scalable services using Azure DocumentDB

This week at Microsoft Data Amp we covered how you can harness the incredible power of data using Microsoft’s latest innovations in its Data Platform. One of the key pieces in the Data Platform is Azure DocumentDB, Microsoft’s globally distributed NoSQL database service. Released in 2015, DocumentDB has been used virtually ubiquitously as a backend for first-party Microsoft services for many years.

DocumentDB is Microsoft’s multi-tenant, globally distributed database system designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed low latency at P99, 99.99% high availability, predictable throughput, and multiple well-defined consistency models, all backed by comprehensive SLAs. By virtue of its schema-agnostic, write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As a cloud service, DocumentDB is carefully engineered with multi-tenancy and global distribution from the ground up.

In this blog, we cover case studies of first-party applications of DocumentDB by the Windows, Universal Store, and Azure IoT Hub teams, and how these teams harnessed the scalability, low latency, and flexibility of DocumentDB to innovate and bring business value to their services.

Microsoft DnA: How Microsoft uses error reporting and diagnostics to improve Windows

The Windows Data and Analytics (DnA) team at Microsoft implements the crash reporting technology for Windows. One of their components runs as a Windows Service on every Windows device. Whenever an application stops responding on a user’s desktop, Windows collects post-error debug information and asks the user whether they’re interested in finding a solution to the error. If the user accepts, the dump is sent over the Internet to the DnA service. When a dump reaches the service, it is analyzed and a solution is sent back to the user when one is available.

Windows error reporting diagnostic information

 

Windows’ need for fast key-value lookups

In DnA’s terminology, crash reports are organized into “buckets”. Each bucket is used to classify an issue by key attributes such as Application Name, Application Version, Module Name, Module Version, and OS Exception code. Each bucket contains crash reports that are caused by the same bug. With the large ecosystem of hardware and software vendors, and 15 years of collected data about error reports, the DnA service has over 10 billion unique buckets in its database cluster.

One of the DnA team’s requirements was rather simple at face value: given the hash of a bucket, return the ID of the corresponding bucket/issue if one was available. However, the scale posed interesting technical challenges: a lot of data (10 billion buckets, growing at 6 million a day), a high volume of requests with global reach (requests from any device running Windows), and low latency requirements (to ensure a good user experience).

To store “Bucket Dimensions”, the DnA team provisioned a single DocumentDB collection with 400,000 request units per second of provisioned throughput. Since all access was by the primary key, they configured the partition key to be the same as the “id”, with a digest of the various attributes as the value. As DocumentDB provided <10 ms read latency and <15 ms write latency at P99, DnA could perform fast lookups against buckets and look up issues even as their data and request volumes continued to grow over time.
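
To make the access pattern concrete, here is a minimal sketch (not the DnA team’s actual code) of a point lookup keyed on the bucket hash, written against the current Python SDK for the service (the Cosmos DB successor of the original DocumentDB SDKs); the account, database, collection, and property names are placeholders:

from azure.cosmos import CosmosClient
from azure.cosmos.exceptions import CosmosResourceNotFoundError

# Placeholder endpoint/key and hypothetical database/collection names.
client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("dna").get_container_client("bucket-dimensions")

def lookup_bucket(bucket_hash):
    # Because the partition key equals the "id", this is a single-partition
    # point read, the cheapest and lowest-latency operation the service offers.
    try:
        doc = container.read_item(item=bucket_hash, partition_key=bucket_hash)
        return doc.get("bucketId")   # hypothetical property holding the bucket/issue ID
    except CosmosResourceNotFoundError:
        return None                  # no known bucket for this hash yet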

Windows cab catalog metadata and query

Aside from fast real-time lookups, the DnA team also wanted to use the data to drive engineering decisions to help improve Microsoft and other vendors’ products by fixing the most impactful issues. For example, the team has observed that addressing the top 1 percent of reliability issues could address 50 percent of customers’ issues. This analysis required storing the crash dump binary files, “cabs”, extracting useful metadata, then running analysis and reports against this data. This presented a number of interesting challenges on its own.

The team deals with approximately 600 different types of reliability-incident data. Managing the schema and indexes imposed a significant engineering and operational overhead on the team.
The cab metadata also represented a large volume of data: there were about 5 billion cabs, and 30 million new cabs were added every day.

The DnA team was able to migrate their Bucket Dimension and Cab Catalog stores to DocumentDB from their earlier solution, an on-premises cluster of SQL Servers. Since shifting the database’s heavy lifting to DocumentDB, DnA has benefited from the speed, scale, and flexibility offered by the service. More importantly, they can focus less on maintaining their database and more on improving the user experience on Windows.
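
Because every ingested property is indexed automatically, ad-hoc reporting queries over the roughly 600 incident types need no upfront schema or index management. As a rough sketch (the property names are hypothetical, not the team’s actual schema), a filtered query over the cab catalog could look like this with the current Python SDK:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("dna").get_container_client("cab-catalog")

# Filter on arbitrary JSON properties; no index had to be declared for them.
cabs = container.query_items(
    query="SELECT c.id, c.bucketId FROM c WHERE c.moduleName = @m AND c.osVersion = @os",
    parameters=[{"name": "@m", "value": "example.dll"},
                {"name": "@os", "value": "10.0.14393"}],
    enable_cross_partition_query=True,
)
for cab in cabs:
    print(cab["id"], cab["bucketId"])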

You can read the case study at Microsoft’s DnA team achieves planet-scale big-data collection with Azure DocumentDB.

Microsoft Global Homing Service: How Xbox Live and Universal Store build highly available location services

Microsoft’s Universal Store team implements the e-commerce platform that powers Microsoft’s storefronts across Windows Store, Xbox, and a large set of Microsoft services. One of the key internal components in the Universal Store backend is the Global Homing Service (GHS), a highly reliable service that provides its downstream consumers with the ability to quickly retrieve location metadata associated with an arbitrarily large number of IDs.

Global Homing Service (GHS) using Azure DocumentDB across 4 regions

GHS is on a hot path for the majority of its consumer services and receives hundreds of thousands of requests per second. Therefore, the latency and throughput requirements for the service are strict. The service had to maintain 99.99% availability and predictable latencies under 300ms end-to-end at the 99.9th percentile to satisfy requirements of its partner teams. To reduce latencies, the service is geo-distributed so that it is as close as possible to calling partner services.

The initial design of GHS was implemented using a combination of Azure Table Storage and various levels of caches. This solution worked well for the initial loads, but given the critical nature of GHS and the increased adoption of the service by key partners, it became apparent that the existing SLA was not going to meet their partners’ P99.9 requirement of <300 ms with 99.99% reliability over 1 minute. Partners with a critical dependency on the GHS call path found that even if the overall reliability was high, there were periods of time where the number of timeouts would exceed their tolerances and result in a noticeable degradation of the partner’s own SLA. These periods of increased timeouts were given the name “micro-outages”, and key partners started tracking them daily.

After investigating many possible alternatives, such as LevelDB, Kafka, MongoDB, and Cassandra, the Universal Store team chose to replace GHS’s Azure Table backend and the original cache in front of it with an Azure DocumentDB backend. GHS deployed a single DocumentDB collection with 600,000 request units per second, distributed across four geographic regions where their partner teams had the biggest footprint. As a result of the switch to DocumentDB, GHS customers have seen P50 latencies under 30 ms and a huge reduction in the number and scale of micro-outages. GHS’s availability has remained at or above 99.99% since the migration. In addition to the increase in service availability, overall latencies have significantly improved for most GHS call patterns.
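
As a client-side sketch of how a geo-distributed caller keeps reads close to its own region (the regions, names, and keys below are placeholders rather than GHS’s actual deployment), the current Python SDK exposes region preference through the preferred_locations option:

from azure.cosmos import CosmosClient

# The SDK routes reads to the first available region in this list, so a caller
# deployed in West US 2 would normally be served from the local replica.
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<key>",
    preferred_locations=["West US 2", "East US 2", "North Europe", "Southeast Asia"],
)
container = client.get_database_client("ghs").get_container_client("locations")

location = container.read_item(item="<entity-id>", partition_key="<entity-id>")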

Number of GHS micro-outages before and after DocumentDB migration

Microsoft Azure IoT Hub: How to handle the firehose from billions of IoT devices

Azure IoT Hub is a fully managed service that allows organizations to connect, monitor, and manage up to billions of IoT devices. IoT Hub provides reliable communication between devices, a queryable store for device metadata and synchronized state information, and extensive monitoring for device connectivity and device identity management events. Since IoT Hub is the ingestion point for the massive volume of writes coming from IoT devices across all of Azure, the team needed a robust and scalable database in their backend.

IoT Hub exposes device-related information, “device twins”, as part of its APIs, which devices and back ends can use to synchronize device conditions and configuration. A device twin is a JSON document that includes tags assigned to the device in the backend, a property bag of “reported properties” describing device configuration or conditions, and a property bag of “desired properties” that can be used to notify the device to perform a configuration change. The IoT Hub team chose Azure DocumentDB over HBase, Cassandra, and MongoDB because DocumentDB provided the functionality the team needed: guaranteed low latency, elastic scaling of storage and throughput, high availability via global distribution, and rich query capabilities via automatic indexing.
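
To make the twin structure concrete, here is a simplified illustration (field names follow the publicly documented device twin format; the values are made up, and metadata/versioning fields are omitted):

# A device twin, expressed here as a Python dict mirroring the JSON document.
device_twin = {
    "deviceId": "thermostat-42",
    "tags": {"building": "43", "floor": "2"},              # assigned by the back end
    "properties": {
        "desired":  {"telemetryIntervalSec": 30},          # back end asks the device to reconfigure
        "reported": {"telemetryIntervalSec": 60,           # last state the device reported
                     "firmwareVersion": "1.2.0"},
    },
}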

IoT Hub stores the device twin data as JSON documents and performs updates based on the latest state reported by devices in near real-time. The architecture uses a partitioned collection with a compound key, constructed by concatenating the Azure account (tenant) ID and the device ID, to elastically scale and handle massive volumes of writes. IoT Hub also uses Service Fabric to scale out device handling across multiple servers, with each server communicating with one or more DocumentDB partitions. This topology is replicated in each Azure region where IoT Hub is available.
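
A rough sketch of that write path follows (this is illustrative, not IoT Hub’s internal code; the delimiter, names, and SDK calls are assumptions):

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("iothub").get_container_client("device-twins")

def upsert_twin(tenant_id, device_id, twin):
    # Compound partition key: tenant ID concatenated with device ID, so the
    # write load from many tenants and devices spreads across partitions.
    twin["id"] = device_id
    twin["partitionKey"] = tenant_id + "_" + device_id   # "_" delimiter is an assumption
    container.upsert_item(twin)                          # insert, or replace with the latest reported state

upsert_twin("<tenant-id>", "thermostat-42",
            {"properties": {"reported": {"firmwareVersion": "1.2.0"}}})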

Next steps

In this blog, we looked at a few first-party use cases of DocumentDB and how these Microsoft teams were able to use Azure DocumentDB to improve the user experience, latency, and reliability of their services.

Learn more about global distribution with DocumentDB.
Create a new DocumentDB account from the Azure Portal or download the DocumentDB Emulator.
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow.

Quelle: Azure

Ubuntu 12.04 (Precise Pangolin) nearing end-of-life

Ubuntu 12.04 "Precise Pangolin" has been with us from the beginning, since we first embarked on the journey to support Linux virtual machines in Microsoft Azure. However, as its five-year support cycle nears its end in April 2017, we must now move on and say "goodbye" to Precise. Ubuntu posted the official EOL notice back in March. The following is an excerpt from one of the announcements:

This is a reminder that the Ubuntu 12.04 (Precise Pangolin) release is nearing its end of life. Ubuntu announced its 12.04 (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with the earlier LTS releases, Ubuntu committed to ongoing security and critical fixes for a period of 5 years. The support period is now nearing its completion and Ubuntu 12.04 will reach its end of life near the end of April 2017. At that time, Ubuntu Security Notices will no longer include information or updated packages, including kernel updates, for Ubuntu 12.04.

The supported upgrade path from Ubuntu 12.04 is via Ubuntu 14.04. Users are encouraged to evaluate and upgrade to our latest 16.04 LTS release via 14.04. Ubuntu 14.04 and 16.04 continue to be actively supported with security updates and select high-impact bug fixes.

For users who can't upgrade immediately, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers.

Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage.

Existing UA customers can acquire their ESM credentials by filing a support request.
Quelle: Azure

Use the Enhanced AWS Price List API to Access AWS Service- and Region-Specific Product and Pricing Information

Starting today, the AWS Price List API can return product and pricing information for specific AWS services and regions. Previously, the API returned product and pricing information for a service for all AWS regions at once. You can now create more precise requests that return product and pricing information for the AWS service and region of interest.
For example, using this API you can get the product and pricing information for Amazon S3 in the us-east-1 region: https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonS3/current/us-east-1/index.json or https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonS3/current/us-east-1/index.csv
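
As a small illustration, the following Python sketch downloads the region-specific offer file above and inspects a few of its top-level fields (the URL is taken from the example; the field handling assumes the standard offer-file layout):

import requests

URL = ("https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/"
       "AmazonS3/current/us-east-1/index.json")

offer = requests.get(URL, timeout=30).json()
# Offer files carry a version, a publication date, the product catalog, and the pricing terms.
print(offer["version"], offer["publicationDate"])
print(len(offer["products"]), "products for Amazon S3 in us-east-1")
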
Quelle: aws.amazon.com

Twitter Locks Trump Associate Roger Stone's Account, Again

Roger Stone is back in Twitter's penalty box.

Stone, a confidant to President Donald Trump and former adviser to his campaign, had his Twitter account temporarily locked again this week after tweeting “I'm watching you and know what you're up to. Better watch your ass” to Media Matters communications director Laura Allison Keiter on Wednesday afternoon. Twitter locked Stone's account in March following another rules violation.

Reached via Twitter, Keiter forwarded an email she received from Twitter Thursday morning confirming it locked Stone's account.

Laura Allison Keiter

Twitter's in the midst of a scorched-earth campaign against harassment on its platform. In recent months, the company has rolled out a number of anti-harassment tools, including keyword filters and a new disciplinary measure that temporarily throttles the reach of users it believes are targeting others for abuse. In December, Twitter CEO Jack Dorsey said fighting harassment is the company's top priority.

Roger Stone did not immediately reply to a request for comment.

Quelle: <a href="Twitter Locks Trump Associate Roger Stone&039;s Account, Again“>BuzzFeed

How to stop configuration drift during deployment

Quick hypothetical scenario: meet Dan, an application development executive at a large retailer. One morning, Dan walked into work on what he thought would be a lovely day, only to be swamped with complaints about the new enterprise messaging system not working. He had sent out a note to the employee community announcing its availability the evening before, and he had spent the entire day testing the application and clearing it for launch.
Dan was not surprised. Despite all checks, there can always be an undiscovered dependency that was not considered. In this case, it turned out to be a security policy implemented long ago to counter a threat that no longer existed. It took Dan the better part of a week to find the cause and fix it.
That’s configuration drift.
Configuration drifts: Are they inevitable?
Every IT team spends considerable time ensuring that different environments in a software development lifecycle have the same configuration. Provisioning different environments for development, test, quality assurance and production takes weeks and involves coordination across teams.
Often enough, there are manual changes in the development, test or quality assurance environment that are not conveyed to the production environment, and those changes lead to errors. Each team may make slight adjustments to its environment that cause configuration drift, creating complexity and communication nightmares across teams. Much time, effort and expense is wasted trying to identify and fix subtle differences. But it doesn’t have to be this way.
A workable solution to this problem is to standardize a pattern for the full-stack hardware and software infrastructure—the complete application and middleware environment—and re-deploy that pattern repeatedly. This would mean each environment picks up the same pattern and the same configuration, thus avoiding configuration drift for any one or multiple environments. This also helps avoid unforeseen configuration issues for all development, test, quality assurance and production environments. And you can avoid the finger-pointing and wasted time trying to identify and fix issues at or after deployment.
Three forces to tame configuration drift
There are three forces to keep the configuration drift beast under control.

Automation: Standardization and automation of provisioning and deploying app and middleware environment through patterns makes it easier to maintain consistent configurations across all environments.
Synchronization: Patterns ensure that all application environments are in sync and error free throughout their lifecycle, thus streamlining dev, test, QA and production rollouts.
Adoption of best practices: Configuration patterns set the standard application and middleware environment and reduce unpredictability.

A solution that can do all the above would be ideal.
IBM PureApplication: The forces unite to provide a single solution
IBM PureApplication is a set of offerings that converges compute, storage, networking, and a middleware and software stack—including PureApplication Software—into a preintegrated, preconfigured and pretested system. You can be up and running within hours, saving tremendous time, effort and resources versus purchasing, installing, configuring and coordinating patches across individual hardware and software components.
Common to every PureApplication offering is a set of best practices that are captured in patterns. Important configuration information is stored in a pattern, such as middleware deployments and connections to data sources. Pre-built patterns for popular enterprise workloads are available, and they can be easily customized for your unique app and middleware environment. This pattern can be executed with push-button ease to deploy the exact same environment to development, test, quality assurance and production environments. You can eliminate differences across environments and avoid future configuration drift.
Automated provisioning and configuration helps accelerate application delivery so you can get your app into production much faster. It also eliminates errors and reduces the time, effort and cost needed to identify and fix the errors caused by manual processes and configurations.
Check out the great return on investment of PureApplication in the latest Total Economic Impact™ of IBM PureApplication study from Forrester Research.
The post How to stop configuration drift during deployment appeared first on news.
Quelle: Thoughts on Cloud

Announcing Cloud Partner Portal: Public preview for single-virtual machine offers

Today, we are excited to announce the public preview of the Cloud Partner Portal for publishing single virtual machine offers. The Cloud Partner Portal enables our publisher partners to create, define, publish and get insights for their single virtual machine offerings on the Azure Marketplace.

With this announcement, new and existing publisher partners who wish to publish virtual machines to Azure will be able to use the new Cloud Partner Portal to perform any of the above actions. This new portal will soon support all other offer types and will eventually replace the current publishing portal.

The new improved Cloud Partner Portal

Today’s release has several new features that make publishing onto the Azure Marketplace a lot faster, simpler and easier.

Features of Today’s Release:

1. Org ID login support – This has been an ask from our publisher partners for a long time, and we are adding support for Org ID to the Cloud Partner Portal. Additionally, the new publishing portal supports RBAC, so offers remain secure and publishers don’t have to make all contributors co-admins; each contributor gets only the level of access they need.

2. Get it right the first time – Everyone hates do-overs. There is nothing worse than spending time defining an offer and thinking you are done, only to hit an issue with the offer downstream. To prevent this, your offer is validated as you type. This reduces unwanted surprises after publishing the offer.

Additionally, we anticipate an overall reduction in the time from starting to define an offer to actually publishing it.

We have spent a considerable amount of time writing validations for every field within the offer to ensure that when publishers click publish, their offer will publish successfully. Even as we ship, we are adding new validations with every release, which makes the process a lot more predictable.

3. Simplified publishing workflow – The new publishing portal has a simplified publishing workflow providing one path to offer publishing. There are no separate production and staging slots exposed to publishers. Publishers just need to ‘Publish’ their offers, and we take care of the rest.

Before an offer goes live, publishers are given a chance to review it and ensure that everything is working as expected.

4. Be more informed – The new Cloud Partner Portal lets publishers know, even before they publish their offer, which steps the offer will go through along with estimated execution times. Along with this guidance around the workflows, we have notifications built into the portal that keep publishers informed on their offer’s progress toward getting listed on Azure.

5. Insights in the portal – The Cloud Partner Portal provides a direct link into the insights of an offer. These insights provide a quick glance and drilldowns into an offer’s health and performance on the Azure Marketplace. The insights portal also has an onboarding video and rich documentation that helps publishers familiarize themselves with its features.

6. Feedback is just a click away – The send a smile/frown button will be ubiquitous in the new portal. In a matter of clicks publishers can send feedback directly to the engineering team.

I could keep writing about the host of new features and capabilities of the new publishing portal; however, the best way to discover them is to take the portal for a spin.

If you are an existing Azure publisher with a virtual machine offer, your account for the new publishing portal has already been created. Please visit the Cloud Partner Portal and log in using your current credentials. Please refer to our documentation if you need any help getting started.

Existing publishers can also let us know if they would like their offers migrated by following the steps available to registered publishers. We also have a brand new seller guide that can help you navigate the Azure Marketplace better and get the most value out of it.

If you are a new publisher looking to publish onto the Azure platform, please fill out the nomination form here and we will be in touch with you.

As you try out the new cloud partner portal, please keep the steady stream of feedback coming in. We hope you enjoy using the portal as much as we enjoyed creating it for you.
Quelle: Azure

Cloudera now supports Azure Data Lake Store

With the release of Cloudera Enterprise Data Hub 5.11, you can now run Spark, Hive, and MapReduce workloads in a Cloudera cluster on Azure Data Lake Store (ADLS). Running on ADLS has the following benefits:

Grow or shrink a cluster independent of the size of the data.
Data persists independently as you spin up or tear down clusters. Other clusters and compute engines, such as Azure Data Lake Analytics or Azure SQL Data Warehouse, can execute workloads on the same data.
Enable role-based access controls integrated with Azure Active Directory and authorize users and groups with fine-grained POSIX-based ACLs.
Cloud HDFS with performance optimized for analytics workloads, supporting reading and writing hundreds of terabytes of data concurrently.
No limits on account size or individual file size.
Data is encrypted at rest by default using service-managed or customer-managed keys in Azure Key Vault, and is encrypted with SSL while in transit.
High data durability at lower cost as data replication is managed by Data Lake Store and exposed from HDFS compatible interface rather than having to replicate data both in HDFS and at the cloud storage infrastructure level.

To get started, you can use the Cloudera Enterprise Data Hub template or the Cloudera Director template on Azure Marketplace to create a Cloudera cluster. Once the cluster is up, use one or both of the following approaches to enable ADLS.

Add a Data Lake Store for cluster wide access

Step 1: ADLS uses Azure Active Directory for identity management and authentication. To access ADLS from a Cloudera cluster, first create a service principal in Azure AD. You will need the Application ID, Authentication Key, and Tenant ID of the service principal.

Step 2: To access ADLS, assign the permissions for the service principal created in the previous step. To do this, go to the Azure portal, navigate to the Data Lake Store, and select Data Explorer. Then navigate to the target path, select Access and add the service principal with appropriate access rights. Refer to this document for details on access control in ADLS.

Step 3: Go to Cloudera Manager -> HDFS -> Configuration. Add the following configurations to core-site.xml:

Use the service principal property values obtained from Step 1 to set these parameters:

<property>
  <name>dfs.adls.oauth2.client.id</name>
  <value>Application ID</value>
</property>
<property>
  <name>dfs.adls.oauth2.credential</name>
  <value>Authentication Key</value>
</property>
<property>
  <name>dfs.adls.oauth2.refresh.url</name>
  <value>https://login.microsoftonline.com/<Tenant ID>/oauth2/token</value>
</property>
<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>

Step 4: Verify you can access ADLS by running a Hadoop command, for example:

hdfs dfs -ls adl://<your adls account>.azuredatalakestore.net/<path to file>/

Specify a Data Lake Store in the Hadoop command line

Instead of, or in addition to, configuring a Data Lake Store for cluster-wide access, you can also provide ADLS access information on the command line of a MapReduce or Spark job. With this method, if you use an Azure AD refresh token instead of a service principal, and encrypt the credentials in a .jceks file under a user’s home directory, you gain the following benefits:

Each user can use their own credentials instead of having a cluster wide credential
Nobody can see another user’s credential because it’s encrypted in .JCEKS in the user’s home directory
No need to store credentials in clear text in a configuration file
No need to wait for someone who has rights to create service principals in Azure AD

The following steps illustrate an example of how you can set this up by using the refresh token obtained by signing in to the Azure cross platform client tool.

Step 1: Sign in to the Azure CLI by running the command "azure login", then get the refreshToken and _clientId from .azure/accessTokens.json under the user’s home directory.

Step 2: Run the following commands to set up credentials to access ADLS:

export HADOOP_CREDSTORE_PASSWORD=<your encryption password>
hadoop credential create dfs.adls.oauth2.client.id -value <_clientId from Step 1> -provider jceks://hdfs/user/<username>/cred.jceks
hadoop credential create dfs.adls.oauth2.refresh.token -value '<refreshToken from Step 1>' -provider jceks://hdfs/user/<username>/cred.jceks

Step 3: Verify you can access ADLS by running a Hadoop command, for example:

hdfs dfs -Ddfs.adls.oauth2.access.token.provider.type=RefreshToken -Dhadoop.security.credential.provider.path=jceks://hdfs/user/<username>/cred.jceks -ls adl://<your adls account>.azuredatalakestore.net/<path to file>
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teragen -Dmapred.child.env="HADOOP_CREDSTORE_PASSWORD=$HADOOP_CREDSTORE_PASSWORD" -Dyarn.app.mapreduce.am.env="HADOOP_CREDSTORE_PASSWORD=$HADOOP_CREDSTORE_PASSWORD" -Ddfs.adls.oauth2.access.token.provider.type=RefreshToken -Dhadoop.security.credential.provider.path=jceks://hdfs/user/<username>/cred.jceks 1000 adl://<your adls account>.azuredatalakestore.net/<path to file>

Limitations of ADLS support in EDH 5.11

Only Spark, Hive, and MapReduce workloads are supported on ADLS. Support for ADLS in Impala, HBase, and other services will come in future releases.
ADLS is supported as secondary storage. To access ADLS, use fully qualified URLs in the form of adl://<your adls account>.azuredatalakestore.net/<path to file>.
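
As an example of the fully qualified form, a minimal PySpark job reading from ADLS might look like this (it assumes core-site.xml is configured as in Step 3 above; the account, path, and column name are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-example").getOrCreate()

# Fully qualified adl:// URL, required because ADLS is secondary storage.
path = "adl://<your adls account>.azuredatalakestore.net/<path to file>"

df = spark.read.json(path)                 # any Spark data source works the same way
df.groupBy("eventType").count().show()     # "eventType" is a hypothetical column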

Additional resources

Cloudera documentation on ADLS support

Quelle: Azure

Why It's So Hard For Riders To Sue Uber


AP/Julio Cortez

Uber users who sign up for the app and agree to its terms of service have been given sufficient notice that they have given up their right to sue the company, Uber said in a Massachusetts appeals court Monday.

A group of Massachusetts riders who sued the company for charging them an $8.75 airport ride fee were “expressly and conspicuously informed” of Uber's terms and conditions once they clicked the “done” button to enter their payment information, Uber said. Those terms include giving up the right to bring a class action lawsuit against the company and an agreement to settle disputes out of court, it said.

“Reasonably communicated notice of terms, coupled with an opportunity to review those terms via hyperlink, satisfies the Massachusetts inquiry notice standard,” the company argued. Whether the rider “bothers to access and read those terms is irrelevant.”

A screenshot of Uber’s notification to consumers of its terms of service and privacy policy.


Uber / Via documentcloud.org

Last year a district court upheld Uber's arbitration clause in a decision, but the Boston users are appealing that ruling, claiming Uber attempts to “obscure” its terms, which “abrogate basic legal rights,” including your constitutional right to a jury trial and any obligation to provide a safe vehicle, or a safe driver.

“When Uber wants to notify consumers about surge pricing, it makes sure they know about the price hike and requires that they specifically agree to it,” Matthew Wessler, a principal at Gupta Wessler representing the Boston riders, told BuzzFeed News. “But when it comes to requiring the waiver of important constitutional rights, companies are much less likely to provide that kind of clear notice.”

Uber declined to comment to BuzzFeed News.

In a number of cases brought by consumers challenging contractual language that prohibits customers from suing, courts have upheld the contract. But a California court on Monday ruled that Uber obscured its terms of service and privacy policy on its sign up screen, meaning the rider suing the company was not reasonably notified that he was giving up his right to a class action lawsuit.

Uber is far from alone in using so-called arbitration clauses, which prohibit consumers from taking the company to court. They've become widespread across corporate America.

Companies like Starbucks, In-N-Out and Netflix all have agreements that prohibit class action lawsuits and push people into private arbitration. A number of credit cards, loan products and telecommunications companies like AT&T and Verizon also require consumers to agree to an arbitration provision.

In arbitration, instead of going to court, consumers pay fees that can be as much as $1,450 to resolve their complaints in front of an arbitrator in private proceedings, according to the Consumer Financial Protection Bureau, which released a study of arbitration in 2015. The process is similar to court proceedings, except it is less formal and any award is ultimately decided by the arbitrator, who is not a judge.

Consumers resolved 341 cases through arbitration that were filed in 2010 and 2011, but only 32 of those cases ended with an award. The total amount awarded to consumers was $172,433, according to the study.

Following a series of state court decisions, arbitration has grown so common that nearly all consumer contracts contain some type of clause that prohibits class actions and forces people into a private dispute resolution process.

AP/Eric Risberg

Colin Marks, a professor at St. Mary's University School of Law, told BuzzFeed News that courts have typically sided with companies when these clauses are challenged.

“It’s always on the consumer,” he said. “I don't know if the average Uber user knows what arbitration means, but it assumes you know what it means.”

Marks said in this case, Uber met the minimum standard to tell users that they have terms and conditions at some point during the sign up process. Federal and state law does not require companies to disclose key points in the terms, including whether you're giving up your right to sue.

“It's not whether or not it's a legal requirement as an ethical requirement,” Jennifer Bennett, an attorney with the consumer advocacy group Public Justice, told BuzzFeed News. “What companies are doing is sneaking in terms that they know consumers aren’t going to be able to find or know or understand which forces consumers to give up their right to go to court.”

Daniel Simons / Via youtube.com

Bennett said psychological research shows that people tend to overlook details while they are occupied with completing a particular task.

In one experiment, half of the people who were asked to count how many times a basketball was being passed around were so focused on the task at hand that they didn't notice a person walking by in a gorilla costume.

In Uber's case, this could mean if a user is presented with the terms of service as a hyperlink while they are typing in payment information, it is unlikely they will click through to read the terms, she said.

“They keep trying to put the burden on the consumer to go around hunting for terms,” Bennett said. “It seems wrong for businesses to do.”

Quelle: <a href="Why It&039;s So Hard For Riders To Sue Uber“>BuzzFeed

Microsoft Azure Platform for India GST Suvidha Providers (GSPs)

Goods and Services Tax (GST) is essentially one new indirect tax system for the whole nation, which will make India one unified common market, right from the manufacturer to the consumer. It is a broad-based, comprehensive, single indirect tax which will be levied concurrently on goods and services across India.

The Central and State indirect taxes that may be subsumed by GST include Value Added Tax (VAT), Excise Duty, Service Tax, Central Sales Tax, Additional Customs Duty and Special Additional Duty of Customs. GST will be levied at every stage of the production and distribution chains by giving the benefit of Input Tax Credit (ITC) of the tax remitted at previous stages; thereby, treating the entire country as one market.

Due to the federal structure of India, there will be two components of GST – Central GST (CGST) and State GST (SGST). Both Centre and States will simultaneously levy GST across the value chain. For interstate transactions, an Integrated GST (IGST) will be applicable which will be settled back between the center and the states.

Goods and Services Tax Network (GSTN), a non-Government, private limited company, has been formed to provide the IT infrastructure to State/Central governments and taxpayers. It has set up a central solution platform to technically enable the GST system, including registration of taxpayers, upload/download of invoices, filing of returns, State/Central Government reporting, IGST settlement, etc. This platform, the GST Platform, has been set up from day 1 as an open API (GST API) based platform allowing various authorized parties to exchange information at scale.

The core IT strategy for GST was identified early on and is now accessible.

GSTN has further identified GSPs who can wrap the GST Platform and offer various value-added services to their customers (i.e., taxpayers) and to further downstream sub-GSPs or registered application service providers (ASPs).

GSPs have limited time to understand the new set of rules, which are continuously evolving, develop their solutions, and host and run them on a secure and scalable platform. The Government has also adopted an ecosystem approach which allows GSPs to further expose their APIs to downstream ASPs (Application Service Providers) who will cater to taxpayer needs.

GSPs and ASPs need to focus on solution capabilities and select a platform which provides most of the plumbing necessary to build an open yet secure, scalable, maintainable and compliant solution, and to achieve all of this at a manageable cost.

With three large data centers in India offering a host of IaaS, PaaS and SaaS services supporting a range of open source and commercial platforms, Azure offers the best platform to host GSP and GST-related ASP solutions.

The attached document, authored by Mandar Samant (Area Architect, Microsoft Services), provides a good overview of how Azure services can get GSPs and ASPs started very quickly and help provide a cutting-edge GST solution to the taxpayer community in India.
Quelle: Azure