Announcing Storage Service Encryption with Customer Managed Keys limited preview

Today, we are excited to announce the Preview of Azure Storage Service Encryption with Customer Managed Keys integrated with Azure Key Vault for Azure Blob Storage. Azure customers already benefit from Storage Service Encryption for Azure Blob and File Storage using Microsoft managed keys.

Storage Service Encryption with Customer Managed Keys uses Azure Key Vault, which provides highly available, scalable, and secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated Hardware Security Modules (HSMs). Key Vault streamlines the key management process and enables customers to maintain full control of the keys used to encrypt their data, and to manage and audit key usage.

This is one of the most requested features from enterprise customers looking to protect sensitive data as part of their regulatory or compliance needs, such as HIPAA and BAA.


Customers can generate or import their RSA key into Azure Key Vault and enable Storage Service Encryption. Azure Storage handles encryption and decryption in a fully transparent fashion using envelope encryption: data is encrypted with an AES-based key, which is in turn protected by the Customer Managed Key stored in Azure Key Vault.

Customers can rotate their key in Azure Key Vault per their compliance policies. When they rotate the key, Azure Storage detects the new key version and re-encrypts the Account Encryption Key for that storage account. This does not re-encrypt all data, and no further action is required from the user.
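The two paragraphs above can be sketched in a few lines. This is a toy illustration of envelope encryption and key rotation, not Azure's actual implementation: `toy_cipher` is an insecure XOR stand-in for the real AES data encryption and RSA key wrapping, and all key material here is generated locally rather than held in Key Vault.

```python
import os
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher, a stand-in for AES/RSA. Illustrative only, NOT secure."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The customer-managed key (KEK) lives in Key Vault.
kek_v1 = os.urandom(32)

# Azure Storage generates an AES-based Account Encryption Key (AEK)
# and encrypts blob data with it.
aek = os.urandom(32)
ciphertext = toy_cipher(aek, b"blob contents")

# The AEK is wrapped (encrypted) under the customer-managed key.
wrapped_aek = toy_cipher(kek_v1, aek)

# Key rotation: the new key version re-wraps the AEK only.
# The blob ciphertext is never touched, which is why rotation is cheap.
kek_v2 = os.urandom(32)
wrapped_aek = toy_cipher(kek_v2, toy_cipher(kek_v1, wrapped_aek))

# Reads still work: unwrap with the current key version, then decrypt.
aek_unwrapped = toy_cipher(kek_v2, wrapped_aek)
assert toy_cipher(aek_unwrapped, ciphertext) == b"blob contents"
```

Revoking access to the KEK breaks the unwrap step, which is why revocation (described below) blocks access to every blob at once without touching the data.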

Customers can also revoke access to the storage account by revoking access to their key in Azure Key Vault. There are several ways to revoke access to your keys; please refer to Azure Key Vault PowerShell and Azure Key Vault CLI for details. Revoking access effectively blocks access to all blobs in the storage account, as the Account Encryption Key becomes inaccessible to Azure Storage.

Customers can enable this feature on all available redundancy types of Azure Blob storage, including premium storage, and can toggle from Microsoft managed keys to customer managed keys. There is no additional charge for enabling this feature.

You can enable this feature on any Azure Resource Manager storage account using the Azure Portal, Azure PowerShell, Azure CLI, or the Microsoft Azure Storage Resource Provider API.

To participate in the preview, please send an email to ssediscussions@microsoft.com. Find out more about Storage Service Encryption with Customer Managed Keys.
Source: Azure

Azure #CosmosDB: Case study around RU/min with the Universal Store Team

Karthik Tunga Gopinath, from the Universal Store Team at Microsoft, leveraged RU/min to optimize workload provisioning with Azure Cosmos DB. He shares his experience in this blog post.

The Universal Store Team (UST) at Microsoft is the platform powering all "storefronts" of Microsoft assets. These storefronts can be web, rich-client, or brick-and-mortar stores. This includes the commerce, fraud, risk, identity, catalog, and entitlements/license systems.

Our team, in the UST, streams user engagement of paid applications, such as games, in near real time. This data is used by Microsoft fraud systems to determine whether a customer is eligible for a refund upon request. The streaming data is ingested into Azure Cosmos DB. Today, this usage data, along with other insights provided by the Customer Knowledge platform, forms the key factors in the refund decision. Since this infrastructure is used for UST self-serve refunds, it is imperative that we drive down the operating cost as much as possible.

We chose Azure Cosmos DB primarily for three reasons: guaranteed SLAs, elastic scale, and global distribution. It is crucial for the data store to keep up with the incoming stream of events with guaranteed SLAs. The storage needed to be replicated in order to serve refund queries faster across multiple regions, with support for disaster recovery.

The problem

Azure Cosmos DB provides predictable, low-latency performance backed by the most comprehensive SLAs in the industry. Such performance requires capacity provisioning at per-second granularity. However, relying only on per-second provisioning made cost a concern for us, as we had to provision for peaks.

For the refund scenario, we need to store 2 TB of usage data. This, coupled with the fact that we cannot fully control the write skew, causes a few problems. The incoming data has temporal bursts for various reasons, such as new game releases, discounts on game purchases, and weekday vs. weekend traffic. This resulted in writes being frequently throttled. To avoid throttling during bursts, we needed to allocate more RUs. This over-allocation proved expensive, since we didn't use all the allocated RUs the majority of the time.

Another reason we allocate more RUs is to decrease our Mean Time to Repair (MTTR). This applies primarily when trying to catch up with the current stream of metric events after a delay or failure: we need enough capacity to catch up as soon as possible. Currently, the platform sees between 2,500 and 4,000 writes/sec. In theory, we only need 24K RU/s, since each write costs 6 RUs given our document size. However, because of the skew, it is hard to predict which partition a write will land on. Also, the partition key we chose is designed for extremely fast read access, to give a good customer experience during self-service refunds.
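The 24K figure above follows directly from the peak write rate and the per-write cost:

```python
# Back-of-envelope check of the steady-state throughput figure quoted above.
peak_writes_per_sec = 4_000   # upper end of the 2,500-4,000 writes/sec range
ru_per_write = 6              # cost per write at this document size

steady_state_rus = peak_writes_per_sec * ru_per_write
print(steady_state_rus)  # 24000 RU/s, matching the ~24K quoted above
```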

Request Units per Minute (RU/m) Stress Test Experiment

To test RU/m, we designed a catch-up (failure-recovery) test in our dev environment. Previously, we had allocated 300K RU/s for the collection. We enabled RU/m and reduced capacity from 300K RU/s to 100K RU/s, which gave us an extra 1M RU/m. To push our writes to the limit and test the catch-up scenario, we simulated an upstream failure: we stopped streaming for about 20 hours, then started streaming the backlog and observed whether the application could catch up with the lower RU/s plus the additional RU/m. The dev environment carried the same load we see in production.
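The figures above imply the ratio between the two budgets; this quick check (derived only from the numbers stated in this post) shows that the preview granted 10 RU/m of burst capacity for every provisioned RU/s:

```python
# Figures stated above for the stress-test configuration.
provisioned_ru_per_sec = 100_000
extra_ru_per_min = 1_000_000

# Implied RU/m granted per provisioned RU/s.
ratio = extra_ru_per_min / provisioned_ru_per_sec
print(ratio)  # 10.0
```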

Data Lag Graph

RU/min usage

The catch-up test is our worst-case workload. The first graph shows the lag in streaming data: over time, the application catches up and reduces the lag to near zero (real time). The second graph shows that RU/m is utilized as soon as the catch-up test starts, meaning the load exceeded our allocated RU/s, which is the desired outcome for the test. RU/m usage stays between 35K and 40K until we catch up. This is the expected behavior, since the peak load on Azure Cosmos DB occurs during this period. The slow drop in RU/m usage occurs as the catch-up nears completion. Once data is close to real time, the application no longer needs all the extra RU/m, since the provisioned RU/s is enough to meet the throughput requirements of normal operations most of the time.

RU/m usage during normal conditions

As mentioned above, during normal operations the streaming pipeline requires only 24K RU/s. However, because a specific partition may see a lot of activity (a "hot spot"), each partition can have unexpected capacity needs. Looking at the graph below, you can see sporadic RU/m consumption: RU/m is still used under non-peak load. Such hot spots can occur for the reasons mentioned above. We also noticed that the application did not experience any throttling during the entire test period. Previously, we allocated 300K RU/s to handle these bursts; with RU/m, we only need to provision 100K RU/s. RU/m helped us during normal operation as well, not just during peak load.

RU/m usage during normal operations

Results

The experiments above proved that we could leverage RU/m and lower our RU/s allocation while still handling peak load. By leveraging RU/m, we reduced our operating cost by more than 66%.
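The headline savings number follows directly from the provisioning change described above (this ignores any separate RU/m billing component, which the pricing page linked below covers):

```python
before_ru_per_sec = 300_000   # provisioned to absorb bursts, pre-RU/m
after_ru_per_sec = 100_000    # provisioned after enabling RU/m

savings = 1 - after_ru_per_sec / before_ru_per_sec
print(f"{savings:.0%}")  # 67%, i.e. "more than 66%"
```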

Next steps

The team is actively working on ways to reduce write skew, and is working with the Azure Cosmos DB team to get the most out of RU/m.

Resources

Our vision is to be the most trusted database service for all modern applications. We want to enable developers to truly transform the world we live in through the apps they build, which is even more important than the individual features we put into Azure Cosmos DB. We spend countless hours talking to customers every day and adapting Azure Cosmos DB to make the experience truly stellar and fluid. We hope that the RU/m capability will enable you to do more and will make your development and maintenance even easier!

So, what are the next steps you should take?

First, understand the core concepts of Azure Cosmos DB.
Learn more about RU/m by reading the documentation:

How RU/m works
Enabling and disabling RU/m
Good use cases
Optimize your provisioning
Specify access to RU/m for specific operations
Visit the pricing page to understand billing implications

If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up-to-date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and join our LinkedIn Group.

About Azure Cosmos DB

Azure Cosmos DB started as "Project Florence" in late 2010 to address developer pain points faced by large-scale applications inside Microsoft. Observing that the challenges of building globally distributed apps are not unique to Microsoft, in 2015 we made the first generation of this technology available to Azure developers in the form of Azure DocumentDB. Since then, we've added new features and introduced significant new capabilities; Azure Cosmos DB is the result. It represents the next big leap in globally distributed, at-scale cloud databases.
Source: Azure

Ola, Uber's Largest Rival In India, Just Called The Company "Despicable" And "Low On Morality"

Indian police escort Uber taxi driver and accused rapist Shiv Kumar Yadav following his court appearance in New Delhi on December 8, 2014. Delhi's government on December 8 banned Uber from operating in the Indian capital after a passenger accused one of its drivers of rape.

Chandan Khanna / AFP / Getty Images

Uber's largest rival in India, Ola, has issued a statement saying that the ride-hailing giant is "low on morality."

Ola issued the statement after a Recode report revealed that Eric Alexander, Uber's president of business in the Asia Pacific region who has since been fired, obtained the medical records of a passenger who was raped by an Uber driver in New Delhi, India, in December 2014. Sources told Recode that Uber did not believe the victim's story and thought that it was an attempt by Ola to hurt Uber's brand in India.

Ola's statement reads:

“It is a shame that the privacy and morals of a woman have to be questioned in an attempt to trivialise a horrific crime. It is despicable that anyone can even conceive an attempt to malign competition using this as an opportunity. If this report were to be even remotely true, this is an all time low on morality and a reflection of the very character of an organisation.”

Uber declined to comment to BuzzFeed News about Ola's statement. Amit Jain, the company's India President, said: “Uber responded by working closely with law enforcement and the prosecution to support their investigation and see the perpetrator brought to justice.”

Alexander's departure came on the heels of Uber's Tuesday announcement that it had fired 20 people after internally investigating over 200 claims of discrimination and harassment. Alexander's handling of the New Delhi rape case was one of the claims covered by Uber's internal investigation.


Source: BuzzFeed

Do you have a cognitive business?

Cloud and cognitive technologies are driving a revolution throughout enterprises. They’re “disruptive enablers” that produce deeper customer engagement, scale expertise and transform how organizations uncover new opportunities.
Ideally, the combination of cloud and cognitive joins digital business with a new level of digital intelligence, resulting in an organization that creates knowledge from data and expands virtually everyone’s expertise. This enables an enterprise to continually learn and adapt, as well as better anticipate the needs of the marketplace.
To know whether you have a cognitive business, consider these questions:

Do you rely on traditional computing — rules-based, logic-driven, dependent on organized information — or cognitive systems that learn systematically, aren’t dependent on rules, and handle disparate and varied data?
Do your systems understand unstructured information such as imagery, natural language and sounds in books, emails, tweets, journals, blogs, images, sound and videos?
Can your systems learn continually, honing your organization’s expertise to immediately take more informed actions?
Can you and your customers interact with your systems, dissolving barriers and fueling unique, essential user experiences?

As I work with clients, I see that those who adopt cloud and cognitive solutions gain an entirely different set of advantages over their competitors. Cognitive organizations set their sights on going deeper and wider into their own, as well as third-party, data. They shorten the cycles between what they can learn from data and what game-changing actions they take. The result is a cognitive business that can think collectively and respond in whole new ways to the marketplace.
Look at how financial technology company Alpha Modus is using cloud and cognitive to solve a common problem of the financial marketplace: analyzing data before the prime moments for investment opportunities have passed. Alpha Modus created a solution with the IBM Bluemix platform to leverage Watson technology. With these capabilities, it can now unlock a variety of unstructured data to evaluate market sentiment and predict market direction.
Cloud and cognitive are also a natural fit with retailers. To integrate its diverse array of order management systems, 1-800-Flowers.com migrated to an IBM Commerce on Cloud platform running in an IBM Bluemix cloud environment. This provides seamless service delivery across the retailer’s 10 brands, increases efficiency, reduces costs and enhances scalability with the IBM Cloud solution.
Then, the retailer created “GWYN,” a cognitive “concierge” that helps tailor responses to each customer by offering personalized feedback and service. It’s based on the IBM Expert Personal Shopper software, which uses the IBM Watson cognitive technology system. When customers inform GWYN that they are looking for a gift for their mothers, for example, GWYN will follow up with a series of questions such as type of occasion and sentiment to ensure that the right product suggestion is given.
Here are a few other cloud and cognitive actions organizations can take:

Build cognitive apps that see, hear, talk and learn and can exceed a user's highest expectations for experiences and connection to your organization. This can be easy and quick to do. Developers can use IBM Watson application programming interfaces (APIs) now through the open source developer cloud.

Transform how work gets done and what expertise is shared by giving business processes or workflows cognitive capabilities. Think across the organization by picking one or two places to start.
Collaborate with an expert provider such as IBM. Together, create a comprehensive cognitive system to uncover opportunities to reinvent your industry and give your organization a new view of what’s possible.

Cloud and cognitive systems will put your organization in the here and now. More than anything, they’ll answer the most important question all enterprises grapple with: “How do we create new value?”
With digital systems and intelligence, it will not be one answer, but many, as exciting opportunities await.
Find out how to build your business into a cloud and cognitive enterprise.
The post Do you have a cognitive business? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Solutions guide: Preparing Container Engine environments for production

By Vic Iglesias, Cloud Solutions Architect

Many Google Cloud Platform (GCP) users are now migrating production workloads to Container Engine, our managed Kubernetes environment. You can spin up a Container Engine cluster for development, then quickly start porting your applications. First and foremost, a production application must be resilient and fault tolerant and deployed using Kubernetes best practices. You also need to prepare the Kubernetes environment for production by hardening it. As part of the migration to production, you may need to lock down who or what has access to your clusters and applications, both from an administrative as well as network perspective.

We recently created a guide that will help you with the push toward production on Container Engine. The guide walks through various patterns and features that allow you to lock down your Container Engine workloads. The first half focuses on how to control administrative access to the cluster using IAM and Kubernetes RBAC. The second half dives into network access patterns, teaching you to properly configure your environment and Kubernetes services. With the IAM and networking models locked down appropriately, you can rest assured that you're ready to start directing your users to your new applications.
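As a small sketch of the kind of access lockdown the guide covers, the commands below grant a teammate read-only access at both layers. The project, cluster, zone, namespace, and email values are placeholders; consult the guide for the policies appropriate to your environment.

```shell
# GCP IAM layer: give the user read-only visibility into Container Engine
# resources in the project (placeholder project and user).
gcloud projects add-iam-policy-binding my-project \
    --member="user:jane@example.com" \
    --role="roles/container.viewer"

# Kubernetes RBAC layer: fetch cluster credentials, then bind the built-in
# "view" ClusterRole to the same user within a single namespace.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl create rolebinding jane-view \
    --clusterrole=view \
    --user=jane@example.com \
    --namespace=staging
```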

Read the full solution guide for using Container Engine for production workloads, or learn more about Container Engine from the documentation.
Source: Google Cloud Platform