Dear DocumentDB customers, welcome to Azure Cosmos DB!

Dear DocumentDB customers,

We are very excited that you are now a part of the Azure Cosmos DB family!

Azure Cosmos DB, announced at the Microsoft Build 2017 conference, is the first globally distributed, multi-model database service for building planet scale apps. You can easily build globally-distributed applications without the hassle of complex, multiple-datacenter configurations. Designed as a globally distributed database system, Cosmos DB automatically replicates all of your data to any number of regions of your choice, for fast, responsive access. Cosmos DB supports transparent multi-homing and guarantees 99.99% high availability.

Only Cosmos DB allows you to use key-value, graph, and document data in one service, at global scale and without worrying about schema or index management. Cosmos DB allows you to use your favorite API – including SQL (DocumentDB), JavaScript, Gremlin, MongoDB, and Azure Table storage – to query your data. Azure Cosmos DB is the first and only schema-agnostic database: regardless of the data model, it automatically indexes all your data to eliminate any friction, so you can perform blazing fast queries and focus on your app.

One of the APIs Azure Cosmos DB supports is the SQL (DocumentDB) API with the document data model. You are already very familiar with it and are using it to run your current DocumentDB applications. These APIs are not changing – the NuGet package, the namespaces, and all dependencies remain the same. You don't need to change anything to continue running your apps built with the SQL (DocumentDB) API. You are simply now part of a service that puts more capabilities at your disposal.

Why the move to Azure Cosmos DB?

The Cosmos DB project started in 2010 as “Project Florence” to address developer pain points faced by large Internet-scale applications inside Microsoft. Observing that these problems were not unique to Microsoft’s applications, we made Cosmos DB generally available to external developers in 2015 in the form of Azure DocumentDB – the service you’ve been using. The exponential growth of the service has validated our design choices and the unique tradeoffs we have made.

Azure Cosmos DB is the next big leap in globally distributed, at-scale cloud databases. With this release, DocumentDB customers, along with their data, automatically become Azure Cosmos DB customers. The transition is seamless, and you now have access to the breakthrough system and all the capabilities offered by Azure Cosmos DB – in the core database engine as well as in global distribution, elastic scalability, and industry-leading, comprehensive SLAs.

Specifically, Cosmos DB is all about providing intelligent choices to developers and enabling you to build planet scale apps.

Cosmos DB exposes multiple well-defined consistency models: Databases today offer only two extreme choices for consistency – “strong” consistency and “eventual” consistency. In contrast, Cosmos DB is the first production globally distributed database service to harvest a set of useful consistency models from decades of research and operationalize them. Cosmos DB offers five well-defined consistency models that provide clear tradeoffs between latency and availability, backed by SLAs.

Cosmos DB allows developers to model the real world in its true form: No data is born relational. Cosmos DB lets developers store and query their data in its original form. It exposes graph, document, key-value, and column-family data models, and will enable others. The multi-model and multi-API capabilities remove friction, allowing you to build with any data model and API.

Cosmos DB meets developers where they are: Cosmos DB offers a multitude of APIs to access and query data, including SQL and various popular OSS APIs.

What are the extra capabilities you get?

The current developer-facing manifestation of this work is the new support for the Gremlin and Table storage APIs. And this is just the beginning… We will be adding other popular APIs and newer data models over time, with further advances in performance and storage at global scale.

It is important to point out that DocumentDB’s SQL dialect has always been just one of the many APIs that the underlying Cosmos DB was capable of supporting. As a developer using a fully managed service like Azure Cosmos DB, the only interface to the service is the APIs exposed by the service. To that end, nothing really changes for you as an existing DocumentDB customer. Azure Cosmos DB offers exactly the same SQL API that DocumentDB did. However, now (and in the future) you can get access to other capabilities, which were previously not accessible.

Another manifestation of our continued work is the extended foundation for global and elastic scalability of throughput and storage. One of the first results is RU/m (request units per minute), and we will be announcing more capabilities in these areas. These new capabilities help reduce costs for our customers across various workloads. Please read our recent blog on RU/m here. We have also made several foundational enhancements to the global distribution subsystem. One of the many developer-facing manifestations of this work is the consistent prefix consistency model (bringing the total to five well-defined consistency models). There are many more interesting capabilities that we will release as they mature.

If you still have more questions

Here you can read the answers to the questions most frequently asked by other DocumentDB customers about the Cosmos DB experience.

Next Steps

Thank you for being our customers! Cosmos DB wouldn’t be the same without you. We brought together your feedback, decades of distributed-systems research, and superb engineering and craftsmanship to create this service. Azure Cosmos DB is the database of the future – what we believe is the next big thing in the world of massively scalable databases! It makes your data available close to where your users are, worldwide. Our mission is to be the most trusted database service in the world and to enable you to build amazingly powerful, cosmos-scale apps, more easily.

Next, we recommend you:

Read the Azure Cosmos DB announcement blog and the technical overview blog
Understand the core concepts of Azure Cosmos DB
Learn more about the service and its capabilities by reading the documentation
Visit the pricing page to understand the billing

Try out the new capabilities in Azure Cosmos DB and let us know what you think! If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up to date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and joining our LinkedIn Group. We are really excited to see what you will build with Cosmos DB.

— Your friends at Azure Cosmos DB @AzureCosmosDB
Source: Azure

Azure #CosmosDB: Introducing Per Minute (RU/m) provisioning to lower your cost, increase your performance

Last week at our annual Build conference, we announced Azure Cosmos DB – our globally distributed, multi-model database service – and a set of new capabilities to enable developers to build apps that are out of this world. As part of those new capabilities, customers can now provision request unit (RU) throughput at per-minute granularity – we call it RU/m. This new option, currently in preview with a 50% discount, complements the existing request units per second (RU/s) provisioning model. With RU/s, you get predictable performance at the granularity of a second, but it also means that you must provision for spikes and bursty workloads to avoid throttling. Now, with RU/m, you can consume more of what you provision and save on costs. No need to provision for peak anymore!

By combining provisioning per second with provisioning per minute, you can now:

Address workloads with large spikes
Fit workload patterns that need minute granularity (common in IoT)
Have flexibility in a dev/test environment: the first thing our developers want to do is code, not think about how many request units they need
Substantially lower your per-second provisioning needs and save up to 60% in costs, since you no longer need to provision for your peak workloads

With Azure Cosmos DB, our philosophy is to continuously innovate to deliver more value to our customers at a lower cost. This new option combines both. Here is what some of our early-adopter customers say about this new and exciting capability:

“RU/m is a real game changer for us; we see more than doubled “performance” in our load tests that simulate typical user behavior. And more importantly, we are not blocked during temporary spikes of user activity.” – Sergii Kram, Lead Software Engineer, Quest

“The RU/m feature is exactly what our project needed. Previously we had to provision our service to four times our normal max load so that we didn’t throttle requests during spikes in traffic. With the new RU/m feature, we were able to drastically reduce our DocDB cost and completely eliminate throttling during those spikes.” – Tyler Hennessy, Senior Software Engineer, Xbox

“We will definitely use this feature to avoid overprovisioning and save money. Our traffic pattern is very “spiky” (multiple parallel data-collection threads dump data hourly), and enabling RU/m provisioning provided the same service quality with a much lower overall throughput. An iterative “adjust-and-monitor” tuning approach allowed us to scale the setup to a usable production configuration in a few days.” – Andreas Schiffler, Senior Software Engineer, Windows Servicing & Delivery – Data Analytics (WSD DA)

How does RU/m work?

RU/m is aligned with RU/s. Most important, RU/m can be enabled with a click in the portal or with a single line of code using the SDKs. The amount of RU/m you get is linear in how many RU/s you provision. RU/m is billed hourly, in addition to reserved RU/s. You can think of RU/m as a flexible budget for consuming RUs within a minute. Pricing is fixed, so you always get low cost and financial predictability, without taking on the risk of variable pricing.

RU/m can be enabled at the container level, either through the SDKs (Node.js, Java, or .NET) or through the portal (including for MongoDB API workloads). For every 100 RU/s provisioned, you also get 1,000 RU/m provisioned (a 10x ratio). This means that if you get 1,000 RU/s with 10,000 RU/m for a full month, you will spend $80/month ($60 for the 1,000 RU/s plus $20 for the 10,000 RU/m, at preview pricing). At a given second, a request unit is consumed from your RU/m provisioning only if you have exceeded your per-second provisioning. Within each 60-second period (UTC), the per-minute provisioning is refilled.
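The pricing arithmetic above can be checked with a few lines of Python. This is just a sketch based on the preview prices quoted in this post ($60 per 1,000 RU/s and $20 per 10,000 RU/m, per month); it also verifies the savings figures from the case studies later in the post.

```python
def monthly_cost(rus: int, rum: int) -> float:
    """Monthly cost at preview pricing: $60 per 1,000 RU/s, $20 per 10,000 RU/m."""
    return rus / 1_000 * 60 + rum / 10_000 * 20

# 1,000 RU/s with its 10x RU/m allotment (10,000 RU/m): $60 + $20 = $80/month
base = monthly_cost(1_000, 10_000)

# Example 1 from later in the post: peak provisioning vs. tuned RU/s + RU/m
peak = monthly_cost(400_000, 0)
tuned = monthly_cost(250_000, 2_500_000)
savings_pct = round((1 - tuned / peak) * 100)   # ~17% cost saving
```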
RU/m can be enabled only on containers with no more than 5,000 RU/s provisioned per partition. You can decide which types of operations can access the RU/m budget. For example, you can use the RU/m budget only for critical operations and disable RU/m for ad-hoc operations such as queries (find more in the documentation).

A concrete example

Below is a concrete example, in which a customer provisions 10K RU/s with 100K RU/m, saving 73% in cost against provisioning for peak (at 50K RU/s). During a 90-second period on a collection that has 10,000 RU/s and 100,000 RU/m provisioned:

Second 1: The RU/m budget is set at 100,000.
Second 3: During that second, 11,010 RUs were consumed – 1,010 RUs above the RU/s provisioning. Therefore, 1,010 RUs are deducted from the RU/m budget, leaving 98,990 RUs available for the next 57 seconds.
Second 29: During that second, a large spike happened (>4x the per-second provisioning) and 46,920 RUs were consumed. 36,920 RUs are deducted from the RU/m budget, which dropped from 92,323 RUs (28th second) to 55,403 RUs (29th second).
Second 61: The RU/m budget is refilled to 100,000 RUs.

Enabling/Disabling RU/m

You can enable RU/m at the container level through the SDK or the portal. In the portal, you only need to click Scale, select the container you want, and enable RU/m. To learn how to provision RU/m through the SDK, please refer to the documentation. Currently RU/m is available for the following SDKs:

.NET 1.14.1
Java 1.11.0
Node.js 1.12.0
Python 2.2.0

Support for other SDKs will be added soon.

Scenarios and Impact with Early Adoption Customers

During our beta preview, we identified some interesting and illustrative scenarios to test how big a performance improvement and how much savings our customers were able to achieve with RU/m at scale and worldwide.
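The budget mechanics in the 90-second example above can be sketched as a small simulation. This is an illustration, not the service's actual accounting code; the 16,667 RU burst at second 10 is an assumed filler so the budget matches the quoted 92,323 RUs at second 28.

```python
def simulate_rum(rus: int, rum: int, consumption: dict, seconds: int = 90) -> dict:
    """Track the RU/m budget second by second.

    Consumption above the per-second provisioning (rus) is deducted from
    the per-minute budget (rum), which refills at each minute boundary.
    """
    budget = rum
    history = {}
    for s in range(1, seconds + 1):
        if s % 60 == 1:                       # minute boundary: refill the budget
            budget = rum
        overage = max(0, consumption.get(s, 0) - rus)
        budget -= overage                     # only the overage draws on RU/m
        history[s] = budget
    return history

# Traffic matching the worked example; the burst at second 10 is an
# assumption to reach the quoted budget of 92,323 RUs by second 28.
traffic = {3: 11_010, 10: 16_667, 29: 46_920}
history = simulate_rum(10_000, 100_000, traffic)
# history[3] → 98,990; history[29] → 55,403; history[61] → 100,000 (refilled)
```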
By referring to those scenarios and our multi-step approach to gradually optimizing your throughput, we hope you can replicate the same improvements. The documentation shows how the portal metrics can be used to monitor throttling and RU consumption.

Example 1: Leverage RU/m to reduce throttling

In an e-commerce scenario, a retailer may expect spikes when a merchant registers a new batch of items in their inventory. A customer had a container with 400,000 RU/s provisioned, and 1.68% of requests were throttled due to insufficient provisioning for spikes. As soon as RU/m provisioning was enabled, this customer saw an 88% drop in throttled requests (down to 0.2%). As a second step, the customer lowered their per-second provisioning from 400,000 RU/s to 300,000 RU/s, with a throttling rate of 0.25%. As a third step, the customer lowered throughput provisioning to 200,000 RU/s (and 2M RU/m), with a throttling rate of 1.12%. Finally, their ideal provisioning level was found at 250K RU/s with 2.5 million RU/m.

Outcome: 17% cost saving on provisioning; 80% of throttling eliminated.

Example 2: Reduce throttling and lower provisioning costs with a spiky workload

In this case, a customer was storing telemetry data for devices with very spiky needs due to sporadic queries. This customer had a partitioned container with 100,000 RU/s provisioned. Due to those spiky needs, and despite the high provisioning, this customer experienced some throttling (0.0109% of requests throttled). Right after enabling RU/m, the ratio of throttled requests dropped to 0.000567%, a 95% elimination of throttling. As a second step, they lowered the provisioning to 80K RU/s + 800K RU/m and were still able to hold a similar ratio of 0.000677% throttled requests. As a third step, they decreased the provisioning to 50K RU/s + 500K RU/m. Throttling increased to 0.0121%, so the customer increased the per-second provisioning back to 60K RU/s + 600K RU/m. Throttling dropped back to 0.00199%.

Outcome: 20% cost saving on provisioning; 80% of throttling eliminated.

Example 3: Lower provisioning cost and eliminate small throttling

A customer from the gaming industry stored data with mostly predictable access and just a few small spikes. They had provisioned 8,000 RU/s for one single partition and experienced a little throttling (0.000053% of requests throttled). RU/m was a perfect capability to eliminate any throttling and give the customer peace of mind. Working together, we also quickly realized that their workload had the potential to be further optimized. First, to enable RU/m, we had to lower their single-partition provisioning to 5,000 RU/s (RU/m works only on partitions with a maximum of 5,000 RU/s). Despite the 3,000 RU/s drop in provisioning, we were able to eliminate all throttling. Since the consumption of RU/m was minimal, this was a signal that we could lower the provisioning to 4,000 RU/s while keeping RU/m. They didn’t experience any throttling and were able to use more than 18% of what they provisioned. As seen in the graph below, we ended up provisioning only 2,000 RU/s with 20,000 RU/m while eliminating all throttling. Their average cost of consumed RUs was lower than any existing cloud service with throughput provisioning or consumption: less than $0.10 per million RUs consumed, 75% cheaper than object-store read transactions.
Outcome: 53% cost saving on provisioning; 100% of throttling eliminated (initially at a low level).

Example Use-Cases Summary

Example 1: initial throughput 400,000 RU/s; final throughput 250,000 RU/s + 2,500,000 RU/m; savings: 17% cost saving, 80% of throttling eliminated.
Example 2: initial throughput 100,000 RU/s; final throughput 60,000 RU/s + 600,000 RU/m; savings: 20% cost saving, 80% of throttling eliminated.
Example 3: initial throughput 8,000 RU/s; final throughput 2,000 RU/s + 20,000 RU/m; savings: 53% cost saving, 100% of throttling eliminated.

Resources

Our vision is to be the most trusted database service for all modern applications. We want to enable developers to truly transform the world we live in through the apps they build, which is even more important than the individual features we are putting into Azure Cosmos DB. We spend countless hours talking to customers every day and adapting Azure Cosmos DB to make the experience truly stellar and fluid. We hope that the RU/m capability will enable you to do more and will make your development and maintenance even easier! So, what are the next steps you should take?

First, understand the core concepts of Azure Cosmos DB
Learn more about RU/m by reading the documentation: how RU/m works, enabling and disabling RU/m, good use cases, optimizing your provisioning, and specifying access to RU/m for specific operations
Visit the pricing page to understand billing implications

If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up to date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and joining our LinkedIn Group.

About Azure Cosmos DB

Azure Cosmos DB started as “Project Florence” in late 2010 to address developer pain points faced by large-scale applications inside Microsoft.
Observing that the challenges of building globally distributed apps are not unique to Microsoft, in 2015 we made the first generation of this technology available to Azure developers in the form of Azure DocumentDB. Since then, we have added new features and introduced significant new capabilities. Azure Cosmos DB is the result: the next big leap in globally distributed, at-scale cloud databases.
Source: Azure

Announcing IBM Voice Gateway to make your call center cognitive

Cognitive solutions are transforming business. Company leaders are watching for new cognitive capabilities to come to market and learning how they can use them to improve business. One of the latest cognitive solutions is IBM Voice Gateway. I’m excited to introduce it to you here.
Before I go into the detail of what Voice Gateway is, I’d like to frame it within the context of other changes going on in the customer support space. Customer support—especially online support—has gone through a lot of change in recent years. Channels such as Twitter and Facebook enable businesses to reach out to unhappy customers who are posting their frustration. Cognitive chatbots are available to support customers 24 hours a day and 365 days a year.
What we haven’t yet seen is businesses taking the cognitive bots they have available online and making them available over the phone. An omnichannel bot that provides support across multiple channels means the experience is the same for customers no matter how they contact a business. It also means your investment in improving that bot pays off across your whole support workflow, instead of only improving online support.
That’s why we built IBM Voice Gateway – a solution that connects to Watson’s conversation service—the backbone of cognitive bots running on Watson—and brings that conversation service to your telephone network and call centers.
What does IBM Voice Gateway do?
First is the cognitive self-service agent. IBM Voice Gateway connects to a telephone network and routes calls through the Watson Speech to Text, Conversation, and Text to Speech services, so Watson can understand what has been said and respond appropriately.
Simply put, Watson acts as a call center agent. This works out-of-the-box with Voice Gateway. IBM Voice Gateway helps you build integrations with databases as well as bring in additional Watson services like sentiment analysis. You can use Watson to access customer records, provide a customized solution to answer queries and provide quotes. With Watson, you can handle and resolve more customer queries on their first call.
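The call flow described above can be sketched as a three-stage pipeline. This is a minimal illustration, not the Voice Gateway implementation: the stage functions below are hypothetical stand-ins for the Watson Speech to Text, Conversation, and Text to Speech services, and the canned transcription and reply are invented for the example.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for the Watson Speech to Text service (stub transcription).
    return "what is my account balance"

def conversation(utterance: str) -> str:
    # Stand-in for the Watson Conversation service; a real deployment could
    # enrich this step with database lookups or sentiment analysis.
    replies = {"what is my account balance": "Your balance is $42.00."}
    return replies.get(utterance, "Let me transfer you to an agent.")

def text_to_speech(reply: str) -> bytes:
    # Stand-in for the Watson Text to Speech service (stub synthesis).
    return reply.encode("utf-8")

def handle_call(audio: bytes) -> bytes:
    """Route one caller utterance through the three Watson stages."""
    return text_to_speech(conversation(speech_to_text(audio)))
```

An unrecognized utterance falls through to the transfer branch, which mirrors how Voice Gateway hands unresolved queries off to a human agent.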

If Watson can’t resolve a query, IBM Voice Gateway can transfer the call to a customer service agent in your call center. When a caller is speaking to an agent, IBM Voice Gateway can use Watson to detect what is being discussed and send helpful information to the agent in real time.
For example, the system might send a link to an internal document showing the agent how to resolve a specific issue or answer a caller’s questions on a given topic. This can make it easier for agents to focus on the customer’s experience, not searching for answers. And this also means agents don’t have to be experts on every topic, reducing training time and helping agents to handle a wider range of queries.
Where does IBM Voice Gateway run?
Watson’s services are only available through IBM Bluemix, so that part of the solution will be running in the cloud. IBM Voice Gateway itself is delivered as a collection of Docker containers, so it can run either on-premises or in any cloud. You can run the whole solution in IBM Bluemix if you want to move to a full cloud deployment.  Or you could run it on-premises if you want to keep applications accessing customer data within your firewall. But know that you can secure your environment wherever you run IBM Voice Gateway. Your choice of where to run the solution can easily align with your overall cloud strategy.
Get started with IBM Voice Gateway
Want to learn more? Watch the quick demo to see IBM Voice Gateway in action, or test out the caller’s experience with Watson by calling (855) 969-4241. If you’re ready to try Voice Gateway, download the developer and trial-use Docker images from Docker Hub, linked from our documentation here.
The post Announcing IBM Voice Gateway to make your call center cognitive appeared first on Cloud computing news.
Source: Thoughts on Cloud