DataStax brings Apache Cassandra as a service to Google Cloud

At Google Cloud, we are committed to bringing open source technologies to our customers. For the last decade, Apache Cassandra has been an open source database of choice behind many of the largest internet applications. While Cassandra’s scale-out architecture can support applications with massive amounts of data, it can be complex to deploy, manage, and scale. This is why many enterprises, moving more of their workloads to the cloud, have been asking for an easier way to run Cassandra workloads on Google Cloud.

We are excited to announce the general availability of DataStax’s Cassandra as a Service, called Astra, on the Google Cloud Marketplace. This means you can now get a single, unified bill for all Google Cloud services as well as DataStax Astra. In addition, DataStax Astra is integrated into our console to provide a seamless user experience. Developers can now create Cassandra clusters on Google Cloud in minutes and build applications with Cassandra as a database as a service, without the operational overhead of managing Cassandra. DataStax Astra on Google Cloud is available in seven regions across the U.S., Europe, and Asia, with a free tier in either South Carolina (us-east1) or Belgium (europe-west1).

Astra deploys and manages your enterprise’s Cassandra databases directly on top of Google Cloud’s infrastructure, so your data sits in the same Google Cloud global infrastructure as your apps. This means users and enterprises can deliver a high-performance experience at global scale. Astra users will find a consistent developer experience with open source Cassandra tools and APIs, as well as REST and GraphQL endpoints and a browser-based CQL shell.
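As a rough illustration of the kind of developer experience a REST endpoint over Cassandra offers, the sketch below composes a URL for reading a single row by primary key. The host, path layout, and keyspace/table names are hypothetical, not DataStax’s actual API; consult the Astra documentation for the real endpoint shapes.

```python
# Sketch: composing a REST read against a hypothetical Cassandra-as-a-service
# endpoint. The base URL and path layout are illustrative only.
from urllib.parse import quote

def row_url(base, keyspace, table, primary_key):
    """Build a URL for fetching one row by its primary key components."""
    parts = [quote(str(p), safe="") for p in primary_key]
    return f"{base}/v1/keyspaces/{keyspace}/tables/{table}/rows/" + "/".join(parts)

url = row_url("https://example-db.astra.example.com/api/rest",
              "killrvideo", "users", ["a1b2-c3d4"])
print(url)
```

The same row would be reachable from the browser-based CQL shell with an ordinary `SELECT` statement; the REST and GraphQL endpoints simply expose that data over HTTP.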
Check out the DataStax documentation for additional details.

DataStax Astra Cassandra as a Service topology deployed on Google Cloud, using the OSS Kubernetes Operator to deploy Apache Cassandra across three Google Cloud zones.

How enterprises are using Cassandra

Companies like Cisco and METRO see strong opportunities in scaling infrastructure and building efficiency with DataStax Astra on Google Cloud.

Customers rely on Cisco technologies for networking, multi-cloud, and security. “Our team has been working for the past couple of years to ensure our infrastructure is set up to scale to meet unforeseen challenges,” said Maniyarasan Selvaraj, lead Cisco engineer. “Cassandra is at the center of this with its reliability, resilience, and scalability. We are looking forward to the new release of DataStax Astra that could offer us an easier, better experience for Cassandra deployment and application development in the cloud.”

METRO, a B2B wholesaler and retail specialist, relies on DataStax and Google Cloud for its digital transformation. “At METRO, we decided to become a digital player and to change the way we build and run software. We moved from on-premises, waterfall, and commercial systems to cloud, agile, and open source, working with DataStax and Cassandra,” says Arnd Hannemann, technical architect at METRONOM, the tech unit at METRO. “To take us to the next stage, teams will need more flexibility in what cloud infrastructure they use and how they use it. Since most of our application teams are already using Cassandra as a main data store, the new DataStax Astra on Google Cloud promises to deliver this flexibility with very low effort and maintenance.”

Ready to start building Cassandra apps in the cloud? You can find Astra in the Google Cloud Marketplace. Astra has a 10 GB free tier, and billing is integrated within the Google Cloud experience. You can also take it for a test drive.
Quelle: Google Cloud Platform

Microsoft and Redis Labs collaborate to give developers new Azure Cache for Redis capabilities

Now more than ever, enterprises must deliver their applications with speed and robustness that matches the high expectations of their customers. The ability to provide sub-millisecond response times, reliably support the demands of enterprises from small to large, and scale seamlessly to handle millions of requests per second are critical to modern application development. At the same time, technology solutions need to be more open and flexible to handle cloud native architectures while maintaining mission-critical uptime and reliability.

Microsoft and Redis Labs partnering to bring new features to Azure Cache for Redis

With this in mind, I am announcing a new partnership between Microsoft and Redis Labs to bring their industry-leading technology and expertise to Azure Cache for Redis. This partnership represents the first native integration between Redis Labs technology and a major cloud platform, underscoring our commitment to customer choice and flexibility.

For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications. We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.

Redis integration designed for enterprise customers

With this partnership, we worked together with Redis Labs to build a new, expanded offering, powered by Redis Labs technologies and aimed specifically at the needs of enterprise customers. With the integration of Redis Labs technology into Azure Cache for Redis, customers can now access Redis Labs-developed modules including RediSearch, RedisBloom, and RedisTimeSeries, which provide new data structures that will further enable use cases like data analytics and machine learning.

These modules will be paired with existing features of Azure Cache for Redis including data persistence, clustering, geo-replication, and network isolation. Customers will also now have the option to deploy on SSD flash storage, offering up to ten times larger cache sizes and similar performance levels at a lower price per GB. Reliability will be an even greater priority with an enhanced SLA and the capability to utilize active geo-replication to configure a globally available cache that can fail over to another region without any data loss. In the future, this capability will make it possible to connect on-premises caches with caches in Azure for availability and failover.
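The modules add probabilistic and analytical data structures on top of core Redis. As a rough client-side sketch of the data structure RedisBloom provides server-side, here is a minimal Bloom filter: k hash positions per item over an m-bit array, with no false negatives and a small chance of false positives. Real RedisBloom commands (such as `BF.ADD`/`BF.EXISTS`) run inside Redis and are far more optimized; this is purely illustrative.

```python
# Minimal Bloom filter sketch: k hash positions over an m-bit array.
# Illustrates the structure RedisBloom implements server-side.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k deterministic positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False means definitely absent; True means probably present.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:99"))   # almost certainly False
```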

Native Azure management creates streamlined experience for developers

While the new service is managed natively in Azure as two new Enterprise tiers, customers will subscribe to the Redis Labs software through Azure Marketplace as an integral part of the configuration process. This unique integration provides all the benefits of using a service embedded in Azure, including management through the Azure portal and command-lines, security and standards compliance, and a unified billing experience.

Microsoft will handle first-line support and collaborate with Redis Labs on specific support issues to utilize their deep knowledge of the technology. As a native offering, developer teams will now find it significantly easier to integrate Redis Enterprise functionality into their Azure development efforts by taking advantage of the security, configuration, and support tools they are already familiar with. Plus, developers can enable these new features with no downtime or change in billing management.

We are thrilled to be expanding our relationship with Redis Labs and continuing our collaboration with the Redis open source community. Together, we will unlock the potential of Redis and enable enterprises to build applications that are more responsive and scalable than ever before with tools that developers love.

Learn more

For more information on the Redis Labs partnership, you can read the blog post from Redis Labs CEO, Ofer Bengal. Additional product information is also available on the Redis Labs blog. The initial announcement was made at RedisConf 2020 Takeaway. Preview for this new offering will be available later this year. Sign up to be notified when the preview is available.
Quelle: Azure

Announcing the general availability of Azure Spot Virtual Machines

Today we’re announcing the general availability of Azure Spot Virtual Machines (VMs). Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot pricing is available on single VMs in addition to VM scale sets (VMSS). This enables you to deploy a broader variety of workloads on Azure while enjoying access to discounted pricing compared to pay-as-you-go rates. Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Spot VMs can be evicted at any time if Azure needs capacity.

The workloads that are ideally suited to run on Spot VMs include, but are not necessarily limited to, the following:

Batch jobs.
Workloads that can sustain or recover from interruptions.
Development and test.
Stateless applications that can use Spot VMs to scale out, opportunistically saving cost.
Short lived jobs which can easily be run again if the VM is evicted.

Spot VMs have replaced the preview of Azure low-priority VMs on scale sets. Eligible low-priority VMs have been automatically transitioned over to Spot VMs.

Spot Virtual Machine pricing

Unlike low-priority VMs, prices for Spot VMs vary based on available capacity for a given size or SKU in an Azure region. Spot pricing can give you insights into the availability of and demand for a given Azure VM series and specific size in a region. Prices change slowly to provide stabilization, allowing you to better manage budgets. In the Azure portal, you have access to the current Azure VM Spot prices to easily determine which region or VM size best fits your needs. Spot prices are capped at pay-as-you-go rates.

Deployment of Spot Virtual Machines

Spot VMs are easy to deploy and manage. Deploying a Spot VM is similar to configuring and deploying a regular VM. For example, in the Azure portal, you can simply select Azure Spot instance to deploy a Spot VM. You can also define your maximum price for your Spot VMs. You get a couple of options:

You can choose to deploy your Spot VM without capping the price. Azure will charge you the current Spot price at any given time, giving you peace of mind that your VMs will not be evicted for price reasons.
Alternatively, you can provide a specific maximum price to stay within your budget. Azure will not charge you above the maximum price you set and will evict the VM if the Spot price rises above your defined maximum.

There are a few other options available to lower costs:

If your workload does not require a specific VM series and size, then you can find other VMs in the same region that may be cheaper.
If your workload is not dependent on a specific region and you do not have data residency requirements, then you can find a different Azure region to reduce your cost.
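The pricing behavior described in this section can be sketched as a small decision rule: with no cap (commonly expressed as a max price of -1 in the API), you pay the current Spot price and are never evicted for price reasons; with a cap, the VM is evicted when the Spot price rises above it. This is an illustration of the documented behavior, not Azure’s actual implementation.

```python
# Sketch of the Spot VM price-eviction rule described above.
# max_price == -1.0 conventionally means "no cap": pay the current Spot
# price, and never be evicted for price (only for capacity).
def price_evicted(spot_price: float, max_price: float) -> bool:
    if max_price == -1.0:
        return False               # uncapped: no price-based eviction
    return spot_price > max_price  # capped: evicted once spot exceeds the cap

print(price_evicted(0.012, -1.0))   # False: uncapped VMs are not price-evicted
print(price_evicted(0.020, 0.015))  # True: spot rose above the cap
print(price_evicted(0.010, 0.015))  # False: spot still under the cap
```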

Quota for Spot VMs

As part of this announcement, to give you better flexibility, Azure is also rolling out a quota for Spot VMs that is separate from your pay-as-you-go VM quota. The quota for Spot VMs and Spot VMSS instances is a single quota for all VM sizes in a specific Azure region. This approach will give you easy access to a broader set of VMs.

Handling evictions

Azure will try to keep your Spot VM running and minimize evictions, but your workload should be prepared to handle them, as runtime for Azure Spot VMs and VMSS instances is not guaranteed. You can optionally get a 30-second eviction notice by subscribing to scheduled events. Your VMs can be evicted for the following reasons:

The Spot price has gone above the max price you defined for the VM. Azure Spot VMs are evicted when the Spot price for the VM you have chosen rises above the price you defined at deployment time. You can try to redeploy the VM with a higher maximum price.
Azure needs to reclaim capacity.

In both scenarios, you can try to redeploy the VM in the same region or availability zone.
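The 30-second notice mentioned above arrives through the Scheduled Events metadata endpoint as a JSON document that your workload polls from inside the VM. Below is a hedged sketch of filtering such a payload for preemption events; the payload shape follows the Scheduled Events format, but treat the sample field values as illustrative and check the Azure documentation for the authoritative schema.

```python
import json

def preempt_targets(payload: str):
    """Return the resource names targeted by any Preempt scheduled events."""
    doc = json.loads(payload)
    targets = []
    for event in doc.get("Events", []):
        if event.get("EventType") == "Preempt":
            targets.extend(event.get("Resources", []))
    return targets

# Sample payload in the Scheduled Events shape (field values are illustrative).
sample = '''{
  "DocumentIncarnation": 1,
  "Events": [{
    "EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5",
    "EventType": "Preempt",
    "ResourceType": "VirtualMachine",
    "Resources": ["spotVM1"],
    "EventStatus": "Scheduled",
    "NotBefore": "Mon, 11 May 2020 08:23:10 GMT"
  }]
}'''

print(preempt_targets(sample))  # ['spotVM1']
```

In practice you would fetch this payload from the instance metadata service on a short polling interval and begin checkpointing as soon as a Preempt event names your VM.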

Best practices

Here are some effective ways to best utilize Azure Spot VMs:

For long running operations, try to create checkpoints so that you can restart your workload from a previously known checkpoint to handle evictions and save time.
In scale-out scenarios, to save costs, you can have two VMSS, where one has regular VMs and the other has Spot VMs. You can put both in the same load balancer to opportunistically scale out.
Listen to eviction notifications in the VM to get notified when your VM is about to be evicted.
If you are willing to pay up to pay-as-you-go prices, set the eviction type to Capacity Eviction only; in the API, provide -1 as the max price. Azure never charges you more than the current Spot price.
To handle evictions, build a retry logic to redeploy VMs. If you do not require a specific VM series and size, then try to deploy a different size that matches your workload needs.
While deploying VMSS, select max spread in the portal's Management tab, or FD==1 in the API, to find capacity in a zone or region.
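The checkpoint-and-retry advice above can be sketched as a simple loop: persist progress periodically and, after an eviction, resume from the last checkpoint instead of step zero. The simulated eviction and in-memory checkpoint below are hypothetical stand-ins; a real workload would write checkpoints to durable storage and redeploy the VM between attempts.

```python
# Sketch of checkpointed work that survives a (simulated) eviction.
def run_with_checkpoints(total_steps, do_step, checkpoint, evict_at=None):
    """Run steps starting from the checkpoint; return the step reached."""
    step = checkpoint["step"]
    while step < total_steps:
        if evict_at is not None and step == evict_at:
            return step            # simulate eviction mid-run
        do_step(step)
        step += 1
        checkpoint["step"] = step  # persist progress after each step
    return step

done = []
ckpt = {"step": 0}
# First attempt is "evicted" at step 3 ...
run_with_checkpoints(5, done.append, ckpt, evict_at=3)
# ... and the retry resumes from the checkpoint instead of step 0.
run_with_checkpoints(5, done.append, ckpt)
print(done)  # [0, 1, 2, 3, 4]
```

Note that each step runs exactly once across both attempts; without the checkpoint, the retry would repeat steps 0 through 2.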

Customer success stories

We are pleased with the feedback customers and partners are providing, and we plan to extend the capabilities of this offering to meet the needs of our stakeholders.

“We constantly hear from our customers that they want flexibility in their HPC environment. Flexibility in VM types, available capacity, and even up-front commitment. Azure’s Spot offering is exciting because it provides that flexibility, which combined with Rescale provides cost efficiencies and reduced preemption risk.” Gerhard Esterhuizen, VP of Engineering at Rescale and Brian Tecklenburg, VP of HPC Marketing at Rescale

“We benchmark performance across cloud providers, and Azure has consistently been among the top performers. Azure Spot VMs now allow our customers to use the best infrastructure available in an ad-hoc fashion. Azure Spot VMs, combined with Rescale’s HPC job orchestration and automated checkpoint restarts, help mitigate preemption risks. As a result, our customers can finally use the best cloud infrastructure, whenever they want.” Mulyanto Poort, VP of HPC Engineering at Rescale

 

“InMobi runs one of our largest platforms, the InMobi Exchange, entirely on Azure. Having a cost-effective, cloud-native solution supporting high degrees of concurrency and scale was critical for our business, as the InMobi Exchange frequently finds itself catering to fluctuating traffic curves given the seasonal nature of the digital advertising industry. Leveraging the Azure Spot VM offerings, we’ve been able to rewire our application stack to be fully stateless, and it’s been a real game changer with respect to making it cost efficient. Since InMobi was one of the early adopters of the Spot VM offering, we’ve found Microsoft to be excellent partners in ensuring the product evolves to meet our required levels of scale and functionality. As of now, we’ve moved the majority of our serving and data processing compute needs to Azure Spot VMs. And by doing so, we have been able to realize nearly 50-60 percent cost efficiencies on our compute needs, and that’s been a massive help in making our business more economically efficient.” Prasanna Prasad, Senior Vice President, Engineering, InMobi

Learn more about Azure Spot Virtual Machines

Spot VM webpage.
Spot VM pricing: Windows and Linux.
Create Spot VMs in Azure portal.
Create Spot VMs in Azure CLI.
Create Spot VMs in Azure PowerShell.
Create Spot VMs in Azure Resource Manager templates.
Create Spot VMSS in Azure Resource Manager templates.

Quelle: Azure

Announcing Azure Front Door Rules Engine in preview

Starting today, customers of Azure Front Door (AFD) can take advantage of new rules to further customize their AFD behavior to best meet the needs of their customers. These rules bring the specific routing needs of your customers to the forefront of application delivery on Azure Front Door, giving you more control in how you define and enforce what content gets served from where.

Azure Front Door provides Azure customers the ability to deliver content fast and securely using Azure’s best-in-class network. We’ve heard from customers how important it is to have the ability to customize the behavior of your web application service, and we’re excited to announce Rules Engine, a new functionality on Azure Front Door, in preview today. Rules Engine is for all current and new Azure Front Door customers but is particularly important for customers looking to streamline security and content delivery at the edge.

New scenarios in Azure Front Door

Rules Engine allows you to specify how HTTP requests are handled at the edge.

The malleable nature of Rules Engine makes it the ideal solution to address legacy application migrations, where you don’t want to worry about users accessing old applications or not knowing how to find content in your new apps. Similarly, geo match and device identification capabilities ensure that your users are always seeing the best content for where they are and what device they are accessing it on. Implementing security headers and cookies with Rules Engine can also ensure that no matter how your users come to interact with the site, that they’re doing so over a secure connection, preventing browser-based vulnerabilities from impacting your site.

Different combinations of match conditions and actions give you fine-grained control over which users get which content and make the possible scenarios that you can accomplish with Rules Engine endless. Some of the technical capabilities that empower these new scenarios on AFD include the following:

Enforce HTTPS, ensure all your end users interact with your content over a secure connection.
Implement security headers to prevent browser-based vulnerabilities, like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, X-Frame-Options, as well as Access-Control-Allow-Origin headers for CORS scenarios. Security-based attributes can also be defined with cookies.
Route requests to mobile or desktop versions of your application based on the patterns in the contents of request headers, cookies, or query strings.
Use redirect capabilities to return 301/302/307/308 redirects to the client to redirect to new hostnames, paths, or protocols.
Dynamically modify the caching configuration of your route based on the incoming requests.
Rewrite the request URL path and forward the request to the appropriate backend in your configured backend pool.

Rules Engine is designed to handle a full breadth of scenarios. To learn more, a full list of match conditions and AFD Rules Engine actions can be found in our documentation.
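The header-based actions listed above amount to appending a fixed set of security headers to every response. As a hedged sketch, the snippet below merges typical security headers into a response without overwriting ones the backend already set; the header values are common examples, not AFD defaults.

```python
# Sketch: appending security response headers, as a Rules Engine action might.
# Values are typical examples, not Azure Front Door defaults.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-XSS-Protection": "1; mode=block",
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
}

def with_security_headers(response_headers: dict) -> dict:
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)  # add without overwriting existing values
    return merged

print(with_security_headers({"Content-Type": "text/html"})["X-Frame-Options"])  # DENY
```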

How Rules Engine works

Rules Engine handles requests at the edge. Once you configure Rules Engine, when a request hits your Front Door endpoint, the Web Application Firewall (WAF) is executed first, followed by the Rules Engine configuration associated with your frontend or domain. When a Rules Engine configuration is executed, the parent routing rule has already matched. The actions in a rule are executed only when all of the match conditions within that rule are satisfied. If a request matches none of the conditions in your Rules Engine configuration, the default routing rule is executed.
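That evaluation order can be sketched compactly: a rule's action fires only when all of its match conditions hold, and when no rule matches, the default route applies. The rule representation below is a hypothetical illustration, not Front Door's configuration schema.

```python
# Sketch of match-all rule evaluation with a default fallback.
# Each rule is (list of condition predicates, action label).
def evaluate(rules, request, default_action):
    for conditions, action in rules:
        if all(cond(request) for cond in conditions):
            return action          # every condition in the rule matched
    return default_action          # no rule matched: default routing rule

rules = [
    # Hypothetical rule: mobile devices get the mobile route.
    ([lambda r: r.get("device") == "mobile"], "route-to-mobile"),
    # Hypothetical rule: plain-HTTP requests get redirected to HTTPS.
    ([lambda r: r.get("scheme") == "http"], "redirect-https"),
]

print(evaluate(rules, {"device": "mobile", "scheme": "https"}, "default-route"))
print(evaluate(rules, {"device": "desktop", "scheme": "https"}, "default-route"))
```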

For example, a Rules Engine configuration can append a response header that changes the max-age of the cache control when its match condition is met.

In another example, Rules Engine is configured to send a user to a mobile version of the site if the match condition, device type, is true.

In both examples, when none of the match conditions in Rules Engine are met, the default behavior specified in the Route Rule is what gets executed.

Next steps

We look forward to seeing how Rules Engine helps you unlock further capabilities in Azure Front Door. To learn more about what’s available today, check out the documentation for Azure Front Door Rules Engine.
Quelle: Azure

Learn how to deliver insights faster with Azure Synapse Analytics

Today, it’s even more critical to have a data-driven culture. Analytics and AI play a pivotal role in helping businesses make insights-driven decisions—decisions to transform supply chains, develop new ways to interact with customers, and evaluate new offerings.

Many organizations are turning to cloud analytics solutions to quickly create a data-driven culture, accelerate time to insight, reduce costs, and maximize ROI. Join us on Wednesday, June 17, 2020, from 10:00 AM–11:00 AM Pacific Time for Azure Synapse Analytics: How It Works, a virtual event where you’ll hear directly from Microsoft Azure customers. They’ll explain how they’re using the newest Azure Synapse capabilities to deliver insights faster, bring together an entire analytics ecosystem in a central location, reduce costs, and transform decision-making.

In technical demos, customers will show how they combine data ingestion, data warehousing, and big data analytics in a single cloud-native service with Azure Synapse. If you’re a data engineer trying to wrangle multiple data types from multiple sources to create pipelines, or a database administrator with responsibilities over your data lake and data warehouse, you’ll see how all this can be simplified in a code-free environment.

Customers will also demonstrate how Power BI provides a graphical complement to Azure Synapse with built-in Power BI authoring, giving their employees access to unprecedented insights from enterprise data—in seconds, through beautiful visualizations.

Companies have demonstrated significant cost reductions with cloud analytics solutions. Compared to on-premises solutions, these solutions:

Require lower implementation and maintenance costs.
Reduce analytics project development time.
Provide access to more frequent innovation.
Deliver higher levels of security and business continuity.
Help ensure better competitive advantage and higher customer satisfaction.

With cloud analytics, organizations pay for data and analytics tools only when needed, pausing consumption when not in use. Businesses can reallocate budget previously spent on hardware and infrastructure management to optimizing processes and launching new projects. In fact, customers average a 271 percent ROI with Azure Synapse—savings that come from lower operating costs, increased productivity, reallocating staff to higher-value activities, and increasing operating income due to improved analytics. Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers.

BI specialists, data engineers, and other IT and data professionals all use Azure Synapse to build, manage, and optimize analytics pipelines, using a variety of skillsets and in multiple industries. The Azure Synapse studio provides a unified workspace for data prep, data management, data warehousing, big data, and AI tasks.

Data engineers can use a code-free visual environment for managing data pipelines.
Database administrators can automate query optimization and easily explore data lakes.
Data scientists can build proofs of concept in minutes.
Business analysts can securely access datasets and use Power BI to build dashboards in minutes—all while using the same analytics service.

At the Azure Synapse Analytics: How It Works event, you’ll learn how to access and analyze all your data, from your enterprise data lake to multiple data warehouses and big data analytics systems, with blazing speed. With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language, using either serverless or provisioned resources.

Of course, trust is critical for any cloud solution. Customers will share how they take advantage of advanced Azure Synapse security and privacy features such as automated threat detection and always-on data encryption. They help ensure that data stays safe and private by using column-level security and native row-level security, as well as dynamic data masking to automatically protect sensitive data in real time.

Attend the Azure Synapse Analytics: How It Works virtual event on June 17, 2020, to learn how to deliver:

Powerful insights.
Unprecedented ROI.
Unified experience.
Limitless scale.
Unmatched security.

Register early for a chance to win a Microsoft Surface Go tablet (three winners total). Winners will be selected at random. NO PURCHASE NECESSARY. Open to any registered event attendee 18 years of age or older. Void in Cuba, Iran, North Korea, Sudan, Syria, Region of Crimea, and where prohibited. Sweepstakes ends June 17, 2020. See the Official Rules.  
Quelle: Azure

How Docker is Partnering with the Ecosystem to Help Dev Teams Build Apps

Back in March, Justin Graham, our VP of Product, wrote about how partnering with the ecosystem is a key part of Docker’s strategy to help developers and development teams get from source code to public cloud runtimes in the easiest, most efficient and cloud-agnostic way. This post will take a brief look at some of the ways that Docker’s approach to partnering has evolved to support this broader refocused company strategy. 

First, to deliver the best experience for developers, Docker needs much more seamless integration with Cloud Service Providers (CSPs). Developers are increasingly looking to cloud runtimes for their applications, as evidenced by the tremendous growth that cloud container services have seen. We want to deliver the best developer experience moving forward from local desktop to cloud, and doing that includes tight integration with any and all clouds for cloud-native development. As a first step, we’ve already announced that we are working with AWS, Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms. You will see us continue to progress our activity in this direction. 

The second piece of Docker’s partnership strategy is offering best in class solutions from around the ecosystem to help developers and development teams build apps faster, easier and more securely. We know that there are tools out there that developers love and rely on in their workflow, and we want those to integrate with their Docker workflow. This makes everyone’s lives easier. Expect to see Docker Hub evolve to become a central point for the ecosystem of developer tools companies to partner with to deliver a more seamless and integrated experience for developers. Imagine your most beloved SaaS tools integrating right into Hub. 

We have been talking with some fantastic partners in the industry and are excited to make some announcements that bring this all to life in the coming weeks. Stay tuned! And if you haven’t already, register now for DockerCon on May 28, 2020, where you’ll learn more about how we’re working with the ecosystem to accelerate code-to-cloud development and hear from some of our great partners.
Quelle: https://blog.docker.com/feed/