Announcing price cuts on Local SSDs for on-demand and preemptible instances

By Chris Kleban and Michael Basilyan, Product Managers

Starting today, you’ll pay up to 63% less for Local solid-state disks (SSDs) attached to on-demand Google Compute Engine virtual machines. That’s $0.080 per GB per month in most US regions. We’re also introducing even lower prices for Local SSDs used with Preemptible VM instances: up to 71% cheaper than before. That’s $0.064 per GB per month in most US regions.

At Google, we’re always looking for ways to reduce total cost of ownership for our customers, pass along price reductions achieved through technology advancements, and keep our pricing simple so you can take advantage of the technology that helps you innovate.

Local SSD is our high-performance, physically attached block storage offering that persists as long as your instance exists. Supporting both NVMe and SCSI interfaces, Local SSD provides the high IOPS and bandwidth that the world’s most demanding workloads require. Local SSD is often the preferred option for scratch disks, caching layers and scale-out NoSQL databases.

A key feature of Local SSDs is that you can attach any amount of Local SSD storage to any machine shape. You aren’t locked into a fixed ratio of Local SSD capacity to a VM’s vCPU count and memory. Local SSDs are also available on the same instances as GPUs, giving you flexibility in building the highest-performance systems.
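For example, you can attach multiple Local SSD partitions to a custom machine shape with a single gcloud command. This is a minimal sketch; the instance name, zone, and vCPU/memory values are illustrative:

```shell
# Create a custom-shaped VM (8 vCPUs, 32 GB RAM) with two NVMe Local SSDs.
# Repeat --local-ssd once per 375 GB partition you want attached.
gcloud compute instances create my-ssd-instance \
    --zone us-central1-a \
    --custom-cpu 8 \
    --custom-memory 32 \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme
```

Because the Local SSD count is independent of the machine shape, you can pair a small custom VM with a lot of local flash, or vice versa, to match your workload.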

In addition to dropping prices on Local SSDs attached to regular, on-demand instances, we’re lowering the price for Local SSDs attached to Preemptible VMs. Preemptible VMs are just like any other Compute Engine VM, with the caveat that they cannot run for more than 24 hours and that we can preempt (shut down) the VM earlier if we need the capacity for other purposes. This allows us to use our data center capacity more efficiently and share the savings with you. You may request special Local SSD quota for use with Preemptible instances, though your current Local SSD quota works as well (learn more).
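Running Local SSDs on a Preemptible VM only requires adding the --preemptible flag at instance creation. A hedged sketch, with illustrative names and zone:

```shell
# Preemptible VM with one SCSI Local SSD. The instance can run for at
# most 24 hours and may be preempted earlier if capacity is needed.
gcloud compute instances create my-preemptible-worker \
    --zone us-central1-b \
    --machine-type n1-standard-16 \
    --preemptible \
    --local-ssd interface=scsi
```

Since Local SSD contents do not survive preemption, design your job to checkpoint intermediate results to durable storage such as Cloud Storage.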

Google Cloud Platform (GCP) customers use Preemptible VMs to greatly reduce their compute costs, and have come up with lots of interesting use cases along the way. Our customers are using Preemptible VMs with Local SSDs to analyze financial markets, process data, render movies, analyze genomic data, transcode media and complete a variety of business and engineering tasks, using thousands of Preemptible VM cores in a single job.

We hope that the price reduction on Local SSDs for on-demand and Preemptible VMs will unlock new opportunities and help you solve more interesting business, engineering and scientific problems.

For more details, check out our documentation for Local SSDs and Preemptible VMs. For more pricing information, take a look at the Compute Engine Local SSD pricing page or try out our pricing calculator. If you have questions or feedback, go to the Getting Help page.

We’re excited to see what you build with our products. If you want to share stories and demos of the cool things you’ve built with Compute Engine, reach out on Twitter, Facebook or G+.
Source: Google Cloud Platform

Operating Azure Stack

Ever since we announced that Azure Stack is ready to order, we’ve seen a variety of questions related to managing and operating Azure Stack. This post kicks off a series of blogs addressing these questions.

Operating Azure Stack is different. Today, your on-premises IT infrastructure provides a secure and controlled environment for your business solutions, but it also requires configuration, deployment, backup, and management tasks. Your IT administrators spend most of their time on these tasks simply to keep your on-premises environments running. Azure Stack is an extension of Azure: it enables you to run Azure services in your on-premises environments. That way, you can enable a modern application development environment for your organization across cloud and on-premises, while taking advantage of all the Azure-native toolsets and APIs.

To ensure you can successfully provide Azure services in your own on-premises environments and operate them with cloud SLAs, we’ve spent the last several months talking with many of you. You’ve told us that the following infrastructure management tasks are the most important, time-consuming, and complex, and that they should be our focus for simplification:

Managing capacity: ensuring that your infrastructure capacity is configured to correctly meet the demands of providing cloud capacity.
Checking and maintaining health: from monitoring and security to business continuity and disaster recovery, customers want solutions that address these operational tasks and let them focus on service delivery.
Managing tenants’ use of resources: infrastructure is successful only when tenants are satisfied with the services, and customers want to be assured that they can successfully provide and operate these services for tenants.

The “Azure Stack Operator” will be responsible for these tasks.
It was with these tasks in mind that we made the necessary investments in the infrastructure management capabilities of Azure Stack and in the definition of the “Azure Stack Operator” role. This introductory post will be followed by a series of posts going into more detail about each of these investments, including:

Monitoring and diagnostics: monitoring, notification, and management capabilities let you manage the health, performance, and capacity of the infrastructure that underlies your tenant workloads.
Patching and update: with Azure Stack, you can update your infrastructure software while minimizing the impact on your business applications, services, and workloads.
Business continuity: Azure Backup and Azure Site Recovery will enable tenant-driven protection for business applications and services.
Security and compliance: Azure Stack takes a secure-by-design approach across network, data, and management.
Hardware lifecycle management: Azure Stack will have validated workflows to enable the replacement of failed components.
Intuitive experiences: a portal and command-line experience highlights the common actions you need to perform, allowing you to make decisions quickly and intuitively.

Future posts will also address the ways Azure Stack can be integrated into your existing datacenter, including networking, identity, and ticketing, and will go into more depth on the Azure Stack Operator role.

Operating Azure Stack is different. Although many scenarios are familiar, I want to make sure you approach Azure Stack knowing that how you operate it will be different. Your value will be measured not only by how you manage the Azure Stack infrastructure, but also by what services you provide to your developer community and how fast you can enable them.

More information

At Ignite this year in Orlando, we will have a series of sessions covering all aspects of Azure Stack. See our list of sessions and register to attend.
Lastly, the Azure Stack team is extremely customer-focused and we are always looking for new customers to talk to. If you are passionate about hybrid cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.
Source: Azure

Announcing deploy to Azure app service Jenkins plugin and more

We are proud to announce the availability of the Azure App Service plugin for Jenkins, which gives Jenkins native capability to continuously deploy to Azure Web Apps. Depending on your environment, you can choose to use Visual Studio Team Services together with Jenkins, or leverage this plugin on its own to deliver your cloud apps or services.

Azure Web App lets developers rapidly build, deploy, and manage powerful websites and web apps using .NET, Node.js, PHP, Python, and Java. It provides built-in autoscale, load balancing, high availability and auto-patching – letting you focus on your application code. Web App on Linux is now in Public Preview, giving you an additional option to run your cloud apps natively on Docker Containers for Linux.

This release of the Azure App Service plugin for Jenkins supports deploying to Azure Web App through:

Git and FTP for Web App and Web App on Linux
Docker for Web App on Linux

The plugin is pipeline-ready so you can use it in a Jenkinsfile. You can find a walkthrough of deploying a Java app to Web App on Linux on Jenkins Hub.
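As a rough illustration of what a pipeline-based deployment can look like, here is a minimal Jenkinsfile sketch. The step name and parameters shown (azureWebAppPublish, azureCredentialsId, resourceGroup, appName, filePath) are assumptions for illustration only; check the plugin’s documentation for the exact syntax:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the Java app; assumes a Maven project
                sh 'mvn clean package'
            }
        }
        stage('Deploy') {
            steps {
                // Hypothetical plugin step and parameter names --
                // verify against the Azure App Service plugin docs
                azureWebAppPublish azureCredentialsId: '<credentials id>',
                    resourceGroup: '<resource group>',
                    appName: '<web app name>',
                    filePath: 'target/*.war'
            }
        }
    }
}
```

The walkthrough on Jenkins Hub covers the authoritative end-to-end setup, including creating the Azure service principal credential that the deploy step consumes.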

Additional support, such as deploying to Azure Functions, is on the roadmap. Stay tuned for more updates in the coming months.

Azure Storage plugin update

Speaking of pipeline support, from version 0.3.6 onwards you can use the Azure Storage plugin in pipeline code to upload and download build artifacts. Here is the sample syntax for upload and download, respectively:

azureUpload storageCredentialId: '<credentials id>', storageType: 'blobstorage',
containerName: '<container>', filesPath: '<files in glob pattern>', virtualPath: '<remote path>'

azureDownload storageCredentialId: '<credentials id>', downloadType: 'container',
containerName: '<container>', includeFilesPattern: '<files in glob pattern>', downloadDirLoc: '<local path>'
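Putting the upload step above into context, the snippet below sketches how it might sit inside a declarative pipeline. The stage layout, build command, and concrete glob/paths are illustrative assumptions; the azureUpload step and its parameters are the ones shown above:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Produce artifacts; assumes a Gradle project
                sh './gradlew build'
            }
        }
        stage('Archive') {
            steps {
                // Upload build output to blob storage, namespaced
                // by build number so runs don't overwrite each other
                azureUpload storageCredentialId: '<credentials id>',
                    storageType: 'blobstorage',
                    containerName: '<container>',
                    filesPath: 'build/libs/*.jar',
                    virtualPath: "builds/${env.BUILD_NUMBER}/"
            }
        }
    }
}
```

A downstream job could then use the matching azureDownload step with the same container name to retrieve those artifacts.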

You can refer to this article about Using Azure Storage with a Jenkins plugin on Jenkins Hub for more information.

As always, we would love to get your feedback via comments below. You can also email Azure Jenkins Support to let us know what you think.
Source: Azure