Building a render farm in GCP using OpenCue—new guide available

From rendering photorealistic humans and fantastical worlds that blend seamlessly with live-action photography, to creating stylized characters and environments for animated features, we are in a golden age of computer-generated imagery. It’s no wonder that this work requires more and more processing power, faster networks, and more capable storage to complete each frame of these projects.

As the work necessary to complete each frame in a movie grows in complexity, so does the number of scenes requiring visual effects (VFX) or animation. A blockbuster film’s shot count is now in the thousands, and for an animated feature, every shot requires a multitude of different rendering tasks. In addition, visual content created for streaming services, television, advertisements, and game cinematics increasingly calls for visual effects and animation augmentation, much of it at the same level of quality as feature films. The number of projects requiring VFX and animation work is growing rapidly and pushing render requirements to new heights.

Google Cloud Platform (GCP) can help by providing resources to get this work done efficiently and cost effectively. By using Instance Templates to tailor a Virtual Machine (VM) to the resource requirements of each individual frame or task, you optimize your spend by right-sizing your VMs. Managed Instance Groups (MIGs) can then scale the number of instances created from these templates to match the number of tasks you need to render. When processing is complete, simply shut down the associated resources so you pay only for what you use, when you use it.

But how does one orchestrate the distribution of the multitude of rendering tasks required for an individual film, much less the group of films larger studios work on concurrently? For a long time, studios have carried the cost of building their own render management tools, or used a third-party software provider to help solve this problem. There is now another option. In collaboration with Sony Pictures Imageworks, Google recently released OpenCue, an open source, high-performance render manager built specifically for the needs of the visual effects and animation industry. OpenCue can be run in a variety of ways, and it’s capable of managing resources that are exclusively on-premises, entirely in the cloud, or spanning both in a hybrid environment.

Today, we’re announcing a new solution: Building a render farm in GCP using OpenCue. This tutorial guides you through deploying OpenCue, and all the resources it requires, to build a render farm in GCP. It explores a workflow for creating and deploying all prerequisite software as Docker images, as well as managing the size and scale of compute resources through Instance Templates and MIGs (sketched briefly below). It also provides an overview of the OpenCue interface as you manage rendering an animation scene from start to finish.

We hope you find this guide useful. Please tell us what you think, and be sure to sign up for a trial at no cost to explore building a render farm in GCP using OpenCue.
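To give a concrete feel for the scaling pattern described above, here is a minimal, hypothetical sketch using the gcloud command-line tool; the template name, machine type, image, group size, and zone are illustrative placeholders rather than values from the tutorial:

gcloud compute instance-templates create render-node-template --machine-type=n1-highcpu-16 --image-family=debian-9 --image-project=debian-cloud

gcloud compute instance-groups managed create render-nodes --zone=us-central1-a --template=render-node-template --size=10

gcloud compute instance-groups managed resize render-nodes --zone=us-central1-a --size=0

The first two commands define a right-sized VM shape and bring up a group of render nodes built from it; the final command scales the group back to zero once rendering completes, so you stop paying for the VMs.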
Source: Google Cloud Platform

How does your cloud storage grow? With a scalable plan and a price drop

Consolidating storage into a centrally managed infrastructure resource can make life as a storage architect much easier. But the path to consolidation is fraught with complexity. Data flows into your organization constantly from live sources, whether from your company’s customers, employees, partners, or the devices and hardware you maintain. All this data sits in different locations owned by different business units, inside various types of storage technologies that aren’t necessarily available the moment you need them.

But one thing is constant for most businesses today: the amount of data to be stored just keeps growing. Today we’re announcing the Storage Growth Plan for Google Cloud Storage, a way to provide flexible, ready-when-you-need-it data storage that won’t result in unexpected bills.

Cloud Storage is great at solving both the consolidation and the capacity problems of data storage today. It is the unified object storage that powers many Google Cloud Platform (GCP) customers, letting you store and move data as needed. You can use Transfer Appliance to get petabytes into Cloud Storage quickly, you can stream data into Cloud Storage with Dataflow, and you can even move data from AWS S3 to Cloud Storage with Storage Transfer Service. You pay by the gigabyte-month for the data you store in Cloud Storage, and you can store petabytes, exabytes, or more. And once it’s in Cloud Storage, integrations across the platform make it easy to expose your data to services like BigQuery, Dataproc, and CloudML.

It’s easy to store and use data in Cloud Storage, but data is still being created at an astonishing and unpredictable rate. And creation unpredictability means cost unpredictability. We’ve developed the Storage Growth Plan to help enterprise customers manage storage costs and meet the forecasting and predictability requirements often asked of IT organizations. It’s a new way to commit to Cloud Storage that protects you from the cost volatility associated with your data storage behavior. Here’s how it works:

You commit to at least $10,000 in spending per month for 12 months of Cloud Storage usage. This is a fixed amount you will pay each month.
You can grow stored data, with no extra charges for usage over your commitment, during those 12 months.
At the end of 12 months, you have two choices for renewal: commit to the next 12 months at whatever your peak usage was (if that peak is within 30% of your original commitment, all of your previous year’s overage is free; if it is more than 30%, you repay that remainder over the next year), or leave the plan and pay for the past year’s overage.
Repeat 12 months at a time for as long as you like.

We heard from customers that data growth can be unpredictable, but costs can’t be. We’ve also heard that data can have unpredictable life cycles. A legacy image archive might become relevant again as a Cloud Vision API training set, or an analytics workload might only sit in hot storage for a month. The Storage Growth Plan applies to any storage class, letting you move your data freely between hot and cold classes of storage while maintaining cost predictability.

The Storage Growth Plan helps companies like Recursion set storage costs as they build the world’s largest biological image dataset. Recursion currently manages a data set growing by more than 2 million new images a week.
“This dataset enables the company to train neural networks and use other sophisticated computational techniques to identify changes in thousands of cellular and subcellular features in response to various tests,” says Ben Mabey, Vice President of Engineering at Recursion. “This approach, which we call ‘Phenomics,’ helps us pursue novel biology, drug targets, or drug candidates with more data and less bias.”

You can take advantage of this new commitment structure today by contacting sales.

Adding geo-redundancy and price drops for Cloud Storage

In addition to introducing this new way to buy Cloud Storage, we’re also passing on continued technical innovation to our customers in the form of price drops. We recently announced that Cloud Storage Coldline in multi-regional locations is now geo-redundant. This means that Coldline data, the lowest-access tier of Cloud Storage, is protected from regional failure by storing another copy of your data at least 100 miles away in a different region.

We’ve added this redundancy to Coldline storage, but haven’t raised the price. Instead, we’re dropping prices for our Coldline class of storage in regional locations by 42%. Data stored in Coldline in regional locations is now as low as $0.004 per GB. As with all Cloud Storage classes, the data is still accessible to users in milliseconds.

We often hear from customers that they take advantage of all of our classes of storage as their data ages. What starts in the Standard class of storage when it’s accessed frequently eventually moves to Nearline and then Coldline as it’s accessed less frequently. You can turn on object lifecycle management to move data among storage classes automatically based on a policy you set, as sketched in the example below. Or, for use cases like digital archives, backups, or content under a retention requirement where you won’t be accessing the data, you can start in a colder class. Regardless of which class you start with, Cloud Storage will maintain the redundancy of that data per the location of the bucket as it is tiered. And you’ll have a consistent experience across tiers no matter how often data is being accessed.

Take advantage of these new features and options to create the flexible storage infrastructure to support your cloud. Learn more about GCP storage here. Thanks to contributions from Chris Talbott.
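As a minimal, hypothetical sketch of the lifecycle policy mentioned above (the bucket name, ages, and target storage classes are illustrative placeholders, not values from this announcement), you could save a rule file and apply it with the gsutil tool:

{"rule": [
  {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"}, "condition": {"age": 30}},
  {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}}
]}

gsutil lifecycle set lifecycle.json gs://my-bucket

With a policy like this, objects transition to Nearline after 30 days and to Coldline after 90 days, while the location of the bucket continues to determine how redundantly each object is stored.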
Source: Google Cloud Platform

Go global with Cloud Bigtable

Today, we’re announcing the expansion of Cloud Bigtable’s replication capabilities, giving you the flexibility to make your data available across a region or worldwide. Now in beta, this enhancement allows customers to create a replicated cluster in any zone, at any time.

Cloud Bigtable is a fast, globally distributed, wide-column NoSQL database service. It can seamlessly scale from gigabytes to petabytes while maintaining high-performance throughput and low-latency response times to meet your application’s goals. This is the same functionality proven in a number of Google products, including Google Search, Google Maps, and YouTube, and used by Google Cloud customers in industries and workloads including the Internet of Things (IoT), finance, ad tech, gaming, and more, to deliver personalization and analytics features to users worldwide. Apps using Cloud Bigtable can serve data quickly to users, and can now do so even when the data was created thousands of miles away.

Cloud Bigtable now makes it easy to globally distribute data, so you can:

Serve global audiences with lower latency by bringing data that’s generated in any region, such as personalized recommendations, closer to the users wherever they are
Aggregate data ingested from worldwide sources (such as IoT sensor data) into a single location for analytics and machine learning
Increase the availability and durability of your data beyond the scope of a single region
Isolate batch and serving workloads

Every cluster in a replicated instance accepts both reads and writes, providing multi-primary replication (sometimes referred to as “multi-master”) with eventual consistency. You can set up replication by adding one or more Cloud Bigtable clusters, whether on the same continent or halfway around the world.

In the example below, let’s say you have customers in North America, Europe, Asia, and Australia. With this new enhancement, you can deploy a globally replicated Cloud Bigtable instance with a cluster in each region to provide low-latency access to your end users.

Cloud Bigtable customer Oden Technologies was keen to boost the availability and durability of their service for their worldwide industrial automation customers.

“Google Cloud Bigtable is an essential component of Oden Technologies’ real-time analytics,” says James Maidment, Director of Infrastructure. “Our analytics enable our customers in manufacturing to eliminate waste and quality defects in their production process. In order for Oden to be a truly mission-critical tool and competitive with existing solutions, our customers need to trust that our service will be online when they need it most. The Cloud Bigtable multi-region replication allows us to guarantee and deliver the availability and durability our customers expect from Oden.”

You can configure a replication topology using any zones where Cloud Bigtable is available, or add clusters in additional regions to an existing instance without any downtime.
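As a minimal, hypothetical sketch using the gcloud command-line tool (the instance, cluster, zone, and node count below are illustrative placeholders, and depending on your gcloud version the command may live under the beta track), adding a replica cluster to an existing instance looks something like this:

gcloud bigtable clusters create my-replica-cluster --instance=my-instance --zone=europe-west1-b --num-nodes=3

Cloud Bigtable backfills the new cluster and then keeps it in sync, as described below.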
Additionally, the flexible replication model provided by Cloud Bigtable lets you reconfigure your instance’s replication topology at any time, allowing you to add or remove clusters for any existing instance, even if you are currently writing data to that instance. Here’s what happens when you add a cluster to an existing instance:

First, all existing data is bulk-replicated from the existing cluster to the new one
Then, all future writes to any cluster are replicated to all other clusters in the instance

All tables within an instance are replicated to all clusters, and you can monitor replication progress for each table via the Tables list in the GCP Console.

Moving data between regions in Cloud Bigtable

To move data from one region to another, just add a new cluster in the desired location, and then remove the old cluster. The old cluster remains available until data has been replicated to the new cluster, so you don’t have to worry about losing any writes. You can continue writing to Cloud Bigtable, since it takes care of replicating data automatically.

Cloud Bigtable in more GCP regions

We are also happy to announce the latest regional launch of Cloud Bigtable in São Paulo, Brazil, as we continue to deploy Cloud Bigtable in more locations to bring the performance and reliability of this popular wide-column database service to more customers. Additionally, we’ve recently added Cloud Bigtable in Mumbai, India; Hong Kong; and Sydney, Australia, making Cloud Bigtable available in 17 regions in total, with more coming in the near future.

Google’s global network powers Cloud Bigtable

Cloud Bigtable’s high-performance global replication is enabled by Google’s global private network, which spans the globe and provides high-throughput, low-latency connections around the world to support large-scale database workloads.

Next steps

If you’re interested in learning more about Google’s global network and how it enables replication across regions and continents in Cloud Bigtable, be sure to sign up for the session at Google Cloud NEXT in San Francisco in April. We look forward to seeing you there. To get started with Cloud Bigtable replication, create an instance and configure one or more application profiles to use in your distributed application (a minimal example follows below), or try it out with a Cloud Bigtable lab. Use code 1j-bigtable-719 to explore the Qwiklab at no cost through March 31, 2019.
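As a rough sketch of the application-profile step mentioned above (the profile name, instance name, and routing choice are illustrative placeholders), you could create a multi-cluster routing profile with the gcloud command-line tool:

gcloud bigtable app-profiles create my-app-profile --instance=my-instance --route-any --description="Multi-cluster routing for the serving path"

With --route-any, requests are routed automatically to the nearest available cluster; use --route-to=<cluster-id> instead if a workload, such as batch processing, should be pinned to a single cluster.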
Source: Google Cloud Platform

Azure Premium Blob Storage public preview

Today we are excited to announce the public preview of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage, complementing the existing Hot, Cool, and Archive tiers. Premium Blob Storage is ideal for workloads that have high transaction rates or require very fast access times, such as IoT, telemetry, and AI, as well as scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more.

Our testing shows that both average and 99th-percentile server latency is significantly lower than in our Hot access tier, providing faster and more consistent response times for both reads and writes across a range of object sizes. Your application should be deployed to compute instances in the same Azure region as the storage account to realize low end-to-end latency. For more details, see “Premium Blob Storage – a new level of performance.”

Figure 1 – Latency comparison of Premium and Standard Blob Storage

Premium Blob Storage is available with Locally-Redundant Storage (LRS) and comes with High-Throughput Block Blobs (HTBB), which provides very high and instantaneous write throughput when ingesting block blobs larger than 256KB.

You can store block blobs and append blobs in Premium Blob Storage. To use Premium Blob Storage, you provision a new ‘Block Blob’ storage account in your subscription (see below for details) and start creating containers and blobs using the existing Blob Service REST API and/or any existing tools such as AzCopy or Azure Storage Explorer.
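For example, a hypothetical upload with AzCopy v10 could look like the following; the account name, container, file name, and SAS token are placeholders:

azcopy copy "./video-frame-0001.jpg" "https://<accountname>.blob.core.windows.net/<container>/video-frame-0001.jpg?<SAS-token>"

The same command works against a Premium Blob Storage account as against a standard one; only the target account changes.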

Pricing and region availability

Premium Blob Storage has a higher data storage cost, but a lower transaction cost, compared to data stored in the regular Hot tier. This makes it cost effective, and it can even be less expensive, for workloads with very high transaction rates. Check out the pricing page for more details.

Premium Blob Storage public preview is available in US East, US East 2, US Central, US West, US West 2, North Europe, West Europe, Japan East and Southeast Asia regions.

Object tiering

At present, data stored in Premium Blob Storage cannot be tiered to the Hot, Cool, or Archive access tiers. We are working on supporting object tiering in the future. To move data, you can synchronously copy blobs using the new PutBlockFromURL API (sample code) or AzCopy v10, which supports this API. PutBlockFromURL synchronously copies data server side, which means that the data has finished copying when the call completes, and all data movement happens inside Azure Storage.
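As an illustrative sketch (the account names, container, blob, and SAS tokens are placeholders), a server-side copy from a Premium Blob Storage account to a standard account with AzCopy v10 could look like this:

azcopy copy "https://<premiumaccount>.blob.core.windows.net/<container>/<blob>?<SAS-token>" "https://<standardaccount>.blob.core.windows.net/<container>/<blob>?<SAS-token>"

Because AzCopy v10 uses the synchronous server-side copy API mentioned above, the data stays inside Azure Storage for the entire transfer.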

How to create a storage account (Azure portal)

To create a block blob storage account using the Azure portal, navigate to the ‘Create storage account’ blade and fill it in:

In Location choose one of the supported regions
In Performance choose Premium
In Account Kind choose Block Blob Storage (preview)


Once you have created the account, you can manage the Premium Blob Storage account, including generating SAS tokens, reviewing metrics, and more.
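For instance, an account-level SAS token could be generated with the Azure CLI as in the hypothetical example below; the account name, key, permissions, and expiry are placeholders:

az storage account generate-sas --account-name <accountname> --account-key <accountkey> --services b --resource-types sco --permissions rwdlac --expiry 2019-12-31T23:59Z

The command prints a SAS token that can be appended to blob URLs for tools such as AzCopy.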

How to create a storage account (PowerShell)

To create a block blob account, you must first install the preview Az.Storage PowerShell module.

Step 1: Ensure that you have the latest version of PowerShellGet installed.

Install-Module PowerShellGet -Repository PSGallery -Force

Step 2: Open a new PowerShell console and install the Az.Storage module.

Install-Module Az.Storage -Repository PSGallery -RequiredVersion 1.1.1-preview -AllowPrerelease -AllowClobber -Force

Step 3: Open a new PowerShell console and log in with your Azure account.

Connect-AzAccount

Once the PowerShell preview module is in place you can create a block blob storage account:

New-AzStorageAccount -ResourceGroupName <resource group> -Name <accountname> -Location <region> -Kind "BlockBlobStorage" -SkuName "Premium_LRS"

How to create a storage account (Azure CLI)

To create a block blob account, you must first install Azure CLI version 2.0.46 or higher, then follow these steps:

Step 1: Login to your subscription

az login

Step 2: Add the storage-preview extension

az extension add -n storage-preview

Step 3:  Create storage account

az storage account create --location <location> --name <accountname> --resource-group <resource-group> --kind "BlockBlobStorage" --sku "Premium_LRS"

Feedback

We would love to get your feedback at premiumblobfeedback@microsoft.com.

Conclusion

We are very excited to deliver low and consistent latency in Azure Blob Storage with Premium Blob Storage, and we look forward to hearing your feedback. To learn more about Blob Storage, please visit our product page. Also, feel free to follow my Twitter for more updates.
Source: Azure

How and Why We’re Changing Deployment Topology in OpenShift 4.0

Red Hat OpenShift Container Platform is changing the way that clusters are installed, and the way those resulting clusters are structured. When the Kubernetes project began, there were no extension mechanisms. Over the last four-plus years, we have devoted significant effort to producing extension mechanisms, and they are now mature enough for us to build systems […]
Source: OpenShift