McKesson chooses Google Cloud to help it chart a course to the future

From centralizing data management to using artificial intelligence (AI) to make healthcare predictions, advances in technology are transforming all medical disciplines. And as healthcare organizations strive to keep up with increasing patient expectations, many are looking to the cloud to find new ways to deliver quality, affordable services to patients, members and customers.

Today, we are thrilled to announce that McKesson has selected Google Cloud as its preferred cloud provider. A Fortune 6 company, McKesson is a global leader in healthcare supply chain management solutions, retail pharmacy, community oncology and specialty care, and healthcare information technology. Its aim is to deliver more value to its customers and the healthcare industry—quickly and efficiently—through common platforms and resources.

McKesson will take advantage of Google Cloud in numerous ways. The company will use Google Cloud Platform’s managed services, as well as healthcare-specific services such as the Cloud Healthcare API, to help enhance its platforms and applications. It will use analytics on Google Cloud to make data-driven decisions for product manufacturing, specialty drug distribution, and pharmacy retail operations. McKesson will also migrate the mission-critical SAP environment it uses to run its business to Google Cloud and modernize it there. Through the power of the cloud, McKesson hopes to create and modernize next-generation solutions to deliver better healthcare—one patient at a time.

“This partnership will support our continued digital transformation,” said Andrew Zitney, senior vice president and CTO of McKesson Technology. 
“It will not only accelerate and expand our strategic objectives, it will also help fuel next-generation innovation by driving new technologies, advancing new business models and delivering insights.”

As we evolve toward a more digitally based healthcare environment, cloud computing will change how healthcare providers deliver quality, affordable services to their patients, members and customers. We believe our collaboration with McKesson will bring significant value to the healthcare ecosystem by building on Google Cloud’s secure, flexible and connected infrastructure to create and deploy better healthcare solutions.
Source: Google Cloud Platform

A quick hop across the pond: Supercharging the Dunant subsea cable with SDM technology

In 1858, Queen Victoria sent the first transatlantic telegram to U.S. President James Buchanan, transmitting a message in Morse code at a rate of one word per minute. In Q3 of 2020, when we turn on our private Dunant undersea cable connecting the U.S. and France, it will transmit 250 terabits of data per second—enough to transmit the entire digitized Library of Congress three times every second.

To achieve this record-breaking capacity, Dunant will be the first cable in the water to use space-division multiplexing (SDM) technology. SDM increases cable capacity cost-effectively with additional fiber pairs (twelve, rather than the six or eight in traditional subsea cables) and power-optimized repeater designs. These advancements were created in partnership with SubCom, a global partner for undersea data transport, which will engineer, manufacture and install the Dunant system using its SDM technology and equipment.

Traditional subsea cables are powered from the shore end and rely on a dedicated set of pump lasers to amplify the optical signal for each fiber pair as data traverses the length of the cable. SDM technology allows pump lasers and associated optical components to be shared among multiple fiber pairs, while still working within the unique power constraints of the ocean floor. In this way, the 6,400km-long Dunant will add dedicated capacity, diversity and resilience to our global network, and will enable interconnection to other network infrastructure in the region.

First announced in 2018, the Dunant cable is named in honor of Swiss businessman and social activist Henry Dunant, the founder of the Red Cross and the first recipient of the Nobel Peace Prize. It joins the Curie cable, named for renowned scientist Marie Curie, as our second private international cable.

Demand for online content has exploded in recent years, driven by more internet users, increased engagement with rich content like video, and new demand for cloud services. 
When it comes online next year, it’s our hope that Dunant and these advances in submarine cable technology will help users access online content quickly from wherever they may be.
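As a back-of-envelope check on the claim above, dividing the cable’s capacity by three copies per second implies the size of one digitized copy of the collection; the 8-bits-per-byte conversion is our assumption, not a figure from the announcement:

```shell
# 250 Tbit/s, three full copies per second: implied size of one copy,
# converted from bits to terabytes (assuming 8 bits per byte).
awk 'BEGIN { printf "%.1f TB per copy\n", 250e12 / 3 / 8 / 1e12 }'
```

That works out to roughly 10 TB per copy, a plausible order of magnitude for a digitized text collection.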
Source: Google Cloud Platform

Want repeatable scale? Adopt infrastructure as code on GCP

Imagine you provision a virtual machine for your dev environment, work through the kinks, and then decide you need to create one just like it for your test environment. Are you confident that you can recreate the same configuration? What about all of the tweaks you made to get things running in dev—did you track them? Can you validate that the two environments are configured identically once provisioned? What happens when you need to scale that same configuration to thousands of machines to support production?

If you answered ‘no’ to any of these questions, then you should really be thinking about infrastructure as code (IaC), which lets you make changes to your environment in a way that can be tested, automatically applied and audited, according to your change management processes. The good news is that if you run on Google Cloud Platform (GCP), it’s already tightly integrated with popular IaC tools. Better yet, adopting IaC principles sets the stage for handling massive growth in demand for your applications.

Understanding infrastructure as code

At a high level, infrastructure as code is a process that allows you to treat your infrastructure provisioning and configuration in the same manner you handle application code. Your provisioning and configuration logic is stored in source control and can take advantage of continuous integration and continuous deployment (CI/CD) pipelines, so that it’s visible and discoverable across your organization.

You may be wondering whether you can use IaC processes with different kinds of infrastructure, such as:

- Virtual machines (or bare metal systems) set up with a configuration automation product like Puppet, Chef, Salt or Ansible? Check!
- Containers deployed in a Kubernetes cluster? Why not!
- A pleasant mix of the above? Done!

What changes is the set of artifacts that you version in your source control system. Those artifacts can be YAML descriptors, Dockerfiles, shell scripts and their dependencies, among others. 
It does not matter what tools you are using to provision and configure your infrastructure—the important thing is to capture the process in a way that can be repeated and automated!

By implementing IaC processes in your projects, you increase the level of control you have over the design and implementation of the infrastructure that supports your applications, thanks to continuous versioning and review of the descriptors that define your infrastructure. Want the development team to review the changes? Check! The Ops technical manager demands periodic audits? Check!

Then, by extending your CI/CD pipeline beyond your application, you can have changes applied to your test and production infrastructures within minutes of committing them to the code repository. You can get even fancier, too, by applying a test-driven development model to your infrastructure. Why not—your infrastructure deserves tests too! Open-source tools like InSpec let you develop a platform-agnostic compliance test suite that checks the correctness of all the moving parts of the infrastructure.

Infrastructure as code gotchas

Be on the lookout for pitfalls, though! Just like implementing DevOps for your application stack, infrastructure as code automation requires process and governance changes. For one, system administrators who may have traditionally made configuration changes manually need to adopt a developer mindset, complete with checking their configuration changes into source control and implementing a managed test and promotion process. Otherwise, manual changes (which should not be allowed by your change management process!) made outside of the IaC pipeline will be lost in subsequent releases. Implementing IaC could also bring unnecessary overhead if the change management process you adopt is too heavy. Rule of thumb: if you feel that it’s taking too much time to apply a change, it probably is! 
Finally, you may also need to train your Ops colleagues who don’t have experience with IaC tools and concepts.

How GCP simplifies IaC

GCP supports IaC processes by letting you build environments with repeatable and automated processes. These environments include not only the runtime environments, but also networking and related services, Cloud Identity and Access Management (Cloud IAM), and DevOps-inspired build/deploy pipelines.

Because GCP is built on open standards and open-source projects, you can reuse your existing expertise to build your next-gen infrastructure in the cloud. Tools like Deployment Manager and first-class support for Terraform help your team fully exploit all the resources that GCP has to offer. Don’t worry about starting from scratch; we have ready-made templates that follow Google’s best practices! Read more about the available tools.

GCP’s approach to IaC doesn’t have a steep learning curve or a complex interface to master: deploy your whole environment with one command and keep it updated automatically! It also gives you the flexibility of an incremental migration approach, where you can lift-and-shift your workloads to GCP and gradually optimize them for the cloud by managing changes via IaC processes.

IaC also gives you the chance to achieve linear Ops-team growth in the face of exponential workload growth. With IaC, it doesn’t matter whether you are managing an environment with ten containers or one million (apart from the obvious scalability issues to tackle). Want to know more? Read Chapter 18 of the Site Reliability Engineering book.

Putting IaC best practices to work

In this blog post we presented a high-level description of what IaC is and why you may want to use it to manage the infrastructure supporting your GCP projects—namely, to have more control of your resources, and to be sure your infrastructure will stand up to increased demand. Click here to learn more about infrastructure as code on GCP.
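To make the “deploy your whole environment with one command” idea concrete, here is a minimal Deployment Manager configuration sketch; the resource name, zone, machine type, and image below are illustrative assumptions, not values from this post:

```yaml
# example-env.yaml -- a minimal, hypothetical Deployment Manager config.
# Every value here is a placeholder; adapt it to your own project.
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
```

Checked into source control, a file like this becomes the reviewable, audited artifact; a single `gcloud deployment-manager deployments create example-env --config example-env.yaml` then provisions the environment, and later updates flow through the same pipeline.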
Source: Google Cloud Platform

Introducing Lustre file system Cloud Deployment Manager scripts

Data is core to high performance computing (HPC), especially for workloads such as those in life sciences, oil and gas, financial services, and media rendering. Accessing large amounts of data at extremely high speeds and low latencies is essential to HPC, but has always been a key challenge in running HPC workloads.

The HPC community has long met this need using storage technologies like the Lustre open-source parallel file system, which is commonly used in supercomputers today. The nearly unlimited scale of the cloud unlocks powerful capabilities for users, while also increasing the demand for fast parallel storage. Unfortunately, configuring the Lustre parallel file system is typically a technically challenging and time-consuming task, and can require an expert to implement correctly.

To simplify the complex process of building and configuring a Lustre cluster, the engineers at Google Cloud Platform (GCP) have developed a set of scripts to easily deploy a Lustre storage cluster on Google Compute Engine using the Google Cloud Deployment Manager. The scripts are available here in the GCP GitHub repository, under the community directory. We’ve worked to make this as simple as possible, even if you don’t have a lot of Lustre experience. We’ll briefly walk you through how to use the scripts here.

1. Create a Lustre cluster

Though it’s challenging in an on-premises environment, the process to deploy a ready-to-use Lustre cluster in GCP is very simple. 
First, create a project to contain the Lustre cluster, and ensure that you have GCP quota available to support your expected cluster.

Next, clone the git repository to a local device or Cloud Shell with access to gcloud and your project, and change to the lustre directory.

Once the Lustre deployment manager scripts are downloaded, review lustre-template.yaml, which has descriptions of each field and examples of valid input, as well as the description of the YAML fields in the Configuration section of README.md, to understand what each field configures. Then open the lustre.yaml file with your favorite editor (vi, nano, etc.) and edit the configuration fields to satisfy your requirements. At a minimum, ensure that the following fields are complete and valid in your environment:

- cluster_name
- zone
- cidr
- external_ips
- mdt_disk_type
- mdt_disk_size_gb
- ost_disk_type
- ost_disk_size_gb

Note: The rest of this blog post assumes you use the default values populated in the lustre.yaml file for the fields cluster_name and fs_name. If you change these values, make sure to carry your changes through the following instructions.

This YAML file defines the configuration for a Lustre cluster. 
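As an illustration of those minimum fields, a filled-in lustre.yaml might look like the sketch below; every value is a placeholder assumption (your zone, CIDR range, and disk sizes will differ), and the scripts’ actual defaults live in the repository’s lustre-template.yaml:

```yaml
# lustre.yaml (sketch) -- placeholder values only, not script defaults.
cluster_name: lustre          # prefix for all VM names in the cluster
fs_name: lustre               # name of the Lustre file system
zone: us-central1-a           # zone that will host the cluster
cidr: 10.20.0.0/16            # IP range for the Lustre VPC subnet
external_ips: true            # set false to keep nodes private (uses Cloud NAT)
mdt_disk_type: pd-ssd         # metadata target: low latency matters
mdt_disk_size_gb: 500
ost_disk_type: pd-standard    # object storage targets: capacity matters
ost_disk_size_gb: 1000
```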
When the configuration is deployed, it will create a Lustre cluster with the Lustre file system ready to use, including these components:

- VPC network—Network to host Lustre traffic, unless an existing VPC network such as a Shared VPC is provided.
- VPC subnet—Subnet to route Lustre traffic, unless an existing VPC subnet is provided.
- Cloud NAT—NAT device to route traffic to the internet, unless external IPs are disabled.
- Firewall rules—Rules to allow inter-node communication and SSH into the Lustre cluster.
- Lustre VMs—A set of Lustre virtual machines, created and configured to host various roles immediately as part of the deployment: the MDS (Lustre metadata and management server, which serves the independent metadata and Lustre management functionality) and the OSS (object storage server, which serves the file data in a distributed manner).

2. Deploy the Lustre cluster

Once the fields are configured to match your preferences, you can deploy and configure the entire Lustre cluster with a single command:

gcloud deployment-manager deployments create lustre --config lustre.yaml

You can monitor the progress of the deployment through the command line, or in the Deployment Manager interface. Once the deployment has completed successfully, you will see output showing that a VPC network, subnet, firewall rules, and VM instances have been created according to the configuration.

Next, SSH into the lustre-mds1 instance using either gcloud or the console SSH button. Once you log in, you may see a message indicating that the Lustre installation is still running. If you do see this message, wait until the installation is complete. (If you do not see this message, then the installation has already completed.)

3. Log in and test Lustre

Once the installation is complete, the message of the day shown when logging into an instance in the cluster indicates that the Lustre cluster is installed, and that the Lustre file system is mounted and available. You can now mount Lustre clients that have the Lustre client software installed. For example, you can test a mount from the lustre-mds1 node to verify that the Lustre file system is online. The mount command should return quickly with no output. If you experience an issue with this step, check out the Troubleshooting section of our README.md file.

You can confirm that Lustre is mounted on your client in multiple ways. One way is to check that an entry exists in the output of the mount command:

mount | grep lustre

You should see output that includes a line similar to:

10.20.0.2@tcp:/lustre on /mnt/lustre type lustre (rw)

You can also check the output of the Lustre file system utility, lfs, to ensure that the entire Lustre file system is mounted and available:

sudo lfs df

You should see output that shows the Lustre metadata target(s) (MDT), the Lustre object storage target(s) (OST), the mount point, the total file system size, and used and available storage.

Your Lustre file system is now mounted. You can test writing a file to the file system; you should see that your new file, testfile, has been created. Change the permissions for /mnt/lustre to allow non-root users to access the file system, or enable authentication in Lustre (the Lustre user/group upcall is disabled in these Lustre deployment manager scripts by default, which causes Lustre to fall back to OS authentication).

Exploring further with Lustre

Your Lustre cluster is now online and ready to host your scratch and HPC data to solve your hardest performance problems. 
Check out the README.md for even more detail and to learn how to expand your Lustre cluster by adding new OSS nodes.

Visit the Google HPC Solutions page to read about other solutions, and try combining your Lustre cluster with some of our other solutions to begin running your HPC workloads in Google Cloud. For example, combine Lustre and Slurm on GCP to create an auto-scaling cluster with access to a powerful Lustre file system. You can also learn more about HPC in the cloud during this Next ‘19 session.

Get in touch with questions and feedback

To ask questions or post customizations to the community, use the Google Cloud Lustre discussion group. To request features, provide feedback, or report bugs, use this form.
Source: Google Cloud Platform

Announcing the Cloud Healthcare API beta: Improving data access and shareability across organizations

At Google Cloud, we are focused on providing healthcare and life sciences organizations with the innovative technology needed to improve our healthcare system. Through our customers and partners, we are working to improve healthcare for patients, providers, payers, and the many organizations involved in the discovery, development, and delivery of healthcare products and services.

Today, we’re pleased to announce that our Cloud Healthcare API is now in beta. From the beginning, our primary goal with the Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data, and to better understand that data through the application of analytics and machine learning in real time, at scale.

Cloud Healthcare API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP), providing a critical bridge between existing care systems and applications hosted on Google Cloud. Using the API, customers can unlock significant new capabilities for data analysis, machine learning, and application development. These capabilities, in turn, enable the next generation of healthcare solutions.

While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers. This is why we are also excited to share how some of our newest customers and partners are leveraging Google Cloud to transform the healthcare industry. 
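As a sketch of what ingesting and managing key data looks like in practice, the beta API exposes REST resources for FHIR, HL7v2, and DICOM stores under a dataset. The project, location, and dataset names below are placeholders, and the exact v1beta1 paths should be checked against the current API reference:

```shell
# Hypothetical example: create a FHIR store inside an existing dataset.
# PROJECT, LOCATION, and DATASET are placeholders for your own resources.
PROJECT=my-project
LOCATION=us-central1
DATASET=my-dataset

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://healthcare.googleapis.com/v1beta1/projects/${PROJECT}/locations/${LOCATION}/datasets/${DATASET}/fhirStores?fhirStoreId=my-fhir-store"
```

The same dataset can also hold HL7v2 and DICOM stores, which is what lets one API bridge clinical messaging, imaging, and FHIR-based application data.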
Next week at Google Cloud Next, we’ll hear from many healthcare customers, including:

- American Cancer Society will outline how it is using Cloud ML Engine on GCP to accurately and quickly identify novel patterns in digital pathology images.
- Hunterdon Health will discuss how cloud-native endpoints like Chrome Enterprise can be deployed throughout your healthcare network to increase information access, reduce operational costs, and deliver a better patient experience.
- Stratus Medicine will review a serverless architecture for generating real-time clinical predictions using the Cloud Healthcare API to feed FHIR and DICOM data into Cloud Machine Learning Engine.
- CareCloud will discuss mapping X12 EDI transactions to FHIR as part of a broader approach to building a comprehensive clinical data warehouse.
- Kaiser Permanente will talk about how it leverages Google’s CI/CD process, API best practices, and Apigee API management to power its API-first strategy.
- LifeImage will demo how it is enabling point-of-care epidemiology and secure image sharing networks on GCP.
- iDigital will present its architecture for a zero-footprint teleradiology solution on top of the Cloud Healthcare HL7v2 and DICOM APIs.

We look forward to continuing to bring innovative products to the healthcare and life sciences space, and to partnering with organizations to improve our healthcare system. Visit our website to learn more about Google Cloud’s solutions in healthcare and life sciences.
Source: Google Cloud Platform

6 HPC must-sees at Next ‘19

High performance computing (HPC) is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations.

At Google Cloud Next ‘19, we have lots of sessions to help you understand how to use our scalable compute, networking and storage infrastructure. If you’re attending the event, here are six HPC sessions to mark on your calendar.

1. High Performance Computing on Google Cloud Platform (GCP): Deploy an HPC Cluster Now – Register here
In this session, we’ll discuss why GCP is a great platform to run HPC workloads. We’ll present best practices, architectural patterns, and how our professional services organization can help you on your journey. We’ll conclude by demoing the deployment of an autoscaling batch system in GCP.

2. HPC Partner Ecosystem – Register here
Come learn how our HPC partners focusing on job scheduling, workload management, applications, and libraries can enable you to start running HPC workloads easily and quickly on Google Cloud.

3. Technical Deep Dive Into Storage for High Performance Computing – Register here
Large-scale computing in the cloud is maturing, but HPC storage in the cloud is still in its infancy. In this session, we will discuss HPC storage in the cloud with solutions for EDA, fintech, manufacturing, media, genomics, and many more. Then, our HPC storage partner DataDirect Networks will discuss its Lustre parallel file system and other future offerings on Google Cloud.

4. How We Broke the World Record for Computing Digits of Pi (31.4 trillion!) – Register here
We calculated 31.4 trillion digits of Pi on Google Cloud—the new world record. This session will discuss the nature of the calculation, the architecture, challenges and techniques, benefits of Google Cloud, and of course the brief history of Pi computation. Along the way, you’ll learn a ton about large-scale cloud computing.

5. Performance Benchmarking on Google Cloud Platform – Register here
How do you benchmark performance in the cloud, and in particular on Google Compute Engine? We’ll use PerfKitBenchmarker to take an early look at our new C2 instances and see how they stack up against our N1 series. We’ll also provide scripts so you can benchmark systems from the comfort of your own home!

6. University Students & Researchers Push the Bounds of What is Possible With GCP – Register here
Researchers, students and developers at universities around the world are asking what’s possible—and using GCP to find out. Come learn how Google Cloud is helping researchers make new discoveries and share their insights, be it mapping the cosmos, seeking solutions to the opioid crisis, building more accessible technology to help people communicate, and more.

This is just a small sampling of the hundreds of breakout sessions we’ll be holding next week. To learn more about the event and secure your spot, check out the Google Cloud Next ‘19 website. And if you can’t make it to the show, stay tuned, because we’ll be publishing full-length recordings of every session for your viewing pleasure.
Source: Google Cloud Platform

Bringing it all together: learn GCP solutions from the sources at Next ‘19

If you need details about using a particular Google Cloud Platform (GCP) service—whether Compute Engine or Cloud Storage or BigQuery—you typically turn to the respective service’s docs. But developing cloud-based apps involves combining multiple services into a complete solution. And there are as many solutions as there are businesses using the cloud.

We understand this, and that’s why we have a team of solutions architects (SAs) who are industry veterans and experts in app design and cloud architecture. These Google devs show you how to put services together to solve your particular business need, and ultimately, create your very own solution tailored to your specific requirements.

Learn about GCP solutions at Next ’19

If you’re attending Google Cloud Next ’19, our annual cloud conference, you’ll have the opportunity to meet these SAs in person. Stop by the whiteboard booth to put faces to the names and solutions, to share ideas for topics, or even to get help with your designs and plans.

And while you’re at it, there are plenty of sessions with our SAs that you can attend. 
Here are a few to mark on your calendars, if you haven’t already:

- Automate your way to a consistent and repeatable world with CSP: Learn about a set of opinionated solutions that help speed up your creation of a solid cloud platform.
- Journey to the cloud confidently with Citrix and Google Cloud: Get practical advice on how to move to GCP using Citrix and open-source tools, with reference architectures and customer stories.
- Revolutionizing media workflows with intelligent content: See how machine learning can help add intelligence for better media workflows.
- Managing a render farm on GCP using OpenCue: Get a live demo of OpenCue, the open-source, high-performance render manager for animation industry companies.
- BigQuery ML: What’s New, and an Exploration With Booking.com on Using It to Assess Data Quality: Learn about BigQuery ML’s latest models and features, plus hear how Booking.com uses these models to assess data quality.
- Large-scale multiplayer gaming on Kubernetes: See how open-source projects built on Kubernetes can handle infrastructure for you, featuring the Open Match matchmaking and the Agones game-serving platforms.
- Marketing and creative insights from unstructured data: Cloud ML APIs: Check out examples of how customers use the Cloud Natural Language and Cloud Vision APIs to take advantage of unstructured data for marketing analytics.
- GCP for Apache Kafka Users: Stream Ingestion and Processing: View two approaches to data integration and how to use Cloud Dataflow for your data streams.

Explore solutions docs for your requirements

Along with Next sessions, you can explore the Solutions Gallery, a portal featuring more than 600 solutions documents that explain how to use GCP components to create apps that address real-world requirements. Each of our solutions dives deep to describe how the SA implemented, well, a solution to a particular customer requirement. Some of the solutions are big-picture overviews. Some list best practices. 
Many are tutorials that include code and links to GitHub repositories where you can find all the components you need to run the solution yourself.

The topics are as wide-ranging as our customers and their reasons for building cloud-based apps. Here are just a few examples:

- Is your system prepared for the unexpected? A multi-part series by SA Grace Mollison describes how to design a cloud architecture to handle disaster recovery. Check out her excellent “run-flat” analogy for hot failover.
- Want hands-on experience using machine learning for a real app? SA Lukman Ramsey created a multi-part tutorial series that explains how to use TensorFlow and GCP components to build a recommendation system.
- You already know how to back up your systems when everything is on-premises, but what do you do when your infrastructure is in the cloud? A solution by SA David Cueva Tello explains how to set up automated backups using Cloud Composer.

We invite you to explore our solutions and learn about the many ways you can use GCP to address your development needs. And keep an eye out here for information about new
Source: Google Cloud Platform

Last month today: March on the Google Cloud blog

Whew—March was a busy month around here, with a new region opening, an actual Guinness World Record, and some interesting stories of using Google Cloud Platform (GCP) for gaming and basketball data analytics. Here are the top stories that caught your interest last month.

Bulking up infrastructure

GCP’s sixth European region, in Zurich, Switzerland, opened last month. It launched with our standard set of products across compute, databases, storage, security, big data and networking, including Compute Engine, Cloud Bigtable, BigQuery and more. The Zurich region comes with three availability zones and Cloud Interconnect, our private, software-defined network to speed cloud access and data movement. See which GCP region is closest to you at GCPing.com.

On the infrastructure front, we introduced a new Cloud Storage pricing plan. The Storage Growth Plan tackles the data growth and volatility that many of our users experience. We want to keep cost unpredictability out of that equation, so this new plan lets you commit to 12-month periods of using Cloud Storage for a fixed dollar amount.

Using cloud for fun and π

We celebrated Pi Day, 3/14, here at Google Cloud with the excitement of winning a Guinness World Record for the most digits of π ever calculated: 31.4 trillion decimal places. Google Cloud developer advocate Emma Haruka Iwao used Compute Engine VMs running y-cruncher to do the calculations. In addition to this being the most π digits ever calculated, it’s the first time this record was set using the cloud. Using cloud also brings the benefit of easy sharing: you can get your hands on the digits via pi.delivery.

Also last month: the annual March Madness tournament started, and we continued our partnership to explore the NCAA’s 80-plus years’ worth of historical basketball data using Google Cloud. 
This year, we’re bringing student developers into the fold and adding a new online course, so you can learn how to use BigQuery to analyze NCAA data with SQL and build a machine learning model that makes predictions based on the historical data. We also built a public Data Studio dashboard with plenty of new insights to help you survive the madness. We’ll have bootcamps at Next ‘19 to continue the fun.

And speaking of fun and games, Google Cloud made a big splash at this year’s Game Developers Conference, where we highlighted how Stadia and Google Cloud are better together, as well as how GCP is powering popular games like Apex Legends and Tom Clancy’s The Division 2. These multiplayer games run on GCP infrastructure around the world so players can constantly access matchmaking, statistics and high-score data.

Putting tools and concepts together for an SRE service

The latest installment in our series about using Istio covers how you can bring application metrics into your reporting and in line with your site reliability engineering (SRE) practice. Since Istio integrates with Stackdriver, you can get about a dozen metrics right away without further configuration. This post covers how you might choose from those metrics as you’re setting service-level indicators (SLIs), and how you can use Stackdriver Logging and Trace to get into the details that are most relevant for your team and business.

That’s a wrap for March. We’ll see you next month—and at Next ‘19 in the meantime!
Source: Google Cloud Platform

5 HPC must-sees at Next ‘19

High performance computing (HPC) is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations.

At Google Cloud Next ‘19, we have lots of sessions to help you understand how to use our scalable compute, networking and storage infrastructure. If you’re attending the event, here are five HPC sessions to mark on your calendar.

1. High Performance Computing on Google Cloud Platform (GCP): Deploy an HPC Cluster Now – Register here

In this session, we’ll discuss why GCP is a great platform for running HPC workloads. We’ll present best practices and architectural patterns, and explain how our professional services organization can help you on your journey. We’ll conclude by demoing the deployment of an autoscaling batch system on GCP.

2. Technical Deep Dive Into Storage for High Performance Computing – Register here

Large-scale computing in the cloud is maturing, but HPC storage in the cloud is still in its infancy. In this session, we’ll discuss HPC storage in the cloud, with solutions for EDA, fintech, manufacturing, media, genomics, and many more. Then our HPC storage partner DataDirect Networks will discuss its Lustre parallel file system and other future offerings on Google Cloud.

3. How We Broke the World Record for Computing Digits of Pi (31.4 trillion!) – Register here

We calculated 31.4 trillion digits of Pi on Google Cloud—the new world record. This session will cover the nature of the calculation, the architecture, the challenges and techniques involved, the benefits of Google Cloud, and of course a brief history of Pi computation. Along the way, you’ll learn a ton about large-scale cloud computing.

4. Performance Benchmarking on Google Cloud Platform – Register here

How do you benchmark performance in the cloud, and in particular on Google Compute Engine? We’ll use PerfKitBenchmarker to take an early look at our new C2 instances and see how they stack up against our N1 series. We’ll also provide scripts so you can benchmark systems from the comfort of your own home!

5. University Students & Researchers Push the Bounds of What Is Possible With GCP – Register here

Researchers, students and developers at universities around the world are asking what’s possible—and using GCP to find out. Come learn how Google Cloud is helping researchers make new discoveries and share their insights, be it mapping the cosmos, seeking solutions to the opioid crisis, building more accessible technology to help people communicate, and more.

This is just a small sampling of the hundreds of breakout sessions we’ll be holding next week. To learn more about the event and secure your spot, check out the Google Cloud Next ‘19 website. And if you can’t make it to the show, stay tuned, because we’ll be publishing full-length recordings of every session for your viewing pleasure.
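Cloud benchmarking sessions like the one above ultimately boil down to repeated timed runs and summary statistics. As a minimal illustration (this is not PerfKitBenchmarker itself, just a sketch of the same measurement shape), here is a tiny Python harness that times a workload several times and reports the median and spread:

```python
import statistics
import time

# Minimal, illustrative benchmark harness (not PerfKitBenchmarker):
# time a workload several times and summarize the results. The sample
# workload below is an arbitrary stand-in for a real benchmark.

def benchmark(workload, runs: int = 5):
    """Run `workload` `runs` times; return per-run wall-clock seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return timings

def summarize(timings):
    """Median and standard deviation of the timings, in seconds."""
    return statistics.median(timings), statistics.stdev(timings)

if __name__ == "__main__":
    sample = lambda: sum(i * i for i in range(100_000))  # stand-in workload
    median, spread = summarize(benchmark(sample))
    print(f"median {median:.4f}s, stdev {spread:.4f}s")
```

Tools like PerfKitBenchmarker automate this pattern across machine types and clouds, which is what makes comparisons such as C2 versus N1 repeatable.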
Source: Google Cloud Platform

How Georgetown University is migrating enterprise systems to Google Cloud to transform education and research

Like many enterprises, universities need access to scalable infrastructure and technologies so they can build global connections, manage ever-expanding data needs, and facilitate interdisciplinary collaboration. And many are turning to the cloud to help them do exactly that.

Today, we’re proud to announce that Georgetown is expanding its work with Google Cloud to include migrating on-site enterprise systems to Google Cloud Platform (GCP) within one year. By shifting its workloads to GCP, the university will implement an AI-first and cloud-first strategy that will help it optimize operations and accelerate research. Judd Nicholson, Georgetown’s Vice President and Chief Information Officer, says, “The relationship with Google Cloud is a key step in Georgetown’s strategy to continuously modernize its information technology infrastructure to support the evolving needs of our students, faculty and researchers.”

Georgetown was an early adopter of cloud solutions and uses G Suite to collaborate and communicate daily across campus. By taking this next step, Georgetown will digitally transform its current technology infrastructure to better support teaching, learning, and research.

“This collaboration will accelerate Georgetown’s ability to provide advanced high-performance computing and sustainable storage resources to its research communities,” says Billy Jack, Georgetown’s Vice Provost for Research. “It will also serve as a gateway to creating a university of the third millennium—a learning and research institution that embraces data and technological literacy and analytical proficiency as one of the foundations of modern scholarship and inquiry across all disciplines.”

Georgetown will benefit from its expanded work with Google Cloud in a number of key ways. By shifting to Google data centers, which are powered entirely by renewable energy, Georgetown will deepen its commitment to sustainability. “Our work with Google Cloud to accelerate Georgetown’s continued transition to the cloud and away from its dependence on an on-premises data center will support our sustainability efforts,” says Nicholson. Using Google Cloud’s advanced technologies in areas such as big data analytics and AI, Georgetown can accelerate research in key areas for positive social impact—medicine, the life and climate sciences, global development, and many more. And moving all of its on-site enterprise systems to Google Cloud will help Georgetown reduce costs and improve efficiencies by creating synergies across all of its services.

You can read our case study to learn more about our work with Georgetown. For more information on Google Cloud education solutions, visit our website.
Source: Google Cloud Platform