Pi in the sky: Calculating a record-breaking 31.4 trillion digits of Archimedes’ constant on Google Cloud

Ever since the ancient Babylonians, people have been calculating the digits of π, the ratio of the circumference of a circle to its diameter that starts as 3.1415… and goes on forever. In honor of Pi Day, today March 14 (represented as 3/14 in many parts of the world), we’re excited to announce that we successfully computed π to 31.4 trillion decimal places—31,415,926,535,897 to be exact, or π × 10¹³. This broke a GUINNESS WORLD RECORDS™ title, and marked the first time the record was broken using the cloud, proving that Google Cloud’s infrastructure works reliably for long, compute-heavy tasks.

We achieved this feat using y-cruncher, a Pi-benchmark program developed by Alexander J. Yee, running on a Google Compute Engine virtual machine cluster. 31.4 trillion digits is almost 9 trillion digits more than the previous world record set in November 2016 by Peter Trueb. Yee independently verified the calculation using Bellard’s formula and the BBP formula. Here are the last 97 digits of the result:

6394399712 5311093276 9814355656 1840037499 3573460992 1433955296 8972122477 1577728930 8427323262 4739940

You can read more details of this record from y-cruncher’s perspective in Yee’s report.

A constant race

Granted, most scientific applications don’t need π beyond a few hundred digits, but that isn’t stopping anyone; starting in 2009, engineers have used customized personal computers to calculate trillions of digits of π. In fact, the race to calculate more π digits has only accelerated as of late, with computer scientists using it as a way to test supercomputers, and mathematicians competing against one another. However, the complexity of the Chudnovsky formula—a common algorithm for computing π—is O(n (log n)³). In layman’s terms, this means that the time and resources necessary to calculate digits increase more rapidly than the number of digits themselves.
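As a rough illustration of the Chudnovsky series (not the distributed, disk-backed arithmetic that y-cruncher actually uses for record attempts), here is a direct summation sketched with Python's standard-library decimal module. Each term of the series contributes roughly 14 correct digits:

```python
import math
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Sum the Chudnovsky series to `digits` decimal precision.

    Each term adds ~14.18 correct digits. Real record computations use
    binary splitting rather than this simple O(n^2) direct summation.
    """
    getcontext().prec = digits + 10  # extra guard digits
    total = Decimal(0)
    for k in range(digits // 14 + 2):
        numerator = Decimal(math.factorial(6 * k) * (13591409 + 545140134 * k))
        denominator = (Decimal(math.factorial(3 * k))
                       * Decimal(math.factorial(k)) ** 3
                       * Decimal(-262537412640768000) ** k)
        total += numerator / denominator
    pi = 426880 * Decimal(10005).sqrt() / total
    getcontext().prec = digits
    return +pi  # unary plus rounds to the requested precision

print(str(chudnovsky_pi(50))[:20])  # → 3.141592653589793238
```

This sketch is fine for a few thousand digits; the O(n (log n)³) cost of the record-scale computation comes from doing this arithmetic with FFT-based multiplication and binary splitting.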
Furthermore, it gets harder to survive a potential hardware outage or failure as the computation goes on.

For our π calculation, we decided to go to the cloud. Using Compute Engine, Google Cloud’s high-performance infrastructure-as-a-service offering, has a number of benefits over using dedicated physical machines. First, Compute Engine’s live migration feature lets your application continue running while Google takes care of the heavy lifting needed to keep our infrastructure up to date. We ran 25 nodes for 111.8 days, or 2,795 machine-days (7.6 machine-years), during which time Google Cloud performed thousands of live migrations uninterrupted, with no impact on the calculation process.

Running in the cloud also let us publish the computed digits entirely as disk snapshots. In less than an hour and for as little as $40/day, you can copy the snapshots, work on the results, and dispose of the computation resources. Before cloud, the only feasible way to distribute such a large dataset was to ship physical hard drives.

Then there are the general benefits of running in the cloud: availability of a broad selection of hardware, including the latest Intel Skylake processors with AVX-512 support. You can scale your instances up and down on demand, and kill them off when you are done, paying only for what you used.

Here are additional details about the program.

Figure: An overview of our π cluster architecture

Cluster design

We selected an n1-megamem-96 instance for the main computing node. It was the biggest virtual machine type available on Compute Engine that provided Intel Skylake processors at the beginning of the project. The Skylake generation of Intel processors supports AVX-512, a set of 512-bit SIMD extensions that can perform floating-point operations on 512-bit data, or eight double-precision floating-point numbers, at once. Currently, each Compute Engine virtual machine can mount up to 64 TB of Persistent Disks.
We used the iSCSI protocol to remotely attach additional Persistent Disks for extra capacity. The number of nodes was decided based on y-cruncher’s disk benchmark performance. We selected n1-standard-16 for the iSCSI target machines to ensure sufficient bandwidth between the computing node and the storage, as network egress bandwidth and Persistent Disk throughput are determined by the number of vCPU cores.

How to get your hands on the digits

Our pi.delivery service provides a REST API to access the digits on the web. It also has a couple of fun experiments that let you visualize and listen to π.

To make it easier for you to use these digits in your own work, we have made the resulting π digits available as snapshots on Google Cloud Platform. Each snapshot contains a single text file with the decimal digits, and you can create a new Persistent Disk based on these images. We provide both XFS and NTFS disk formats to accommodate Linux and Windows operating systems, respectively. The snapshots are located in the us multi-region.

You need to join the pi-31415926535897 Google Group to gain access. It will cost approximately $40 per day to keep the cloned disk in one of the us-central1, us-west1, or us-east1 regions in your project. We will keep the snapshots until March 14, 2020.
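Once a disk restored from a snapshot is mounted, the plain-text file gives you random access to any digit by byte offset. Here is a minimal Python sketch; it assumes the file begins with "3." followed by the decimal digits, which is an assumption about the snapshot's exact layout rather than something the snapshot documentation spells out:

```python
def pi_digit_at(path, n):
    """Return the n-th digit after the decimal point (1-indexed),
    assuming the file stores plain ASCII text beginning with "3."."""
    with open(path, "rb") as f:
        f.seek(n + 1)  # skip the 2 bytes of "3.", minus 1 for 1-indexing
        return f.read(1).decode("ascii")
```

For example, `pi_digit_at("/mnt/pi/digits.txt", 31_415_926_535_897)` (with a hypothetical mount path) would fetch the final digit of the record without reading the tens of terabytes before it.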
The snapshots are available at the following locations:

XFS: https://www.googleapis.com/compute/v1/projects/pi-31415926535897/global/snapshots/decimal-digits-xfs
NTFS: https://www.googleapis.com/compute/v1/projects/pi-31415926535897/global/snapshots/decimal-digits-ntfs

To create a new disk named pi314-decimal-digits-xfs in your project based on the XFS snapshot, for example, type the following command:

gcloud compute disks create pi314-decimal-digits-xfs --source-snapshot https://www.googleapis.com/compute/v1/projects/pi-31415926535897/global/snapshots/decimal-digits-xfs

Remember to delete the disk once you’re done with it to avoid unexpected charges:

gcloud compute disks delete pi314-decimal-digits-xfs

Please refer to the restoring a non-boot disk snapshot section and the gcloud compute disks create command help for more instructions on how to use these images.

Coming full circle

The world of math and sciences is full of records just waiting to be broken. We had a great time calculating 31.4 trillion π digits, and look forward to sinking our teeth into other great challenges. Until then, let’s celebrate the day with fun experiments. Our Pi Day Celebration cloud experiment on our Showcase experiments website lets you generate a custom art piece from digits of π that you pick. And if you’re going to Google Cloud Next ’19 in San Francisco, come to our deep-dive technical session with Alexander Yee to discuss details and insights from this experiment, interact with the Showcase experiment, and watch a live experiment with the π digits inside the DevZone.
Quelle: Google Cloud Platform

Release with confidence: How testing and CI/CD can keep bugs out of production

With today’s dueling demands to iterate faster while keeping quality standards high, minimizing both the frequency and severity of bugs in code is no easy task. This is doubly true in serverless environments, where lightweight code bases and fully managed architectures enable developers to iterate more rapidly than ever before. Thorough testing is an effective method of finding potential bugs and protecting against errors in production that can have real business impact.

Testing can be something of a double-edged sword, however: it’s a critical part of a successful launch, but it can easily take developers away from other tasks. Therefore, it’s important to know the types of testing available, and which ones to use for your specific needs. While there is no shortage of ways to test your serverless applications, all of them come with trade-offs around speed, cost, accuracy, and scope. Which combination works best for you will depend on variables like how critical, long-lasting, and well-maintained your code is. Code that is critical and reused often requires in-depth testing with a wide range of scopes, while less important, non-reused code can often get by with fewer, higher-level tests.

In the next couple of posts, we’ll look at testing and other important strategies that help minimize the frequency of bugs in production serverless deployments and reduce the severity of those that inevitably sneak past your test suite. We’ll also take a look at some example code that was developed for Cloud Functions, Google Cloud’s function-based serverless compute platform.
This first post will discuss two important strategies for minimizing the frequency of bugs in production: testing and CI/CD (continuous integration and continuous deployment). It will cover general testing techniques, followed by examples of how to apply them with Cloud Functions.

Keeping tests real

Testing your functions locally and on a CI/CD machine is a good defense against most bugs, but it won’t catch everything. For example, it won’t identify issues with environment configuration or external dependencies that could impact your production deployment. To get over this hurdle, we need an environment to test in that has all the functionality of a production Cloud Functions environment, but none of the associated risk should the environment get corrupted. To do this, we can set up a test—or canary—environment that resides somewhere between a local machine and production and replicates the production environment. One common approach is to use a separate Google Cloud project as a canary environment.

Testing 101

Once our canary Cloud Functions environment is set up, we can start to talk about the three primary testing types that we’ll use: unit tests, integration tests, and system tests. Let’s look at each type individually, stepping up from the easiest to the most involved.

Lightweight: unit tests

Perhaps the easiest, quickest tests you can run are unit tests. Unit tests focus on a single feature and confirm that things work as expected. They have a few great things going for them, but are generally limited in their scope and the types of issues they identify. Unit tests use mocking frameworks to fake external dependencies. For example, let’s say you have a feature that calls an API, the API returns a certain response, and then the feature does something based on that response. Unit testing takes that API out of the equation.
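The post's own samples are in Node.js (using proxyquire and sinon); as an illustration of the same mocking idea, here is a sketch in Python using the standard library's unittest.mock. All names here are hypothetical, not the post's actual sample code:

```python
from unittest import mock

def create_bucket(storage_client, name):
    """Create a storage bucket named `name`.

    `storage_client` stands in for a real cloud storage client; in a
    unit test we pass a mock instead of a real client, so no network
    calls are made and no billed resources are used.
    """
    if not name:
        raise ValueError("bucket name is required")
    return storage_client.create_bucket(name)

def test_create_bucket_calls_api_with_name():
    # The mock replaces the external dependency entirely and returns a
    # canned value, so the test only checks our feature's own behavior.
    fake_client = mock.Mock()
    fake_client.create_bucket.return_value = "bucket-handle"

    result = create_bucket(fake_client, "my-test-bucket")

    fake_client.create_bucket.assert_called_once_with("my-test-bucket")
    assert result == "bucket-handle"
```

Note that the test verifies both the call made to the dependency and the handling of its pre-defined response, without ever touching a real API.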
The mocking framework returns a pre-defined response—what you would expect the API to return if it were working properly, for example—and simply makes sure that the feature itself behaves how we think it should whenever it gets that response.

Unit tests at a glance:

- Are fast and cheap to run, since they rarely require billed cloud resources
- Confirm that the details of your code work as expected. For example, they’re great for edge-case checking and other similar tests.
- Are useful for investigating known issues, but not great at identifying new ones
- Have no reliance on external dependencies (like libraries, APIs, etc.). Of course, this means they also can’t be used to verify those dependencies.

Let’s take a quick look at an example. First, here is a sample HTTP function that creates a Cloud Storage bucket based on the name parameter in the request body. And here is a very basic unit test for our function. This test creates a mock version of the @google-cloud/storage library using proxyquire and sinon. It then checks that the mock library’s createBucket function is being called with the correct arguments.

Middleweight: integration tests

Stepping up a bit from unit tests are integration tests. As the name suggests, integration tests verify that parts of your code fit together as you expect. Integration tests can use a mocking framework, as we described for unit testing, or can rely on real external dependencies. Using a mocking framework is quicker and cheaper, while bringing in external dependencies provides a more robust test. As a rule of thumb, we recommend mocking any dependencies that are slow (more than one second) and/or expensive. This lets these tests run quickly and cheaply.

Integration tests at a glance:

- Balance problem detection and isolation. They are large enough in scope to detect some unanticipated bugs, but can still be run relatively quickly
- May require small amounts of billed resources, depending on how you run your tests.
For example, if a test run depends on actual billed resources, then those runs will cost money.

Here is an integration test for our sample function. This test sends an HTTP request to the function and checks that it actually creates a Cloud Storage bucket with the correct name. For integration tests, the value of BASE_URL should point to a version of the function running locally on a developer’s machine (such as http://localhost:8080).

Heavyweight: system tests

System tests broaden the scope to verify that your code works as a system. To that end, system tests rely heavily on external dependencies—making these tests both slower and more expensive. One important thing to keep in mind with system tests is that state matters, and it may introduce consistency or shared-resource issues. For example, if you run multiple tests at the same time and your test tries to create a resource that already exists (or delete a resource that doesn’t exist), your test results may become flaky.

System tests at a glance:

- Since you’re directing traffic at an actual cloud deployment, system tests can require moderate amounts of billed resources.
- System tests provide good bug detection. They can even catch unanticipated bugs and bugs outside your codebase, such as in your dependencies or cloud deployment configuration.
- Since the scope of system tests is so large, they aren’t as good at isolating problems and their root causes as the other types of tests we’ve discussed.

Here is a system test for our sample function. Like our integration test, it sends an HTTP request to the newBucket Cloud Function and checks to make sure the correct bucket was created. If you look closely, you’ll notice that this test is exactly the same as the integration test.
In fact, the only difference is that the BASE_URL variable is set so that the test points at a deployed Cloud Function instead of a locally hosted one. Though this trick is often specific to HTTP-triggered functions, reusing integration test code in system tests (and vice versa) can help reduce the maintenance burden created by your tests.

Other testing options

Let’s take a quick look at some other common types of testing, and how you can best use them with Cloud Functions.

Static tests

Static tests verify that your code follows language and style conventions and dependency best practices. While they are relatively simple to run, one major limitation you have to account for is their narrow focus. Many static test options are free to install and easy to use. Linters (such as prettier for Node.js and pylint for Python) enforce style conventions, while dependency tools (such as Snyk for Node.js) check for dependency issues.

Load tests

Load tests involve creating vast amounts of traffic and directing it at your app to make sure it can handle real-world traffic spikes. They verify that the entire end-to-end system—including non-autoscaled components—is capable of handling a specified request load, which is usually a multiple of the peak number of simultaneous users you expect. Load tests can be expensive, since they require lots of billed resources to run, and slow, due to the external dependencies they rely on. On the plus side, many of the actual testing tools are free, including Apache Bench (“ab” on most Mac and Linux systems), Apache JMeter, and Nordstrom’s serverless-artillery project.

Security tests

Security tests verify that code and dependencies can handle potentially malicious input, and can be part of your unit, integration, system, or static testing. Beware: security tests have the potential to damage their target app environment. For example, a testing tool may attempt to drop a database or otherwise compromise the resources in its environment.
The lesson here is: make sure to use a test or canary environment unless you are 100% sure the tool in question won’t hurt your production environment. There are many free security testing options out there, including Zed Attack Proxy, Snyk.io, the Big List of Naughty Strings, and oss-fuzz, just to name a few. However, no automated security testing tool is perfect. If you are serious about security, hire a security consultant.

CI/CD FTW

At the beginning of this post, we mentioned two ways to minimize the frequency of bugs: testing and CI/CD. Now that we’ve covered testing, let’s take a look at how continuous integration and continuous deployment can provide an additional layer of defense against bugs in production. The motivation for CI/CD is fairly straightforward. If you’re a developer, version control—whether it’s git branches or another system—is your source of truth. At the same time, code for Cloud Functions has to be tested and then redeployed manually. This presents no shortage of potential issues.

CI/CD systems automate this process, letting you automatically mirror any changes in version control to your Cloud Functions deployments. CI/CD systems detect code changes using hooks in version control systems that are triggered whenever new code versions are received. These systems can then invoke language-specific command-line tools to run your tests, followed by a call to gcloud to automatically deploy any code changes to Cloud Functions. There are many different CI/CD options available, including Google’s own Cloud Build, which natively integrates with GCP and source repositories. A basic CI/CD pipeline for Cloud Functions is fairly simple to set up and deploy with Cloud Build—see this page for more details.

In conclusion

Writing a thorough and comprehensive test suite, running it in a realistic “canary” environment, and automating your deployment process using CI/CD tools are techniques that can help you reduce your production bug rate.
When used together, they can significantly increase the reliability and availability of your services while decreasing the frequency of buggy code and its resulting negative business impact. However, as we cautioned at the beginning, testing simply can’t catch every bug before it hits production. In our next post, we’ll discuss how to minimize the business impact of bugs that do make their way into your Cloud Functions-based applications, using monitoring and in-production debugging techniques.

Help stop data leaks with the Forseti External Project Access Scanner

Editor’s note: This is the second post in a series about Forseti Security, an open-source security toolkit for Google Cloud Platform (GCP) environments. In our last post, ClearDATA told us about a serverless alternative to the usual way of deploying Forseti in a dedicated VM. In this post, we learn about Forseti’s new External Project Access Scanner.

With data breaches and leaks a common headline, cloud data security is a constant concern for organizations today. But securing cloud-based data is no easy feat. In particular, it can be hard to identify and secure the routes by which data can leave the organization—so-called data exfiltration.

Consider the following scenario: a Google Cloud Platform (GCP) user has permissions in projects across different organizations (the organization being the root node in a GCP resource hierarchy). As a member of Organization A, they have permissions in a project under Organization A’s GCP organization node. This user also has permissions in a project under Organization B’s GCP organization node. However, nowhere in Organization A’s Cloud Identity and Access Management (IAM) console does it indicate that the user has permissions to a project in Organization B. There is also no evidence of this in Organization A’s G Suite admin console, so the user can move data between organizations virtually unnoticed. This kind of exfiltration vector is difficult to detect. Fortunately, the Forseti Security toolkit includes an External Project Access Scanner that can help.

What does the Forseti scanner do?

In GCP, the best practice is to use service accounts to perform actions where a GCP user isn’t directly involved. For example, if an application in a VM needs to connect to Google Cloud Storage, the application uses a service account for that interaction. Following this best practice, Forseti also uses service accounts to make API calls when it scans for permissions. Each project in GCP has an ancestry known as a resource hierarchy.
This ancestry always starts at an organization node (e.g., Organization A). Under the organization there can be zero or more folders, and a GCP project may be a child of either a folder or the organization itself.

The challenge here is that a service account only has permissions in the organization where Forseti is deployed. In other words, if Forseti is deployed in Organization A, it can’t see what projects a user has access to in Organization B. This is where the concept of “delegated credentials” becomes incredibly useful. Delegated credentials allow a service account to temporarily act as a user. After compiling a list of users in the organization, the service account impersonates each user with these delegated credentials. The scanner then obtains the list of projects to which each user has access, regardless of the organization node. Having the list of projects, and still using each user’s delegated credentials, the scanner obtains the ancestry of each project.

This scanner is configured via whitelist rules (details discussed later). When you first deploy Forseti, the only rule that exists permits users to have access to projects in the organization where Forseti is deployed.
In other words, if Forseti is deployed in a project in Organization A, then users in Organization A may have access to projects in Organization A and only Organization A. A violation occurs when none of the ancestors of a project are whitelisted in the External Project Access Scanner rules.

To sum up the operation of the External Project Access Scanner, it:

- Obtains a list of all the users in a GCP organization
- For each user:
  - Obtains delegated credentials for that user
  - Obtains a list of projects to which the user has access
  - Iterates over each project, obtaining the project’s ancestry
- Determines which ancestries are in violation of the whitelist rules
- Reports the violations

How to configure and run the External Project Access Scanner

The first step is to install and configure Forseti; you can find some great instructions on forsetisecurity.org. Then, you need to configure your whitelist rules. As mentioned previously, the External Project Access Scanner is configured by whitelist rules in the external_project_access_rules.yaml file. The first time you open this file, there’s only one entry, which whitelists the organization in which you’ve deployed Forseti.

A resource in GCP is identified by the resource type/resource number ID. Each rule may list multiple type/ID pairs as long as they are organization or folder types. Once the desired rules are in place, you can run the scanner. At this point, it is important to note that the scanner does not run in a cron job like the other Forseti scanners, but must be invoked manually. This is because, depending on the size of the organization, this scanner has the potential to execute for a long time. Remember that the scanner iterates over every user in an organization and calls the GCP API to obtain a list of all projects. Then, for each project, the scanner obtains the ancestry, again via the API.
This can amount to a lot of API calls that take a long time to execute. After selecting the Forseti model, you can run the scanner via the CLI.

When the scanner completes, it reports violations in up to three locations:

- Forseti’s Cloud SQL database, in the violations table
- A GCS bucket in the project where Forseti is deployed
- An e-mail notification, if you configured the Forseti server to send one

The violation data itself is worth discussing. Violation data takes the form of a JSON string. A violation entry is generated for each project (full_name) and per user (member) where the project’s ancestry is in violation. The rule_ancestors field lists all the ancestors that were listed in an External Project Access Scanner rule.

Future work

With the External Project Access Scanner, you can now identify projects in organizations or folders that aren’t whitelisted by the scanner rules. As of Forseti v2.9.0, the whitelist rules apply to all users in an organization. This means that all users in an organization may have the ability to access projects in another organization if such a rule existed. Going forward, one improvement would be to enhance the rule definition to allow each rule to be applied to specific users or groups. Additionally, the External Project Access Scanner returns a violation regardless of the permission level a user has on a project in another organization. Whether the user has viewer, editor, or project owner roles, the scanner reports a violation all the same. The rules could be further improved by allowing the specification of an allowed permission level for each whitelisted organization or folder.

Conclusion

Migrating your workloads to the cloud brings increased flexibility, but also an expanded threat domain. Thankfully, tools like Forseti can greatly mitigate that risk, with a powerful suite of security analysis, notification, and enforcement tools for GCP.
When trying to secure data in the cloud, the External Project Access Scanner affords insight into an often-overlooked data exfiltration path. To get started with Forseti and the External Project Access Scanner, visit forsetisecurity.org.
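As a rough illustration of the violation check described above, here is a minimal sketch in plain Python. The rule and ancestry shapes are hypothetical simplifications for illustration, not Forseti's actual data model:

```python
def find_violations(project_ancestries, whitelist):
    """Return (user, project) pairs whose ancestry contains no
    whitelisted resource.

    project_ancestries: dict mapping (user, project_id) to a list of
        ancestor IDs, e.g. ["folders/56", "organizations/1234"].
    whitelist: set of allowed ancestor IDs from the rule file,
        e.g. {"organizations/1234"}.
    """
    violations = []
    for (user, project_id), ancestry in project_ancestries.items():
        # A project is compliant if ANY of its ancestors is whitelisted;
        # a violation occurs only when none of them are.
        if not any(ancestor in whitelist for ancestor in ancestry):
            violations.append((user, project_id))
    return violations

# Example: the user also holds a project under a foreign organization.
ancestries = {
    ("alice@example.com", "proj-a"): ["folders/56", "organizations/1234"],
    ("alice@example.com", "proj-b"): ["organizations/9876"],
}
print(find_violations(ancestries, {"organizations/1234"}))
# proj-b is reported: none of its ancestors appear in the whitelist.
```

The expensive part in practice is not this check but gathering the ancestries, which is why the real scanner's per-user, per-project API calls can run for a long time.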

Cloud AI helps you train and serve TensorFlow TFX pipelines seamlessly and at scale

Last week, at the TensorFlow Dev Summit, the TensorFlow team released new and updated components that integrate into the open-source TFX platform (TensorFlow Extended). TFX components are a subset of the tools used inside Google to power hundreds of teams’ wide-ranging machine learning applications. They address critical challenges to successful deployment of machine learning (ML) applications in production, such as:

- The prevention of training-versus-serving skew
- Input data validation and quality checks
- Visualization of model performance on multiple slices of data

A TFX pipeline is a sequence of components that implements an ML pipeline specifically designed for scalable, high-performance machine learning tasks. TFX pipelines support modeling, training, serving/inference, and managing deployments to online, native mobile, and even JavaScript targets. In this post, we‘ll explain how Google Cloud customers can use the TFX platform for their own ML applications, and deploy them at scale.

Cloud Dataflow as a serverless autoscaling execution engine for (Apache Beam-based) TFX components

The TensorFlow team authored TFX components using Apache Beam for distributed processing. You can run Beam natively on Google Cloud with Cloud Dataflow, a seamless autoscaling runtime that gives you access to large amounts of compute capability on demand. Beam can also run in many other execution environments, including Apache Flink, both on-premises and in multi-cloud mode. When you run Beam pipelines on Cloud Dataflow—the execution environment they were designed for—you can access advanced optimization features such as Dataflow Shuffle, which groups and joins datasets larger than 200 terabytes.
The same team that designed and built MapReduce and Google Flume also created third-generation data runtime innovations like dynamic work rebalancing, batch and streaming unification, and runner-agnostic abstractions that exist today in Apache Beam.

Kubeflow Pipelines makes it easy to author, deploy, and manage TFX workflows

Kubeflow Pipelines, part of the popular Kubeflow open-source project, helps you author, deploy, and manage TFX workflows on Google Cloud. You can easily deploy Kubeflow on Google Kubernetes Engine (GKE) via the one-click deploy process. It automatically configures and runs essential backend services, such as the orchestration service for workflows, and optionally the metadata backend that tracks information relevant to workflow runs and the corresponding artifacts that are consumed and produced. GKE provides essential enterprise capabilities for access control and security, as well as tooling for monitoring and metering.

Thus, Google Cloud makes it easy for you to execute TFX workflows at considerable scale using:

- Distributed model training and scalable model serving on Cloud ML Engine
- TFX component execution at scale on Cloud Dataflow
- Workflow and metadata orchestration and management with Kubeflow Pipelines on GKE

Figure 1: TFX workflow running in Kubeflow Pipelines

The Kubeflow Pipelines UI shown in the above diagram makes it easy to visualize and track all executions. For deeper analysis of the metadata about component runs and artifacts, you can host a Jupyter notebook in the Kubeflow cluster and query the metadata backend directly. You can refer to this sample notebook for more details. At Google Cloud, we work to empower our customers with the same set of tools and technologies that we use internally across many Google businesses to build sophisticated ML workflows.
To learn more about using TFX, please check out the TFX user guide, or learn how to integrate TFX pipelines into your existing Apache Beam workflows in this video.

Acknowledgments: Sam McVeety, Clemens Mewald, and Ajay Gopinathan also contributed to this post.

New GCP region in Zurich: expanding our support for Swiss and European businesses

Our Google Cloud Platform region in Zurich is now open. Our sixth European region and nineteenth worldwide gives businesses in Switzerland more options for accessing their data and workloads, with even lower latency.

A cloud for Switzerland

The Zurich GCP region (europe-west6) is ideally placed to support businesses in Switzerland and across Europe. With three availability zones, it enables high-availability workloads. Hybrid cloud customers can seamlessly integrate new and existing deployments with the help of our partner ecosystem and two dedicated Interconnect points of presence.

Thanks to the new Zurich region, businesses in Switzerland can access GCP products and services even faster. Hosting applications in the new region can improve latency for end users in Switzerland by up to 10 ms. On GCPing.com, you can check the latency from your own location to the Zurich region. The Zurich region launches with our comprehensive standard portfolio, including products such as Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

To take advantage of many GCP services, you can bring your data into the cloud using Transfer Appliance, a high-capacity server for transferring large volumes of data quickly and securely, which is now also available in the Swiss market. We recommend Transfer Appliance for moving large datasets whose upload would otherwise take more than a week. You can request a Transfer Appliance here.

The new region for Switzerland also features Cloud Interconnect, our private, software-defined network that ensures fast and reliable connectivity between the individual regions around the world.
Over the Google network, you can use services that are not yet available in the Zurich region and combine them with other GCP services deployed worldwide. This lets you quickly deploy and scale products built for businesses with a global presence across multiple regions.

Swiss customers say “Grüezi” to Google Cloud

We launched the new region with a special event in Zurich attended by more than 800 business decision-makers and developers. Urs Hölzle, Senior Vice President, Technical Infrastructure, ceremonially opened the region. Representatives of pharmaceutical, manufacturing, and financial companies from Switzerland and across Europe learned about Google Cloud and the benefits of a local region for their cloud operations.

What our customers say about the new region

“Swiss-AS focuses its business exclusively on supporting AMOS, the leading maintenance software for aviation. Today, with the help of Google Cloud Platform, we deliver our AMOS cloud service in dedicated cloud environments worldwide. Thanks to GCP’s local presence in Zurich, our services move even closer to our AMOS customers in the German-speaking market.” – Alexis Rapior, Hosting Team, Swiss AviationSoftware Ltd.

“The new Swiss cloud region opens up exciting possibilities for the healthcare sector: Balgrist University Hospital can now adopt new technologies for real-time processing. Collaboration in medical research and development will also become easier and more effective.” – Thomas Huggler, Managing Director, Universitätsklinik Balgrist

“We are delighted about the launch of Google Cloud Platform in Switzerland. With Google Cloud, we can focus on developing innovative software features for our customers.
Zudem bietet sie uns die Möglichkeit, neue Umgebungen innerhalb von Sekunden einzurichten.“– Marc Loosli, Leiter Innovation LAB & Co-Founder,NeXora AG (Teil der Quickline Group)„Belimo ist der weltweit führende Hersteller von Antriebselementen, Ventilen und Sensoren für Heizungs-, Lüftungs- und Klimaanlagen (HLK). In jüngster Zeit haben es uns IoT-Technologien erlaubt, HLK-Systeme anzubieten, die mithilfe von mit der Cloud verbundenen Geräten gesteuert werden. Dies bietet zusätzlichen Komfort, Energieeffizienz, Sicherheit sowie eine einfache Installation und Wartung. Belimo hat sich für GCP entschieden, weil wir bei unseren globalen Cloud Services auf hohe Verfügbarkeit, zuverlässige Performance und Skalierbarkeit setzen. Die Spitzentechnologie und die Tools von Google Cloud helfen unseren Teams, sich auf das Wesentliche zu konzentrieren.“– Peter Schmidlin, Chief Innovation Officer,Belimo Automation AGWas unsere Partner zur neuen Region sagen„Wabion ist mehr als nur begeistert, dass Google Cloud in die Schweiz kommt. Offen gesagt: Das ist das Beste, was dem Schweizer Cloud-Markt passieren kann. Wir haben Kunden, die sehr an Googles Innovationen interessiert sind und bisher nicht migriert sind, weil es bislang keine Schweizer Region gab. Die neue Region Zürich schließt diese Lücke und eröffnet Wabion großartige Möglichkeiten, Kunden auf ihrer Reise in die Google Cloud zu unterstützen.“– Michael Gomez, Geschäftsführer,Wabion SchweizWas kommt als Nächstes?Weitere Einzelheiten über die neue Region finden Sie hier. Dort haben Sie auch Zugriff auf kostenloses Informationsmaterial, Whitepapers, die On-Demand-Videoserie „Cloud On-Air“ und vieles mehr. Wenn Sie noch nicht mit GCP vertraut sind, sehen Sie sich dieBest Practices der Region für Compute Engine an und nehmen Sie Kontakt mit uns auf, um noch heute in Google Cloud einzusteigen.Noch in diesem Jahr werden wir weitere GCP-Regionen eröffnen, beginnend mit Osaka, Japan. 
Auf unserer Standort-Seite finden Sie aktuelle Informationen zur Verfügbarkeit weiterer Services und Regionen.
Quelle: Google Cloud Platform

Exploring container security: four takeaways from Container Security Summit 2019

Editor’s note: On February 20, we hosted the fourth annual Container Security Summit at Google’s campus in Seattle. This event aims to help security professionals increase the security of their container deployments and apply the latest in container security research. Here’s what we learned.

Container security is a hot topic, but it can be intimidating. Container developers and operators don’t usually spend their days studying security exploits and threat analysis; likewise, container architectures and components can feel foreign to the security team. Dev, ops, and security teams all want their workloads to be more secure (and make those pesky containers actually “contain”!); the challenge is making those teams more connected to bring container security to everyone. The theme of the 2019 Container Security Summit was just that: “More contained. More secure. More connected.” Here are four topics that led the day at the summit.

Rootless builds are here. Why aren’t you using one?

To improve the security of build processes and the isolation of running workloads, container builds should be hermetic and reproducible. A build is hermetic if no data leaks between individual builds (i.e., one build does not impact other builds), and reproducible if it’s repeatable from source to binary (i.e., you get the same output every time).

But even if your builds are hermetic and reproducible, unnecessarily running processes as root remains a potential security risk. In fact, that’s what attackers look for—how to gain privileged access to your infrastructure. “The root of all evil is unnecessarily running processes or containers as root,” said Andrew Martin, co-founder of ControlPlane, during his talk. That includes container runtimes and build tools running as root.

A rootless container build doesn’t require a daemon running on the host—or ideally any root privileges at all—to build the container image.
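Martin’s point applies at runtime as well as at build time. As a minimal, hedged sketch (an illustration, not something presented at the summit; the script and its messages are hypothetical), a container entrypoint can detect a privileged launch and refuse to proceed:

```python
import os
import sys


def running_as_root() -> bool:
    """Return True if the current process has effective UID 0 (root)."""
    return os.geteuid() == 0


def require_non_root() -> None:
    """Abort startup if the container was launched as root.

    A production entrypoint would call this before doing any real work,
    so an accidentally privileged run fails fast and visibly.
    """
    if running_as_root():
        sys.exit("refusing to start: this container must not run as root")


if __name__ == "__main__":
    # Demo only: warn instead of exiting so the script is safe to run anywhere.
    if running_as_root():
        print("warning: running as root; a real entrypoint would refuse to start")
    else:
        print("starting as unprivileged uid", os.geteuid())
```

Combined with a `USER` directive in the Dockerfile, a guard like this turns an accidental privileged run into a fast, visible failure instead of a silent risk.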
Rootless builds are particularly useful when building images for untrusted workloads, such as those that come from a third party or an open-source repository and can’t be independently verified.

So, where can you get this magic? Fortunately, there are many options, including img, buildah, umoci, Kaniko, and many more! Note that with some of these, rootless container builds are optional; some still require a daemon to be run as root, or use root inside the container. It’s still hard to get a completely rootless, unprivileged build today. Kaniko, for example, determines the minimum permissions by what’s needed to unpack your base image and execute the RUN commands. (If you’d rather not build the image yourself and trust Google’s security model, Cloud Build is a simple answer.)

Thanks to the runc work that’s been ported upstream, “everybody can achieve rootlessness today,” Martin added. “No project has realised the fully-untrusted dream yet, but I expect us to reach utopia in 2019.”

The Kubernetes community showed it’s equipped to deal with vulnerabilities over the past year

Just like any other software, Kubernetes isn’t impervious to attacks. In the past year, a handful of severe vulnerabilities surfaced, including CVE-2017-1002101, which allowed containers with subpath volume mounts to access files outside the volume; and CVE-2018-1002105, which allowed a user with relatively low permissions to escalate their privileges.

Luckily, Kubernetes’ Product Security Team deftly addressed these vulnerabilities (and others) and handled the rollout of the patches. “The code is only part of the fix. When we’re talking about incident response, it’s not only the code, it’s the process,” said CJ Cullen, software engineer on the Google Kubernetes Engine (GKE) security team.

If you’re running Kubernetes yourself, join kubernetes-announce to get the latest on releases, including vulnerability patches.
If you’re running on GKE, the security bulletins will give you the latest, and let you know if there’s anything you need to do to stay safe. (Pro tip from top users: post the RSS feed in your security team’s Slack channel!)

CIS benchmarks are still the gold standard for locking down your Kubernetes configurations—but only apply what makes sense to you

The Center for Internet Security (CIS) publishes several security guidelines, including guidelines for Kubernetes. Many users refer to these guidelines to show colleagues, regulators, and customers that they’re following Kubernetes security best practices. The CIS recently updated the benchmarks for Kubernetes 1.13 (so they’re current!), and they cover a wide range of recommended configurations for both the control plane and the worker nodes in your cluster.

Still, you should think carefully about the CIS benchmarks before you apply them. Rory McCune, Principal Consultant at NCC Group, was one of the key contributors to the Kubernetes CIS benchmarks and presented on them at the conference. “People think you should go to a benchmark and apply everything in there—but that’s the wrong approach,” he said. When applying any standard, it’s important to consider the environment it’s being used in and to choose which controls apply to your organization’s systems.

He also explained that the CIS benchmarks are more difficult to apply to hosted solutions like GKE, “because there are many things you can’t test directly.” This creates an added step: users of a given distribution have to figure out which benchmarks apply to them and demonstrate that to an auditor. Looking ahead, he hopes the community will develop benchmarks for specific distributions to ease the burden on the user.

To test your current Kubernetes configuration against the CIS benchmarks, you can use kube-bench. In GKE, where you can’t access the control plane to test configurations, we’ve documented how we do this on your behalf in our “Control plane security” document.
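McCune’s advice to apply only the controls that make sense can be made concrete. The sketch below is illustrative, not a real audit tool: the two check IDs mirror genuine CIS kubelet recommendations (`--anonymous-auth=false`, and an `--authorization-mode` other than `AlwaysAllow`), but the flattened config shape and helper names are assumptions:

```python
from typing import Callable, Dict, List

# A kubelet configuration flattened to flag names -> values. This shape is
# illustrative; real tools such as kube-bench inspect the live process
# flags and config files instead.
KubeletConfig = Dict[str, str]

# Checks keyed by a CIS-style identifier. Each returns True on pass.
CHECKS: Dict[str, Callable[[KubeletConfig], bool]] = {
    "anonymous-auth-disabled": lambda c: c.get("anonymous-auth") == "false",
    # kubelet historically defaulted to AlwaysAllow, so a missing flag fails.
    "authz-not-always-allow": lambda c: c.get("authorization-mode", "AlwaysAllow") != "AlwaysAllow",
}


def run_checks(config: KubeletConfig, selected: List[str]) -> List[str]:
    """Run only the opted-in checks; return the IDs of the ones that failed."""
    return [check_id for check_id in selected if not CHECKS[check_id](config)]


config = {"anonymous-auth": "false", "authorization-mode": "Webhook"}
# Apply only the controls that make sense for this environment.
failed = run_checks(config, ["anonymous-auth-disabled", "authz-not-always-allow"])
print("failed checks:", failed)  # an empty list means the selected controls pass
```

Expressing the benchmark subset as data makes the “choose which controls apply” step an explicit, reviewable list rather than an all-or-nothing run.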
Best practices for GKE are laid out in the GKE hardening guide. Even with these extra steps, however, hosted solutions still offer much simpler security management than running everything yourself. As Dino Dai Zovi, Staff Security Engineer at Square, said, “If you want to run your own, you’re playing life on hard mode.”

We need to talk: the best way to improve container security

Container security is an evolving field; users are still finding out what works for their workloads and their priorities, but attackers wait for no one. In the unconference sessions, attendees were eager to discuss some of the issues they’ve hit running containers securely in production, including image scanning, container isolation tools, and segmentation best practices.

The current container security landscape is still maturing, and two seemingly similar organizations might take very different approaches. The real risk, therefore, is failing to communicate across teams, said DevSecOps expert Ian Coldwater in the closing keynote.

“Container folks, to security people, can sometimes seem like they’re speaking a different language,” said Coldwater. But while container and security teams have historically failed to communicate, “every one of us has something to teach, and something to learn.” If you’re a developer running containers, be sure to keep the lines of communication with the security team wide open.

Didn’t make it to the Container Security Summit? Check out the speaker slides. You can also learn more about container security in the Exploring Container Security blog series.
Source: Google Cloud Platform


New GCP region in Zurich: Growing our support for Swiss and European businesses

Our Google Cloud Platform (GCP) region in Zurich is now live and ready for business. Our sixth European region and nineteenth overall, this new region gives companies doing business in Switzerland more opportunities, with lower-latency access to their data and workloads.

A cloud made for Switzerland

Designed to support Swiss and European customers, the Zurich GCP region (europe-west6) comes with three availability zones, enabling high-availability workloads. Hybrid cloud customers can seamlessly integrate new and existing deployments with help from our regional partner ecosystem, and via two dedicated interconnect points of presence.

The launch of the Zurich region brings lower-latency access to GCP products and services for organizations doing business in Switzerland. Hosting applications in the new region can improve latency for end users in Switzerland by up to 10 ms. Visit GCPing.com to see latency to the Zurich region from wherever you happen to be.

The Zurich region launches with our standard set of products, including Compute Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

To take advantage of many GCP services, you first have to get your data into the cloud. Transfer Appliance is a high-capacity server that lets you transfer large amounts of data to GCP quickly and securely, and it’s coming to the Swiss market. We recommend Transfer Appliance if you’re moving large quantities of data that would take more than a week to upload. You can request a Transfer Appliance here.

This region comes with Cloud Interconnect, our private, software-defined network that provides a fast and reliable link between each region around the world. You can use services that aren’t presently available within the Zurich region via the Google network, and combine them with other GCP services deployed around the world.
That lets you quickly deploy and scale across multiple regions with products designed for organizations with a global footprint.

Celebrating with Swiss customers

We kicked off the new region with a special event in Zurich, with over 800 business leaders and developers in attendance. SVP of Technical Infrastructure Urs Hölzle officially opened the region. Customers from pharmaceutical, manufacturing, and financial businesses all over Switzerland and Europe learned about GCP and how the local region can benefit their cloud operations.

What customers are saying

“Swiss-AS dedicates its business exclusively to the support of AMOS, the leading aviation maintenance and engineering software. Today, Google Cloud Platform enables us to deliver our AMOS Cloud Service in fully dedicated cloud environments worldwide. Now with GCP’s local presence in Zurich, we can bring our service even closer to our AMOS customers based in German-speaking countries.”
– Alexis Rapior, Hosting Team, Swiss AviationSoftware Ltd.

“The new Swiss cloud region opens up exciting opportunities for the health sector. It will enable Balgrist University Hospital to introduce new real-time processing technologies. Collaborations in medical research and development will also be easier and more effective.”
– Thomas Huggler, Executive Director, University Hospital Balgrist

“We are very excited about the arrival of Google Cloud Platform in Switzerland. With Google Cloud, we can focus our efforts on developing innovative new software features for our customers. It gives us the opportunity to have new environments ready within seconds.”
– Marc Loosli, Chief Innovation, NeXora AG (part of the Quickline Group)

“Belimo is the leading global manufacturer of actuators, valves, and sensors used in heating, ventilation, and air conditioning (HVAC) systems.
Recently, IoT technologies have allowed us to offer HVAC systems controlled by cloud-connected devices, which deliver additional comfort, energy efficiency, safety, and ease of installation and maintenance. Belimo chose GCP because we depend on high availability, reliable performance, and scalability for our global cloud services. The cutting-edge technology and tools from GCP help our teams focus on the essential.”
– Peter Schmidlin, Chief Innovation Officer, Belimo Automation AG

What partners are saying

“Wabion is more than just excited to see Google Cloud coming to Switzerland. Frankly, I believe this is the best thing that could happen to the Swiss cloud market. We have customers that are very interested in Google’s innovation who haven’t migrated because of the lack of a Swiss hub. The new Zurich region closes this gap, unlocking huge opportunities for Wabion to help customers on their Google Cloud journey.”
– Michael Gomez, Co-Manager, Wabion

What’s next

For more details about this region, please visit our Zurich region page, where you’ll get access to free resources, whitepapers, the “Cloud On-Air” on-demand video series, and more. If you’re new to GCP, check out Best Practices for Compute Engine Region Selection and contact sales to get started on GCP today.

We are launching more GCP zones and regions later this year, starting with Osaka. Our locations page provides updates on the availability of additional services and regions.
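On the latency point above, a quick measurement of your own is straightforward. The sketch below times a TCP handshake, a rough proxy for the round-trip time that region placement affects; the endpoint named in the comment is a placeholder for a host you deploy in the new region:

```python
import socket
import time


def tcp_connect_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP handshake to host:port, in milliseconds.

    Connect time approximates network round-trip latency, which is the
    quantity that hosting closer to users (e.g., in europe-west6) improves.
    """
    start = time.perf_counter()
    # create_connection completes the full TCP handshake before returning.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0


# Example (placeholder hostname): tcp_connect_ms("my-app.example.com")
```

Averaging several samples against hosts in different regions gives numbers directly comparable to the up-to-10 ms figure above; GCPing.com performs a similar browser-based measurement.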


Advancing academic research through our relationship with Internet2

Research from universities and higher education institutions plays an important role in driving innovation and social impact, and an increasing number of these organizations are turning to the cloud to do it. Today, we’re announcing a new agreement with Internet2, the advanced technology consortium of universities and research institutions, to offer special benefits under Google Cloud Platform (GCP) for its members.

Internet2 is a member-driven advanced technology community that includes 316 U.S. universities, 60 government agencies, 59 corporations, and 43 regional and state education networks. In addition, it collaborates with research and education network partners that represent more than 100 countries. Its NET+ program provides a portfolio of reliable cloud and trust solutions to help higher education and research institutions solve common technology challenges. As part of this program, we’re offering Internet2 higher education members key enhancements to our standard GCP education terms.

“We’re really excited about this development with Google Cloud,” said Kevin Morooney, vice president of trust and identity at Internet2.
“Many of our stakeholders are already leveraging Google Cloud services, and this is another way that NET+ can help campuses create the relationships they need with key infrastructure providers.”

Key benefits for the Internet2 community

NET+ Google Cloud Platform for Education provides access to GCP with the following benefits:

- Discounted educational pricing for Internet2 member institutions, as well as waivers for data egress fees.
- Free deployment and training for Internet2 member institutions to facilitate onboarding.
- Successful completion of the peer-driven NET+ Service Validation process to help facilitate community security, accessibility, and contractual standards.
- Free Orbitera cloud billing reporting and analytics, and Business Associate Agreements through Carahsoft, a leading IT solutions provider.
- A pre-negotiated, custom contract with Internet2 member institutions for the NET+ community.
- Access to Google TPUs (10-30x faster than GPUs) and services like AutoML and Cloud ML Engine.
- A vertically integrated security model with low latency and high responsiveness from the world’s largest private cloud network, as well as provisions addressing compliance with key regulations and standards, including FERPA and FedRAMP, among others.
- Layer 3 routed access to Google for greater speed and security through Internet2 Cloud Exchange.

Enabling peer-driven review through NET+ Service Validation

Member Internet2 institutions can also benefit from customizing their GCP services through a peer-driven review process. The NET+ Service Validation process helps them collectively identify and vet cloud solutions that the community believes can be effective in addressing their security and accessibility requirements, allowing them to negotiate contractual and pricing terms specifically for their teaching, learning, and research needs.

“The NET+ GCP service validation process has given Indiana University access to a unique offering.
This includes a community of technical resources and a collaborative environment to think strategically about how to design this cloud service offering—something we could not have done on our own,” said Bob Flynn, Manager, Cloud Technology Support at Indiana University. “Cloud adoption is essential for our community to stay competitive in the global marketplace. We are excited to provide the GCP toolset to our teaching, learning, and research communities to see where their imaginations can take it.”

Meet us at Internet2’s Global Summit

We’ve been hosting sessions sharing our best practices on topics like trust and identity management and improving reproducibility in scientific research during the 2019 Internet2 Global Summit in Washington, D.C., March 5-8. Participating institutions like Columbia University will share how they’re using Google Cloud to develop cloud architectures and tools that support their research. If you’ll be attending the summit, please stop by the Google Cloud booth—we’d love to say hello. Talk to members of our team or set up a meeting to get started.

To learn more about GCP, visit our website or contact us.