12 tantalizing sessions for data management at Next '19

Data is the backbone of many an enterprise, and when cloud is in the picture it becomes especially important to store, manage, and use all that data effectively. At Next '19, you'll find plenty of sessions that can help you understand how to manage your Google Cloud data, with tips for storing and using it efficiently. For an excellent primer on Google Cloud Platform (GCP) data storage, sign up for this spotlight session covering the basics, with demos. Here are some other sessions to check out:

1. Tools for Migrating Your Databases to Google Cloud
You can choose among different ways to migrate your database to the cloud, whether a lift-and-shift onto fully managed GCP services or a total rebuild onto cloud-native databases. This session will explain best practices for database migration and tools to make it easier.

2. Migrate Enterprise Workloads to Google Cloud Platform
There is a whole range of essential enterprise workloads you can move to the cloud, and in this session you'll learn specifically about Accenture Managed Services for GCP, which makes it easy for you to run Oracle databases and software on GCP.

3. Migrating Oracle Databases to Cloud SQL PostgreSQL
Get the details in this session on migrating on-prem Oracle databases to Cloud SQL PostgreSQL. You'll get a look at all the basics, from assessing your source database and doing schema conversion to data replication and performance tuning.

4. Moving from Cassandra to Auto-Scaling Bigtable at Spotify
This migration story illustrates the real-world considerations that drove Spotify's decision between Cassandra and Cloud Bigtable, and how the company migrated workloads and built an auto-scaler for Cloud Bigtable.

5. Optimizing Performance on Cloud SQL for PostgreSQL
In this session, you'll hear about the database performance tuning we've done recently to considerably improve Cloud SQL for PostgreSQL. We'll also highlight Cloud SQL's use of Google's Regional Persistent Disk storage layer. You'll learn about PostgreSQL performance tuning and how to let Cloud SQL handle mundane, yet necessary, tasks.

6. Spanner Internals Part 1: What Makes Spanner Tick?
Dive into Cloud Spanner with Google Engineering Fellow Andrew Fikes. You'll learn about the evolution of Cloud Spanner and what that means for the next generation of databases, and get technical details about how Cloud Spanner ensures strong consistency.

7. Thinking Through Your Move to Cloud Spanner
Find out how to use Cloud Spanner to its full potential in this session, which will include best practices, optimization strategies, and ways to improve performance and scalability. You'll see live demos of how Cloud Spanner can speed up transactions and queries, and ways to monitor its performance.

8. Technical Deep Dive Into Storage for High-Performance Computing
High-performance computing (HPC) storage in the cloud is still an emerging area, particularly because complexity, price, and performance have caused concern. This session will look at companies that are using HPC storage in the cloud across multiple industries. You'll also see how HPC storage uses GCP tools like Compute Engine VMs and Persistent Disk.

9. Driving a Real-Time Personalization Engine With Cloud Bigtable
See how one company, Segment, built its own Lambda architecture for customer data using Cloud Bigtable to handle fast random reads and BigQuery to process large analytics datasets. Segment's CTO will also describe the decision-making process around choosing these GCP products vs. competing options, and the company's current setup, with tens of terabytes stored in multiple systems at very low latency.

10. Building a Global Data Presence
Come take a look at how Cloud Bigtable's new multi-regional replication works using Google's SD-WAN. This new feature makes it possible for a single instance of data, up to petabyte scale, to be accessed in up to four regions across five continents. Your users can access data globally with low latency, and you get a fast disaster-recovery option for essential data.

11. Worried About Application Performance? Cache It!
In-memory caching can help speed up application performance, but it brings challenges too. Take a closer look in this session to learn about cache sizing, API considerations, and latency troubleshooting.

12. How Twitter Is Migrating 300 PB of Hadoop Data to GCP
This detailed look at Twitter's complex Hadoop migration will cover their use of the Cloud Storage Connector and open-source tools. You'll hear from Twitter engineers on how they planned and managed the migration to GCP and how they solved some of their unique data management challenges.

For more on what to expect at Google Cloud Next '19, take a look at the session list here, and register here if you haven't already. We'll see you there.
Source: Google Cloud Platform

Helping SaaS partners run reliably with new SRE tools and training

Our Customer Reliability Engineering (CRE) team is on a mission to help make everyone more reliable by making it easy to adopt Site Reliability Engineering (SRE) principles and practices. Lately, we've been spending a lot of time with our SaaS company partners, helping them reduce the operational burden on their systems, become more agile, and run reliable services for their users and customers.

We've been doing this work with these SaaS partners for more than a year now, and we've learned some lessons along the way:

- Most companies are still in the early stages of their SRE journey. Interest in learning more about SRE principles, best practices, and tooling comes from a wide variety of roles, many of which aren't specifically called "SRE." We've gotten consistent feedback that companies want self-paced, interactive online resources, such as a Coursera course, to learn more about SRE.
- While companies have unique combinations of customer requirements and solutions, we've found that they share many common architectural patterns as they relate to their customers' experiences. Overwhelmingly, customers want to be able to build service-level objectives (SLOs) quickly and effectively.
- The concept of reliability goes beyond defining and monitoring metrics. We've heard that companies want to prevent unanticipated failures and build resilient systems that can gracefully handle previously unknown failure modes when they first occur. They also want to take advantage of the collective knowledge and experience of Google engineers.

As we continue our mission to support all SaaS companies in operating reliably on Google Cloud, we've been working to make it easy for newcomers to get started on their SRE journey in several ways.

Introducing a new Coursera course on Site Reliability Engineering

We want to make it easy for developers to start learning the basics of SRE concepts and to help the larger SRE community establish baselines. We designed this new course to distill years of collective Google SRE experience with designing and managing complex systems that meet their reliability targets. We hope it helps developers learn at their own pace, and that it provides insight for new and experienced SREs alike. You can enroll for the class here.

Introducing SLO Guide, a tool that helps you discover what you should measure

At Google, we've always believed in building tools to solve complex problems at scale. A goal of our CRE team (our first customer-facing SRE team) is to help every single SaaS company in the world run reliably on Google Cloud Platform (GCP). In pursuit of this mission, we've built SLO Guide, a new tool to help SaaS companies discover what they should measure based on common architectures and critical user journeys (CUJs). Simply put, it will help you quickly create SLOs that measure what your users actually care about.

The SRE course and SLO Guide are available now, and are among the key benefits for our Google Cloud SaaS partners. If you're an existing partner, you can request access to the tool here. If you're not a Google Cloud SaaS partner yet, you can become one here.
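As a footnote for the curious, here's a minimal sketch of the availability-SLI and error-budget arithmetic that sits underneath any SLO; the target and request counts below are made-up illustration values, not output from SLO Guide:

```python
# Hypothetical numbers for illustration only.
GOOD_REQUESTS = 999_214      # responses the SLI counts as "good"
TOTAL_REQUESTS = 1_000_000   # all eligible requests in the window
SLO_TARGET = 0.999           # 99.9% availability objective

sli = GOOD_REQUESTS / TOTAL_REQUESTS      # measured reliability
error_budget = 1 - SLO_TARGET             # failure rate you may "spend"
budget_spent = (1 - sli) / error_budget   # fraction of budget consumed

print(f"SLI: {sli:.4%}")                          # SLI: 99.9214%
print(f"Error budget spent: {budget_spent:.1%}")  # Error budget spent: 78.6%
```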
Source: Google Cloud Platform

Mitigating risk in the hardware supply chain

At Google, security is among our primary design criteria as we build hardware, software, and services. We think comprehensively about potential risks, no matter how small, and about the best ways to mitigate them and stay ahead of attackers.

We take a "defense in depth" approach to security, which means that we don't rely on any one thing to keep us secure, but instead build layers of checks and controls. Even if an attacker were to circumvent one of our safeguards, they would be met with many more carefully designed protections to keep them out.

One area where we've put a lot of thought, and which we continue to focus on, is the security of our hardware supply chain. Today, I'd like to go into a few of the things we do specifically in this area.

Hardware design and provenance

A Google data center consists of thousands of servers connected to a local network. In most cases, both the server boards and the networking equipment are custom-designed by Google. We vet component vendors and choose components with care, working with vendors to audit and validate the security properties provided by the components. We also design custom chips, such as the Titan hardware security chip that we're rolling out on both servers and peripherals, which help us securely identify and authenticate legitimate Google devices at the hardware level.

Hardware tracking and disposal

Google meticulously tracks the location and status of all equipment within our data centers, from acquisition to installation to retirement to destruction, via barcodes and asset tags. Metal detectors and video surveillance help make sure no equipment leaves the data center floor without authorization. If a component fails to pass a performance test at any point during its lifecycle, it is removed from inventory and retired.

When a hard drive is retired, authorized individuals verify that the disk is erased by writing zeros to the drive and performing a multiple-step verification process to ensure the drive contains no data. If the drive cannot be erased for any reason, it is stored securely until it can be physically destroyed. Depending on available equipment, we either crush and deform the drive or shred it into small pieces. Each data center adheres to a strict disposal policy, and any variances are promptly addressed.

Secure boot stack and machine identity

Google servers use a variety of technologies to ensure that they are booting the correct software stack. We use cryptographic signatures over low-level components like the BIOS, bootloader, kernel, and base operating system image, and these signatures can be validated during each boot or update. The components are Google-controlled, built, and hardened. With each new generation of hardware we strive to continually improve security: for example, depending on the generation and type of server, we root the trust of the boot chain in either a lockable firmware chip, a microcontroller running Google-written security code, or the above-mentioned Google-designed security chip.

Each server in the data center has its own specific identity that can be tied to the hardware root of trust and the software with which the machine booted. This identity is used to authenticate API calls to and from low-level management services on the machine. Google has also developed automated systems that ensure servers run up-to-date versions of their software stacks (including security patches), detect and diagnose hardware and software problems, and remove machines from service if necessary.
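To make the boot-chain idea concrete, here's a toy sketch of the pattern: each stage's image must be vouched for by a trusted key before it is allowed to run. This is our illustration, not Google's implementation; the single root key below stands in for trust rooted in tamper-resistant hardware such as Titan.

```python
# Illustrative only: a toy model of a verified boot chain.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

@dataclass
class BootStage:
    name: str         # e.g. "BIOS", "bootloader", "kernel"
    image: bytes      # the bytes that will execute next
    signature: bytes  # signature over the image, produced at build time

def verify_boot_chain(stages: list[BootStage],
                      root_key: Ed25519PublicKey) -> bool:
    """Refuse to continue the moment any stage fails verification."""
    for stage in stages:
        try:
            root_key.verify(stage.signature, stage.image)
        except InvalidSignature:
            print(f"halting boot: bad signature on {stage.name}")
            return False
        print(f"{stage.name}: signature OK, handing off")
    return True

# Demo: sign two stages at "build time", then verify at "boot time".
build_key = Ed25519PrivateKey.generate()
stages = [
    BootStage(name, img, build_key.sign(img))
    for name, img in [("bootloader", b"loader-v7"), ("kernel", b"kernel-v42")]
]
assert verify_boot_chain(stages, build_key.public_key())
```

Real designs typically delegate through per-stage keys and measured state rather than one key signing everything, but the fail-closed shape (verify, then hand off, or halt) is the essence.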
Defense-in-depth

As mentioned, while these are examples of protections designed to address specific attack vectors in a potential supply chain attack, they are by no means the only defense. Google's infrastructure and Google Cloud have been designed with a defense-in-depth approach so that we have opportunities to mitigate potential vulnerabilities at other layers of our stack. For example, even if a piece of server hardware were compromised, our network infrastructure is designed to detect and automatically prevent the command-and-control communications that are often necessary to take advantage of compromised hardware. Similarly, by encrypting and authenticating network traffic we are able to prevent a compromised network device from accessing sensitive data.

Google will continue to invest in our platform to allow you to benefit from our services in a secure and transparent manner. To learn more about our approach to infrastructure security, visit our Infrastructure Security page, and download our Infrastructure Security whitepaper.
Source: Google Cloud Platform

TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models

Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0, which makes machine learning models easier to use and deploy.

TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via tf.data pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter notebooks. The TensorFlow team will continue to improve the TensorFlow 2.0 alpha, with a general release candidate coming later in Q2 2019.

Making ML easier to use

The TensorFlow team's focus on developer productivity and ease of use doesn't stop at IPython notebooks and Colab: API components now integrate far more intuitively with tf.keras (the standard high-level API), and TensorFlow Datasets lets users import common preprocessed datasets with only one line of code. Data ingestion pipelines can be orchestrated with tf.data, pushed into production with TensorFlow Extended (TFX), and scaled to multiple nodes and hardware architectures with minimal code change using distribution strategies.

The TensorFlow engineering team has created an upgrade tool and several migration guides to support users who wish to migrate their models from TensorFlow 1.x to 2.0. TensorFlow is also hosting a weekly community testing stand-up for users to ask questions about TensorFlow 2.0 and migration support. If you're interested, you can find more information on the TensorFlow website.

Upgrading a model with the tf_upgrade_v2 tool.

Experiment and iterate

Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution a focus in TensorFlow 2.0, researchers can use intuitive Python control flows, optimize their eager code with tf.function, and save time with improved error messaging. Creating and experimenting with models using TensorFlow has never been so easy.
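Here's a minimal sketch of what that eager-by-default, tf.function-accelerated style looks like; the toy loss function is our example, not code from the release:

```python
import tensorflow as tf

# Eager execution is the TF 2.0 default: operations run immediately,
# so you can inspect results with ordinary Python.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.reduce_sum(x))  # tf.Tensor(10.0, shape=(), dtype=float32)

# Wrapping code in tf.function traces it into a graph for performance,
# while the source keeps its intuitive, eager-style syntax.
@tf.function
def clipped_mse(y_true, y_pred):
    err = tf.square(y_true - y_pred)
    return tf.reduce_mean(tf.minimum(err, 1.0))  # clip large errors

print(clipped_mse(tf.constant([1.0, 2.0]), tf.constant([1.5, 3.5])))
```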
Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel's MKL library, which supports CPU-based Compute Engine instances.

Through add-on extensions, TensorFlow also expands to help you build advanced models. For example, TensorFlow Federated lets you train models both in the cloud and on remote (IoT or embedded) devices in a collaborative fashion, which is useful because your remote devices often have data to train on that your centralized training system may not. We also recently announced the TensorFlow Privacy extension, which helps you strip personally identifiable information (PII) from your training data. Finally, TensorFlow Probability extends TensorFlow's abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.

Deploy your ML model in a variety of environments and languages

A core strength of TensorFlow has always been the ability to deploy models into production, and in TensorFlow 2.0 the TensorFlow team is making it even easier. TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.

Exploring and analyzing data with TensorFlow Data Validation.

JavaScript is one of the world's most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. The TensorFlow team announced TensorFlow.js version 1.0, which means you can not only train and run models in the browser, but also run TensorFlow as part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now has better performance than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

How to get started

If you're eager to get started with the TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. TensorFlow 2.0 is available through Colab via pip install if you're just looking to run a notebook anywhere, but perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.

Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships. In collaboration with O'Reilly Media, we're hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses to help beginners and learners new to ML and TensorFlow. The first is deeplearning.ai's Course 1 – Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: from Basic to Mastery series. The second is Udacity's Intro to TensorFlow for Deep Learning.

If you're using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity and deeplearning.ai.
Source: Google Cloud Platform

It's raining APIs: How AccuWeather shares data with developers using Apigee

Editor's note: We're hearing today from AccuWeather, the popular weather data provider. The company has evolved into a digital business through the years, and its APIs are essential to what it offers. Here's how AccuWeather uses Google's Apigee API management platform to make it all work smoothly.

Since AccuWeather was founded in 1962, our company has become the world's leading provider of weather forecasts and warnings, and we maintain a huge, accurate, and comprehensive collection of weather warning data.

Back then, we brought data to local forecasts, newspapers, radio stations, and small businesses. While we started by putting pen to paper and providing solutions to business customers, AccuWeather has evolved into a digital platform over the past decade, and this entire transformation was powered by APIs. We are extremely proud of how broadly our enterprise APIs are used. They provide life-saving weather information and warnings to major companies worldwide, including nine of the 10 major smartphone OEMs, IoT producers, and others in some of the world's biggest industries, among them more than half of the Fortune 500 and thousands more companies globally.

You can see more about AccuWeather's APIs in this short documentary:

Bringing weather data to new audiences

We faced an interesting challenge when we moved to expand our reach and engage new audiences, especially small- to mid-sized businesses, entrepreneurs, individual developers, and students. We knew a long onboarding process wouldn't work for these developers, and that we had to make it easy for them to access our APIs quickly, without a lot of overhead.

Increasingly, these prospective customers needed an easy, frictionless, and automated sign-up process so they could evaluate and integrate our APIs as quickly as possible into the applications they are developing. To facilitate that innovation and development, we needed to give developers fast, simple, and cost-effective access to AccuWeather's unique weather data. We source our global data in real time from multiple sources, both public and private, and blend it in our Global Forecast System with custom software algorithms, artificial intelligence, and machine learning. That's then combined with the experience of more than 100 operational meteorologists to generate detailed, accurate, and localized forecasts. An independent study has found that data to be the most accurate in the weather industry for the past three years.

Building a developer portal

To give these smaller, specialized audiences access to all this weather data, we began partnering with Google's Apigee team, using its API management solutions to expand our reach. We built the AccuWeather API Developer Portal, which provides turnkey package options so developers can access detailed global weather forecasts and warnings on the Apigee platform.

Apigee's monetization module was a key selling point for AccuWeather. It allowed us to package our APIs into set products, which enables developers to purchase our APIs (or test them for free) and tailor their API consumption to their specific needs. Since AccuWeather offers so many types of data, and many variations of specific data, these API packages let developers and small businesses pick and choose data content as they need it. Data points include extended forecasts or specific forecast periods, like hourly or daily.
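To give a feel for what that turnkey access looks like, here's a hedged sketch of fetching a five-day forecast with an API key issued by the portal. The endpoint path, parameters, and response fields below are illustrative from memory rather than authoritative; check the developer portal for the actual contract.

```python
import requests

API_KEY = "your-portal-api-key"  # issued when you sign up on the portal
LOCATION_KEY = "335315"          # example location identifier

resp = requests.get(
    "https://dataservice.accuweather.com/forecasts/v1/daily/5day/"
    + LOCATION_KEY,
    params={"apikey": API_KEY, "metric": True},
    timeout=10,
)
resp.raise_for_status()

# Print each day's date and forecast high (field names are illustrative).
for day in resp.json().get("DailyForecasts", []):
    print(day["Date"], day["Temperature"]["Maximum"]["Value"])
```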
The analytics capabilities offered by Apigee have helped us customize our API products to the needs of developers by revealing traffic patterns and making sure users get weather data when and how they want it. Using these traffic patterns, we can see which developers are most active, which APIs are most heavily used, what time of day people look at the weather, which clients are growing fast, and which ones may need more support. This lets us stay proactive as we continue building useful products.

What's next for AccuWeather and Apigee

We have been thrilled with the results. Since partnering with Apigee and launching the AccuWeather API Developer Portal in May 2017, we have watched the number of developers signed up to use our APIs grow to more than 60,000.

We're now reaching important new developer audiences who are exploring ways to incorporate our troves of weather data into their own applications. We're excited to make our APIs available to more developers, any of whom might be working on the next big thing. Innovation has a better chance to bubble up with the right tools, and the AccuWeather API Developer Portal, powered by Apigee, provides the right recipe to inspire developers to produce something powerful and innovative.
Source: Google Cloud Platform

Now generally available: Plug-in for VMware vRealize Automation

Today, we're announcing that our plug-in for VMware vRealize Automation (vRA) is now generally available for all users, providing an additional way for VMware customers to manage and consume Google Cloud resources.

IT operators can use Google-provided blueprints, or build their own, for Google Cloud resources such as VM instances, Google Kubernetes Engine clusters, and Cloud Storage buckets, and publish them to the vRA service catalog. End users can then select and launch resources in a predictable manner using familiar tools.

In this launch, we have added a number of new features and enhancements based on customer feedback. In addition to reliability and performance updates, they include the following:

New features
- Support for new services: GKE, Cloud SQL, Cloud Spanner, Cloud Pub/Sub, Cloud Key Management Service, Cloud Filestore (beta), and IAM service accounts
- Improved VM instance workflows, including set Windows password, execute SSH command, retrieve serial port output, and restore from snapshot

Enhancements
- Support for HTTP proxy settings when creating a connection
- Workflows to simplify the import of XaaS custom resources and blueprints into vRA
- Default options for the create VM instance workflow
- Estimated monthly cost shown in the create VM instance workflow
- Workflows to capture errors and optionally email them to support
- Improved connection synchronization handling on vRealize Orchestrator (vRO) clusters
- First-class support for health check management
- User documentation for vRO scripting objects

To download the plug-in and get started, visit the Google solutions page. To learn more about how you can adapt your existing technology to a hybrid cloud, visit our hybrid cloud solutions page. You can also find more information on vRA and the plug-in by reading VMware's blog.
Source: Google Cloud Platform

Massive Entertainment hosts Tom Clancy’s The Division 2 on Google Cloud Platform

As multiplayer games continue to increase in popularity, game developers need a reliable cloud provider with a flexible global infrastructure to support real-time AAA gaming experiences. At Google Cloud, we've spent many years building a world-class infrastructure and easy-to-use solutions so that gaming companies and development studios can focus on what they're most passionate about: building great games.

With the recent release of Tom Clancy's The Division 2 by Massive Entertainment, a Ubisoft studio, we're excited to share that Google Cloud was selected as the public cloud provider to host game servers globally for the highly anticipated sequel. Massive and Google Cloud worked together to deliver a smooth online experience and services for all players at launch.

"Google Cloud performed beautifully in our early tests and private beta, and we are thrilled with its ability to scale in the early days of our launch," said Fredrik Brönjemark, Online & Live Operations Director at Massive. "But more importantly, we were looking for a partner to trust with our game. Google Cloud's team of engineers and gaming experts get it; they've played our games, and were always available to us with deep technical expertise, from when we initially designed the game infrastructure to private beta and now launch."

Massive Entertainment was looking for reliable and scalable cloud services that could keep pace with global player demand, and Google Cloud provides the studio with the ease, flexibility, and scalability to ensure consistently high game performance. Google Cloud's secure, global, high-speed fiber network allows for consistently high-performance experiences for players across regions, and the scalable infrastructure also supports game data and the core services required for gameplay, including matchmaking, high scores, stats, and inventory.

You can learn more about how game developers are using Google Cloud for game server hosting, platform services, and machine learning and analytics here. And for more information about game development on Google Cloud, visit our website.
Source: Google Cloud Platform

Five habits of highly effective capital markets firms that run in the cloud

Every time I meet with our customers in the capital markets, they share new ways they are reinventing their businesses. Recently, I met with a CIO from a large investment bank looking to take the next step in the bank's cloud adoption journey. We talked about everything from creating a plan for public cloud migration of mission-critical workloads and communicating it to regional regulators, to developing a roadmap for adopting engineering-driven software operations methodologies across the organization. The CIO repeatedly emphasized the bank's collective commitment to creating a culture of innovation. What would it take to achieve this evolutionary transformation?

IT leaders in capital markets are asking the same question. Google Cloud recently contracted Aite Group, an independent research and advisory firm focused on business, technology, and regulatory issues and their impact on the financial services industry, to survey 19 capital markets firms about their public cloud adoption journeys. Here are some valuable insights into what these firms do to bring about metamorphic change:

1. They learn from the tech industry.

Technology is becoming more and more vital to non-tech companies, but innovation can stall if you don't fundamentally change how you build software. Successful capital markets firms have taken cues from traditional tech companies, adopting software operations methodologies such as continuous integration and continuous delivery (CI/CD), code reviews, unit and integration testing, incremental rollout, blameless post-mortems, and more. These practices accelerate ROI and support innovation, and are a significant reason why the tech industry builds software more effectively than other industries. Even though following these practices may slow down new code development in the short term, it significantly reduces time spent on code maintenance down the road, freeing developers to innovate.

Most importantly, innovative capital markets firms adopt a "lifelong learning" attitude within the organization, emphasizing "training first" to reduce ramp-up times and respond in a fast-changing capital markets environment. They recognize that every employee can be a cloud worker, connected 24/7; security and workplace policies support this reality.

2. They foster a front-office culture of "everyone is a programmer" and bring AI to the middle and back office.

By democratizing the ability to build solutions across the business, rather than isolating those capabilities in innovation labs, firms can build better products for their clients, especially because code is easier to follow, audit, and test than traditional tools such as spreadsheets. The front office may finally become less wedded to management via spreadsheet if the tools are more fit for purpose.

In the middle and back office, machine learning (ML) and artificial intelligence (AI) may bring much-needed relief in areas such as trade surveillance, where sophisticated malicious attacks make identifying breaches increasingly challenging. Moving from a rules-based review of electronic communications and compliance data to natural language processing refines data results and allows firms to more seamlessly integrate electronic communications flags within the overall surveillance infrastructure. Similarly, cybersecurity could also benefit from more comprehensive and proactive activity monitoring by way of ML- and AI-based tools.
3. They use data openly with strong controls and security.

One CIO at a tier-1 global bank predicts that in the future, regulations such as GDPR will require data access to be granted by the end client, whether a retail investor or a large pension fund. Storing data in a manner where access can be granted or revoked easily by users across service providers, from large custodians to small service providers, will be essential to retaining business moving forward. Cloud-based services that incorporate tools for data loss prevention, obfuscation, tokenization, encryption, and logging can help firms meet the security, privacy, and data lineage requirements of emerging data-related regulations and user preferences.

4. They adopt production ML systems.

There's more to ML than implementing an algorithm. Production ML systems equipped for data collection, verification, machine resource management, analysis, and other functions enable firms to improve monitoring, prediction scaling, error diagnosis, reporting, and other tasks that support trading operations. For example, a proprietary trading firm in Singapore uses TensorFlow, an open-source machine learning library for numerical computation, with the Cloud Bigtable NoSQL database service to "listen" to live market data and make trading decisions.

5. They commit to open-source code with serverless applications.

Using open-source code, rather than starting all software projects from scratch, speeds up innovation, provides tighter security, and offers freedom from vendor lock-in. Publicly sharing changes to open-source software also permits a richness of thought and a continuous feedback loop with users. Numerous capital markets firms have begun to champion open-source development and participate in related industry groups, such as the Fintech Open Source Foundation (FINOS).

To learn more about how these innovators are transforming their firms for greater efficiency and competitive differentiation using cloud-based thinking, check out our latest white paper, "Cloud as an Innovation Platform in Capital Markets."
Source: Google Cloud Platform

Introducing a new Coursera course on Site Reliability Engineering

Our Customer Reliability Engineering (CRE) team is on a mission to help every business become more reliable by making it easy to adopt Site Reliability Engineering (SRE). SRE is a discipline founded here at Google that uses prescriptive methods and principles for building and running reliable systems. With CRE, we work with customers and partners to reduce the operational burden of their systems, help them become more agile, and help them run reliable services for their users and customers.

We want to make sure that teams everywhere can adopt SRE and implement these principles. That's why we're pleased to introduce a new Coursera course dedicated to helping you get started with SRE. The new course, Site Reliability Engineering: Measuring and Managing Reliability, distills years of collective Google SRE experience with designing and managing complex systems that meet their reliability targets. We're making it easy for developers to start learning the basics of SRE concepts and helping the larger SRE community continue on its journey. You'll learn at your own pace and find insight whether you're a new or an experienced SRE.

Some of the terms and concepts you'll learn include:

- How to describe and measure the desired reliability of a service
- What it means to operate reliably
- What SLOs, SLIs, and SLAs are
- What error budgets are and how to use them
- How to measure against your metrics and assess whether they're realistic

Getting started with Coursera and SRE

In the SRE course, you'll learn about the basics, including how SRE came to be part of Google engineering and what kinds of tools SREs use to make decisions. You'll start by learning about the goals of a reliable system and how they relate to user expectations. You'll also learn about common monitoring practices, the pros and cons of different measurement strategies, and specific recommendations on how to choose your own metrics.

The course also dives into the details you'll need to build your own set of service-level indicators (SLIs) and service-level objectives (SLOs), using a case study. You'll see a method for performing risk analysis, and how to incorporate those findings into your long-term reliability goals. Additionally, you'll cover documenting SLOs and assigning responsibilities to ensure you're setting up a sustainable SRE practice.

Get started today with SRE on Coursera as the next step in your SRE journey!
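And if you'd like a quick taste of the error-budget arithmetic the course covers, here's a back-of-the-envelope sketch; the 99.9% target and 30-day window are example values, not course requirements:

```python
SLO = 0.999                    # example: 99.9% availability target
WINDOW_MINUTES = 30 * 24 * 60  # a 30-day rolling window

# The error budget is everything the SLO doesn't promise: at 99.9%,
# 0.1% of the window may be "bad" before the objective is violated.
budget_minutes = (1 - SLO) * WINDOW_MINUTES
print(f"Allowed downtime: {budget_minutes:.1f} minutes per 30 days")  # 43.2
```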
Source: Google Cloud Platform

Matching jobs and candidates across locations and languages with Cloud Talent Solution

Since launching Cloud Talent Solution last year, we've been working closely with employers and job boards to help improve the discoverability of jobs and to match those jobs with the right candidates. Companies around the globe have told us that reaching a larger talent pool is consistently top of mind, and we also hear from job seekers about their unique job search and employment needs.

We're always working to add new features and functionality that connect employers and job seekers. Last year, we added job search by U.S. military occupational specialty code for Cloud Talent Solution customers in the United States, and veteran job board RecruitMilitary and employers such as Encompass Health are reporting strong engagement on their sites.

To help companies reach even more candidates, we're adding new functionality to Cloud Talent Solution's job search API: commute search now supports walking and cycling, and job search is enhanced in more than 100 languages. We'll be showcasing these features and more in our Cloud Next '19 session on April 11 in San Francisco.

Search by commute now includes walking and cycling

Commute is a top consideration for all workers, so we built a commute search feature that lets our customers offer their users the ability to search for jobs via driving and public transit options. We're now announcing the addition of walking and cycling to our commute search functionality. This enhancement was inspired by research studies with clients and users, which have taught us many important nuances. In the United States, for example, cycling is often the only commuting option for those living in low-income communities, and cities across the country have started to prioritize their cyclists with added investments in dedicated bike lanes and multi-use paths. Outside the U.S., 41% of commuters in Copenhagen cycle to work, and in Barcelona it is very common to walk to work.
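Schematically, a commute-filtered search combines a query with a travel mode, a time budget, and a starting point. The sketch below is illustrative only, with simplified field names that are not the actual Cloud Talent Solution request schema; consult the job search API reference for the real contract.

```python
# Field names here are simplified for illustration, not the real schema.
commute_search_request = {
    "query": "barista",
    "commute_filter": {
        "commute_method": "CYCLING",    # e.g. driving, transit, walking, cycling
        "max_travel_duration": "600s",  # at most a 10-minute ride
        "start_location": {"latitude": 41.40, "longitude": 2.17},  # Barcelona
    },
    # Ask for matches in the searcher's language as well as English.
    "language_codes": ["es", "en"],
}
```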
Employers such as Cox Communications are working with SmashFly and its recruitment platform to offer this commute search functionality to all candidates who visit their career sites.

"For more than 120 years, Cox has had a purposeful commitment to its employees and the communities where we live and work," said Adam Glassman, Senior Manager of Employment Branding. "One of the ways the company lives those values is by building a diverse workforce and by creating an inclusive environment. This includes embracing the unique talents of people with a variety of backgrounds, perspectives, and needs. We're happy to be a Google Cloud Talent Solution partner and expect that these enhancements will open up our amazing company to a wider group of candidates, no matter how they choose to commute or what language they use to search."

Cox's use of SmashFly helps the company find job candidates. "SmashFly was founded to achieve a very simple mission: to fundamentally change how companies connect with talent," said Thom Kenney, CEO at SmashFly. "We believe that Google Cloud Talent Solution helps us continue that mission, bringing machine learning to job search to truly transform the candidate experience for our clients. Google's been a fantastic partner, and we're thrilled to continuously add new features, like commute search and military occupational code translation. We look forward to sharing this advanced functionality with more employers and job seekers."

Job boards are also sharing this functionality with their clients and users. "At College Recruiter, we're very excited about the enhancement to the Cloud Talent Solution commute search option," said Steven Rothberg, President and Founder of College Recruiter. "Many of the job seekers who use our site are looking for part-time, seasonal, and internship opportunities while they're in school, and many of them would strongly prefer to work within walking or cycling distance so they can avoid the cost and hassle of driving or using public transportation. Now, they can search for a part-time retail job within a 10-minute walk of their apartment instead of having to weed through dozens or even hundreds of part-time retail jobs listed within their city."

Search in over 100 languages also returns jobs in English

In addition to commute and lifestyle needs, language preference is another personal element of the job search experience. More than 100 languages are spoken at home across the U.S., especially in metropolitan cities such as Chicago, Dallas, and Philadelphia. To help companies reach candidates in whatever language they choose to speak, we've improved our support for job searches in more than 100 different languages by also returning relevant job postings that are written in English. This way, employers and job sites can ensure they aren't deterring users who prefer to search in a language other than that of the original job posting: job seekers now see jobs in the language they searched in, as well as jobs in English. Here's an example of a search result for "enfermera," the Spanish word for "nurse," on Encompass Health's career site, powered by Jibe and Cloud Talent Solution:

All of these features are available to any of the more than 4,000 sites using Cloud Talent Solution to power their job search. If you're an employer, or run a job board or staffing agency, and want to help more people find the right job opportunities on your site, join us at Cloud Next '19 in San Francisco to learn more. We'd love to see you at our session, Inclusive by Design: Engage and Recruit Diverse Talent with AI, on Thursday, April 11. You can also visit our website to get started with Cloud Talent Solution today.
Source: Google Cloud Platform