Google Cloud networking in-depth: Series digest

With everything from physical cables to software for building the next generation of cloud-native applications, Google Cloud’s networking portfolio is deep and wide. Sometimes it helps to think of networking features as falling under one of five key functions: connect, scale, secure, optimize, and modernize. Recently, we’ve been discussing these capabilities in our Google Cloud networking in-depth series. We have several more installments in that series coming up, but now is a good time to recap what we’ve discussed so far.

Resilient connectivity is the foundation of hybrid cloud

Within the connect pillar, we made several advancements in our hybrid connectivity portfolio. With High Availability (HA) VPN, enterprises can connect their on-premises deployment to a Google Cloud Platform (GCP) VPC with an industry-leading SLA of 99.99% by creating redundant VPN tunnels. 100 Gbps Dedicated Interconnect enables and accelerates bandwidth-heavy applications with 10X the circuit bandwidth for your hybrid and multi-cloud deployments.

We’ve also made major strides with Cloud DNS. Cloud DNS private zones (GA), peering (beta), and logging (beta) help improve the flexibility of your private cloud architecture, while providing you visibility into your private DNS traffic.

Building for scale and performance

Google has eight services that each serve over a billion users every day. At the core of our infrastructure are distributed software-defined systems such as the highly scalable Jupiter network fabric and the high-performance, flexible Andromeda virtual network stack. With Andromeda 2.2, we increased VM-to-VM bandwidth by nearly 18X and reduced latency by 8X, all without introducing any downtime. In addition, you can now raise the egress bandwidth cap to 32 Gbps for same-zone VM-to-VM traffic, and we’ll soon raise the bandwidth caps for VMs with eight NVIDIA V100 or four T4 GPUs attached to 100 Gbps.

Software-defined principles are ingrained in our DNA.
Unlike traditional load balancers, our load balancing solutions are designed as large-scale distributed software-defined systems. This blog provides a comprehensive view of our load balancing portfolio.

Content delivery is another key requirement for enterprises, helping you scale your applications around the world. Cloud CDN lets you deliver content closer to your users. It caches content in 96 locations around the world and hands it off to 134 network edge locations, with industry-leading performance and throughput.

Choice matters when it comes to optimizing your network

With Network Service Tiers, GCP lets you customize your underlying network, optimizing for performance or cost on a per-workload basis. Premium Tier delivers exceptional performance around the globe by taking advantage of Google’s well-connected, high-bandwidth, low-latency, highly reliable global backbone network, whereas Standard Tier offers regional networking with performance comparable to that of other cloud service providers.

Comprehensive network security should be top of mind

The need for trust is one of the biggest hurdles for enterprises operating in the cloud. Google Cloud was recently named a Leader in The Forrester Wave™: Data Security Portfolio Vendors, Q2 2019 report. GCP offers a robust set of network security controls that help you reduce risk and protect your resources and environment, helping you adopt a comprehensive defense-in-depth security strategy:

- Secure your internet-facing services
- Secure your VPC for private deployments
- Micro-segment access to your applications and services

Networking innovations for application modernization

At Google Cloud, we continue to innovate so we can empower you to modernize your applications. Read this blog to learn more about enterprise modernization enabled by our migration and networking portfolio.

We hope you’ve enjoyed our Google Cloud networking in-depth series so far.
Stay tuned for future installments, in particular a deep dive into the new Layer 7 Internal Load Balancer.
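As a back-of-the-envelope illustration of why the redundant tunnels behind HA VPN can support a 99.99% SLA, here is a hedged sketch. The per-tunnel availability figure is a made-up assumption, failures are assumed independent, and real SLAs depend on far more than tunnel redundancy:

```python
def redundant_availability(single_tunnel: float, tunnels: int) -> float:
    """Availability of N redundant tunnels, assuming independent failures:
    the connection is down only when every tunnel is down at once."""
    return 1 - (1 - single_tunnel) ** tunnels

# A hypothetical 99.9%-available tunnel, doubled up, yields roughly
# "six nines" of theoretical availability:
redundant_availability(0.999, 2)  # ~0.999999
```

The same arithmetic explains why a single tunnel, however good, cannot match a redundant pair: any one tunnel's downtime translates directly into connection downtime.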
Source: Google Cloud Platform

A CIO's guide to the cloud: hybrid and human solutions to avoid trade-offs

What do CIOs and CTOs deliver for the company? If you said “technology,” that’s just the beginning. In its research, McKinsey found that 85% of CIOs and CTOs interviewed in the spring of 2019 said they were essential for at least two of the three most common CEO priorities: revenue acceleration, improved agility and time to market, and cost reduction.

IT modernization, including migrating to the cloud, is key to business growth and agility. Yet, according to a recent McKinsey study, 80% of CIOs report that regardless of their level of cloud migration, they still haven’t reached their projected agility and business benefits. Sometimes this is because of issues like training and skills gaps in the IT workforce. Surprisingly often, though, the barrier is the trade-offs that CIOs themselves feel they must make to strike a balance between the perfect and the possible.

But what if you could have it all, without the trade-offs? As Will Grannis, Managing Director of the CTO Office at Google, and Arul Elumalai, Partner at McKinsey & Company, discussed in our recent digital conference, many of the compromises CIOs make can be avoided with new technology, modern architectures, and a transformation mindset across the business. In interviews, CIOs explained how they’ve leveraged the best of the cloud without compromising on security, agility, and flexibility. Here’s how these leaders avoid three of the top perceived trade-offs, both with technology and by transforming their operating model.

Trade-off #1: Developer agility vs. control and governance

Moving to the cloud offers new opportunities for speed, but 69% of organizations indicate that stringent security guidelines and code review processes can slow developers significantly. One CISO of a multinational company mentioned that cloud development was so fast that they had to institute manual checks on their developers’ code. So much for agility.
To overcome this trade-off and maintain both speed and security, some respondents found success in DevOps, hiring security-experienced talent, and introducing automation for security and quality. Building security into the CI/CD pipeline and increasing automation don’t just eliminate the trade-off; they result in higher quality and faster innovation. At Google Cloud, we’ve also observed that customers with strong DevOps practices have increased speed-to-market and product/service quality. From our own journey, we’ve learned seven critical lessons essential to adopting a DevOps model, ranging from taking on small projects and embracing open source to building an overall DevOps culture.

Trade-off #2: Single-vendor benefits vs. freedom from lock-in

CIOs perceive benefits to using the fewest number of clouds, specifically avoiding multiple systems that require their teams to develop and maintain multiple skill sets. Unfortunately, 83% of the CIOs interviewed said that while they would prefer fewer clouds, the potential financial and technical lock-in drives them to multiple providers. Successful CIOs said that they can avoid lock-in pitfalls not just with contractual guardrails and executive and board education, but with evolving hybrid cloud technologies that provide additional choices. Hybrid cloud platforms based on containers can further mitigate the risk of using a single cloud vendor. The key to successful hybrid architectures is the infrastructure abstraction and portability that containers create, enabling disparate environments to work together. This notion has been at the heart of our strategy at Google Cloud with Anthos, which provides an abstraction layer and an application modernization platform for hybrid and multi-cloud environments. Enterprises can use Anthos to modernize how they develop, secure, and operate hybrid-cloud environments, and to enable consistency across cloud environments.

Trade-off #3: Best-of-breed tools vs. standardization and familiarity

Optimizing tool chains for different environments can improve productivity, but many CIOs believe this means reduced functionality and tooling. While 77% of CIOs said they had to standardize to the lowest common denominator, some have found a better solution. Rather than giving up the languages, libraries, and frameworks that their teams prefer, effective leaders said they found success by investing in training programs to upskill talent and by adopting new open and vendor-agnostic solutions. Architectures based on open-source components have been key to removing this trade-off and eliminating the notion of a lowest common denominator. This is why we built Anthos on open-source components like Kubernetes, Istio, and Knative. Anthos gives your business the choice you need. With the ability to create code that works in most environments using the tools, languages, and systems you prefer, you can do more without major changes to how you work.

Regardless of your current cloud adoption level, check out “Unlock business acceleration in a hybrid cloud world” to discover more about McKinsey’s findings, including how CIOs drive agility, methods to make trade-offs unnecessary, and how to prepare your team for the cloud. Then, stay tuned for subsequent posts that take a closer look at how hybrid solutions and strategies can help CIOs drive a transformation mindset across the business, without compromising on security, agility, and flexibility.
Source: Google Cloud Platform

Migrating Teradata and other data warehouses to BigQuery

Traditional, on-premises data warehouses collect and store what is often an organization’s most valuable data, which helps drive growth and innovation. Organizations depend on this data to make informed and timely decisions that can shape the future of their business. But traditional data warehouses can be expensive, hard to maintain, and unable to keep up with business needs. As data rapidly increases in volume, velocity, and variety, it becomes especially hard to meet those needs. That’s why businesses are turning to BigQuery, our highly scalable and serverless enterprise data warehouse, to perform fast, real-time analysis of their data.

When migrating your data warehouse, you’re moving what is essentially the center of gravity of your entire data analytics and business intelligence environment. Many business applications depend on your data warehouse for reports, data feeds, and dashboards, and the users of these applications expect minimal to no disruption during the migration. With all this in mind, we’ve created a new data warehouse migration guide to help walk you through data warehouse migrations with as little complexity and risk as possible. In the guide, you’ll find prescriptive, end-to-end guidance to securely migrate legacy data warehouses to BigQuery. Though the guide contains some sections specific to migrations from Teradata, you’ll be able to use the vast majority of it for any enterprise data warehouse migration.

Building the migration framework

A migration can be a complex and lengthy endeavor, but it can be made simpler with planning. As part of the migration guide, you’ll find our suggested structured framework for data warehouse migrations, based on Agile principles. The framework facilitates the application of project management best practices, helping to bring incremental and tangible business value while managing risk and minimizing disruptions.
The framework adheres to the phases shown in the following diagram, with more details below:

1. Prepare and discover: In this initial phase, the focus is on preparation and discovery. It’s about affording yourself and your stakeholders an early opportunity to discover the use cases you’re planning for BigQuery, raise initial concerns, and, importantly, conduct an initial analysis of the expected benefits.

2. Assess and plan: The assess-and-plan phase is about taking the input from the prepare-and-discover phase, assessing that input, and then using it to plan the migration. This phase can be broken down into the following tasks:

- Assess the current state
- Catalog and prioritize use cases
- Define measures of success
- Create a definition of “done”
- Design and propose a proof of concept (POC), short-term state, and ideal end state
- Create time and cost estimates
- Identify and engage a migration partner (if applicable)

Find more details here on these tasks.

3. Execute: After you’ve gathered information about your legacy data warehouse platform and created a prioritized backlog of use cases, you can group the use cases into workloads and proceed with the migration in iterations.

An iteration can consist of a single use case, a few separate use cases, or a number of use cases pertaining to a single workload. Which option you choose depends on the interconnectivity of the use cases, any shared dependencies, and the resources you have available to undertake the work. For example, a use case might have the following relationships and dependencies:

- Purchase reporting can stand alone and is useful for understanding monies spent and requesting discounts.
- Sales reporting can stand alone and is useful for planning marketing campaigns.
- Profit and loss reporting, however, depends on both purchases and sales, and is useful for determining the company’s value.

With each use case, you’ll want to decide whether it will be offloaded or fully migrated.
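The dependency relationships above are exactly the kind of constraint a topological sort resolves: every use case must be migrated after its upstream dependencies. A minimal sketch, assuming illustrative use-case names (a real backlog would carry far more metadata per use case):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Map each use case to the use cases it depends on (names are assumptions).
dependencies = {
    "purchase_reporting": set(),
    "sales_reporting": set(),
    "profit_and_loss": {"purchase_reporting", "sales_reporting"},
}

# static_order() yields every dependency before its dependents, so
# profit-and-loss reporting lands in the last iteration.
order = list(TopologicalSorter(dependencies).static_order())
```

The two standalone use cases can be migrated in either order (or in parallel iterations); only the dependent one is pinned to the end.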
Offloading focuses on time to delivery, where speed is the top priority; fully migrating is about ensuring all upstream dependencies are also migrated. The following diagram shows the execution process and flow in greater detail.

During the execute phase, the work to fully migrate or offload the use case or workload should focus on one or more of the following steps. Our guide includes documents dedicated to each of them:

- Setup and data governance: Setup is the foundational work required to let the use cases run on Google Cloud Platform (GCP). It can include configuration of your GCP projects, network, virtual private cloud (VPC), and data governance. Data governance is a principled approach to managing data during its lifecycle, from acquisition to use to disposal. Take a look at the data governance document to help define your governance program in the cloud, which should include an outline of the policies, procedures, responsibilities, and controls surrounding your data activities.

- Migrate schema and data: The schema and data transfer document provides extensive information on how you can move your data to BigQuery and offers recommendations for updating your schema to take full advantage of BigQuery’s features. The associated quickstart guides you step by step through an actual schema and data migration from Teradata to BigQuery.

- Translate queries: The query translation document addresses some of the challenges you might encounter while migrating SQL queries from Teradata to BigQuery, and explains when SQL translation is required. The associated quickstart takes you through an exercise translating queries from Teradata SQL to the standard ISO:2011 SQL supported by BigQuery, starting with manual translation and evolving into a more automated approach. The SQL translation reference details the similarities and differences in SQL syntax between Teradata and BigQuery.

- Migrate business applications: Depending on your organization, your business applications might include dashboards, reports, and operational pipelines. The reporting and analysis document explains how you can take advantage of the full suite of business intelligence tools and applications integrated with BigQuery, including the reporting and analysis applications you may be using with your legacy data warehouse.

- Migrate data pipelines: The data pipelines document helps you understand what a data pipeline is, what procedures and patterns it can employ, and which migration options and technologies are available in relation to the larger data warehouse migration.

- Optimize performance: The performance optimization document helps you understand the factors that can impact performance in BigQuery and helps you apply essential techniques to improve it.

- Verify and validate: At the end of each iteration, validate that the use case was successfully migrated according to your definition of done, and verify that data governance concerns have been met, the schema and data have been migrated, and business applications are producing the expected results.

Understanding the migration architecture

After each iteration in the execution phase, you’ll likely have some use cases offloaded to BigQuery, some fully migrated, and some still in your on-premises data warehouse. This iterative approach is enabled by an architecture in which both your data warehouse and BigQuery can be actively used in parallel. This architecture lets you take data warehouse migration one step at a time, breaking down its complexity and reducing risk. The next diagram illustrates the architecture, showing Teradata working on-premises and BigQuery on GCP, where both can ingest from the source systems, integrate with your business applications, and provide access to the users who need it.
Importantly, you can also see in the diagram that data is synchronized from Teradata to BigQuery.

The data warehouse migration guide provides a wealth of prescriptive guidance so you can structure your migration project carefully and undertake each of its challenges in a systematic manner. Our professional services organization and our partners are ready to assist you further in your migration journey, no matter how complex it may be. And check out our migration offer for help creating a streamlined path to a modern data warehouse.
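To make the query-translation step concrete, here is a deliberately tiny sketch of rule-based Teradata-to-standard-SQL rewriting. The two rules are illustrative assumptions only; the guide’s quickstart, and any real migration, relies on a proper SQL parser rather than regular expressions:

```python
import re

# Two illustrative rewrites: Teradata's SEL shorthand for SELECT, and
# ADD_MONTHS() expressed as standard DATE_ADD(... INTERVAL n MONTH).
RULES = [
    (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),
    (re.compile(r"ADD_MONTHS\(\s*([^,]+),\s*(-?\d+)\s*\)", re.IGNORECASE),
     r"DATE_ADD(\1, INTERVAL \2 MONTH)"),
]

def translate(teradata_sql: str) -> str:
    """Apply each rewrite rule in turn to a Teradata SQL string."""
    for pattern, replacement in RULES:
        teradata_sql = pattern.sub(replacement, teradata_sql)
    return teradata_sql

translate("SEL ADD_MONTHS(order_date, 3) FROM orders")
# -> "SELECT DATE_ADD(order_date, INTERVAL 3 MONTH) FROM orders"
```

Starting with manual translation and a handful of such rules, then growing the rule set as patterns recur, mirrors the manual-to-automated progression the quickstart describes.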
Source: Google Cloud Platform

Web application vulnerability scans for GKE and Compute Engine are generally available

As the number of platforms you build and run your applications on increases, so does the challenge of understanding what applications you have deployed and their security state. Without visibility, it can be difficult to know whether there are latent vulnerabilities in your applications, much less how to fix them.

Today, we’re excited to announce the general availability of Cloud Security Scanner for Google Kubernetes Engine (GKE) and Compute Engine, joining Cloud Security Scanner for App Engine. Now, no matter where you run your applications on Google Cloud, you can quickly gain insights into your web app’s vulnerabilities and take action before a bad actor can exploit them.

Web application vulnerabilities can be introduced during the development process, for example through the incorrect setup of an app’s security framework, incorrect deployment of an app into a production environment, or systems that weren’t patched or updated. Cloud Security Scanner can surface a wide range of web application vulnerabilities as findings; here are a few examples of its capabilities:

- Identify and notify you of common external vulnerabilities in your applications, such as Flash injection or mixed content
- Detect vulnerabilities such as cross-site scripting bugs due to JavaScript breakage
- Alert you to accessible Git and SVN repositories
- Surface mixed content vulnerabilities that a man-in-the-middle attacker could exploit to gain full access to the website that loads the resource, or to monitor users’ actions
- Notify you if an application appears to be transmitting a password field in plain text, or displays HTTP header issues, including misspellings, mismatching values in a duplicate security header, or invalid headers

Cloud Security Scanner surfaces these vulnerabilities as findings in Cloud Security Command Center (Cloud SCC), our cloud security posture management (CSPM) tool, so you can gain visibility into misconfigurations, vulnerabilities, and threats, and quickly respond to them from a centralized dashboard. When you click on a finding, you see a description of the issue and an actionable recommendation on how to fix it and prevent it in the future.

Cloud Security Scanner is not on by default. To activate it, complete this quickstart and then go to Security Sources within Cloud SCC to ensure it’s active. You can also create customized scans for your applications using the Cloud Security Scanner UI. Once Cloud Security Scanner is on, it scans your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible. The scans run using the Chrome and Safari browsers, as well as the browsers embedded in Blackberry and Nokia phones. For more flexibility, you can also schedule scans.

For additional protection of your applications running on GKE, you can also use Container Registry vulnerability scanning to discover vulnerable container images before they are deployed into production.

It’s easy to get started with Cloud Security Scanner and protect your applications. If you are new to GCP, start your free GCP trial and enable Cloud SCC, then Cloud Security Scanner. If you are an existing customer, simply enable Cloud Security Scanner from Security Sources in Cloud SCC and start using it for free. For more information on Cloud Security Scanner, read our documentation.
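To illustrate two of the finding types described above, here is a toy, stdlib-only sketch. It is emphatically not how Cloud Security Scanner is implemented; it merely shows what detecting mixed content and a password form posting over plain HTTP can look like on a single parsed page (tag coverage and finding names are assumptions):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class FindingScanner(HTMLParser):
    """Toy page checker: flags http:// sub-resources on an https page
    (mixed content) and password inputs in forms that submit over
    plain http (clear-text password)."""

    def __init__(self, page_scheme: str):
        super().__init__()
        self.page_scheme = page_scheme  # scheme the page was served over
        self.findings = []
        self._form_action = None        # action of the enclosing <form>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Sub-resource loaded over http on an https page.
        if tag in ("script", "img", "iframe") and self.page_scheme == "https":
            src = attrs.get("src", "")
            if urlparse(src).scheme == "http":
                self.findings.append(("MIXED_CONTENT", src))
        if tag == "form":
            self._form_action = attrs.get("action", "")
        # Password field submitted (or served) over plain http.
        if tag == "input" and attrs.get("type") == "password":
            if (urlparse(self._form_action or "").scheme == "http"
                    or self.page_scheme == "http"):
                self.findings.append(("CLEAR_TEXT_PASSWORD", self._form_action))
```

Feeding a page’s HTML to `FindingScanner("https").feed(...)` and inspecting `findings` gives a flavor of the finding-plus-recommendation model, where each entry names the issue and the offending resource.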
Source: Google Cloud Platform

New protections for users, data, and apps in the cloud

At Google Cloud, we’re always looking to make advanced security easier for enterprises so they can stay focused on their core business. Already this year, we’ve worked to strengthen user protection, make threat defense more effective, and streamline security administration through a constant stream of new product releases and enhancements. We continue to push our pace of security innovation, and today at Google Cloud Next ‘19 Tokyo, we’re announcing four new capabilities to help customers protect their users, data, and applications in the cloud.

1. Bringing the Advanced Protection Program to the enterprise

Google’s Advanced Protection Program helps safeguard the personal Google Accounts of anyone at risk of targeted online attacks. We are now introducing the Advanced Protection Program to G Suite, Google Cloud Platform (GCP), and Cloud Identity customers. Enterprise admins can allow their users most at risk of targeted attacks to enroll in the program. Users who would benefit from the protections of the Advanced Protection Program include IT administrators, business executives, and employees in security-sensitive verticals such as finance and government.

With the Advanced Protection Program for the enterprise, we’ll enforce a specific set of policies for the users you identify, including:

- Enforcing the use of FIDO security keys, such as Titan Security Keys or compatible hardware from other vendors, to secure accounts against phishing and account takeovers
- Automatically blocking access to third-party apps that your company has not explicitly marked as trusted
- Enabling enhanced scanning of incoming email for phishing attempts, viruses, and attachments with malicious content

The beta of the Advanced Protection Program for the enterprise will be rolling out in the coming days. Learn more.

2. Making Titan Security Keys available in Japan, Canada, France, and the UK

FIDO security keys provide the strongest protection against phishing, targeted attacks, automated bots, and other techniques that seek to compromise user credentials. Last year, Google launched our own Titan Security Keys with availability in the United States. Starting today, Titan Security Keys are also available on the Google Store in Canada, France, Japan, and the United Kingdom (UK). Titan Security Keys can be used anywhere FIDO security keys are supported, including Google’s Advanced Protection Program. Learn more in our detailed blog post.

3. Using machine learning to detect anomalous activity in G Suite

Staying on top of activity that impacts the organization’s security is top of mind for most admins. Starting today, G Suite Enterprise admins can automatically receive anomalous activity alerts in the G Suite alert center. Our machine learning models analyze security signals within Google Drive to detect potential security risks such as data exfiltration or policy violations related to unusual external file sharing and download behavior. Anomaly detection is available in beta for G Suite Enterprise and G Suite Enterprise for Education customers. Learn more.

4. Enabling one-click access to thousands of additional apps

As organizations expand their use of SaaS apps, they need to reduce friction for users while maintaining security. Cloud Identity and G Suite already enable single sign-on (SSO) for apps that use modern identity standards like SAML and OIDC, but just as important in meeting organizations where they are in their cloud journey is the ability to support legacy apps that still require a username and password to authenticate. We’re pleased to announce that support for password-vaulted apps will be generally available in Cloud Identity in the coming days. The combination of standards-based and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins.

Creating environments that are secure, and keeping them that way, is critical for organizations that run in the cloud. These new features will help strengthen protection and securely enable cloud workloads and business processes. If you are at Next Tokyo, learn more by checking out our security sessions. You can also watch our most recent round of Google Cloud Security Talks here, and register for our next round of security talks here.
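As a hedged illustration of the anomaly-detection idea in announcement #3, here is a simple statistical stand-in. The alert center uses managed machine learning models over many Drive signals, not this formula; the sketch only shows the underlying intuition of flagging behavior far outside a user’s baseline:

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: int, threshold: float = 3.0) -> bool:
    """Flag today's download count if it sits more than `threshold`
    standard deviations above the historical mean for this user."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > threshold

# A user who normally downloads a handful of files suddenly pulls 250:
is_anomalous([4, 6, 5, 7, 5, 6, 4], 250)  # True
is_anomalous([4, 6, 5, 7, 5, 6, 4], 6)    # False
```

Real models additionally weigh context (external sharing, file sensitivity, time of day), which is what makes managed detection more useful than a single per-user threshold.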
Source: Google Cloud Platform

Reaching for the sky: Japanese businesses embrace Google Cloud for digital transformation

Since our last Google Cloud Next Tokyo in September 2018, we’ve been busy growing and expanding our commitment to Japanese businesses. In May, we launched our Osaka cloud region, complementing our existing cloud region in Tokyo, and were recognized as a Leader in the 2019 Gartner Magic Quadrant for Cloud Infrastructure as a Service, Japan. We’ve also invested in a new undersea cable system that will connect Japan to Guam and Australia. When it goes online in early 2020, it will be the third cable system Google has invested in to land in Japan. It’s a key part of our cloud network in Asia Pacific, helping to bring greater agility, flexible capacity, and better performance to our customers throughout the region.

Demand from Japanese customers who want to build and scale their business on Google Cloud continues to grow, and we’ve been thrilled to welcome new customers in the past year like Asahi Group Holdings, Kyocera Communications Systems, SHARP, and Yamaha. Since launching our Advanced Solutions Lab (ASL) in 2018, we’ve been working side by side with leaders from many different industries to develop AI-powered solutions to business challenges, and we’ve heard from customers like Fast Retailing who’ve already benefited from ASL immersive training. And to better support all our Japanese customers, earlier this year we launched 24×7 Japanese-language support across all channels for Platinum and Enterprise customers.

We continue to be inspired by all the ways Japanese businesses are transforming in the cloud, and are thrilled to welcome so many as we kick off Google Cloud Next in Tokyo this week. Here are just a few of their stories.

Fast connections: East Japan Railway evolves and grows its services with Google Cloud

Serving 17.9 million passengers on 12,000 trains each day, East Japan Railway Company, or JR-EAST, is the nation’s largest railway company.
Beyond transportation, it is also essential to daily life in Japan.

Safety is a top priority for JR-EAST’s management and a key tenet of its corporate vision, “Move UP 2027.” In addition to pursuing forward-looking projects like mobility as a service (MaaS), JR-EAST has started working with Google Cloud to deliver the highest possible level of transportation safety and deepen trust with customers and the community.

“It’s been one year since we launched our corporate vision ‘Move UP 2027’ with an aim to create ‘customer-centric values and services’. This vision promotes future-oriented programs including MaaS and places ‘ultimate safety’ as the top priority,” said Masaki Ogata, Vice Chairman, JR-EAST. “For example, we will be working with Google Cloud to revolutionize maintenance works of lines and rails. This collaboration with Google Cloud will be a trigger to innovate maintenance operations within our railway business and also to spur innovation essential to our non-rail business.”

Applying smart analytics and AI for better customer experiences and business outcomes

The opportunity to unlock transformative business insights continues to be a driver for cloud adoption in Japan. In fact, we’ve seen a number of Japanese businesses turn to Google Cloud for their data management and smart analytics needs.

For Recruit Group, which serves millions of customers with services like housing information, hotel and restaurant reservations, and job listings, scalability has been a huge priority. With Google Cloud, it has the improved management and scalability it was looking for.

“The volume and complexity of data processing in our services is increasingly challenging,” says Sogo Ohishi, Corporate Executive Officer at Recruit Technologies Co., Ltd., who leads infrastructure management across the group’s multiple products. “We’re very satisfied with the high stability and performance improvement powered by GCP.
I’m particularly excited about the recent migration of our EOSL (end of service life) Hadoop cluster to BigQuery and Cloud Dataproc, which enabled us to create an integrated data mart 14 times faster. In an era of ever-growing data, we look forward to continuing to improve agility, scalability, and operational efficiency by leveraging robust cloud-native architecture.”

Organizations with robust data processing and analysis are frequently the most successful in applying AI and machine learning (ML). Accordingly, we’ve seen growth in the number of Japanese enterprises adopting AI to transform their business.

One of Japan’s most popular mobile-based internet service companies, DeNA is using ML to improve the new-player onboarding experience for its game “Gyakuten Othellonia”. With AI, the game gives new players recommendations for in-game strategies and offers scenarios for them to practice and gain experience before challenging skilled players. As a result, DeNA saw an increase in new-player activity, and the win rate for beginners grew by five percentage points. Players that used the new recommendation service have also shown higher lifetime value (LTV) than those that did not, which has the potential to positively impact the game’s bottom line.

“We view AI as an important element of transforming the gaming experience on our platform,” says Kenshin Yamada, Director of the AI Dept, DeNA Co., Ltd. “By collaborating with Google Cloud, we have been able to leverage Google’s expertise in AI as well as in building and serving different components in our game. We are also able to leverage Google Cloud’s open and serverless technologies to host our AI models without worrying about scalability of infrastructure or portability of code.”

For Zozo, a popular Japanese fashion retailer, the product search capabilities of its website, ZOZOTOWN, are essential to meeting customer needs. To provide the best search experience possible, it relies on ML.
But managing and optimizing its ML model requires frequent updates with new data. To speed up model training, Zozo turned to Cloud TPUs.

“Visual search for apparel is very important for our users, and training useful machine learning models that produce accurate search results is critical for our user experience,” says Imamura Masayuki, VPoE of ZOZO Technologies, Inc. “With Cloud TPU, we are now able to train our TensorFlow models 55x faster, going from one week to under three hours of training time. Running TensorFlow on Google Cloud using Cloud TPU has helped us consistently test, improve, and serve better models that delight our users.”

One of Japan’s leading insurance companies, Sompo Holdings is using AI to speed up the estimation of insurance premiums for customers. “The ability to provide an instant quote is very important for a good customer experience,” says Koichi Narasaki, Group Chief Digital Officer, Executive Vice President and Executive Officer, Sompo Holdings, Inc. “With the help of the Google Cloud Vision API, we are able to use a smart device to extract information from insurance documents and feed the data into our premium calculators in real time. This process lets users receive instant premium estimates.”

Looking ahead

These are just a few of the many stories we’ve heard from customers in Japan and Asia Pacific as they embrace the cloud to modernize their infrastructure, develop new applications, manage their data, gain insights through smart analytics, and increase productivity and collaboration. In addition to the stories here, we’ve also heard from customers like NTT Communications and ANZ Bank that are using Anthos, our new hybrid and multi-cloud platform, to accelerate application development and take advantage of transformational technologies like containers, service mesh, and microservices.
You can learn more about that in this blog post. We look forward to continuing to work with businesses wherever they may be, putting the power of the cloud to work for meaningful transformation. To find more stories from Google Cloud customers in Japan and Asia Pacific, visit our website.
Source: Google Cloud Platform

New GCP database options to power your enterprise workloads

Databases power critical applications and workloads for enterprises across every industry, and we want to make it easier for businesses to use and manage those databases. Our goal is to provide you the capabilities to run any workload, existing and future, on Google Cloud. That’s why we offer secure, reliable and highly available database services, and have been working to deeply integrate open-source partner services running on Google Cloud Platform (GCP) to give you freedom of choice in how you manage your data.

Today, we’re announcing a number of enhancements to our database portfolio:

Cloud SQL for Microsoft SQL Server in alpha
Federated queries from BigQuery to Cloud SQL
Elastic Cloud on GCP now available in Japan and coming soon to Sydney

Open the window to the cloud with Cloud SQL for Microsoft SQL Server

When you move to the cloud, you shouldn’t have to reinvent the wheel. Our goal at GCP is to make it easy for everything you use on-premises to work as expected once you move to the cloud. Cloud SQL for Microsoft SQL Server (currently in alpha) lets you bring existing SQL Server workloads to GCP and run them in a fully managed database service. We’ve heard great feedback from our early-access customers using Cloud SQL for SQL Server. For enterprises, this option means they can now run fully managed SQL Server with built-in high availability and backup capabilities. You can lift and shift SQL Server workloads without changing apps, then use the data from those apps with other GCP services like BigQuery and AI tools to create more intelligent applications.

Federated queries from BigQuery to Cloud SQL

Data can only create value for your business when you put it to work, and businesses need secure and easy-to-use methods to explore and manage data that is stored in multiple locations. To help, we’re continuing to expand federated queries to more GCP products so you can bring analysis to data, wherever it is, right within BigQuery.
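To sketch what such a federated query can look like once Cloud SQL is supported, the example below composes a BigQuery statement using the EXTERNAL_QUERY function, which pushes the inner SQL down to the external database and returns the result set to BigQuery. The project, connection, and table names here are hypothetical, not taken from the announcement:

```python
# Sketch of a BigQuery federated query against Cloud SQL, assuming a
# hypothetical connection resource "my-project.us.sales-db". The inner SQL
# runs on the Cloud SQL instance; BigQuery joins the result with native data.

def build_federated_query(connection_id: str, cloudsql_sql: str) -> str:
    """Compose a BigQuery statement that joins a native table with a
    Cloud SQL result set fetched through EXTERNAL_QUERY."""
    escaped = cloudsql_sql.replace('"', '\\"')
    return (
        "SELECT o.order_id, o.total, c.segment\n"
        "FROM `my-project.analytics.orders` AS o\n"
        f'JOIN EXTERNAL_QUERY("{connection_id}", "{escaped}") AS c\n'
        "ON o.customer_id = c.customer_id"
    )

query = build_federated_query(
    "my-project.us.sales-db",
    "SELECT customer_id, segment FROM customers",
)
print(query)
```

Because the inner query executes on the Cloud SQL side, only its result set crosses into BigQuery, which keeps the analysis close to the data.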
We currently support querying non-BigQuery storage systems like Cloud Storage, Cloud Bigtable and Sheets, and today we’re extending the federated query capability to include Cloud SQL. This is part of our continuing effort to integrate our services across products, providing a seamless customer experience and building strong ecosystems around our products.

Elastic Cloud on GCP available in Japan; Sydney coming soon

Migrating to the cloud can be challenging. That’s something we keep in mind as we develop our database products and integrate them with the rest of GCP. We do this with GCP-built services like Cloud SQL and Cloud Spanner, as well as through deeply integrated partner services running on GCP. These open source-centric strategic partnerships are in line with our belief that open source is a critical component of the public cloud. We’re pleased to announce the expanded availability of Elastic Cloud in Google Cloud’s Japan region, with Sydney region availability coming soon. As the creator of the Elastic Stack, built on Elasticsearch, Elastic offers self-managed and SaaS products that make data usable in real time and at scale for search use cases like logging, security, and analytics. With more integration to come, you will soon be able to use your GCP commits toward Elastic Cloud, with a single bill from Google Cloud.

How customers are using GCP databases

We’ve heard from customers here in Japan that they’ve found new flexibility and scale with GCP databases, along with strong consistency and less operational overhead.

Merpay is a provider of secure online payment technology. Merpay’s Mercari platform has about 13 million monthly active users and depends on the performance and scalability of GCP database technology to run smoothly.
“We adopted Cloud Spanner, with little prior experience, to store the data of a new smartphone payment service, in keeping with our culture of ‘Go Bold,’ where we encourage employees to take on challenges,” says Keisuke Sogawa, CTO of Merpay, Inc. “Additionally, adopting a microservices architecture based on Google Kubernetes Engine (GKE) made it possible for teams to freely and quickly develop services and maintain their service levels even as the organization grew larger. As a result, we released the Merpay service within a short span of 15 months, and today our service remains reliable for our customers.”

Learn more about GCP databases, and get details about Forrester naming Google Cloud a Leader in The Forrester Wave: Database-as-a-Service and The Forrester Wave: Big Data NoSQL.

Check out these Tokyo Cloud Next ’19 sessions for more about GCP databases:

Announcing Federated Cloud SQL
Introduction to serverless application development using Cloud Firestore
Running Redis on GCP: four deployment scenarios
Cloud Spanner in Action
Source: Google Cloud Platform

Bringing hybrid and multi-cloud to our APAC customers with Anthos

At Google Cloud Next ‘19 in April, we announced Anthos, Google Cloud’s new open platform that lets you run applications anywhere—simply, flexibly and securely. Embracing open standards, Anthos lets you run your applications unmodified on existing on-prem hardware or in the public cloud. Anthos also includes capabilities to help you automate policy and security at scale across your deployments using Anthos Config Management. Anthos accelerates application development by giving teams a single set of modern tools to learn, equips your business with transformational technologies like containers, service mesh and microservices, and keeps you from being locked in to a single cloud provider.

To support these transformations, customers frequently tell us they need to be able to modernize. Having the power to modernize your workloads in place, whether in existing on-prem data centers or with a move to the cloud, is key to Google meeting you where you are. To continue supporting that flexibility, today we are happy to announce the beta availability of Migrate for Anthos, which lets you take VMs from on-prem or Google Compute Engine and move them directly into containers running in Google Kubernetes Engine (GKE). We’re also expanding the list of supported sources so that you can migrate VMs directly from Amazon EC2 and Microsoft Azure into containers in GKE. With this launch, customers can now use Migrate for Anthos to automatically modernize their VMs and move them to containers without the complex, manual processes of traditional container modernization strategies. This approach gives you more flexibility to modernize your existing infrastructure investments with ease, even for VMs you’d previously written off as impossible to modernize. By moving to containers in GKE you gain a wealth of benefits and automation, such as no longer having to manually maintain and patch your OS.
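To illustrate the end state of such a migration: Migrate for Anthos generates the Kubernetes resources for you, so the hand-written sketch below, with invented names and image paths, only shows the general shape of what a migrated VM workload ends up running as in GKE — an ordinary Deployment whose container wraps the VM’s workload:

```python
# Illustrative only: Migrate for Anthos generates the Kubernetes objects
# itself. This dict sketches the kind of Deployment a migrated VM workload
# becomes in GKE. All names and images here are invented for the example.
import json

migrated_workload = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "legacy-billing-vm", "labels": {"app": "billing"}},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "billing"}},
        "template": {
            "metadata": {"labels": {"app": "billing"}},
            "spec": {
                "containers": [{
                    # The container image wraps the migrated VM's workload.
                    "name": "billing",
                    "image": "gcr.io/my-project/migrated/billing:v1",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

print(json.dumps(migrated_workload, indent=2))
```

Once the workload is an ordinary Deployment, GKE takes over node maintenance, OS patching and scheduling, which is where the automation benefits described above come from.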
Atos, a systems integrator with a large presence in Japan, has been using Migrate for Anthos to accelerate its hybrid cloud journey. “Containers are already a part of our cloud landscape, giving us a powerful way to manage and maintain our systems as well as customer environments. At the same time, we have a lot of VMs in production and we are always looking for optimized ways of migrating these over to hybrid cloud delivery models,” said Michael Kollar, SVP for Cloud Engineering at Atos. “Migrate for Anthos gives us a fantastic additional tool for transformational projects and it will further accelerate our cloud success.”

Anthos Adoption in Asia Pacific

Gartner predicts that by 2021, more than 75% of midsize and large organizations will have adopted a multi-cloud and/or hybrid IT strategy [1]. Organizations are taking this approach to avoid vendor lock-in and to benefit from better server density, reduced management overhead, and improved operations through integration with services like Stackdriver for logging, monitoring, and debugging. NTT Communications is one of the world’s largest telecommunications providers, and has been one of the earliest adopters of Anthos in Asia Pacific. “NTT Communications Corporation was one of the early customers of Anthos in Japan when it was announced in April this year. Since then, we have embarked on an interesting proof of concept (POC) with medical institutions and system integrators to analyze clinical data to see if we can improve the quality of rehabilitation programs offered to patients.
Using GKE On-Prem, which is part of the Anthos platform, we securely store clinical data on our Enterprise Cloud, which comprises physically distributed servers connected through a closed and secure network, and access smart analysis tools to process the data,” said Akiko Kudo, Senior Vice President, Head of Fifth Sales Division.

We’re thrilled to work with our customers in Asia Pacific to help them bring their environments into the digital future. Modernizing your IT environment is a dynamic journey and will likely involve multiple strategies. Technologies like our Anthos platform are here to help simplify that path. For more information on migrating and modernizing with Google Cloud, be sure to visit our Anthos and Migrate for Anthos pages. Sign up here if you are interested in trying out Anthos.

[1] Smarter With Gartner, 5 Approaches to Cloud Applications Integration, May 14, 2019
Source: Google Cloud Platform

Driving enterprise modernization with Google Cloud infrastructure

Organizations are adopting modern cloud architectures to deliver the best experience to their customers and benefit from greater agility and faster time to market. Google Cloud Platform (GCP) is at the center of this shift, from enabling customers to adopt hybrid and multi-cloud architectures to modernizing their services. Today, we’re announcing important additions to our migration and networking portfolios to help you with your modernization journey:

Migration from Microsoft Azure to Google Compute Engine beta
Traffic Director general availability
Layer 7 Internal Load Balancer beta

Migrate to GCP from more clouds

Businesses migrate virtual machines from on-prem to Google Cloud all the time and, increasingly, they also want to move workloads between clouds. That’s why today Migrate for Compute Engine is adding beta support for migrating virtual machines directly out of Microsoft Azure into Google Compute Engine (GCE). This complements Migrate for Compute Engine’s existing support for migrating VMs out of Amazon EC2. As a result, whether you’re migrating between clouds for better agility, to save money, or to increase security, you now have a way to lift and shift into Google Cloud—quickly, easily and cost-effectively.

Trax, which uses GCP to digitize brick-and-mortar retail stores, has significantly accelerated its migration and freed up developer time thanks to the ease of use and flexibility of Migrate for Compute Engine. “Migrate for Compute Engine allowed our DevOps team to successfully move dozens of servers within a few hours, without tying up developers or doing any manual setup,” said Mark Serdze, director of cloud infrastructure at Trax. “Previous migration sprints were taking as long as three weeks, so getting sprints down to as little as three hours with Migrate for Compute Engine was a huge time and energy saver for us.
And being able to use the same solution to move VMs from on-prem, or from other cloud providers, will be very beneficial as we continue down our migration path.”

Simplify transformation with enterprise-ready service mesh and modern load balancing

As enterprises break monoliths apart and start modernizing services, they need solutions for consistent service and traffic management at scale. Organizations want to invest time and resources in building applications and innovating, not in the infrastructure and networking required to deploy and manage those services. Service mesh is rapidly growing in popularity because it solves these challenges by decoupling applications from application networking, and service development from operations. To ease service mesh deployment and management, we’re announcing two enterprise-ready solutions that make it easier to adopt microservices and modern load balancing: general availability of Traffic Director and beta availability of Layer 7 Internal Load Balancer (L7 ILB).

Traffic Director, now available in Anthos, is our fully managed, scalable, resilient service mesh control plane that provides configuration, policy and intelligence to Envoy and similar proxies in the data plane using open APIs, so customers are not locked in. Originally built at Lyft, Envoy is an open-source, high-performance proxy that runs alongside the application to deliver common, platform-agnostic networking capabilities; together with Traffic Director, it abstracts away application networking. Traffic Director delivers global resiliency, intelligent load balancing and advanced traffic control such as traffic splitting, fault injection and mirroring to your services. You can bring your own Envoy builds or use certified Envoy builds from Tetrate.io.

“Service mesh technologies are integral to the evolution from monolithic, closed architectures to cloud-native applications,” said Vishal Banthia, software engineer at Mercari, a leading online marketplace in Japan.
“We are excited to see Traffic Director deliver fully managed service mesh capabilities by leveraging Google’s strengths in global infrastructure and multi-cloud service management.”

We’re also taking the capabilities of Traffic Director a step further for customers who want to modernize existing applications. With L7 ILB, currently in beta, you can now bring powerful load balancing features to legacy environments. Powered by Traffic Director and Envoy, L7 ILB lets you deliver rich traffic control to legacy services with minimal toil, with the familiar experience of using a traditional load balancer. Deploying L7 ILB is also a great first step toward migrating legacy apps to a service mesh.

“L7 ILB makes it simple for enterprises to deploy modern load balancing,” said Matt Klein, creator of Envoy Proxy. “Under the hood, L7 ILB is powered by Traffic Director and Envoy, so you get advanced traffic management simply by placing L7 ILB in front of your legacy apps.”

Both L7 ILB and Traffic Director work out of the box with virtual machines (Compute Engine) and containers (Google Kubernetes Engine or self-managed), so you can modernize services at your own pace.

Deliver resilient connectivity for hybrid environments

Networking is the foundation of hybrid cloud, and fast, reliable connectivity is critical, whether through a high-performance option like Cloud Interconnect or through Cloud VPN for lower-bandwidth needs. For mission-critical requirements, High Availability VPN and 100 Gbps Dedicated Interconnect will soon be generally available, providing resilient connectivity with industry-leading SLAs for deploying and managing multi-cloud services.

We look forward to hearing how you use these new features. Please visit our website to learn more about our networking and migration solutions, including Migrate for Anthos.
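As a rough illustration of the traffic splitting mentioned above, the sketch below simulates a weighted 90/10 canary split of the kind a control plane like Traffic Director can program into Envoy proxies. The service names and weights are invented for the example, and real deployments configure splits declaratively rather than in application code:

```python
# A minimal sketch of weighted traffic splitting: requests are spread across
# service versions in proportion to configured weights, as a data-plane proxy
# would do. Backend names ("payments-v1"/"payments-v2") are invented.
import random

def pick_backend(weights, rng):
    """Choose a backend with probability proportional to its weight."""
    total = sum(weights.values())
    roll = rng.uniform(0, total)
    for backend, weight in weights.items():
        if roll < weight:
            return backend
        roll -= weight
    return backend  # guard against floating-point edge cases

# Canary split: 90% of traffic to v1, 10% to the new v2.
split = {"payments-v1": 90, "payments-v2": 10}
rng = random.Random(0)  # seeded for a reproducible simulation
counts = {"payments-v1": 0, "payments-v2": 0}
for _ in range(10_000):
    counts[pick_backend(split, rng)] += 1
print(counts)
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is the usual way such a split drives a canary rollout without touching application code.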
Source: Google Cloud Platform

APIs take root as the path for healthcare interoperability

The healthcare industry is at a turning point. Patients and providers are eager for advances in value-based care, patient engagement, and machine learning as they look to usher in a new era of constantly improving health outcomes and well-being. Interoperability is key to removing the barriers between the healthcare industry and the future it seeks to build.

A strong and continued commitment to data interoperability will deliver faster data insights to improve patient outcomes, increase productivity, and reduce physician burnout. Our commitment to interoperability involves investment in our own products like the Cloud Healthcare API, as well as contributions to open-source tools including Google’s FHIR protocol buffers and Apigee Health APIx, which help developers embrace and implement these standards with ease.

We believe it is important to work with stakeholders across the ecosystem – patients, providers, insurers, researchers, tech providers – to unlock important data and enable patients and their care teams to derive insights when they are needed most. Today, we are very proud to announce that Google Cloud has joined Amazon, IBM, Microsoft, Oracle, and Salesforce in support of healthcare interoperability with the following statement at the Blue Button 2.0 Developer Conference at the White House.

Full statement below:

As healthcare evolves across the globe, so does our ability to improve the health and wellness of communities. Patients, providers, and health plans are striving for more value-based care, more engaging user experiences, and broader application of machine learning to assist clinicians in diagnosis and patient care. Too often, however, patient data are inconsistently formatted, incomplete, unavailable, or missing—which can limit access to the best possible care. Equipping patients and caregivers with information and insights derived from raw data has the potential to yield significantly better outcomes.
But without a robust network of clinical information, even the best people and technology may not reach their potential.

Interoperability requires the ability to share clinical information across systems, networks, and care providers. Barriers to data interoperability sit at the core of many process problems. We believe that better interoperability will unlock improvements in individual and population-level coordination, care delivery, and management. As such, we support efforts from ONC and CMS to champion greater interoperability and patient access.

This year’s proposed rules focus on the use of HL7® FHIR® (Fast Healthcare Interoperability Resources) as an open standard for electronically exchanging healthcare information. FHIR builds on concepts and best practices from other standards to define a comprehensive, secure, and semantically extensible specification for interoperability. The FHIR community features multidisciplinary collaboration and public channels where developers interact and contribute. We’ve been excited to use and contribute to many FHIR-focused, multi-language tools that work to solve real-world implementation challenges. We are especially proud to highlight a set of open-source tools including Google’s FHIR protocol buffers, Microsoft’s FHIR Server for Azure, Cerner’s FHIR integration for Apache Spark, a serverless reference architecture for FHIR APIs on AWS, Salesforce/MuleSoft’s Catalyst Accelerator for Healthcare templates, and IBM’s Apache Spark service.

Beyond the production of new tools, we have also proudly participated in developing new specifications, including the Bulk Data $export operation (and recent work on an $import operation), subscriptions, and analytical SQL projections. All of these capabilities demonstrate the strength and adaptability of the FHIR specification.
Moreover, through connectathons, community events, and developer conferences, our engineering teams are committed to the continued improvement of the FHIR ecosystem. Our engineering organizations have previously supported the maturation of standards in other fields, and we believe FHIR version R4—a normative release—provides an essential and appropriate target for ongoing investments in interoperability. We have seen the early promise of standards-based APIs from market-leading health IT systems, and are excited about a future where such capabilities are universal.

Together, we operate some of the largest technical infrastructure across the globe, serving many healthcare and non-healthcare systems alike. Through that experience, we recognize the scale and complexity of the task at hand. We believe that the techniques required to meet the objectives of ONC and CMS are available today and can be delivered cost-effectively with well-engineered systems.

At Amazon, Google, IBM, Microsoft, Oracle, and Salesforce, we are fortunate to work with many teams and partners that draw on experiences across industries to support and accelerate the delivery of FHIR APIs in healthcare. Moreover, we are committed to introducing tools for the healthcare developer community. After the proposed rule takes effect, we commit to offering technical guidance based on our work, including solution architecture diagrams, system narratives, and reference implementations, to accelerate deployments for all industry stakeholders. We will work diligently to ensure these blueprints provide a clear and robust path to achieving the spirit of an API-first strategy for healthcare interoperability. As a technology community, we believe that a forward-thinking API strategy as outlined in the proposed rules will advance the ability of all organizations to build and deploy novel applications to the benefit of patients, care providers, and administrators alike.
ONC and CMS’s continued leadership, thoughtful rules, and embrace of open standards help move us decisively in that direction.

Signed,
Amazon, Google, IBM, Microsoft, Oracle, and Salesforce
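For readers unfamiliar with FHIR, the standard referenced throughout the statement above becomes concrete with a small example. The sketch below shows a minimal FHIR R4 Patient resource, modeled on the specification’s example patient, expressed as plain JSON, plus an illustrative structural check; real systems validate against the full specification or use typed bindings such as the FHIR protocol buffers mentioned above:

```python
# A minimal FHIR R4 Patient resource, sketched as plain JSON to show the
# shape of the data the standard exchanges. The structural check below is
# illustrative only; production systems use full specification validation.
import json

patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}

def looks_like_fhir_resource(resource, expected_type):
    """Cheap structural check: every FHIR resource declares a resourceType."""
    return resource.get("resourceType") == expected_type

assert looks_like_fhir_resource(patient, "Patient")
print(json.dumps(patient, indent=2))
```

Because every FHIR resource is self-describing in this way, systems from different vendors can exchange and route records without prior coordination on a private schema, which is the interoperability property the statement argues for.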
Source: Google Cloud Platform