Google Cloud and ServiceNow announce strategic partnership to enable intelligent digital workflows

Today, we’re announcing a new strategic partnership with ServiceNow to help more enterprises take advantage of the cloud for IT operations management. For many organizations, IT operations is a labor-intensive process that can lack transparency and slow down business agility. As a result, many enterprises are leveraging the cloud to transform these manual processes into intelligent digital workflows that are faster, more available, and more efficient.

Bringing ServiceNow IT Operations Management (ITOM) to Google Cloud

As the first step of the partnership, ServiceNow will integrate its IT Operations Management (ITOM) capabilities with Google Cloud’s core infrastructure so that customers can use ServiceNow for the discovery and service mapping of Google Cloud’s services. This will allow enterprises to improve IT infrastructure management and monitoring, help them fine-tune the delivery of services on behalf of their customers, and better manage spend across hybrid cloud deployments.

The initial phase of the integration focuses on visibility and discovery of services and is available now. The second phase focuses on deployment policy and self-service provisioning, including support for Google Deployment Manager (GDM), and is expected to launch over the summer. Moving forward, we’ll work together with ServiceNow to continue to enhance these integrations and expand into the areas of cloud cost reporting, optimization, and governance.

“Our collaboration with Google Cloud will help customers optimize their investments while leveraging Google Cloud’s global infrastructure,” says Pablo Stern, Senior Vice President of IT Workflow Products at ServiceNow. “Ultimately, we’re taking a big step forward on our shared goal of helping enterprises navigate digital transformation while simplifying and streamlining the critical work of IT.”

Adding Google Cloud AI and ML capabilities to ServiceNow

Looking ahead, ServiceNow will integrate Google Cloud AI with its digital workflow capabilities to provide a seamless, automated experience for greater efficiency. ServiceNow’s IT Service Management (ITSM) system will leverage Google Cloud’s AutoML Translation to dynamically translate user input into an IT service agent’s preferred language and to help scale IT support globally with real-time language translation. This can help enterprises improve IT service efficiency while minimizing costs and increasing employee satisfaction. This integration will be available for enterprise customers in the fall, and Google Cloud and ServiceNow will also explore developing additional AI and ML capabilities around document, speech, and image understanding.

Get started today

Customers can begin leveraging our collaboration on discovery and service mapping capabilities today. To learn more, sign up here.
Source: Google Cloud Platform

Improving data quality for machine learning and analytics with Cloud Dataprep

Editor’s note: Today’s post comes to us from Bertrand Cariou at Trifacta, and presents some steps you might take in Cloud Dataprep to clean your data for later use in analytics or in training a machine learning model.

Data quality is a critical component of any analytics and machine learning initiative, and unless you’re working with pristine, highly controlled data, you’ll likely face data quality issues. To illustrate the process of turning unknown, inconsistent data into trustworthy assets, we’ll use the example of a forecast analyst in the retail (consumer packaged goods) industry. Forecast analysts must be extremely accurate in planning the right quantities to order. Supplying too much product results in wasted resources, whereas supplying too little risks losing profit. On top of that, an empty shelf also risks consumers choosing a competitor’s product, which can have a harmful, long-term impact on the brand.

To strike the right balance between appropriate product stocking levels and razor-thin margins, forecast analysts must continually refine their analysis and predictions, leveraging their own internal data as well as third-party data over which they have no control. Every business partner, including suppliers, distributors, warehouses, and other retail stores, may provide data (e.g., inventory, forecasts, promotions, or past transactions) in various shapes and levels of quality. One company may use pallets instead of boxes as a unit of storage, pounds versus kilograms, different category nomenclature and naming, or a different date format, and will most likely have product SKUs that are a combination of internal and other supplier IDs. Furthermore, some data may be missing or may have been entered incorrectly.

Each of these data issues represents an important risk to reliable forecasting. Forecast analysts must clean, standardize, and gain trust in the data before they can report and model on it accurately. This post reviews key techniques for cleaning data with Cloud Dataprep and covers new features that may help improve your data quality with minimal effort.

Basic concepts

Cleaning data with Cloud Dataprep corresponds to a three-step iterative process:

1. Assessing your data quality
2. Resolving or remediating any issues uncovered
3. Validating cleaned data, at scale

Cloud Dataprep constantly profiles the data you’re working on, from the moment you open the grid interface and start preparing data. With Dataprep’s real-time Active Profiling, you can see the impact of each data cleaning step on your data. The profile result is summarized at the column header with basic data points that call out key characteristics of your data, in the form of an interactive visual profile. By clicking one of these profile column header bars, Cloud Dataprep suggests transformations to remediate mismatched or missing values. You can always try a transformation, preview its impact, and select or tweak it. At any point, you can revert to a specific previous step if you don’t like the result.

With these basic concepts in mind, let’s cover Cloud Dataprep’s data quality capabilities.

1. Assessing your data quality

As soon as you open a dataset in the grid interface, you can access data quality signals that help you assess data issues and guide your work in cleaning the data.

Rapid profiling

You’ll likely scan over your column headers and identify potential quality issues to understand which columns may need your attention. Mismatched values (red bar) based on the inferred data types, missing values (black), and uneven value distributions (bars) can help you quickly identify which columns need your attention.
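Outside of Dataprep, these are the same checks you might otherwise script by hand. Here's a rough pandas sketch of them (the file name and the `quantity` column are hypothetical stand-ins; only `order_date` and `material` come from the example in this post):

```python
import pandas as pd

# Hypothetical raw extract; Dataprep infers types and profiles this automatically.
df = pd.read_csv("orders.csv", dtype=str)

# Missing values per column (the black bar in Dataprep's column headers).
print(df.isna().mean().sort_values(ascending=False))

# Mismatched values for a column expected to hold dates (the red bar):
# values present in the raw data that fail to parse as dates.
parsed = pd.to_datetime(df["order_date"], errors="coerce")
print("mismatched order_date values:", (parsed.isna() & df["order_date"].notna()).sum())

# Value distribution for a categorical column (the histogram bars).
print(df["material"].value_counts(dropna=False).head(10))
```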
In this particular case, our forecast analyst knows she’ll have to drill down on the `material` field, which includes some mismatched and missing values. How should these data defects impact her forecast and replenishment models?

Intermediary data profiling

If you click on a column header, you’ll see extra statistics in the right panel of Dataprep. This is particularly useful if you expect a specific format standard for a field and want to identify the values that don’t comply with that standard. In the example below, you can see that Cloud Dataprep discovered three different format patterns for order_date. You might have follow-up questions: can empty order dates be leveraged in the forecast? Can mismatched dates be corrected, and how?

Advanced profiling

If you click “Show more,” or open the column header menu and click “Column details” in the main grid, you’ll land on a comprehensive data profiling page with details about mismatched values, value distribution, and outliers. You can also navigate to the pattern tab to explore the data structure within a specific column.

These three data profiling capabilities are dynamic by nature, in the sense that Cloud Dataprep reprofiles the data in real time at each step of a transformation, so it always presents the latest information. This helps you clean your data faster and more effectively. The value for the forecast analyst is that she can validate immediately as she goes through the process of cleaning and transforming the data, so that it fits the format she expects for her downstream modeling and reporting.

2. Resolving data quality issues

Dynamic profiling helps you assess the data quality at hand, and it is also the point of entry for cleaning the data. Profile graphs are interactive and offer transformation suggestions as soon as you interact with them. For example, clicking the missing-value space in the column header displays transformation suggestions such as deleting the values or setting them to a default.

Resolving incorrect patterns

You can efficiently resolve incorrect patterns in a column (such as the recurring date formatting issue in the order_date column) by accessing the pattern tab in the column details screen. Cloud Dataprep shows you the most frequent patterns. Once you select a target conversion format, Cloud Dataprep displays transformation suggestions in the right panel to convert all the data to fit the selected pattern. Watch the animation below, and try it for yourself.

Highlighting data content

Another interactive way to clean your data is to highlight a portion of a value in a cell. Cloud Dataprep will suggest a set of transformations based on your selection, and you can refine the selection by highlighting additional content from another cell. Here is an example that extracts the month from the order date in order to calculate the volume per month:
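The animation itself isn't reproduced here, but the underlying transformation is easy to sketch. Continuing the hypothetical pandas example above (an illustration of the logic, not Dataprep's actual recipe syntax):

```python
# Parse the order date, extract the month, and aggregate volume per month.
df["order_month"] = pd.to_datetime(df["order_date"], errors="coerce").dt.to_period("M")
df["quantity"] = pd.to_numeric(df["quantity"], errors="coerce")  # hypothetical volume column
print(df.groupby("order_month")["quantity"].sum())
```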
Format, replace, conditional functions, and more

You can find most of the functions you’ll use to clean up data in the Column menu, in the format or replace sections, or among the conditional formulas in the icon bar, as shown below. These can be useful for converting all product or category names to uppercase, or for trimming the quotes that names often carry after import from a CSV or Excel file.

Format functions

Extract functions

The extract functions can be particularly useful for extracting a subset of a value within a column. For example, you may want to take the product_id “Item: ACME_66979905111536979300 – PASTA RONI FETTUCINE ALFR” and extract each individual component by splitting it on the “ – ” value.

Conditional functions

Conditional functions are useful for tagging values that are out of scope. For example, you can write a formula that tags records when a quantity is over 10,000, which wouldn’t be valid for the order sizes you typically encounter.
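In the same hypothetical pandas terms, the extract and conditional examples above look roughly like this (Dataprep expresses these as recipe steps, not code, and the `product_id` column split here assumes every value contains the separator):

```python
# Extract: split the composite product_id on the " – " separator.
parts = df["product_id"].str.split(" – ", n=1, expand=True)
df["item_id"], df["description"] = parts[0], parts[1]

# Conditional: tag records whose quantity is outside the expected order sizes.
df["out_of_scope"] = df["quantity"] > 10_000
```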
If none of the visual suggestions gives you what you require for cleaning your data, you can always edit a suggestion or manually add a new step to a Dataprep recipe. Type what you want to do into the search box, and Cloud Dataprep will suggest transformations you can then edit and apply to the dataset.

Standardization

Standardizing values is a way to group similar values into a single, consistent format. This problem is especially prevalent with free-form entries like products, product categories, and company names. You can access the standardization feature from the Column menu. Additionally, Cloud Dataprep can group similar values together by string similarity or by pronunciation.

Tip: You can mix and match standardization algorithms. Some values may be standardized by spelling, while others are more sensibly standardized based on international pronunciation standards.

3. Validation at scale

The last, critical step of a typical data quality workflow in Cloud Dataprep is to validate, at scale, that no data quality issues remain in the dataset.

Leveraging sampling to clean data

Sometimes the full volume of a dataset won’t fit into Cloud Dataprep via your browser tab (especially when leveraging BigQuery tables with hundreds of millions of records or more). In that case, Cloud Dataprep automatically samples the data from BigQuery to fit it in your local computer’s memory. That might lead you to ask: how can you ensure you’ve standardized all the data in one column (e.g., product name, category, or region), or cleaned all the date formats in another? You can adjust your sampling settings by clicking the sampling icon at the top right and choosing the sampling technique that fits your requirements:

- Select anomaly-based to keep all the data that is mismatched or missing for one or multiple columns.
- Select stratified to retrieve every distinct value for a particular column (particularly useful for standardization).
- Select filter-based to retrieve all the data matching a particular formula (e.g., format does not match dd/mm/yyyy).

Profiling the data at scale

At this point, hopefully you’re happy and confident that your recipe will produce a clean dataset, but until you run it at scale across the whole dataset, you can’t be sure all your data is valid. To do so, click the ‘Run Job’ button and check that Profile Results is enabled. If the job results still show some red, it most likely means you need to adjust your data quality rules and try again.

Scheduling

To ensure that the data quality rules you create are applied on a recurring basis, schedule your recipes to run automatically. In the case of forecasting, data may change on a weekly basis, so users must run the job every week to validate that all the profile results stay green over time. If not, you can simply reopen and adapt the recipe to address the new data inconsistencies you discovered. In the flow view, select Schedule Flow to define the parameters for running the job on a recurring basis.

Conclusion

Our example here is retail-specific, but regardless of your area of expertise or industry, you may encounter similar data issues. By following this process and leveraging Cloud Dataprep, you can become faster and more effective at cleaning up your data for analytics or feature engineering. We hope that with Cloud Dataprep, the toil of cleaning up your data and improving your data quality is, well, not so messy. If you’re ready to start, log in to Dataprep via the Google Cloud Console and try this three-step data quality workflow on your own data.
Source: Google Cloud Platform

Google Cloud and Cisco expand partnership to bring Anthos to our shared customers

Two years ago, we announced our partnership with Cisco as part of our shared vision to help more businesses modernize in the cloud and develop and deploy applications anywhere. From day one, we focused on hybrid cloud, helping customers move on-premises investments to the cloud at their own pace and on their terms. But we heard from customers again and again: What if I want to run not just on-prem and in one cloud, but across multiple clouds?

Today, we’re taking a big step forward toward our shared goal: we’re expanding our partnership with Cisco to deliver Anthos to customers, helping them run their applications wherever is best for their business and achieve true multi-cloud flexibility.

Anthos will be tightly integrated with Cisco data center technologies such as Cisco HyperFlex, Cisco ACI, Cisco SD-WAN, and Cisco Stealthwatch Cloud, offering a consistent, cloud-like experience whether on-prem or in the cloud, with automatic upgrades to the latest versions and security patches. With Cisco’s support for Anthos, customers will be able to:

- Benefit from a fully managed service, like Google Kubernetes Engine (GKE), and Cisco’s hyperconverged infrastructure, networking, and security technologies.
- Operate consistently across an enterprise data center and the cloud.
- Consume cloud services from an enterprise data center.
- Modernize now, on premises, with the latest cloud technologies.
- Take advantage of edge computing.

“Our partnership with Google Cloud continues to expand as we find more ways to help our mutual customers succeed, particularly in areas like hybrid and multi-cloud. Cisco and Google will release a joint reference design that will help customers adopt, and channel partners deploy, the integrated joint solution,” says Kip Compton, Senior Vice President of Cloud Platforms and Solutions at Cisco. “Our support for, and integrations with, Anthos are a big step forward. Customers will be able to take advantage of Cisco data center, networking and security technologies with Anthos, enabling a more secure and seamless solution. We’re excited to help customers accelerate this.”

“Working with Cisco and Google, Anthos on HyperFlex has already helped us solve the challenges associated with deploying a modern infrastructure framework leveraging our trusted data center products like Cisco HyperFlex and Cisco ACI,” said Keith Silvestri, CTO at KeyBank.

Customers in our early access program are already sharing feedback with us—and we’ve been thrilled by the reception. These early access customers have helped inform the direction of our products and will continue to shape the future of Anthos—including our joint offerings with Cisco.

For businesses with legacy workloads that aren’t yet ready to be containerized, Apigee, our full lifecycle API management platform, can help securely connect legacy apps and data running on premises to any cloud through APIs.

We’re delighted to continue expanding our partnership with Cisco, which now spans areas including hybrid, multi-cloud, collaboration, and more. Cisco was our 2018 Container Technology Partner of the Year, and has been a key partner for our customers’ hybrid cloud journey. In the coming days, there will be additional announcements as part of our partnership with Cisco, so keep an eye out.

You can learn more about Cisco on Google Cloud by reading Cisco’s blog post.
Source: Google Cloud Platform

Watch and learn: Identity & access management sessions at Next '19

We have been hard at work building enterprise-ready identity and access management (IAM) services for our customers and partners, and we made multiple product announcements at Next ‘19. During the show, we also hosted 10 sessions on IAM, all of which you can now watch on demand, from any device.

Managing access to your infrastructure

- Best Practices for Identity and Authorization With GCP
- Anthos and Hybrid Identity
- Best Practices for Using Microsoft Active Directory (AD) and Apps on Google Cloud
- Reduce AD Dependency With Cloud Identity and Secure LDAP

Managing the identity of your employees

- Unifying User, Device, and App Management With Cloud Identity
- How Airbnb Secured Access to Their Cloud With Context-Aware Access
- The Future of Security Keys: Using Your Phone in the Fight Against Phishing
- Unify Mobile and Desktop Management From the Cloud
- How Google Securely Enables Modern End-User Computing

Managing the identity of your customers and partners

- Scaling with Google’s New Identity Platform

Watching these sessions will give you a great foundation in Google’s approach to IAM and how you can incorporate it into your environment. Then stay tuned for more enhancements to our identity and access management services in the coming months.
Source: Google Cloud Platform

Last month today: April on GCP

April brought lots of product announcements along with learning opportunities. Not surprisingly, our top stories from the month were all from the big event. Read on to catch up!

Next ‘19 at a glance

Whether you attended Next ‘19 or not, you can catch up on everything that happened with our list of all 122 announcements from the show. Read about the news in compute and infrastructure, as well as a ton of launches tied to identity and security on GCP. There are also new features to explore in data analytics and AI/ML, details on running Windows workloads on GCP, and improvements in productivity and collaboration with G Suite. Finally, scroll down the list to learn how customers are using GCP. Our blog now even has its own dedicated Next section where you can find all the posts from the event.

The future of the cloud is open

At Next ‘19, we introduced Anthos, our hybrid cloud platform that lets you write once and run anywhere. It’s designed so you can write an app and run it, without modifying the code, across platforms: GCP, Google Kubernetes Engine (GKE), GKE On-Prem, and, soon, third-party clouds. Anthos is completely software-based, using open APIs so users can easily build and manage hybrid clouds. In addition, Anthos Migrate (available in beta) can auto-migrate VMs into GKE containers.

Also at Next, we announced new partnerships with seven open source-centric database and analytics providers. This means that you can use their managed services through GCP, with the added benefits of unified management, billing, and support. A ton of new applications being developed today run on these partners’ open-source database systems, ranging from general-purpose databases to specialized ones for time-series, graph, and search use cases.

No servers, no problem

Cloud Run, our new serverless compute offering, also entered beta last month. Cloud Run lets you run stateless HTTP-driven containers while handling all infrastructure management behind the scenes, including server provisioning, configuration, scaling, and management. With Cloud Run, you can scale your containers up or down quickly (even to zero), giving you the flexibility of containers and the velocity of serverless. Cloud Run is based on Knative, an open-source API and runtime environment.

Extending the tools developers use

As cloud evolves, so does application development. Cloud Code is a set of plug-ins for IntelliJ and Visual Studio (VS) Code that bring automation and assistance to every phase of the software development lifecycle, using the tools developers already use. Integrated development environments (IDEs) can automate a lot of a developer’s work, but cloud development can be challenging for them. Cloud Code uses command-line container tools such as Skaffold, Jib, and kubectl under the hood, so you see continuous feedback as you build your project in a Kubernetes environment.

That’s a wrap for April. We’ll see you next month.
Source: Google Cloud Platform

Principles and best practices for data governance in the cloud

Today’s businesses both generate and consume data at unprecedented rates. The diversity of data types and sources means that organizations have to grapple with data access, security, governance, and, let’s not forget, regulatory compliance. These concerns give some customers pause when they consider moving their sensitive data to the cloud. That’s why we published a white paper that outlines best practices and guidelines to help organizations establish data governance in a cloud-first world. The white paper intentionally takes a platform-agnostic approach that you can use when building out your governance capabilities.

Data governance encompasses the ways that people, processes, and technology can work together to enable auditable compliance with defined and agreed-upon policies. Ultimately, organizations want their data to work for them, and governance is an essential part of making data work for your business.

Every enterprise should think about the entire data governance lifecycle, including data intake and ingestion, cataloging, persistence, retention, storage management, sharing, archiving, backup, recovery, disposition, and removal and deletion. Many organizations find these requirements overwhelming, so the white paper outlines best practices and guidelines for governance in the cloud, including:

- Data discovery and assessment, so that you know what data assets you have
- Profiling and classifying sensitive data, to understand which governance policies and procedures apply to your data (see the sketch at the end of this post)
- Maintaining a data catalog that contains structural metadata, data object metadata, and an assessment of levels of sensitivity in relation to your company’s governance directives
- Documenting data quality expectations, techniques, and tools that support the data validation and monitoring process
- Defining identities, groups, and roles, and assigning access rights to establish a level of managed access
- Performing regular audits of the effectiveness of controls in order to quickly mitigate threats and evaluate overall security health
- Instituting additional methods of data protection to ensure that exposed data cannot be read, including encryption at rest, encryption in transit, data masking, and permanent deletion

Using these best practices, enterprises can create an effective data governance strategy and operating model, gaining a path to establish control and maintain visibility into their data assets. Organizations will likely reap immense benefits as they promote a data-driven culture, including improved decision making, better risk management, and regulatory compliance.

You can read or download the full white paper here, or you can find more information about how we secure and govern your data on Google Cloud Platform here.
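To make the classification bullet above concrete: on Google Cloud, for example, the Cloud Data Loss Prevention API can scan content for sensitive elements. Here's a minimal Python sketch, assuming a hypothetical project ID and sample text (the white paper itself stays platform-agnostic; this is just one possible way to implement that step):

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()

# Inspect a snippet of text for a couple of common sensitive info types.
response = client.inspect_content(
    request={
        "parent": "projects/my-governance-project",  # hypothetical project ID
        "inspect_config": {
            "info_types": [
                {"name": "EMAIL_ADDRESS"},
                {"name": "CREDIT_CARD_NUMBER"},
            ],
            "include_quote": True,
        },
        "item": {"value": "Contact jane@example.com, card 4111-1111-1111-1111"},
    }
)
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood, finding.quote)
```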
Source: Google Cloud Platform

From the data warehouse: Urs Hölzle explains how data analytics and ML can transform your business

As businesses collect and analyze more and more data with every passing year, traditional infrastructure is challenged: it’s not just that there is more data; it’s coming from more sources, with different contexts and uses than the enterprise has seen in the past. Not only that, internal and external customers expect results at a faster pace, challenging both the tools and practices of traditional infrastructure.

The solution is to do well what technology has always aimed to do: automate the rote stuff, so you can get to higher-value work faster. There are a number of ways to do this, but increasingly the most valuable is to use artificial intelligence, in particular machine learning, either overtly or in the form of labor-saving tools and services that rely on ML.

Today, we’ll talk with one of Google’s early distinguished engineers, Urs Hölzle, who now plans, designs, and supports the infrastructure behind the growing user base for a number of Google products, as well as the infrastructure that serves all of our Google Cloud customers. Urs has played an essential role at Google from nearly the beginning, leading the development of the computing and data infrastructure that first revolutionized Internet search, and eventually became a platform for maps, mobility, cloud computing, and artificial intelligence engines—systems that predict deadly illnesses and prevent Google’s own data centers from overheating.

Urs and I recently sat down to talk about how machine learning simplifies problem-solving for businesses.

Note: This interview has been abridged and edited.

Quentin: Urs, as you’ve expanded infrastructure and capacity to process information at a higher velocity, process data from multiple angles, and think of data as a much more dynamic asset, how do today’s larger quantities of data change the way people work?

Urs: The ecosystem really changed a lot, because previously, you had to do a lot of planning: you had to carefully pick which insight you wanted to go after. Now, a data analyst with a simple SQL query can at least prototype this insight at their own pace—maybe in half a day or a week. And they don’t need a software team, they don’t need an analyst, and it’s not actually a software development project anymore, and that means that the number of questions you can answer from your data just explodes.

Quentin: So you can have far more projects, you can think in novel ways, you can test at a deeper level.

Urs: Often, you’re going after the right thing, but your initial understanding is actually incorrect. As you go through it iteratively, your understanding of the problem improves. At that point, you’re asking better questions than you asked on day one. And if you can do that every day, and ask a better question every day, then just in a matter of two weeks, you might actually fundamentally change how you think about a particular customer segment—because you have a much deeper understanding of how it behaves.

Quentin: One could see AI and machine learning as a kind of natural outgrowth of cloud computing, right? Because it’s a fundamentally better way to sort through the data, find patterns, and test things?

Urs: Yes, and in fact we’re starting to see [the worlds of machine learning and cloud infrastructure] merge. Traditionally when you had data, then you wrote the data processing, or maybe you had queries; that was the first step: “I’m just trying to find a data point again.” That was databases.
Then came analytics: “Let me actually analyze the data, compute statistics on it.” But it was still relatively manual. Now, ML gives you a more powerful way to look at the data, one that also does well with unstructured data like images, sound, or other data types where traditional analytics just doesn’t work at all.

[Modern data analytics tools] really make sense of, and make use of, the data you already have. So on BigQuery today, our data warehouse, you actually have [built-in] ML functionality in your data analytics warehouse. It’s a very natural way to say, “Gee, I have this data here; can I actually make a prediction function for things where I don’t have the data?” And the answer is that yes you can, and it’s actually very easy. You can do it in a SQL statement that is roughly 10 lines long, so you don’t even need to understand how machine learning works.

Quentin: What are some of the most interesting ML problems that customers are bringing to you these days?

Urs: I think the biggest problems that companies have are in two main areas.

First, they believe that ML is the biggest opportunity, but they need to be able to translate that into actual outcomes. So it’s essential that we offer tools in our stack that make it much easier for you to use ML without being an expert. BigQuery can actually do predictions with ML, without you needing to know too much about the underlying techniques. For example, AutoML, our ML [training tools]: you can take your set of images in which you want to recognize objects, and we can automatically construct a machine learning system that recognizes them with very high accuracy. Only a year ago, you needed an expert to do that.

The second problem is really how to deal with the transition to the cloud. Every large user is going to run in a hybrid configuration for a while. Now you have two environments, and they have different rules, so you need to have two different teams and train them differently in order to figure out how these things work together.

Quentin: Doesn’t putting out a cloud management tool like Kubernetes help with coordination?

Urs: Yes, absolutely. That is one of the hardest problems, and our answer to it is Kubernetes and Google Kubernetes Engine (GKE). Now you can use Kubernetes to manage your workloads both on premises and in the cloud—with not just the same code but, of equal importance, the same configuration.

Integrated machine learning is core to Google’s products, helping businesses turn data into insights and make smarter decisions. Learn more about BigQuery or read about our broader suite of data analytics solutions. If you already use BigQuery and you’re interested in generating ML-based insights, you can read about BigQuery ML.
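To give a concrete sense of the roughly-ten-line SQL statement Urs mentions, here's a minimal BigQuery ML sketch, submitted through the Python client. The dataset, table, and column names are hypothetical stand-ins, not anything from the interview:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a simple regression model directly in the warehouse...
client.query("""
    CREATE OR REPLACE MODEL `mydataset.demand_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['units_sold']) AS
    SELECT product_category, unit_price, promo_flag, units_sold
    FROM `mydataset.sales_history`
""").result()

# ...then ask it for predictions where the label is unknown.
rows = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `mydataset.demand_model`,
                    (SELECT product_category, unit_price, promo_flag
                     FROM `mydataset.upcoming_catalog`))
""").result()
for row in rows:
    print(row.predicted_units_sold)
```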
Source: Google Cloud Platform

Making API development faster with new Apigee Extensions

As API programs gain traction, we know many companies want to empower developers to quickly build and deliver their API products. To aid them in this effort, we recently announced the availability of new capabilities in Apigee, the enterprise API management platform of Google Cloud Platform (GCP), to help enterprise IT teams speed up their API development. With faster API development within GCP, you can innovate faster and create connected customer experiences, while increasing developer productivity. You can also speed the time to market for API products and ensure security and scalability.

With this launch, our new Apigee Extensions let developer teams building APIs access several GCP services: Google Cloud Functions, Cloud Authentication, Cloud Data Loss Prevention (templates support), Cloud Machine Learning Engine, and BigQuery, all from within Apigee.

When you’re building APIs, you often need to connect to various cloud services. Until now, connecting to those services securely required using a combination of ServiceCallout and other out-of-the-box Apigee policies to deal with credentials, manage tokens, and access the required cloud services. This process is error-prone and has to be repeated for every single API proxy built within Apigee. By removing the repetitive and redundant work required to configure and apply policies to API proxies, Apigee Extensions simplify the process of securely accessing cloud services. An API developer can pick from the policy list and use the necessary services through a first-class policy interface, as shown here:

Once configured, policies for cloud services can be reused across all API proxies. This launch also adds support for Salesforce to the growing list of third-party services supported by Apigee Extensions. The Apigee Salesforce extension lets API developers easily interact with data in their company’s Salesforce instance by reducing the complexity of accessing the Salesforce REST API.

How customers are using Apigee Extensions to build APIs

Since the announcement of Apigee Extensions last year, we’ve heard from many of you who want to help API developers be more productive. A great success story is Global Payments Inc., which builds solutions to help businesses offer a customer-friendly payment experience. Previously, accessing a cloud service during API development was a tedious and laborious task. Gopika Patel, vice president of enterprise integrations and architecture at Global Payments, experienced this firsthand when her team was implementing logging policies for the company’s APIs.

Before Gopika’s team adopted Apigee Extensions, the typical process for implementing API logging policies required creating a service account; generating and downloading the keys; creating KVMs in their environment; assigning the project_id, log_id, jwt_issuers, and private_key to Apigee context variables; using those variables to generate the token; caching the token; composing the log message; and connecting to the service to post the composed log message, asynchronously.
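For a sense of that boilerplate, here's roughly what the manual flow looks like when scripted outside Apigee: load a downloaded service account key, mint an OAuth token, and post a log entry to the Stackdriver (Cloud Logging) API. This is a hedged Python sketch with a hypothetical key file, project, and log name; it is not the Apigee policy implementation itself:

```python
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

# Load the downloaded service account key and scope it for logging.
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/logging.write"],
)

# AuthorizedSession handles minting, attaching, and refreshing the token.
session = AuthorizedSession(creds)
resp = session.post(
    "https://logging.googleapis.com/v2/entries:write",
    json={
        "entries": [{
            "logName": "projects/my-project/logs/api-proxy",  # hypothetical
            "resource": {"type": "global"},
            "textPayload": "proxy request handled",
        }]
    },
)
resp.raise_for_status()
```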
Now, using Apigee’s Stackdriver extension, Gopika and her team have considerably boosted productivity and accelerated API development by simplifying log policy enforcement.

“Previously, our developers had to perform repetitive and time-consuming work in order to ensure that the Global Payments APIs are compliant with our logging policies,” Patel said. “Apigee has simplified this process with contextual access to cloud services through Apigee Extensions. Using Apigee and Apigee Extensions, we have been able to speed up API development while complying with strict security and compliance requirements.”

Another Apigee customer, Designer Brands (formerly DSW, Inc.), one of North America’s largest designers, producers, and retailers of footwear and accessories and parent company of DSW Designer Shoe Warehouse, has also had great success using Apigee to cut down on development overhead and accelerate speed to market. Using Apigee has been a key part of developing DSW’s VIP loyalty program and customer-facing applications. Jon Herbst, director of data integration at Designer Brands, has been a key supporter of the company’s digital transformation into an API-first architecture, which helps the company adapt to changing consumer behavior and a rapidly evolving technology landscape. Jon and the rest of the DSW IT team use Google Cloud services, including Apigee, for managing all interactions with the company’s APIs.

Since Designer Brands implemented Apigee and Apigee Extensions and incorporated Google Cloud services such as Cloud Pub/Sub, Cloud Dataflow, and Cloud SQL, among others, the company has experienced a steady increase in digital customer engagement. Cyber Monday 2018 was a big transaction-volume day for DSW. The company was able to scale up its digital operations without any performance issues, so customers could have a great experience buying their favorite shoes and accessories. Ultimately, this led to the company’s best year-over-year comparable sales performance on that day since 2011. DSW is also using Apigee to give sales associates tools that improve customer experience by quickly verifying the customer’s information and checking their order status through automated functions.

“Google has been at the heart of DSW’s API-first strategy as true partners who’ve enabled us to scale our API development efforts,” Herbst said. “Since we implemented Apigee, we’ve been able to innovate and get to market faster, enabling our customers to engage with us on the platform of their choice. With Apigee Extensions, our team has been able to access Google Cloud services when developing APIs from within the Apigee interface, effectively boosting their productivity. This new approach has yielded higher customer engagement, improved customer satisfaction, and huge leaps in transaction volume. Our developers are happy, our customers are happy, and our internal stakeholders are happy. This is a win, win, win!”

To learn more about how to accelerate API development with Apigee Extensions and Google Cloud services, join our upcoming webcast with Apigee Extensions Product Manager Prithpal Bhogil.
Source: Google Cloud Platform

Announcing the winners of the Confidential Computing Challenge

Confidential computing aims to protect the integrity and confidentiality of applications and data being processed in the public cloud. At Google, one approach to confidential computing is Asylo, an open-source framework that we released for creating enclaves (sometimes referred to as trusted execution environments, or TEEs) to help protect sensitive data and code with hardware-backed protections. This emerging technology is promising and sought after by customers who want to preserve the security and privacy of critical code and sensitive user data.

That’s what inspired us to collaborate with Intel on the Confidential Computing Challenge (C3), an online, global competition to accelerate the field of confidential computing. In February, we invited participants to explore the advantages confidential computing can bring, and they did not disappoint!

“As an industry, we’ve made a lot of progress towards our common goals of protection for data-in-use, and we’re only just getting started in terms of understanding the potential applications of trusted execution environments,” explained Simon Johnson, Sr. Principal Engineer & Intel® Software Guard Extensions (Intel® SGX) Architect, who was also one of the C3 judges. “This is one of the primary reasons we decided to co-sponsor the Confidential Computing Challenge along with Google Cloud—to invite the world’s most brilliant minds to collaborate with us and share their ideas so we can collectively grow this nascent space.”

We received entries from around the world that covered practical and creative use cases for confidential computing, including machine learning, data analytics, multi-party computation, and hardening existing security features like Transport Layer Security (TLS). It was so inspiring and energizing to see the effort participants put into developing their C3 ideas that we decided to expand our original plan and award not just a first-place prize, but also a runner-up and two honorable mentions.

With that, please join us in congratulating the winners of C3!

First place: TF Trusted – Confidential Machine Learning with TensorFlow and Asylo

TF Trusted is an open-source framework built on top of Asylo and TensorFlow Lite to compute a prediction without revealing the model or input vector to the host computer. This is achieved by performing computations inside of an Intel SGX device; the user can then perform private computation inside the enclave with any collection of operations supported by TensorFlow. This private computation can be performed in whole, as a TensorFlow Lite model, and the enclave’s computation can be extended as a custom TensorFlow operation for use in broader TensorFlow computation graphs and libraries like TF Encrypted.

“We believe that TF Trusted is an important step towards empowering enterprises, data scientists, and machine learning engineers to leverage confidential machine intelligence to realize the true potential of artificial intelligence,” said Gavin Uhma, CEO and co-founder of Dropout Labs, a distributed startup from France, Canada, and the USA focused on secure, privacy-preserving machine learning. “Solutions like this are especially applicable to industries such as finance, healthcare, and transportation, which are interested in moving to the public cloud but have concerns around data confidentiality. It is great that the Confidential Computing Challenge provided us with a platform with which to share these ideas more broadly.”

Runner-up: PrivateLearn

Recommendation systems typically learn their models from user data.
PrivateLearn provides a potential solution to ensure that the learning process preserves the privacy of such sensitive data, backed by a strong security guarantee.

“There are two phases where leakage may happen on the server side — one is data leakage during the training phase, and the other is data leakage from the learned model,” said Ruide Zhang, PhD candidate at Virginia Tech. “To encourage adoption of new IoT and AI applications, machine learning frameworks need to guarantee user privacy. PrivateLearn recognizes this need and aims to address it. PrivateLearn also shows that porting an existing application into the Asylo framework is practical.”

For more information, head on over to the PrivateLearn GitHub here.

Honorable mention: GeneCrypt – putting users in control of their genetic data

GeneCrypt helps protect genomic data while also allowing it to be used for the benefit of the individual. “Unlike many other contexts, in this use case you have a massive amount of sensitive data, but you don’t need all the raw data for practical purposes—just a computationally derived value,” explained Martin Thiim, a software and security engineer based in Denmark. “This could, for instance, be a boolean value indicating the presence or absence of some genetic variant. Enclaves lend themselves well to being the filters that extract just the relevant information.”

This novel idea utilizes confidential computing principles, and particularly Asylo/Intel SGX enclaves, to realize its goals. You can read more about and try out GeneCrypt here.

Honorable mention: Intel SGX-based Certificate Transparency

This idea proposes to harden the security of a Certificate Transparency (CT) scheme using Intel SGX by making query authentication much more lightweight, paving the way for an efficient, secure, and practical CT scheme.

“Our proposal aims at hardening the security and building trustworthy systems of CT log servers and monitors,” said Dr. Yuzhe Tang, assistant professor in the department of Electrical Engineering & Computer Science at Syracuse University. “Intel SGX-based CT systems will help significantly reduce operational costs for both domain owners and organizations, without sacrificing security. This will eventually increase the adoption rate of CT among organizations and individual users in mainstream and mobile environments.”

Intel SGX-based CT is being built on top of enclave Log-Structured Merge-tree (eLSM), a high-performance key-value store that leverages Intel SGX enclaves, developed earlier by Dr. Tang’s team. You can find the source code for eLSM here and the corresponding technical paper here. For more information, you can also check out the project website.

Stay in touch

Congratulations to the winners, and a huge thank you to all our C3 participants! Thank you also to our judges for the time and energy they spent reviewing and providing feedback on the awesome C3 entries. If reading this has inspired you to develop your own confidential computing idea, you can start by learning more about Asylo here and Intel SGX here. We can’t wait to hear from you and see what you build next!
Source: Google Cloud Platform

Cloud rendering platform Zync Render gets a major update

Zync Render, part of Google Cloud Platform (GCP), is our cloud-hosted rendering platform that helps visual effects and animation studios realize their creative vision. Zync Render has helped render everything from major Hollywood feature films and TV advertising to brand design.

As it’s our mission to continue to enable the users behind these projects to create visually stunning content, we’ve spent the last several months optimizing our core engineering infrastructure, and we’re excited to launch Zync version 2.0 on GCP. This release consists of a complete, Google-native rewrite of the application, providing benefits such as faster job start-up times, increased compute scalability, and several other new features that users have identified as critical to their workflows.

Zync now offers up to 48,000 CPU rendering cores, allowing even the largest jobs to compute quickly and efficiently. Additionally, we’ve implemented the ability to set usage quotas on a per-site, per-project, and per-user basis, giving more control to larger organizations with multiple locations and artists. Here’s a look at Zync user quotas:

Zync has also taken advantage of the multitude of GPU offerings available on GCP. These offerings work with some of the leading vendors of GPU rendering software, so users can render on high-performing cloud resources that deliver better performance than they could typically achieve on-premises.

Additional Zync updates include support for Chaos Group’s V-Ray for Maxon Cinema 4D, one of the most popular renderers on the Maxon platform, and a price reduction of up to 37 percent for all our V-Ray supported offerings, for more cost-effective project rendering on GCP.

Learn more about Zync Render here. To try cloud rendering on GCP, sign up for a free trial.
Source: Google Cloud Platform