The time for digital excellence is here—Introducing Apigee X

Digital transformation has been a top enterprise priority for years, and in the wake of the global pandemic, that urgency has only increased. Many industries have had to manage in weeks or months what previously would have taken years. According to surveys conducted for our “State of the API Economy 2021” report, three-quarters of enterprises remained focused on digital transformation in 2020, and two-thirds of those companies actually increased their investments.

APIs are the backbone of digital transformation, and to help organizations navigate today’s challenging landscape, we’re announcing Apigee X. A major release of our API management platform, Apigee X seamlessly weaves together Google Cloud’s expertise in AI, security and networking to help enterprises efficiently manage the assets on which digital transformation initiatives are built.

“APIs have become one of the most crucial steps for enterprises to achieve digitalization. APIs are key to adopting modern architecture patterns such as microservices, EDA, serverless or hybrid/multicloud,” wrote research and advisory firm Gartner in its July 2020 report “Gartner Market Share Analysis: Full Life Cycle API Management, Worldwide, 2019.” “As enterprises reopen post-COVID-19, they will have to find their own path to the new normal. The most successful will have started rescaling and reinventing themselves during the crisis, but the bulk of them will start at reopening. Rescaling and reinventing goes through a decomposition and a recomposition of their operating practices, and the role of an API platform in those activities is paramount. The more effective and extensive the API platform is, the quicker and easier rescaling and reinventing will be.”

Because APIs are how software talks to software and how developers leverage data and functionality at scale, APIs are not just a component in the software stack, but rather products that developers use to execute business strategies and achieve innovation at scale.
Like all products, APIs need to be managed, and as Apigee turns 10 this month, we bring a decade of deep expertise and experience from working with over a thousand customers globally.

“Apigee provided guidance on how we should roll out our API strategy and how we can think strategically about digital transformation using APIs,” said Rick Schnierer, Vice President, Annuity Technology, at Nationwide Insurance. “What used to take us two to three months to develop as a monolithic service now takes days as a microservice. Apigee has also allowed us to federate development, meaning our developers are empowered to create and share APIs on their own rather than going through a centralized model. We have business connections coming through the Apigee API management platform that we wouldn’t have even thought to initiate on our own.”

“At Deutsche Bank we are looking forward to using Apigee X as we design and implement API solutions integrated into our ecosystem,” said Shaun Cotter, Managing Director, Corporate Bank Technology at Deutsche Bank. “The effective and secure use of API-led integration is a key component of our Google Cloud partnership, and will enable the bank to better connect services internally, innovate with third parties and offer our products to a broader client base.”

Achieving digital excellence with Apigee X

As increased digital transformation investments may suggest, competitiveness is increasingly less about transformation ambitions and more about actual transformation. It’s not enough to simply use the cloud, have APIs, or even adopt API management. Rather, the requirement is digital excellence: the ability to rapidly and repeatedly deploy and scale, and to consistently deliver on digital programs. It involves adopting digital as a core enterprise strategy for building profitable API-based platforms and delivering measurable business outcomes.
Helping customers make this leap, from gradual transformation and API-based programs to digital excellence and API-based platforms, has been our core goal for Apigee X.

“At Pitney Bowes we are always looking for ways to provide the best experience for our clients and Apigee’s technology helps us make this possible. We are very excited about the launch of Apigee X, as it can help businesses elevate API-led programs, and accelerate digital transformation even more,” said James Fairweather, Chief Innovation Officer at Pitney Bowes. “During these uncertain times, organizations worldwide are doubling-down on their API strategies to operate anywhere, automate processes, and deliver new digital experiences quickly and securely. By powering APIs with new capabilities like reCAPTCHA Enterprise, Cloud Armor (WAF), and Cloud CDN, Apigee X makes it easy for enterprises like us to scale digital initiatives, and deliver innovative experiences to our customers, employees and partners.”

What Differentiates Apigee X

Let’s take a closer look at Apigee X.

Global reach, high performance & reliability

With shifting market conditions and dynamic work environments, organizations are scaling API programs for global expansion and supporting distributed workforces. Apigee X makes it easy for customers to harness the power of Cloud CDN to maximize the availability and performance of APIs globally. Customers can now deploy their APIs across 24 Google Cloud regions and enhance caching at more than 100 locations.

Multi-layer security & privacy

Scaling API programs also opens up more doors for fraudulent activity, both inside and outside of organizational boundaries. As our “State of the API Economy 2021” report elaborates, in the past year Apigee saw an increase in abusive API traffic of over 170%.
Apigee X offers an integrated approach for applying capabilities like the Cloud Armor web application firewall for enhanced API security and Cloud Identity and Access Management (IAM) for authenticating and authorizing access to the Apigee platform. It gives businesses more control over encrypted data with customer-managed encryption keys (CMEK), while allowing them to store data in the region of their choice and to control the network locations from which users can access data by using VPC Service Controls.

AI-powered automation

With the increasing adoption of APIs for powering business-critical enterprise applications, there’s growing pressure on operations and security teams to ensure those APIs are always available, secure and performing as expected. Apigee X applies Google’s industry-leading AI and machine learning capabilities to historical API metadata to autonomously identify anomalies, predict traffic for peak seasons, and ensure APIs adhere to compliance requirements. This helps API operators and security admins focus on the programs that really matter to their business, rather than spending time on trivial tasks.

As an industry leader in API management, having worked with customers for a decade, we’ve seen how enterprises can truly transform their businesses by leveraging APIs to build new digital experiences, more powerful and intelligent automations, and more impactful data-driven applications. Today’s launch continues to expand what API management can do, and it offers businesses an onramp to achieving digital excellence over the next decade. We can’t wait to see what you’ll do next with us. Click here to try the new release of Apigee for free.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact.
Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Google Cloud Platform

Google Cloud AI leaders share tips for getting started with AI

Machine learning (ML) can help you solve hard business problems in new ways, but getting started can feel overwhelming. We are fortunate to have some great leaders in Google Cloud AI who have decades of experience in artificial intelligence (AI) and who have generously agreed to share a few words of advice from their learnings. In the following videos, they share tips for businesses and organizations getting started in AI, as well as what’s top of mind for them in Cloud AI this year.

Why does this field of artificial intelligence have the business world so enthralled? According to a recent McKinsey & Company study, AI is expected to increase economic output by $13 trillion in the next decade. The firm states that businesses that fully absorb this technology could double their cash flow in that time, while companies that don’t could see a 20% decline.

So how do you enjoy these revenue and efficiency gains? Businesses in every sector and across the globe are seeing this opportunity and choosing Google Cloud AI to solve some of their toughest challenges. From Etsy, which exemplifies the new era of scaling a business, to deluged government agencies like the Illinois Department of Employment Security, organizations in every industry are using our Cloud AI services to solve problems and innovate.

There are many ways to get started with Google Cloud AI: prepackaged solutions that integrate with your existing systems and workflows; our managed AI Platform for building and managing the entire ML model development lifecycle; and pretrained models, accessible via APIs, that let you easily add sight, language, conversation and data capabilities to your apps.

If you’d like to take our AI Platform for a spin, you can explore labs on Qwiklabs and other course offerings in our ML learning path to gain more ML experience on Google Cloud.
And there’s a $300 credit and a free tier to start experimenting today.

Give app teams autonomy over their DNS records with Cloud DNS peering

In large Google Cloud environments, Shared VPC is a very scalable network design that lets an organization connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs. The Shared VPC is typically shared by many application teams: a central team (or platform team) manages its networking configuration, while application teams use the network resources to create applications in their own service projects.

In some cases, application teams want to manage their own DNS records (e.g., to create new DNS records to expose services, update existing records…). There’s a solution that supports fine-grained IAM policies using Cloud DNS peering. In this article, we explore how to use it to give your application teams autonomy over their DNS records, while ensuring that the central networking team maintains fine-grained control over the entire environment.

Understanding the Cloud DNS peering solution

Imagine that you, as an application team (service project) owner, want to be able to manage your own application’s DNS records without impacting other teams or applications. DNS peering, a type of zone in Cloud DNS that lets you send DNS requests for a specific subdomain to another Cloud DNS zone configured in another VPC, lets you do just that!

DNS peering in action

Cloud DNS peering is not to be confused with VPC peering, and it doesn’t require you to configure any communication between the source and destination VPCs. All the DNS flows are managed directly in the Cloud DNS backend: each VPC talks to Cloud DNS, and Cloud DNS can redirect the queries from one VPC to the other.

So, how does DNS peering allow application teams to manage their own DNS records?
By using DNS peering between a Shared VPC and other Cloud DNS zones that are managed by the application teams. For each application team that needs to manage its own DNS records, you provide them with:

- Their own private DNS subdomain (for example, <applicationteam>.<env>.<customer>.gcp.com)
- Their own Cloud DNS zone(s) in a dedicated project, plus a standalone VPC with full IAM permissions

You can then configure DNS peering for the specific DNS subdomain to their dedicated Cloud DNS zone. In this VPC, application teams have Cloud DNS IAM permissions only on their own Cloud DNS instance and can manage only their own DNS records.

The central team, meanwhile, manages the DNS peering and decides which Cloud DNS instance is authoritative for which subdomain, thus allowing each application team to manage only its own subdomain. By default, all VMs in the Shared VPC use Cloud DNS in the Shared VPC as their local resolver. This Cloud DNS instance answers for all DNS records in the Shared VPC, uses DNS peering to reach the application teams’ Cloud DNS instances, and uses VPC peering or forwarding to on-prem for on-prem records.

High-level design of the Cloud DNS peering solution

As detailed above, the flow is the following:

1. A VM in any project of the Shared VPC uses Cloud DNS as its local DNS resolver.
2. This VM tries to resolve app1.team-b.gcp.com, which is a DNS record owned by team B that exposes a local application (a Compute Engine instance or a Cloud Load Balancer).
3. The VM sends the DNS request to the Shared VPC’s Cloud DNS. This Cloud DNS is configured with DNS peering that sends everything under the “team-b.gcp.com” subdomain to the Cloud DNS instance in team B’s DNS project.
4. Team B is able to manage its own DNS records, but only in its dedicated DNS project. It has a private zone there for “*.team-b.gcp.com” and an A record for “app1.team-b.gcp.com” that resolves to “10.128.0.10”.
5. When the VM receives its DNS answer, it tries to reach 10.128.0.10 using the VPC routing table.
If the corresponding firewall rules are open, the request is successful!

Terraform code

Are you interested in trying out this solution for yourself? You can find an end-to-end example in Terraform, which provisions the architecture described above. This Terraform code should allow you to get started quickly and can be reused to integrate this design into your Infrastructure as Code deployment.

Additional considerations

In the above example, we used a standalone project dedicated to DNS per application team. You could also use the application team’s service project itself, by creating a local VPC in the application project and configuring DNS peering to this local project’s DNS zone. The tradeoffs are the following:

Security and autonomy

Many organizations need the security and centralized control that Shared VPC provides. But with this architecture based on Cloud DNS peering, you can also grant application teams the autonomy they need to maintain their own DNS records, freeing the central networking team from that burden! For more on managing complex networking environments, check out this document on DNS best practices.
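The delegation at the heart of this flow can be sketched as a toy resolver in plain Python. This is purely illustrative (it is not the Cloud DNS API): a "peering" here is just a mapping from a subdomain suffix to the zone a team manages, which mirrors what a Cloud DNS peering zone does for the Shared VPC. The names and addresses come from the example above.

```python
class Zone:
    """A private zone holding A records, fully managed by one team."""

    def __init__(self, records):
        self.records = dict(records)  # name -> IPv4 address

    def resolve(self, name):
        return self.records.get(name)


class SharedVpcResolver:
    """Stands in for the Shared VPC's Cloud DNS instance: it answers from
    its own records, except for subdomains that are peered to a team-owned
    zone, in which case it forwards the query there."""

    def __init__(self, local_zone):
        self.local_zone = local_zone
        self.peerings = {}  # subdomain suffix -> team-owned Zone

    def add_peering(self, suffix, zone):
        self.peerings[suffix] = zone

    def resolve(self, name):
        # Longest matching suffix wins, mirroring DNS's most-specific rule.
        for suffix in sorted(self.peerings, key=len, reverse=True):
            if name == suffix or name.endswith("." + suffix):
                return self.peerings[suffix].resolve(name)
        return self.local_zone.resolve(name)


# Team B manages its records only in its own zone.
team_b_zone = Zone({"app1.team-b.gcp.com": "10.128.0.10"})

resolver = SharedVpcResolver(local_zone=Zone({"shared-svc.gcp.com": "10.0.0.5"}))
resolver.add_peering("team-b.gcp.com", team_b_zone)

print(resolver.resolve("app1.team-b.gcp.com"))  # answered by team B's zone
print(resolver.resolve("shared-svc.gcp.com"))   # answered locally
```

Team B can add or change records in team_b_zone without touching the central resolver, which is exactly the autonomy the peering design provides.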

Customers who make data sing and analytics product news to cure your data FOMO

In December, we predicted that a “revolution was coming for data and the cloud in 2021.” Well, January has come and gone, and our team has been busy delivering new capabilities, content and best practices to help kick your year into high gear. Our work is guided by our customers; we’re always listening to your needs and working to build innovative solutions that will help you succeed. Here is a quick digest of what’s happening in data analytics at Google this month.

The Data Democracy Trilogy

This past week we released the third and final installment of our “data democratization trilogy,” a series of blogs aimed at helping our community deliver on their mission to become more data-driven. Our blogs include best practices from incredible organizations like AB Tasty, Sunrun, Veolia, Geotab and AES Digital Hub, which have empowered business users, expanded the use of machine learning and made real-time analytics ubiquitous. The democratization of insights has been a key theme for our customers and a personal passion of mine, and it will be front and center in our plans for 2021. If you want to find out how Dataflow, together with Pub/Sub, can help address the challenges posed by traditional streaming systems, or how the combination of BigQuery, Connected Sheets, Looker and Data QnA can provide faster answers to your employees, be sure to bookmark these blogs and share them with your teams and colleagues.

And, if you’re ready for more, check out our design pattern catalog. This past week, we released a set of resources to help you perform demand forecasting at scale using BigQuery ML (BQML) and Data Studio. The best way to understand this pattern is to watch the video below and to register for our webinar next week: How to do demand forecasting with BigQuery ML. As you navigate through the catalog, you’ll find everything you need, from predicting customer lifetime value to building propensity-to-purchase models to architecting product recommendation and anomaly detection systems.
You’ll probably wonder how we came up with such an impactful list of best practices. The answer is simple: our customers! Our customers guide everything we do, and we pride ourselves on building the solutions you need across any and all industries. That’s why, when you navigate through our catalog, you’ll find that these resources are applicable across many industries, from retail and manufacturing to financial services, telecommunications and many more.

From staying up until 3 a.m. to relaxing and eating ice cream

To give you an example of the commitment we make to our customers, I want to point you to an outstanding conversation we posted last week between Chad Jennings, Data Analytics product manager, and two of our greatest customers: The New York Times and Major League Baseball. The video is accompanied by a great blog post authored by The New York Times’ Executive Director for Data Products, Edward Podojil. In the piece, Ed talks about his company’s data architecture evolution and how he went from staying “up until three in the morning one night trying to keep data running for their needs” to “relaxing and eating ice cream,” because he could now “more easily manage his data environment, set and meet higher expectations for data ingestion, analysis and insight.” This is the kind of story that truly warms my heart; I hope you’ll enjoy it too!

Innovators in all industries

Our customers work on some of the most meaningful and interesting issues. We pride ourselves on serving them and paying attention to their progress. Great publications like Diginomica and healthcare business and policy site FierceHealthcare documented the journeys of some of them this month. We hope you’ll find value in how The Home Depot describes its journey and documented how BigQuery allowed it to achieve its “one version of the truth”.
You might have been inspired by Highmark Health’s decision to tackle the data fragmentation experienced in the healthcare industry by partnering with Google Cloud to tap into our AI and analytics technology. Our goal is to enable every industry to accelerate its ability to digitally transform and reimagine its business through data-powered innovation. And we mean every industry. If you’re in the entertainment industry, for instance, you’ll want to read about why BMG selected Google Cloud, BigQuery and Dataproc to tap into relevant data across the music lifecycle with smarter analytics tools.

“We actually migrated all of our data warehouse to BigQuery over the last three years. The upside of that is now we have a lot more of this data together. There’s only one place of truth, so there’s never an argument in our organization about whether your copy of the data is the real truth or my copy of the data is the real truth.” – The Home Depot

“The Living Health model takes the information and preferences that a person provides us, applies the analytics developed with Google Cloud, and creates a proactive, dynamic, and readily accessible health plan and support team that fits an individual’s unique needs.” – Highmark Health

Product capabilities you’re not going to want to miss

Our customers inspire us to do more every day, and we aim to continuously introduce new functionality that makes your work easier, more robust, and better integrated. In January, we introduced radical usability improvements with our new BigQuery Cloud Console UI: you can now experience new multi-tab navigation, a new resource panel and a new SQL editor. Find out more.

Beyond usability, customers value scale, and we hear that you want our help in making queries and use cases virtually limitless. This is why, this month, we introduced support for the BIGNUMERIC data type. BigQuery already supports a wide range of data types for storing numeric data.
Of these data types, NUMERIC supports the highest degree of precision, with 38 digits of precision and 9 digits of scale. But as large web-scale datasets expand to include time, location or finance-based information requiring an expanded degree of precision, the precision and scale of NUMERIC were no longer sufficient. We introduced BIGNUMERIC, which supports 76 digits of precision and 38 digits of scale, in public preview in all regions. Read more here.

Finally, many of you have reached out to us to ask how you can use BigQuery with open source engines like Apache Spark. Chris Crosbie, product manager on Dataproc, produced an outstanding tutorial video introducing our Spark-BigQuery connector through three common use cases for data engineers and data scientists. Want to take BigQuery for a spin? Get started with the BigQuery sandbox here. While you’re at it, you might want to refer to this January blog post on how to let users upload their complex CSV files into BigQuery using Google Sheets.

More community news!

If you’re subscribed to this blog, you know that our teams are focused on enabling the community and partnering with you to advance the field of data analytics, machine learning and data science. Let us know how we can participate in your success! This past month, I had the opportunity to speak about X-Analytics with Justin Borgman, the CEO of Starburst Data, in preparation for his company’s upcoming event, Datanova. I hope you can make time for it: the two-day virtual conference kicks off on February 9th, and Bill Nye, “the science guy,” is the keynote speaker! Find out more about it here.
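The NUMERIC-to-BIGNUMERIC jump described earlier in this post is easy to see with Python’s standard decimal module. This is an illustrative sketch of fixed-scale storage, not the BigQuery client library: a value with more than 9 fractional digits is rounded at NUMERIC’s scale but survives intact at BIGNUMERIC’s.

```python
from decimal import Decimal, localcontext

NUMERIC_SCALE = 9      # NUMERIC: 38 digits of precision, 9 of scale
BIGNUMERIC_SCALE = 38  # BIGNUMERIC: 76 digits of precision, 38 of scale


def store_at_scale(value, scale):
    """Round a value to a fixed number of fractional digits, the way a
    fixed decimal(precision, scale) column would store it."""
    with localcontext() as ctx:
        ctx.prec = 80  # enough working precision for 76-digit values
        return value.quantize(Decimal(1).scaleb(-scale))


value = Decimal("1.0000000000123456789")  # 19 fractional digits

as_numeric = store_at_scale(value, NUMERIC_SCALE)
as_bignumeric = store_at_scale(value, BIGNUMERIC_SCALE)

print(as_numeric)              # detail beyond 9 fractional digits is rounded away
print(as_bignumeric == value)  # BIGNUMERIC's wider scale preserves the value
```

In BigQuery itself the same effect shows up when casting such a literal to NUMERIC versus BIGNUMERIC; the sketch above only emulates the rounding behavior.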

Donating Docker Distribution to the CNCF

We are happy to announce that Docker has contributed Docker Distribution to the Cloud Native Computing Foundation (CNCF). Docker is committed to the open source community and to open standards across many of our projects, and this move will ensure that Docker Distribution has a broad group of maintainers for what is the foundation of many registries.

What is Docker Distribution?

Distribution is the open source code that forms the basis of the container registry that is part of Docker Hub, as well as of many other container registries. It is the reference implementation of a container registry and is very widely used, making it a foundational part of the container ecosystem. This makes its new home in the CNCF highly appropriate.

Docker Distribution was a major rewrite of the original Registry code, which was written in Python and followed a much earlier design that did not use content-addressed storage. The new version, written in Go, was designed to be an extensible library, so that different backends and subsystems could be built on it. Docker formed the Open Container Initiative (OCI) in 2015, under the Linux Foundation, in order to standardize the specifications for the container ecosystem, including the registry and image formats.

Why are we donating Docker Distribution to the CNCF?

There are now many registries, with a number of companies and organizations providing registries internally or as a service. Many of these are based on the code in Docker Distribution, but we found that many people had small forks and changes that they were not contributing back to the upstream version, and the project needed a broader group of maintainers. To make the project clearly an industry-wide collaboration, hosting it in the CNCF was the obvious choice, as the CNCF is the home of many successful collaborative projects, such as Kubernetes and containerd.

We approached the major users of the Docker Distribution code at scale to become maintainers of the project. This includes maintainers from Docker, GitHub, GitLab, DigitalOcean, Mirantis and the Harbor project, which is itself a graduated CNCF project that extends the core registry with other services. In addition, we have invited a maintainer from the OCI, and we are open to more participation in the future. The project is now simply called “Distribution” and can be found at github.com/distribution/distribution.

The Distribution project has been accepted into the CNCF Sandbox, but as it is a mature project, we will be proposing that it move to incubation shortly. We welcome the new maintainers and look forward to the new contributions and the future of the project in the CNCF.
Source: https://blog.docker.com/feed/