Amazon Corretto October 2022 Quarterly Updates

On October 18, 2022, Amazon announced quarterly security and critical updates for the Long-Term Supported (LTS) versions of Amazon Corretto, its distribution of OpenJDK. Corretto 19.0.1, 17.0.5, 11.0.17, and 8u352 are now available for download. Amazon Corretto is a no-cost, multiplatform, production-ready distribution of OpenJDK.
Source: aws.amazon.com

Amazon Interactive Video Service Now Offers Web and Mobile SDKs for IVS Stream Chat

Amazon Interactive Video Service (Amazon IVS) now offers stream chat SDKs with support for web, Android, and iOS. The Amazon IVS stream chat SDKs cover common functions for managing chat room resources, sending and receiving messages, and managing chat room participants. For more information, see the Amazon IVS Chat documentation. Using the Amazon IVS stream chat SDKs incurs no additional cost beyond standard Amazon IVS usage charges.
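The web, Android, and iOS chat SDKs authenticate with chat tokens that a customer backend requests from the Amazon IVS Chat API. Purely as an illustration (not part of the announcement), a Python backend might mint such a token with boto3; the region, room ARN, and user ID below are placeholders.

    # Hypothetical backend sketch: minting an IVS Chat token for a client SDK user.
    # The room ARN, user ID, and region below are placeholders.
    import boto3

    ivschat = boto3.client("ivschat", region_name="us-west-2")

    response = ivschat.create_chat_token(
        roomIdentifier="arn:aws:ivschat:us-west-2:123456789012:room/ExampleRoom",
        userId="user-42",
        capabilities=["SEND_MESSAGE"],       # omit for a read-only token
        sessionDurationInMinutes=60,
    )

    # The web/Android/iOS chat SDKs exchange this token for a chat session.
    print(response["token"], response["sessionExpirationTime"])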
Source: aws.amazon.com

Google Cloud and HashiCorp deliver a more efficient approach for Cloud Support Services

Cloud customers of all sizes need to reduce unplanned downtime, scale, and increase productivity while extracting the most value from their cloud environments. According to a leading cloud analyst, a majority of today’s businesses operate multi-cloud environments, so essential support services must also be prepared to handle complex cloud environments efficiently and meet each organization’s business imperatives for sustaining a competitive advantage. After carefully collecting customer feedback, Google Cloud and HashiCorp have engaged in a joint collaboration to develop a more effective support model for customers who subscribe to both Google Cloud and HashiCorp products. This approach enables Google Cloud Premium Support customers and HashiCorp Terraform customers to benefit from a seamless support process that answers the who, where, when, and how of technical issues and enables a faster route to resolution.

A robust cloud support approach

Improving customer satisfaction remains at the heart of the technical support challenge for both organizations’ customers. This approach was therefore designed to deliver a simplified yet efficient cloud support experience: a seamless, multi-cloud support service in which issues are identified, addressed, and resolved. The model eliminates customer uncertainty; no matter where an issue is submitted, both the Google Cloud and HashiCorp support teams treat it as a priority in their respective support queues and progress the technical case, with visibility for both organizations, until the issue has been resolved.

“Google Cloud is an important partner to HashiCorp, and our enterprise customers use HashiCorp Terraform and Google Cloud to deploy mission critical infrastructure at scale. With 70 million downloads of the Terraform Google Provider this year and growing, we’re excited to collaborate closely with Google Cloud to offer our joint customers a seamless experience which we believe will significantly enhance their experience on Google Cloud.” – Burzin Patel, HashiCorp VP, Global Partner Alliances

Managing cloud investments that span multiple cloud providers and applications can require complex troubleshooting. That’s why Third-Party Technology Support is included as a feature of Premium Support for Google Cloud; it is focused on resolving multi-vendor issues in a seamless manner, along with organization setup, configuration, and troubleshooting. HashiCorp, a Google technology partner, collaborates with Google Cloud on an ongoing basis to drive infrastructure innovation in the cloud. Premium Support for Google Cloud customers receive technical support services that let them focus on their core business, including Technical Account Management (a named Technical Account Manager), the Active Assist Recommendations API (proactive system recommendations), Operational Health Reviews (monthly system improvement reports), and Third-Party Technology Support (a service that streamlines support for multi-cloud environments), while Terraform Cloud and Terraform Enterprise customers secure the most expedient route to resolving their technical issues (see Table 1).

(Table 1: Providers and products supported.)

In this joint support approach, customers with Google Cloud or HashiCorp support can submit a support case with either organization. With each case submission, the customer receives the best time-to-resolution, because both organizations can help resolve the case. The submitted case initiates a detailed workflow for case progression in which both organizations collaborate throughout the life of the case. This ensures each customer receives the right level of technical expertise throughout the entirety of the case, delivering an end-to-end, connected support experience.

When a Premium Support for Google Cloud customer contacts Google Cloud Support to initiate a technical case, the Premium Support team leads the troubleshooting for the submitted issue. Should the Premium Support team determine that the issue is isolated to HashiCorp components, the customer is instructed to open a case with HashiCorp, and Premium Support shares the previously collected information with the HashiCorp Support team. The Premium Support team keeps the case open until it confirms that HashiCorp Support has driven the case to resolution (see Figure 1). This streamlined, behind-the-scenes approach remains seamless to the customer and ensures ease of use and access to case information not otherwise available to cloud customers. The same process applies when a Google Cloud Premium Support customer initiates a technical issue with the HashiCorp Support team.

(Figure 1: Collaborative cloud support model.)

In summary

After strategic collaboration and in direct response to customer feedback, Google Cloud Support and HashiCorp Support have developed a more efficient cloud support service model for their shared customers. This support model enables Premium Support for Google Cloud customers and HashiCorp Terraform support customers to eliminate uncertainty when submitting technical issues and to reduce time-to-resolution. With the majority of today’s businesses managing the complexity of multi-cloud environments, Google Cloud and HashiCorp jointly deliver a simpler process for subscribed cloud customers to submit and resolve their technical issues.

To learn more, visit:
Google Cloud
● Third-party Technology Support
● Customer Care Services
● Premium Support for Google Cloud
Terraform
● Terraform with Google Cloud – Best Practices
● Terraform Cloud Product Overview
● Terraform Google Provider
● Terraform Google Beta Provider

For questions, email: hashicorp-terraform-gcp-support@google.com
Source: Google Cloud Platform

How Deutsche Bank is building cloud skills at scale

Deutsche Bank (DB) is the leading bank in Germany, with strong European roots and a global network. DB was eager to reduce its workload for managing legacy infrastructure so that its engineering community could instead focus on modernizing its financial service offerings. The bank’s desire for solutions that could dynamically scale to meet demand and reduce time to market for new applications was a key driver for migrating its infrastructure to the cloud. Deutsche Bank and Google Cloud signed a strategic partnership in late 2020 to accelerate the bank’s transition to the cloud and co-innovate the next generation of cloud-based financial services. This multi-year partnership is the first of its kind for the financial services industry.

In the process of migrating its core on-premises systems to Google Cloud, Deutsche Bank became acutely aware of the need to increase its technical self-sufficiency internally through talent development and enterprise-wide upskilling. Demand for cloud computing expertise has been surging across all sectors, and growth in cloud skills and training has been unable to keep pace with industry-wide cloud migration initiatives. As recent reports suggest, organizations need to take proactive steps to grow these talent pools themselves.

For Deutsche Bank, the scale of the skills and talent development challenge was significant. Following many years of drawing help from outside contractors, much of the bank’s engineering capability and domain knowledge was concentrated outside its full-time workforce. This was exacerbated by fierce competition for cloud skills expertise across the industry as a whole. There was a clear and present need to reinvigorate DB’s engineering culture, so developing, attracting, and retaining talent became a key dimension of the bank’s cloud transformation journey. A recent IDC study¹ demonstrates that comprehensively trained organizations drive developer productivity, boost innovation, and increase employee retention. With around 15,000 employees in its Technology, Data and Innovation (TDI) division across dozens of locations, DB needed to think strategically about how to deliver comprehensive learning experiences across multiple modalities while still ensuring value for money.

Through the strategic partnership, Deutsche Bank could draw upon the expertise and resources of Google Cloud Customer Experience services, such as Google Cloud Premium Support, Consulting, and Learning services, to develop a new structured learning program that could meet its business needs and target its specific skill gaps. With Premium Support, Deutsche Bank collaborated with a Technical Account Manager (TAM) to receive proactive guidance on how to ensure the proposed learning program supported the bank’s wider cloud migration process. To guarantee the project’s success, the TAM supporting Deutsche Bank connected with a wide range of domains across Deutsche Bank, including apps and data, infrastructure and architecture, and onboarding and controls. Cloud Consulting services also worked with DB to consider the long-term impacts of the program and how it could be continuously improved to help build a supportive, dynamic engineering culture across the business as a whole. Google Cloud Learning services made this talent development initiative a reality by providing the necessary systems, expertise, and project management to help Deutsche Bank implement this enterprise-wide certification program.
In a complex, regulated industry like financial services, the need for content specificity is particularly acute. The new Deutsche Bank Cloud Engineering program leverages expert-created content and a cohort approach to provide learners with content tailored to their business needs, while also enabling reflection, discussion, and debate between peers and subject matter experts. Instructor-led training is deliberately agile and is being iterated across multiple modalities to help close any emerging gaps in DB employees’ skill sets and to ensure the right teams are prioritized for specific learning opportunities.

Google Cloud Skills Boost is another essential component of Deutsche Bank’s strategy to increase its technical self-sufficiency. With Google Cloud’s help, Deutsche Bank created curated learning paths designed to boost cloud skills in particular areas. Through a combination of on-demand courses, quests, and hands-on labs, DB provided specialized training across multiple teams simultaneously, each with different needs and levels of technical expertise. Google Cloud Skills Boost also provides a unified learning profile so that individuals can easily track their learning journeys, while giving administrators easier cohort management.

It was equally important to establish an ongoing, shared space for upskilling to reinforce a culture of continuous professional development. Every month Deutsche Bank now runs an “Engineering Day” dedicated to learning, where every technologist is encouraged to focus on developing new skills. Many of these sessions are led by DB subject matter experts and explore how the bank is using a particular Google Cloud product or service in its current projects.

Alongside this broader enterprise-wide initiative, a more targeted approach gave two back-to-back training cohorts the opportunity to learn directly from Google Cloud’s own artificial intelligence (AI) and machine learning (ML) engineers via the Advanced Solutions Lab (ASL). This allowed DB’s own data science and ML experts to explore the use of MLOps on Vertex AI for the first time, building end-to-end ML pipelines on Google Cloud and automating the whole ML process.

“The Advanced Solutions Lab has really enabled us to accelerate our progress on innovation initiatives, developing prototypes to explore S&P stock prediction and how apps might be configured to help partially sighted people recognize currency in their hand. These ASL programs were a great infusion of creativity, as well as an opportunity to form relationships and build up our internal expertise.” — Mark Stokell, Head of Data & Analytics, Cloud & Innovation Network, Deutsche Bank

In the first 18 months of the strategic partnership, over 5,000 individuals were trained, adding nearly 10 new Google Cloud Certifications a week, and over 1,400 engineers were supported to achieve their internal DB Cloud Engineering certification. Such high uptake and engagement with this new learning program signals its success and the value of continuing to invest in ongoing professional development for TDI employees.

“Skill development is a critical enabler of our long-term success. Through a mix of instructor-led training, enhancing our events with gamified Cloud Hero events, and providing opportunities for continuous development with Google Cloud Skills Boost, it genuinely feels like we’ve been engaging with the whole firm. With our cohort-based programs, we are pioneering innovative ways to enable learning at scale, which motivate hundreds of employees to make tangible progress and achieve certifications. With consistently high satisfaction scores, our learners clearly love it.” — Andrey Tapekha, CTO of North America Technology Center, Deutsche Bank

After such a successful start to its talent development journey, Deutsche Bank is now better prepared to address the ongoing opportunities and challenges of its cloud transformation. Building on the shared resources and expertise of their strategic partnership, DB and Google Cloud are now turning their attention to assessing the impact of this learning program across the enterprise as a whole, and to considering how a supportive, dynamic learning culture can be leveraged to attract new talent to the company.

To learn more about how Google Cloud Customer Experience services can support your organization’s talent transformation journey, visit:
● Google Cloud Premium Support to empower business innovation with expert-led technical guidance and support
● Google Cloud Training & Certification to expand and diversify your team’s cloud education
● Google Cloud Consulting services to ensure your solutions meet your business needs

1. IDC White Paper, sponsored by Google Cloud Learning, “To Maximize Your Cloud Benefits, Maximize Training,” March 2022, IDC #US48867222.
Source: Google Cloud Platform

Best practices for migrating Hadoop to Dataproc by LiveRamp

Abstract

In this blog, we describe our journey to the cloud and share some lessons we learned along the way. Our hope is that you’ll find this information helpful as you go through the decision, execution, and completion of your own migration to the cloud.

Introduction

LiveRamp is a data enablement platform powered by identity, centered on privacy, and integrated everywhere. Everything we do centers on making data safe and easy for businesses to use. Our Safe Haven platform powers customer intelligence, engages customers at scale, and creates breakthrough opportunities for business growth. Businesses safely and securely bring us their data for enrichment and use the insights gained to deliver better customer experiences and generate more valuable business outcomes. Our fully interoperable and neutral infrastructure delivers end-to-end addressability for the world’s top brands, agencies, and publishers. Our platforms are designed to handle the variability and surges of the workload and guarantee service-level agreements (SLAs) to businesses. We process petabytes of batch and streaming data daily: we ingest, process (join and enhance), and distribute this data, receiving it from and distributing it to thousands of partners and customers every day. We maintain the world’s largest and most accurate identity graph and work with more than 50 leading demand-side and supply-side platforms.

Our decision to migrate to Google Cloud and Dataproc

As an early adopter of Apache Hadoop, we had a single on-premises, managed production Hadoop cluster that stored all of LiveRamp’s persistent data (HDFS) and ran the Hadoop jobs that make up our data pipeline (YARN). The cluster consisted of around 2,500 physical machines with a total of 30 PB of raw storage, ~90,000 vcores, and ~300 TB of memory. Engineering teams managed and ran multiple MapReduce jobs on this cluster. The sheer volume of applications that LiveRamp ran on this cluster caused frequent resource contention issues, not to mention potentially widespread outages if an application was tuned improperly. Our business was scaling, and we were running into constraints related to data center space and power in our on-premises environment. These constraints restricted our ability to meet our business objectives, so a strategic decision was made to leverage elastic environments and migrate to the cloud. The decision required financial analysis and a detailed understanding of the available options, from do-it-yourself and vendor-managed distributions to cloud-managed services.

LiveRamp’s target architecture

We ultimately chose Google Cloud and Dataproc, a managed service for Hadoop, Spark, and other big data frameworks. During the migration we made a few fundamental changes to our Hadoop infrastructure.

Instead of one large persistent cluster managed by a central team, we decentralized cluster ownership to individual teams. This gives teams the flexibility to recreate clusters, perform upgrades, or change configurations as they see fit. It also gives us better cost attribution, a smaller blast radius for errors, and less chance that a rogue job from one team will impact the rest of the workloads.

Persistent data is no longer stored in HDFS on the clusters; it now lives in Google Cloud Storage, which conveniently served as a drop-in replacement, as GCS is compatible with all the same APIs as HDFS.
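To make “drop-in replacement” concrete: Dataproc clusters include the Cloud Storage connector, so Spark code can read and write gs:// paths exactly as it would hdfs:// paths. Below is a minimal, hypothetical PySpark sketch; the bucket names and dataset layout are made up, not LiveRamp’s.

    # Minimal sketch: a Spark job on Dataproc reads and writes Cloud Storage
    # paths exactly as it would HDFS paths; only the URI scheme changes.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcs-as-hdfs-example").getOrCreate()

    # Previously something like hdfs:///data/events/2022-10-18/
    events = spark.read.parquet("gs://example-persistent-data/events/2022-10-18/")

    enriched = events.filter(events.status == "active")

    # Output also lands in Cloud Storage rather than cluster-local HDFS.
    enriched.write.mode("overwrite").parquet(
        "gs://example-persistent-data/enriched/2022-10-18/"
    )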
Because persistent data lives outside the cluster, we can delete all the virtual machines that are part of a cluster without losing any data.

We also introduced autoscaling clusters to control compute cost and to dramatically decrease request latency. On premises you are paying for the machines anyway, so you might as well use them; cloud compute is elastic, so you want to burst when there is demand and scale down when you can.

For example, one of our teams runs about 100,000 daily Spark jobs on 12 Dataproc clusters that each independently scale up to 1,000 VMs, giving that team a current peak capacity of about 256,000 cores. Because the team is bound to its own GCP project inside our GCP organization, the cost attributed to that team is now very easy to report. The team distributes jobs across the clusters so that similar workloads are binned together and can be optimized together. (A figure showing the logical architecture of this workload appeared here; a future blog post will cover it in detail.)

Our approach

The overall migration and the post-migration stabilization and optimization of the largest of our workloads took us several years to complete. We broadly broke the migration down into multiple phases.

Initial proof of concept

When analyzing solutions for cloud-hosted big data services, any product had to meet our acceptance criteria:

1. Cost: Dataproc is not particularly expensive compared to similar alternatives, but our discount with the existing managed Hadoop partner made it look expensive. We initially accepted that the cost would remain the same; we did see cost benefits post-migration, after several rounds of optimization.
2. Features: Key features (compared to our state at the time) that we were looking for were a built-in autoscaler, ease of creating, updating, and deleting clusters, and managed big data technologies.
3. Integration with GCP: As we had already decided to move other LiveRamp-owned services to GCP, a big data platform with robust GCP integration was a must. Essentially, we wanted to be able to leverage GCP features (custom VMs, preemptible VMs, etc.) without a lot of effort on our end.
4. Performance: Cluster creation, deletion, scale-up, and scale-down should be fast, so teams can iterate and react quickly. Rough targets for cluster operations:
   Cluster creation: <15 minutes
   Cluster deletion: <15 minutes
   Adding 50 nodes: <20 minutes
   Removing 200 nodes: <10 minutes
5. Reliability: Bug-free, low-downtime software with concrete SLAs on clusters and a strong commitment to the correct functioning of all of its features.

An initial prototype helped us better understand Dataproc and Google Cloud and prove that the target technologies and architecture would give us reliability and cost improvements. It also fed into our decisions around target architecture and was reviewed by the Google team before we embarked on the migration journey.

Overall migration

Terraform module

Our ultimate goal is to create self-service tooling that allows our data engineers to deploy infrastructure as easily and safely as possible. After defining some best practices around cluster creation and configuration, the central team’s first step was to build a Terraform module that all teams can use to create their own clusters.
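LiveRamp’s module is written in Terraform; purely as an illustration of the kind of resources such self-service tooling provisions, here is a rough sketch using the google-cloud-dataproc Python client to create a customizable autoscaling policy and a cluster attached to it. The project, subnet, machine types, and sizes are hypothetical, not LiveRamp’s actual defaults.

    # Illustrative sketch only: roughly what a self-service cluster module provisions,
    # expressed with the google-cloud-dataproc client. All names are placeholders.
    from google.cloud import dataproc_v1

    PROJECT, REGION = "example-team-project", "us-central1"
    endpoint = {"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}

    # 1. A customizable autoscaling policy (min/max workers, scale factors).
    policy_client = dataproc_v1.AutoscalingPolicyServiceClient(client_options=endpoint)
    policy = policy_client.create_autoscaling_policy(
        parent=f"projects/{PROJECT}/regions/{REGION}",
        policy={
            "id": "example-team-policy",
            "basic_algorithm": {
                "yarn_config": {
                    "scale_up_factor": 0.5,
                    "scale_down_factor": 1.0,
                    "graceful_decommission_timeout": {"seconds": 600},
                }
            },
            "worker_config": {"min_instances": 2, "max_instances": 50},
        },
    )

    # 2. A cluster with team defaults preconfigured, attached to that policy.
    cluster_client = dataproc_v1.ClusterControllerClient(client_options=endpoint)
    operation = cluster_client.create_cluster(
        project_id=PROJECT,
        region=REGION,
        cluster={
            "project_id": PROJECT,
            "cluster_name": "example-team-cluster",
            "config": {
                "gce_cluster_config": {
                    "subnetwork_uri": f"projects/{PROJECT}/regions/{REGION}/subnetworks/example-subnet"
                },
                "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-8"},
                "worker_config": {"num_instances": 2, "machine_type_uri": "n1-highmem-16"},
                "autoscaling_config": {"policy_uri": policy.name},
            },
        },
    )
    print(operation.result().cluster_name)  # blocks until the cluster is running

Wrapping this kind of provisioning in a module means individual teams only supply a handful of parameters while security and monitoring defaults stay centralized.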
The module creates a Dataproc cluster along with all supporting buckets, pods, and Datadog monitors:
● A Dataproc cluster autoscaling policy that can be customized
● A Dataproc cluster with LiveRamp defaults preconfigured
● Sidecar applications for recording job metrics from the job history server and for monitoring cluster health
● Preconfigured Datadog cluster health monitors for alerting

The Terraform module is itself composed of multiple supporting modules, so teams can also call the supporting modules directly in their own project Terraform if the need arises. A cluster can be created just by setting parameters such as the project ID, the path to the application source (Spark or MapReduce), the subnet, the VM instance type, and the autoscaling policy.

Workload migration

Based on our analysis of Dataproc, discussions with the GCP team, and the POC, we used the following criteria:
● We prioritized applications that could use preemptibles to achieve cost parity with our existing workloads.
● We prioritized some of our smaller workloads initially to build momentum within the organization. For example, we left the single workload that accounted for ~40% of our overall batch volume to the end, after we had gained enough experience as an organization.
● We combined the migration to Spark with the migration to Dataproc. This initially resulted in some extra dev work but reduced the effort for testing and other activities.

Our initial approach was to lift and shift from the existing managed provider and MapReduce to Dataproc and Spark. We then focused on optimizing the workloads for cost and reliability.

What’s working well

Cost Attribution

As is true with any business, it’s important to know where your cost centers are. Moving from a single cluster, made opaque by the number of teams loading work onto it, to GCP’s organization/project structure has made cost reporting very simple. Cost reporting breaks down cost by project, but also allows us to attribute cost to a single cluster via tagging. As we sometimes deploy a single application to a cluster, this makes it very easy to make strategic decisions about cost optimization at the application level.

Flexibility

The programmatic nature of deploying Hadoop clusters in a cloud like GCP dramatically reduces the time and effort involved in making infrastructure changes. LiveRamp’s use of a self-service Terraform module means that a data engineering team can iterate on cluster configurations very quickly. This allows a team to create a cluster that is best for their application while also adhering to our security and health monitoring standards. We also get all the benefits of infrastructure as code: highly complicated infrastructure state is version controlled and can be recreated and modified easily and safely.

Support

When our teams face issues with services that run on Dataproc, the GCP team is always quick to respond. They work closely with LiveRamp to develop new features for our needs, and they proactively provide preview access to new features that help LiveRamp stay ahead of the curve in the data industry.

Cost Savings

We have achieved around 30% cost savings in certain clusters by striking the right balance between on-demand and preemptible VMs (PVMs). The cost savings were the result of our engineers building efficient A/B testing frameworks that let us run clusters and jobs in several configurations to arrive at the most reliable, maintainable, and cost-efficient configuration.
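As a hypothetical illustration of that on-demand/preemptible balance (not LiveRamp’s actual configuration), the dictionaries below could be plugged into the create_cluster and create_autoscaling_policy calls from the earlier sketch: a fixed pool of on-demand primary workers supplies baseline capacity, while a preemptible secondary pool provides the cheaper burst capacity that the autoscaler grows and shrinks.

    # Hypothetical on-demand vs. preemptible mix; names, types, and sizes are illustrative.
    cluster_config = {
        "worker_config": {                   # on-demand primaries: stable baseline
            "num_instances": 10,
            "machine_type_uri": "n1-highmem-16",
        },
        "secondary_worker_config": {         # preemptible VMs: cheaper burst capacity
            "num_instances": 0,              # start empty; the autoscaler grows this pool
            "machine_type_uri": "n1-highmem-16",
            "preemptibility": "PREEMPTIBLE",
        },
    }

    autoscaling_policy = {
        "id": "mostly-preemptible",
        "basic_algorithm": {
            "yarn_config": {
                "scale_up_factor": 0.5,
                "scale_down_factor": 1.0,
                "graceful_decommission_timeout": {"seconds": 600},
            }
        },
        # Keep the primary pool fixed and let only the secondary pool scale out.
        "worker_config": {"min_instances": 10, "max_instances": 10},
        "secondary_worker_config": {"min_instances": 0, "max_instances": 500},
    }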
Beyond cost, one of the applications is now more than 10x faster.

Five lessons learned

The migration was a successful exercise that took about six months to complete across all our teams and applications. While many aspects went really well, we also learned a few things along the way that we hope will help you when planning your own migration journey.

1. Benchmark, benchmark, benchmark. It’s always a good idea to benchmark the current platform against the future platform to compare costs and performance. On-premises environments have a fixed capacity, while cloud platforms can scale to meet workload needs, so it’s essential to ensure that the current behavior of the key workloads is clearly understood before the migration.
2. Focus on one thing at a time. We initially focused on reliability while remaining cost-neutral during the migration, and then focused on cost optimization post-migration. Google teams were very helpful and instrumental in identifying cost optimization opportunities.
3. Be aware of alpha and beta products. Although there usually aren’t any guarantees of a final feature set when it comes to pre-released products, you can still get a sense of their stability and create a partnership if you have a specific use case. In our case, Enhanced Flexibility Mode was in alpha in April 2019, in beta in August 2020, and released in July 2021. It was therefore helpful to check in on the product offering and understand its level of stability so we could carry out risk analysis and decide when we felt comfortable adopting it.
4. Think about quotas. Our Dataproc clusters could support much higher node counts than was possible with our previous vendor. This meant we often had to increase IP space and change quotas, especially as we tried out new VM and disk configurations.
5. Preemptible VMs and committed use discounts (CUDs). CUDs make compute less expensive, while preemptible VMs make compute significantly less expensive. However, preemptibles don’t count against your CUD purchases, so make sure you understand the impact on your CUD utilization when you start to migrate to preemptibles.

We hope these lessons will help you in your Data Cloud journey.
Source: Google Cloud Platform