Adopting cloud, with new inventions along the way, charges up HSBC

Editor’s note: We’re hearing today from HSBC, the huge global financial institution. They worked closely with Google Cloud engineers to move their legacy data warehouse to BigQuery, using custom-built tools and an automation-first approach that’s allowed them to make huge leaps in data analytics capabilities and ensure high-fidelity data.

At HSBC, we serve 39 million customers, in person and online, from consumers to businesses, in 66 countries. We maintain data centers in 21 countries, with more than 94,000 servers. With an on-premises infrastructure supporting our business, we kept running into capacity challenges, which became an innovation blocker and ultimately a business constraint. Our teams wanted to do more with data to create better products and services, but the technology tools we had weren’t letting us grow and explore. And that data kept growing: just one of our data warehouses had grown 300% from 2014 to 2018.

We had a huge amount of data, but what’s the point of having all that data if we can’t get insights and business value from it? We wanted to serve our customers flexibly, in the ways that work best for them. We knew moving to cloud would let us store and process more data, but as a global bank, the complex systems we were moving also needed to be secure. It was a team effort to create the project scope and strategy up front, and it paid off in the end. Our cloud migration now enables us to use an agile, DevOps mindset, so we can fail fast and deliver smaller workloads, with automation built in along the way. This migration also helped us eliminate technical debt and build a data platform that lets us focus on innovation, not managing infrastructure. Along the way, we invented new technology and built processes that we can reuse as we continue migrating.

Planning for a cloud move

We chose cloud migration because we knew we needed cloud capabilities for our business to really reach its digital potential.
We picked Google Cloud, specifically BigQuery, because it’s extremely fast over both small and large datasets, and because we could interact with it through both a SQL interface and Connected Sheets. We had to move our data and its schema into the cloud without manually managing every detail and missing the timelines we had set. Our data warehouse is huge, complex, and mission-critical, and didn’t fit easily into existing reference architectures. We needed to plan ahead and automate to keep the migration efficient, and to simplify data and processes along the way.

The first legacy data warehouse we migrated had been built over a period of 15 years, with 30 years’ worth of data comprising millions of transactions and 180 TB of data. It ran 6,500 extract, transform, load (ETL) jobs and more than 2,500 reports, drawing data from about 100 sources. Cloud migration choices usually come down to either re-engineering or lift-and-shift, but we decided on a different strategy for ours: move and improve. This allowed us to take full advantage of BigQuery’s capabilities, including its capacity and elasticity, to help solve our essential problem of capacity constraints.

Taking the first steps to cloud

We started creating our cloud strategy through a mapping exercise, which also helped start the change management process among internal teams. We chose architecture decision records as our migration approach, basing those on technical user journeys, which we mapped out using an agile board. User journeys included topics like “change data capture,” “product event handling,” and “slowly changing dimensions.” These are typical data warehouse topics that have to be addressed during a migration, and we had others specific to the financial services industry, too. For example, we needed to make sure the data warehouse would have a consistent, golden source of data at a specific point in time.
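To make the “golden source at a specific point in time” idea concrete, here is a minimal sketch of the underlying logic; it is our illustrative assumption, not HSBC’s actual implementation. For each business key, it keeps the latest record whose effective timestamp falls on or before the chosen cutoff.

```python
from datetime import datetime

def golden_source(records, as_of):
    """Return the latest value per key as of the given cutoff.

    records: iterable of (key, effective_ts, value) tuples.
    """
    latest = {}
    for key, ts, value in records:
        # Keep only records effective on or before the cutoff,
        # and retain the most recent one per key.
        if ts <= as_of and (key not in latest or ts > latest[key][0]):
            latest[key] = (ts, value)
    return {key: value for key, (ts, value) in latest.items()}
```

In BigQuery itself, a window function over an effective-date column would express the same point-in-time selection in SQL.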
We considered business impacts as well, so we prioritized initially moving archival and historical data to immediately take load off the old system. We also worked to establish metrics early on and introduce new concepts, like managing queries and quotas rather than managing hardware, so that data warehouse users would be prepared for the shift to cloud.

To simplify as we went, we examined what we currently had stored in our data warehouse to see what was used or unused. We worked with stakeholders to assess reports, and identified more than 600 unused reports that we could deprecate. We also examined how we could simplify our ETL jobs to remove the technical debt added by previous migrations, giving our production support teams a bit more sleep at night. We used a three-step migration strategy for our data: first, migrating the schema to BigQuery; second, migrating the reporting load to BigQuery, adding metadata tagging and performing the reconciliation process; and third, moving historical data by converting all the SQL scripts into BigQuery-compliant scripts.

Creating new tools for migration automation

In keeping with our automation mantra, we invented multiple accelerators to speed up the migration. We developed these to meet the timelines we’d set and to eliminate human error. The schema parser and the data reconciliation tool helped us migrate our data layer onto BigQuery. The SQL parser helped migrate the data access layer onto Google Cloud Platform (GCP) without having to individually migrate 3,500 SQL instances that lacked data lineage or documentation, which helped us prioritize workloads. And the data lineage tool identified components across layers to find dependencies. This was essential for finding and eliminating integration issues during the planning stage, and for identifying application owners during the migration.
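To illustrate the kind of work a schema parser does, here is a hedged sketch that maps legacy warehouse column types to BigQuery types. The legacy type names and the mapping table are illustrative assumptions, not HSBC’s actual tool.

```python
# Illustrative legacy-to-BigQuery type mapping; real parsers handle
# far more types, precision rules, and nullability.
LEGACY_TO_BIGQUERY = {
    "VARCHAR": "STRING",
    "CHAR": "STRING",
    "DECIMAL": "NUMERIC",
    "INTEGER": "INT64",
    "SMALLINT": "INT64",
    "TIMESTAMP": "TIMESTAMP",
    "DATE": "DATE",
}

def parse_column(column_def):
    """Translate one 'name TYPE(args)' legacy column into a BigQuery field."""
    name, legacy_type = column_def.strip().split(None, 1)
    base_type = legacy_type.split("(")[0].upper()
    # Fall back to STRING for unrecognized types.
    return {"name": name, "type": LEGACY_TO_BIGQUERY.get(base_type, "STRING")}

def parse_schema(ddl_columns):
    return [parse_column(c) for c in ddl_columns]
```

The resulting field list has the shape BigQuery expects for a table schema, so a real tool could feed it straight into table creation.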
Finally, the data reconciliation tool reconciles any discrepancies between the data source and the cloud data target.

Building a cloud future

We used this first migration in our UK data center as a template, so we now have a tailored process and custom tools that we’re confident using going forward. Our careful approach has paid off for our teams and our customers. We’re enjoying better development and testing procedures. We’ve created an onboarding path for applications, we have a single source of truth in our data warehouse, and we use authorized views for secure data access. The flexibility and scalable capacity of BigQuery mean that users can explore data without constraints, and our customers get the information they need, faster. Learn more about BigQuery and about HSBC.
Source: Google Cloud Platform

Microsoft and SWIFT extend partnership to make native payments a reality

This blog post is co-authored by George Zinn, Corporate VP, Microsoft Treasurer.

This week at Sibos, the world’s largest financial services event, Microsoft and SWIFT are showcasing the evolution of the cloud-native proof of concept (POC) announced at last year’s event. Building on the relationship between Microsoft Azure and SWIFT, and the work with Microsoft Treasury, the companies are entering a long-term strategic partnership to bring to market SWIFT Cloud Connect on Azure. Together we have built out an end-to-end architecture that uses various Azure services to ensure SWIFT Cloud Connect meets the resilience, security, and compliance demands for material workloads in the financial services industry. Microsoft is the first cloud provider working with SWIFT to build public cloud connectivity and will soon make this solution available to the industry.

SWIFT is the world’s leading provider of secure financial messaging services used and trusted by more than 11,000 financial institutions in more than 200 countries and territories. Today, enterprises and banks conduct these transactions by sending payment messages over the highly secure SWIFT network, leveraging on-premises installations of SWIFT technology. SWIFT Cloud Connect creates a bank-like wire transfer experience with the added operational, security, and intelligence benefits the Microsoft cloud offers.

To demonstrate the potential of the production-ready service, Microsoft Treasury has successfully run test payment transactions through the SWIFT production network to their counterparty Bank of New York-Mellon (BNY Mellon) for payment confirmations through SWIFT on Azure. BNY Mellon is a global investments company dedicated to helping its clients manage and service their financial assets throughout the investment lifecycle. The company’s Treasury Services group, which delivers high-quality performance in global payments, trade services and cash management, provides payments services for Microsoft Treasury.

“At BNY Mellon, we focus on delivering world class solutions that exceed our clients’ expectations,” said Bank of New York Mellon Treasury Services CEO Paul Camp. “Together with SWIFT, we continuously work to enhance the payments experience for clients around the world. We’re excited to join now with our Microsoft Treasury client and with SWIFT to help make Cloud Connect real, leveraging Microsoft’s cloud expertise to expand the frontiers of financial technology. Building on the positive experience with Cloud Connect, we look forward to exploring additional opportunities with Microsoft Treasury to advance their digital payments strategy.”

In response to the rapidly evolving cyber threat landscape, SWIFT introduced the Customer Security Programme (CSP). This introduces a set of mandatory security controls that many financial institutions find challenging to implement in their on-premises environments. To simplify and support control implementation and enable continuous monitoring and audit, Microsoft has developed a blueprint for the CSP framework. Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources and policies that implement and adhere to standards, patterns, and control requirements. Azure Blueprints allows customers to set up governed Azure environments at scale to aid secure and compliant production implementations. The SWIFT CSP Blueprint is now available in preview.

Microsoft Treasury has performed their testing with SWIFT by leveraging the Azure Logic Apps service to process payment transactions. Such an implementation used to take months, but this one was completed in just a few weeks. Treasury integrated their backend SAP systems via Logic Apps with SWIFT to process payment transactions and business acknowledgments. As part of this processing, the transactions are validated and checked for duplicates and anomalies using the rich capabilities of Logic Apps.
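As a rough illustration of the duplicate and anomaly checks described above, here is a hedged sketch in plain Python rather than a Logic Apps workflow; the field names and the amount threshold are our assumptions, not the actual implementation.

```python
def validate_payments(payments, max_amount=1_000_000):
    """Split payment messages into accepted and rejected lists.

    A payment is rejected if it repeats an already-seen
    (reference, amount, beneficiary) combination, or if its
    amount is non-positive or above the configured threshold.
    """
    seen, accepted, rejected = set(), [], []
    for p in payments:
        key = (p["reference"], p["amount"], p["beneficiary"])
        if key in seen:
            rejected.append((p, "duplicate"))
        elif p["amount"] <= 0 or p["amount"] > max_amount:
            rejected.append((p, "anomalous amount"))
        else:
            seen.add(key)
            accepted.append(p)
    return accepted, rejected
```

In the Logic Apps implementation, the same checks would be expressed as workflow conditions and actions rather than imperative code.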

Logic Apps is Microsoft Azure’s integration platform as a service (iPaaS) and now provides native understanding of SWIFT messaging, enabling customers to accelerate the modernization of their payments infrastructure by leveraging the cloud. With hybrid VNet-connected integration capabilities to on-premises applications as well as a wide array of Azure services, Logic Apps provides more than 300 connectors for intelligent automation, integration, data movement, and more to harness the power of Azure.

Microsoft Treasury is able to quickly leverage the power of Azure to enable a seamless transfer of payment transactions. With Azure Monitor and Log Analytics, they are also able to monitor, manage, and correlate their payment transactions for full end-to-end process visibility.

We are thrilled to extend our partnership with SWIFT as we believe this will become an integral offering for the industry. We thank BNY Mellon for their part in confirming the potential of SWIFT Cloud Connect. To see it in action, stop by the Microsoft booth in the North Event Hall, Z131.
Source: Azure

Design for Users by Users: Design Thinking @ Red Hat – Sara Chizari (Red Hat UX Research Team) – OpenShift Commons Briefing

Have you ever wondered how product teams decide what features to build and what changes to make? In this OpenShift Commons Briefing, the Red Hat User Experience Design and Research team discusses applying design thinking to real product development challenges, from problem discovery to testing and validating ideas.
Red Hat’s Sara Chizari walks us through the Red Hat User Experience Design and Research team’s Design Thinking process, which helps product teams build solutions that focus on solving problems and are tailored to users’ needs. In this session, she takes us on a user-centered design journey. Learn about the techniques they use to develop an understanding of the users’ challenges and needs, articulate the users’ problems, and brainstorm potential solutions. Slides: Designing with Users for Users – OpenShift Commons Briefing Slides
Want to take part in the upcoming OpenShift UX Design Workshop in San Francisco on Oct 28th, 2019?
If you are an OpenShift user and want to participate in an OpenShift UX Design Thinking workshop, we’re hosting one with the Red Hat UX Design and Research team at the upcoming OpenShift Commons Gathering on October 28th in San Francisco! Request an invitation soon, as space is limited to 20!

To request your invitation to attend the Design Thinking Workshop, to be held in conjunction with the upcoming OpenShift Commons Gathering in San Francisco on Oct 28th, send an email to schizari@redhat.com. Space is limited to 20.
The Red Hat UX Research team will be focusing on the OpenShift Console and aspects of troubleshooting, so if you are interested in contributing your feedback on OpenShift UX, this workshop will be a great opportunity to do so!
As well, after the morning-long workshop, you are invited to join the rest of the OpenShift Commons Gathering, which will focus on enabling Machine Learning and AI workloads on OpenShift, as our guest. More Gathering details here.

About OpenShift Commons
OpenShift Commons builds connections and collaboration across OpenShift and OKD communities, upstream projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. OpenShift Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this, we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
To stay abreast of all the latest announcements, briefings, and events, please join OpenShift Commons and sign up for our mailing lists & Slack channel.
Join OpenShift Commons today!
The post Design for Users by Users: Design Thinking @ Red Hat – Sara Chizari (Red Hat UX Research Team) – OpenShift Commons Briefing appeared first on Red Hat OpenShift Blog.
Source: OpenShift