How to modernize the enterprise’s integration landscape in the hybrid cloud era

Application integrations are key to streamlining an enterprise’s business processes and enabling data movement across systems. Whether it is real-time payments in the banking industry, distributing vehicle inventory information from dealerships to an original equipment manufacturer (OEM), retrieving product information while servicing a phone, or supporting the checkout feature of an ecommerce site, multiple integrations between systems support these processes.
As part of digital transformation initiatives, enterprises are adopting cloud computing to take advantage of the optimization and flexibility that cloud platforms and providers bring to the table. As application workloads move to cloud platforms, the result is often a hybrid cloud target state: public clouds (such as those from IBM, AWS, Azure or Google), SaaS solutions, private clouds, in-house container platforms and traditional data centers are all part of the mix.
A hybrid cloud target introduces the following new macro-level integration patterns:

Intra-cloud: Integrations between applications on the same cloud platform
Inter-cloud: Integrations between applications deployed on different cloud platforms, as well as between cloud applications and SaaS solutions
Cloud to on-premises: Integrations between core systems of record (SORs) that remain on premises and applications deployed on a cloud, typically through integration platforms such as an Enterprise Service Bus (ESB)

These newer integration patterns often get ignored when defining the application transformation roadmap to cloud, but overlooking them up front frequently adds complexity later in the cloud journey.
Transforming the integration landscape should be an essential part of any enterprise’s cloud journey. The focus should be on finding and removing redundant integrations, modernizing integrations by adopting modern API and event-driven architectures, and setting up an integration platform suited to the hybrid cloud: a hybrid integration platform (HIP). Per Gartner, 65 percent of large organizations will have implemented a hybrid integration platform by 2022 to drive their digital transformation.
Evolution of the enterprise integration landscape
Integration landscapes have evolved over the years as newer architectures and technologies came into play. Point-to-point (P2P) integrations, Enterprise Application Integration (EAI) middleware and Service-Oriented Architecture (SOA) integrations were all part of this evolutionary journey, and most enterprises have integrations realized through one or more of these patterns in their landscape. Modern architectures such as API/microservices and event-driven architectures are ideal for the hybrid cloud target, and by adopting them enterprises aim to reach a higher level of maturity and realize an optimized integration landscape.

How to define a modernization roadmap for the integration landscape in three steps
A holistic view of the current integration landscape, as well as its complexity, is critical to defining a transformation roadmap that is in line with the application transformation journey to cloud. IBM recommends a three-step approach to define the enterprise integration transformation roadmap.

Assess and analyze. Collect information about the company’s existing integrations, along with details about source and target applications, for analysis. Understand the overall integration architecture and any security and compliance needs. Use the data to assess the criticality and usage of the integrations and determine their target state. Recommended target integration patterns (REST API, SOA service, event-driven, message-driven, FTP, P2P and so on), consolidation possibilities and other key inputs for defining the target integration state come out of this analysis.
Envision the target state. The output of the previous step helps define the target integration architecture and deployment model. While adopting newer architecture patterns such as microservices and event-driven architectures is a key consideration for the target architecture, ensure that any enterprise-specific integration requirements are part of this step too. A reference architecture is usually the best starting point for creating a customized target architecture; the IBM Hybrid Integration Architecture published in the Architecture Center is a good example of a reference architecture that can be adopted.
Define the integration portfolio roadmap. With the target architecture, implementation patterns and consolidated list of integrations in place, the next step is to create a wave plan to execute the modernization. Confirm the business case in this step before kick-starting modernization. Identify a minimum viable product (MVP) and realize it to surface any risks before beginning larger modernization programs; the MVP could include a few integrations that cover the critical implementation patterns.

Now that the plan to modernize the integration landscape is in place, the next important step is to establish a hybrid integration platform aligned to the defined target architecture. There are many hybrid integration platform solutions on the market that enterprises can adopt. IBM Cloud Pak for Integration is a robust platform that helps enterprises realize a hybrid integration platform and accelerate their digital transformation.
IBM has the end-to-end capability to help enterprises modernize their integration landscape for hybrid cloud. Visit IBM Cloud Integration and IBM Services for Cloud to learn more about how IBM can optimize methods, tools and assets to help in your integration modernization journey.
Source: Thoughts on Cloud

Introducing the BigQuery Terraform module

It’s no secret that software developers love to automate their work away, and cloud development is no different. Since the release of the Cloud Foundation Toolkit (CFT), we’ve offered automation templates with Deployment Manager and Terraform to help engineers get set up with Google Cloud Platform (GCP) quickly. But as useful as the Terraform offering was, it was missing a critical module for a critical piece of GCP: BigQuery.

Fortunately, those days are over. With the BigQuery module for Terraform, you can now automate the instantiation and deployment of your BigQuery datasets and tables. This means you have an open-source option to start using BigQuery for data analytics.

In building the module, we applied the flexibility and extensibility of Terraform throughout and adhered to the following principles:

Referenceable templates
Modular, loosely coupled design for reusability
Provisioning and association for both datasets and tables
Support for full unit testing (via Kitchen-Terraform)
Access control (coming soon)

By including the BigQuery Terraform module in your larger CFT scripts, you can go effectively from zero to ML in minutes, with significantly reduced barriers to implementation. Let’s walk through how to set this up.

Building blocks: GCP and Terraform prerequisites
To use the BigQuery Terraform module, you’ll need (you guessed it) to have BigQuery and Terraform ready to go.
Note: The steps outlined below apply to Unix- and Linux-based machines and have not been optimized for CI/CD systems or production use.

1. Download the Terraform binary that matches your system type and follow the Terraform installation process.
2. Install the Google Cloud SDK on your local machine.
3. Create a GCP project in your organization’s folder hierarchy. You can do this with Terraform, as shown in the sketch after this list.
4. Set up some environment variables, making sure the values accurately reflect your environment.
5. Enable the BigQuery API (or use the helpers directory in the module instead).
6. Establish an identity with the required IAM permissions.
7. Browse the examples directory for a full list of examples that are possible with the module.
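For steps 3, 5 and 6, the following Terraform sketch shows one way to create the project, enable the BigQuery API and provision a service account with the permissions it needs. The project ID, account ID and the roles/bigquery.admin binding are illustrative assumptions, not prescribed by the module; substitute values and roles that match your organization’s folder, billing account and least-privilege policies.

```hcl
# Illustrative sketch of the prerequisite setup (steps 3, 5 and 6).
# Project ID, account ID and the bigquery.admin role binding are
# placeholder assumptions; substitute values that fit your environment.

variable "folder_id" {
  description = "Numeric ID of the folder that will own the project"
}

variable "billing_account" {
  description = "Billing account to attach to the new project"
}

provider "google" {}

# Step 3: create a GCP project in your organization's folder.
resource "google_project" "bq_demo" {
  name            = "bq-terraform-demo"
  project_id      = "bq-terraform-demo" # must be globally unique
  folder_id       = var.folder_id
  billing_account = var.billing_account
}

# Step 5: enable the BigQuery API on the new project
# (the module's helpers directory offers a script that does the same).
resource "google_project_service" "bigquery" {
  project = google_project.bq_demo.project_id
  service = "bigquery.googleapis.com"
}

# Step 6: an identity with the IAM permissions required to manage
# datasets and tables.
resource "google_service_account" "bq_admin" {
  project      = google_project.bq_demo.project_id
  account_id   = "bq-terraform-admin"
  display_name = "BigQuery Terraform service account"
}

resource "google_project_iam_member" "bq_admin_role" {
  project = google_project.bq_demo.project_id
  role    = "roles/bigquery.admin"
  member  = "serviceAccount:${google_service_account.bq_admin.email}"
}
```

The variables can be supplied on the command line or through a tfvars file, and the same outcome can be achieved with gcloud if you prefer not to manage the project itself in Terraform.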
What’s in the box: Get to know the Terraform module
The BigQuery module is packaged in a self-contained GitHub repository for you to easily download (or reference) and deploy. Included in the repo is a central module that supports both Terraform v0.12.X and v0.11.X, allowing users (both humans and GCP service accounts) to dynamically deploy datasets with any number of tables attached. (By the way, the BigQuery module has you covered in case you’re planning to partition your tables using a TIMESTAMP or DATE column to optimize for faster retrieval and lower query costs.) To enforce naming standardization, the BigQuery module creates a single dataset that is referenced by the tables it creates, which streamlines the creation of multiple instances and generates individual Terraform state files per BigQuery dataset. This is especially useful for customers with hundreds of tables in dozens of datasets who don’t want to get stuck with manual creation.

That said, the module is fundamentally an opinionated method for setting up your datasets and table schemas; you’ll still need to handle your data ingestion or upload via any of the methods outlined here, as that’s not currently supported by Terraform.

In addition, the repo is packaged with a rich set of test scripts that use Kitchen-Terraform plugins, robust examples of how to use the module in your deployments, major version upgrade guides, and helper files to get users started quickly.

Putting them together: Deploying the module
Now that you have BigQuery and Terraform set up, it’s time to plug them together.

1. Start by cloning the repository.
2. If you didn’t enable the BigQuery API and create the service account with permissions earlier, run the setup-sa.sh quickstart script in the helpers directory of the repo. This will set up the service account and permissions and enable the BigQuery API.
3. Define your BigQuery table schema, or try out an example schema here.
4. Create a deployment (module) directory.
5. Create the deployment files: main.tf, variables.tf, outputs.tf, and optionally a terraform.tfvars (in case you want to override default variables in the variables.tf file).
6. Populate main.tf, outputs.tf, terraform.tfvars and variables.tf; a minimal main.tf is sketched after this list.
7. Navigate to the deployment directory.
8. Initialize the directory and plan (terraform init, terraform plan).
9. Apply the changes (terraform apply).
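As an illustration of steps 5 and 6, here is a minimal main.tf sketch that points at the published BigQuery module and defines one day-partitioned table. The exact input names and the shape of the tables list vary between major versions of the module, so treat the arguments below as assumptions to verify against the module’s README and examples directory for the version you pin; the schema file name and the ts partitioning column are hypothetical.

```hcl
# Minimal, illustrative main.tf. Input names follow the module's examples
# but may differ between major versions; verify against the module README.

variable "project_id" {
  description = "Project in which the dataset and tables are created"
}

module "bigquery" {
  source  = "terraform-google-modules/bigquery/google"
  version = "~> 2.0" # pin the major version you have validated

  dataset_id   = "example_dataset"
  dataset_name = "example_dataset"
  description  = "Dataset created by the BigQuery Terraform module"
  project_id   = var.project_id
  location     = "US"

  tables = [
    {
      table_id = "example_table"
      # Table schema defined in step 3, stored as JSON alongside main.tf.
      schema = file("sample_bq_schema.json")
      # Partition on a TIMESTAMP or DATE column for faster retrieval and
      # lower query costs.
      time_partitioning = {
        type                     = "DAY"
        field                    = "ts" # assumed TIMESTAMP column in the schema
        require_partition_filter = false
        expiration_ms            = null
      }
      expiration_time = null
      labels = {
        env = "dev"
      }
    },
  ]
}
```

Keeping one deployment directory (and therefore one state file) per dataset, as in steps 4 through 9, mirrors the dataset-per-instance design of the module described above.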
What’s next?
That’s it! You’ve used the BigQuery Terraform module to deploy your dataset and tables, and you’re now ready to load in your data for querying. We think this fills a critical gap in the Cloud Foundation Toolkit so you can easily stand up BigQuery with an open-source, extensible solution. Set it and forget it, or update it anytime you need to change your schema or modify your table structure. Once you’ve given it a shot, if you have any questions, give us feedback by opening an issue. Watch or star the module to stay on top of future releases and enjoy all your newfound free time (we hear BQML is pretty fun).
Source: Google Cloud Platform