Experian: From credit bureau to technology company with APIs

Editor’s note: Today we hear from Dang Nguyen, API Platform Product Owner at Experian, on how the company uses the Apigee API management platform to digitally transform from a traditional credit bureau to a true technology and software provider. Read on to learn how Experian uses APIs to help businesses make smarter decisions and individuals take financial control.

Chances are, when you think of Experian you think of a traditional credit bureau that provides credit reports. But Experian has transformed into a true technology and software provider. We gather, analyze, and process data in ways that other companies just can’t. Businesses use this data to make smarter decisions about credit and lending, as well as to prevent identity fraud and crime. We’re also able to use this data to help individuals take financial control of their own lives and access all kinds of financial services with products like Experian Boost.
Transforming data delivery, transforming the enterprise
A big part of our digital transformation has been based on our API program. We approached APIs with a concrete goal in mind. We knew exactly how we wanted to transform the business, and we had a set plan to achieve it. For us, this meant establishing an API center of excellence as a first step. Its sole purpose was to enable the business units to create their APIs quickly and correctly, then apply them properly. We then went out to the business units one by one so that we could train them to build their APIs in a customer-friendly way. We taught them the entire API process, from building APIs to giving access to developers internally and externally.
This approach is fundamentally different from previous ones. As far back as the 1990s, our customers connected to us via software applications installed on their systems. As technologies evolved, our services to customers evolved, and we began supporting XML-based transactions and custom integrations with our partners. Some of these integrations actually used VPNs rather than going through HTTP connections. We did custom database schemas, one-off processes, and all kinds of custom development. This meant that we had a team just for our IT system processes. This team kept growing as Experian continued to acquire new companies. Each acquisition brought a new way of doing integrations and business. We had a real challenge in standardizing development practices, which led to a lot of isolated environments inside the company. We had disparate data repositories and non-standard client connectivity, which hampered innovation.
Responding to customer demand for APIs
When we first started, we had some concrete goals. We wanted to grow our ecosystem, develop a massive reach for transaction and content distribution, power a new business model, and drive innovation. Basically, we wanted to use APIs to transform our business into a platform, and we wanted to build an ecosystem that leveraged this API platform to develop new solutions. Our leadership also understood the importance of delivering information to our customers in the way they wanted to consume it. Our customers had told us that they didn’t want software, they just wanted access to the data, and APIs are the easiest, most secure way to grant that access.
The Apigee API management platform as an enterprise solution
We knew we needed an API management platform to enable this step forward. In addition to documentation and discoverability, we wanted a place to create APIs fast, where we could also get visibility into usage and other metrics. The Apigee API management platform from Google Cloud offered all of this, and more. From the robust feature set to advanced security to the developer portal to analytics, Apigee provides everything we need to run an enterprise-class API program.
Now that we have our API program up and running, integrations no longer take months. In some cases, it’s just a matter of minutes or seconds. Customers can simply look at our documentation on how to invoke APIs and begin consuming data in seconds. We started with this new model in our three largest markets: North America, the United Kingdom, and Brazil. Later, we rolled it out to Singapore and Australia, while deploying an on-premises platform for some of our North American business units that needed to provide their APIs internally only. Next, we went to EMEA. At this point, we’ve deployed Apigee company-wide, giving us a flexible deployment model that maintains a centralized platform and processes.
We continue to evangelize the program today, and we recently conducted a workshop with the Apigee team to train our EMEA business unit and get them onboarded to the platform. They were able to start developing API proxies right away, and they’re set to go into production with as many as nine of them. We also went live with three developer portals, which we call API hubs, in North America, the United Kingdom, and Brazil. As we expand, we don’t want to keep building separate developer portals for each region, because then we’d have too many. Instead, we plan to combine them into a single global developer portal that lets users select the geographies of interest and see the relevant information.
Experian continues to evolve the types of products and services we offer. Thanks to Apigee, we have the flexibility, security, and technology to keep innovating and providing value to our business and our customers.
Source: Google Cloud Platform

OpenShift 4.2 Disconnected Install

In a previous blog, it was announced that Red Hat is making the OpenShift nightly builds available to everyone. This gives users a chance to test upcoming features before their general availability. One of the features planned for OpenShift 4.2 is the ability to perform a “disconnected” or “air gapped” install, allowing you to install in an environment without access to the Internet or outside world.
NOTE: Nightly builds are unsupported and are for testing purposes only!
In this blog I will be going over how to perform a disconnected install in a lab environment. I will also give an overview of my environment, how to mirror the needed images, and any other tips and tricks I’ve learned along the way.
Environment Overview
In my environment, I have two networks. One network is completely disconnected and has no access to the Internet. The other network is connected to the Internet and has full access. I will use a bastion host that has access to both networks. This bastion host will perform the following functions.

Registry server (where I will mirror the content)
Apache web server (where I will store installation artifacts)
Installation host (where I will be performing the installation from)

Here is a high-level overview of the environment I’ll be working on.

In my environment, I have already set up DNS, DHCP, and other ancillary services for my network. Also, it’s important to get familiar with the OpenShift 4 prerequisites before attempting an install.
Doing a disconnected install can be challenging, so I recommend trying a fully connected OpenShift 4 install first to familiarize yourself with the install process (as they are quite similar).
Registry Set Up
You can use your own registry or build one from scratch. I used the following steps to build one from scratch. Since I’ll be using a container for my registry, and Apache for my webserver, I will need podman and httpd on my host.
yum -y install podman httpd httpd-tools

Create the directories you’ll need to run the registry. These directories will be mounted in the container running the registry.
mkdir -p /opt/registry/{auth,certs,data}

Next, generate an SSL certificate for the registry.  This can, optionally, be self-signed if you don’t have an existing, trusted, certificate authority. I’ll be using registry.ocp4.example.com as the hostname of my registry. Make sure your hostname is in DNS and resolves to the correct IP.
cd /opt/registry/certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt
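
Optionally, if your OpenSSL is 1.1.1 or newer, you can bake a subjectAltName into the self-signed certificate, since many modern TLS clients validate the SAN rather than the CN. Here is a hypothetical non-interactive variant; the -subj and -addext values are assumptions for this example:

```shell
# Non-interactive self-signed cert with a SAN (requires OpenSSL 1.1.1+ for -addext).
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout domain.key -x509 -days 365 -out domain.crt \
    -subj "/CN=registry.ocp4.example.com" \
    -addext "subjectAltName = DNS:registry.ocp4.example.com"
# Inspect the result; the SAN should appear in the extension list.
openssl x509 -in domain.crt -noout -text | grep 'DNS:'
```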

Generate a username and password (must use bcrypt formatted passwords), for access to your registry.
htpasswd -bBc /opt/registry/auth/htpasswd dummy dummy

Make sure to open port 5000 on your host, as this is the default port for the registry. Since I am using Apache to stage the files I need for installation, I will open port 80 as well.
firewall-cmd --add-port=5000/tcp --zone=internal --permanent
firewall-cmd --add-port=5000/tcp --zone=public   --permanent
firewall-cmd --add-service=http  --permanent
firewall-cmd --reload

Now you’re ready to run the container. Here I specify the directories I want to mount inside the container. I also specify I want to run on port 5000. I recommend you put this in a shell script for ease of starting.
podman run --name poc-registry -p 5000:5000 \
  -v /opt/registry/data:/var/lib/registry:z \
  -v /opt/registry/auth:/auth:z \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
  -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v /opt/registry/certs:/certs:z \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  docker.io/library/registry:2

Verify connectivity to your registry with curl. Provide it the username and password you created.
curl -u dummy:dummy -k https://registry.ocp4.example.com:5000/v2/_catalog

Note: this should return an empty repository list: {"repositories":[]}

If you have issues connecting, try stopping the container.
podman stop poc-registry

Once it’s down, you can start it back up using the same podman run command as before.
Obtaining Artifacts
You will need the preview builds for 4.2 in order to do a disconnected install. Specifically, you will need the client binaries along with the install artifacts. These can be found at the dev preview links provided below.

Client Binaries
Install Artifacts

Download the binaries and any installation artifacts you may need for the installation. The file names will differ depending on when you choose to download the preview builds (they get updated often).
You can inspect the nightly release notes and extract the build number from there. I did this with the curl command.
export BUILDNUMBER=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/release.txt | grep 'Name:' | awk '{print $NF}')
echo ${BUILDNUMBER}
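
As a sketch of what that pipeline extracts, here it is run against a hypothetical release.txt fragment (the Name line mirrors the release notes format; the sha256 digest is elided):

```shell
# Hypothetical release.txt contents; grep/awk pull the last field of the Name line.
RELEASE_TXT='Name:      4.2.0-0.nightly-2019-08-29-062233
Pull From: quay.io/openshift-release-dev/ocp-release-nightly@sha256:...'
echo "${RELEASE_TXT}" | grep 'Name:' | awk '{print $NF}'
# → 4.2.0-0.nightly-2019-08-29-062233
```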

To download the client binaries to your staging server/area (in my case, it’s the registry server itself) use curl:
curl -o /var/www/html/openshift-client-linux-${BUILDNUMBER}.tar.gz \
  https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/openshift-client-linux-${BUILDNUMBER}.tar.gz

curl -o /var/www/html/openshift-install-linux-${BUILDNUMBER}.tar.gz \
  https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/openshift-install-linux-${BUILDNUMBER}.tar.gz

You’ll also need these clients on your registry host, so feel free to un-tar them now.
tar -xzf /var/www/html/openshift-client-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/
tar -xzf /var/www/html/openshift-install-linux-${BUILDNUMBER}.tar.gz -C /usr/local/bin/

Depending on the kind of install you're doing, you'll need one of the following sets of files.
PXE Install
If you’re doing a PXE install, you’ll need the BIOS, initramfs, and the kernel files. For example:
curl -o /var/www/html/rhcos-${BUILDNUMBER}-metal-bios.raw.gz \
  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-metal-bios.raw.gz

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer-initramfs.img \
  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer-initramfs.img

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer-kernel \
  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer-kernel

Once you have staged these, copy them over into your environment. Once they are on your PXE install server and your configuration is updated, you can proceed to mirror your images.
ISO Install
If you're doing an ISO install, you'll still need the BIOS file, but only the ISO (instead of the PXE artifacts) for the install.
curl -o /var/www/html/rhcos-${BUILDNUMBER}-metal-bios.raw.gz \
  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-metal-bios.raw.gz

curl -o /var/www/html/rhcos-${BUILDNUMBER}-installer.iso \
  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest/rhcos-${BUILDNUMBER}-installer.iso

Once these are staged, copy them over to where you’ll need them for the installation. The BIOS file will need to be on a web server accessible to the OpenShift nodes. The ISO can be burned onto a disk/usb drive or mounted via your virtualization platform.
Once that’s done, you can proceed to mirror the container images.
Mirroring Images
The installation images will need to be mirrored in order to complete the installation. Before you begin you need to make sure you have the following in place.

An internal registry to mirror the images to (like the one I just built)

You’ll also need the certificate of this registry
The username/password for access

A pull secret obtained at https://cloud.redhat.com/openshift/install/pre-release

I downloaded mine and saved it as ~/pull-secret.json

The oc and openshift-install CLI tools installed
The jq command is also helpful

First, you will need to get the information to mirror. This information can be obtained via the dev-preview release notes. With this information, I constructed the following environment variables.
export OCP_RELEASE="4.2.0-0.nightly-2019-08-29-062233"
export AIRGAP_REG='registry.ocp4.example.com:5000'
export AIRGAP_REPO='ocp4/openshift4'
export UPSTREAM_REPO='openshift-release-dev'   ## or 'openshift'
export AIRGAP_SECRET_JSON="${HOME}/pull-secret-2.json"
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}
export RELEASE_NAME="ocp-release-nightly"

Here is how to construct these environment variables from the release notes:

OCP_RELEASE – Can be obtained from the Release Metadata.Version section of the release page.
AIRGAP_REG – Your registry's hostname with port.
AIRGAP_REPO – The name of the repo in your registry (you don't have to create it beforehand).
UPSTREAM_REPO – Can be obtained from the Pull From section of the release page.
AIRGAP_SECRET_JSON – The path to your pull secret with your registry's information (which we will create later).
OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE – Set so the installer knows to use your registry.
RELEASE_NAME – Can be obtained from the Pull From section of the release page.
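
To see how the override fits together, the release image reference is just the mirror registry, repository, and release tag concatenated:

```shell
# Reconstruct the override the installer will use instead of the upstream release image.
OCP_RELEASE="4.2.0-0.nightly-2019-08-29-062233"
AIRGAP_REG='registry.ocp4.example.com:5000'
AIRGAP_REPO='ocp4/openshift4'
echo "${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}"
# → registry.ocp4.example.com:5000/ocp4/openshift4:4.2.0-0.nightly-2019-08-29-062233
```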

Before you can mirror the images, you’ll need to add the authentication for your registry to your pull secret file (the one you got from try.openshift.com). This will look something like this:
{
  "registry.ocp4.example.com:5000": {
    "auth": "ZHVtbXk6ZHVtbXk=",
    "email": "noemail@localhost"
  }
}

The base64 value is the registry's credentials in username:password format. For example, with a username and password of dummy, I created the base64 by running:
echo -n 'dummy:dummy' | base64 -w0
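
As a quick sanity check (using the same dummy credentials), the value should round-trip cleanly through base64:

```shell
# Encode the registry credentials for the pull secret, then decode to verify.
AUTH=$(echo -n 'dummy:dummy' | base64 -w0)
echo "${AUTH}"                 # → ZHVtbXk6ZHVtbXk=
echo -n "${AUTH}" | base64 -d  # → dummy:dummy
```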

You can add your registry’s information to your pull secret by using jq and the pull secret you downloaded (thus creating a new pull secret file with your registry’s information).
jq '.auths += {"registry.ocp4.example.com:5000": {"auth": "ZHVtbXk6ZHVtbXk=","email": "noemail@localhost"}}' < ~/pull-secret.json > ~/pull-secret-2.json

Also, if you haven't done so already, make sure you trust the self-signed certificate. This is needed for oc to be able to log in to your registry during the mirror process.
cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

With this in place, you can mirror the images with the following command.
oc adm release mirror -a ${AIRGAP_SECRET_JSON} \
  --from=quay.io/${UPSTREAM_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
  --to-release-image=${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE} \
  --to=${AIRGAP_REG}/${AIRGAP_REPO}

Part of the output will have an example imageContentSources to put in your install-config.yaml file. It’ll look something like this.
imageContentSources:
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Save this output, as you'll need it later.
Installation
At this point you can proceed with the normal installation procedure, with the main difference being what you specify in the install-config.yaml file when you create the ignition configs.
Please refer to the official documentation for specific installation information. You’re most likely doing a Bare Metal install, so my previous blog would be helpful to look over as well.
When creating an install-config.yaml file, you need to specify additional parameters like the example below.
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '{"auths":{"registry.ocp4.example.com:5000": {"auth": "ZHVtbXk6ZHVtbXk=","email": "noemail@localhost"}}}'
sshKey: 'ssh-rsa .... root@helper'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.ocp4.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Some things to note here:

pullSecret – Only the information about your registry is needed.
sshKey – The contents of your id_rsa.pub file (or another SSH public key that you want to use).
additionalTrustBundle – The certificate file for your registry (i.e., the output of cat domain.crt).
imageContentSources – The local mirror and the original source expected in the image metadata (images from any other source are considered tampered with).

You will also need to export the OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE environment variable. This tells OpenShift which image to use for bootstrapping. This is in the form of ${AIRGAP_REG}/${AIRGAP_REPO}:${OCP_RELEASE}. It looked like this in my environment:
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.ocp4.example.com:5000/ocp4/openshift4:4.2.0-0.nightly-2019-08-29-062233

I created my install-config.yaml under my ~/ocp4 install directory. At this point you can create your Ignition configs as you would normally.
# openshift-install create ignition-configs --dir=/root/ocp4
INFO Consuming "Install Config" from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Found override for ReleaseImage. Please be warned, this is not advised

Please note that it warns you about overriding the image and that, for the 4.2 dev preview, the masters are schedulable.

At this point, you can proceed with the installation as you would normally.
Troubleshooting
A good thing to do during the bootstrapping process is to login to the bootstrap server and tail the journal logs as the bootstrapping process progresses. Many errors or misconfigurations can be seen immediately when tailing this log.
[core@bootstrap ~]$ journalctl -b -f -u bootkube.service

There are times when you might have to approve the worker/master nodes' CSRs. You can check pending CSRs with the oc get csr command. This is important to check, since the cluster operators won't finish without worker nodes added. You can approve all the pending CSRs in one shot with the following command.
[user@bastion ~]$ oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve
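
To see what that pipeline is doing, here is the awk step run against simulated oc get csr --no-headers output (the CSR names are made up for this example):

```shell
# Simulated 'oc get csr --no-headers' output; only the first column matters.
CSR_LIST='csr-8vnps  5m  system:serviceaccount:openshift-machine-config-operator:node-bootstrapper  Pending
csr-q2kxh  3m  system:serviceaccount:openshift-machine-config-operator:node-bootstrapper  Pending'
# awk extracts the CSR names, which xargs then feeds to 'oc adm certificate approve'.
echo "${CSR_LIST}" | awk '{print $1}'
# → csr-8vnps
# → csr-q2kxh
```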

After the bootstrap process is done, it’s helpful to see your cluster operators running. You can do this with the oc get co command. It’s helpful to have this in a watch in a separate window.
[user@bastion ~]$ watch oc get co

The openshift-install command waits for the image-registry and ingress operators to come up before it considers the install a success, and these are the two most common sticking points. Make sure you've approved the CSRs for your machines and that you've configured storage for your image registry. The commands I've provided should help you navigate any issues you may have.
Conclusion
In this blog, I went over how you can prepare for a disconnected install and how to perform a disconnected install using the nightly developer preview of OpenShift 4. Disconnected installs were a highly popular request for OpenShift 4, and we are excited to bring you a preview build.
Nightly builds are a great way to preview what’s up and coming with OpenShift, so you can test things before the GA release. We are excited to bring you this capability and hope that you find it useful. If you have any questions or comments, feel free to use the comment section below!
Source: OpenShift

Simplify modernization and build cloud-native with open source technologies

Cloud-native technologies are the new normal for application development. Cloud-native approaches create business value through increased delivery velocity and reduced operational costs, and together these support emerging business opportunities.
Advancements in application development have focused on net new applications. We have seen that existing applications that cannot easily move to the cloud have been left on traditional technologies. As a result, less than 20 percent of enterprise workloads are deployed to the cloud, according to an IBM-commissioned study by McKinsey & Company.
At IBM, we see open source as a foundation for the new hybrid multicloud world, and our recent acquisition of Red Hat underscores our long commitment to open technologies.
Open source allows consistency and choice
Key open source technologies – containers, Kubernetes, Istio, Knative and others – together define the new hybrid multicloud platform, providing consistency and choice across any cloud provider. These technologies allow developers to build applications to support enterprise workloads using a common technology base with flexible vendor choices. They establish freedom for enterprises to deploy applications across public, private and hybrid cloud platforms.
Kubernetes provides a container orchestration layer that consistently manages workloads. Developers have full freedom of choice on languages, runtimes, and frameworks, while Kubernetes maintains a consistent operational platform across diverse technologies. This approach provides a basis for microservices-based container applications as well as existing enterprise applications.
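As a hypothetical illustration of this consistency, the same minimal Deployment manifest (the names and image below are made up) can be applied unchanged to any conformant Kubernetes cluster, public or private:

```yaml
# Example Deployment: Kubernetes schedules and manages the replicas identically
# regardless of which cloud or on-premises cluster it runs on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0
        ports:
        - containerPort: 8080
```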
New IBM open source project accelerates the cloud journey
In 2017, IBM began modernizing our software portfolio into containers and Kubernetes, and optimized more than 100 products for Red Hat OpenShift. In addition to our own journey, we’ve learned a lot about modernization from our clients and partners. Together, we’ve migrated or modernized more than 100,000 workloads.
With the new IBM Cloud Pak for Applications, we’ve encoded our experience into a set of technologies to accelerate the journey to cloud. Built on open source technologies, IBM Cloud Pak for Applications delivers tools, technologies and platforms designed to bring WebSphere workloads to any cloud through Kabanero.io and Red Hat Runtimes.
IBM Cloud Pak for Applications provides a rich set of open source technologies and functions that allow enterprises to secure and curate their favorite frameworks and runtimes, including those using Java, Open Liberty, SPRING BOOT® with Tomcat®, JBoss®, Node.js®, Vert.x and more. IBM Cloud Pak for Applications performs vulnerability scanning on all open source frameworks and runtimes to prevent security issues. All IBM Cloud Paks are supported by IBM and contain Docker-certified middleware.
Move WebSphere applications to any cloud
For existing applications, modernization tools in the new IBM Cloud Pak chart a path to modernize WebSphere applications into a fully open source stack. IBM Cloud Pak for Applications analyzes applications and provides a modernization plan specific for each application. Many WebSphere applications can be migrated to containers with automation and without code changes.
In the end, traditional applications are ready to deploy to any cloud — from OpenShift on IBM Cloud, an existing infrastructure, or to your cloud of choice.
IBM Cloud Pak for Applications delivers the open technology platform for the future and enables businesses to address the 80 percent of enterprise workloads that have yet to move to cloud, according to the report.
Learn more about IBM Cloud Pak for Applications and register to join us for the inaugural IBM Application Modernization Technical Conference 2019, 24-25 September 2019 in Chicago, IL, United States. Experience two days of in-depth technical sessions for developers, administrators, and architects, and hear from top subject matter experts from our labs, IBM Business Partners, and customers.
Source: Thoughts on Cloud