BPAY: Uncovering new business opportunities with APIs

Editor’s note: Today we hear from Jon White and Angela Donohoe from BPAY Group. BPAY Group is best known for BPAY, the leading electronic bill payment system in Australia, handling one-third of the market. Learn how BPAY Group is positioning the organization for the future by using APIs to streamline workflows for existing customers and new businesses.

BPAY has been a leader in the bill payment industry in Australia for 22 years and provides a secure, fast, and convenient way to connect individuals, businesses, and banks to help people stay on top of their bills.

One of the reasons that BPAY is the preferred bill payment service for so many Australians is our commitment to human-centered design. We’re continuously talking with customers and looking at ways that we can deliver better experiences, products, and services, such as peer-to-peer payments. During these conversations, we noticed some ways that our processes were causing friction for existing or potential customers.

For example, we traditionally used a batch processing system to handle requests between billing companies and banks. But that could cause headaches for some customers, as an error in even one request could cause the whole batch to be rejected. Plus, many “neobanks” (new types of digital-only banks) wanted to work with real-time transactions instead of batch processes, which take longer to complete.

We realized that APIs had the potential to solve many of the challenges impacting customers while opening the doors for future product and business development. We developed a few customer-facing APIs and tested them in closed betas. This experiment went far better than we expected, and we realized that there was a huge appetite for APIs among our biller customers.

While we had developed many APIs for internal systems, developing APIs that were easy for our customers to consume was a new challenge. We needed to move away from our home-grown API development approach and make our API environment more powerful, versatile, and easier to use. The API experts at The Singularity worked with us to develop a strong API strategy. We decided that we would need to support our new strategy with a scalable API management platform.

After a rigorous search, we landed on the Apigee API Management Platform. Apigee was the only solution that met all our technology and business requirements. With Apigee, we have a solid foundation for APIs that will help us deliver more value for all customers.

Creating a custom development environment
BPAY is a trusted brand in Australia, so it was very important to us that we maintain our reputation for excellent customer experiences. When setting up our developer portal, we started with a closed pilot and used developer feedback to make the portal as convenient and simple to use as possible. The Apigee developer portal has many built-in features to help us customize experiences, and if we run into roadblocks, the Apigee team at Google listens and helps us create the custom experience we want.

The developer portal has already proven to be extremely popular. In its first month live, we registered 104 developers in the sandbox environment and 10 developers in the production environment. That was before we even started marketing our developer portal, so we expect those numbers to rise quickly.

Breaking new ground with APIs
We’ve already released four foundational APIs, with a goal of eventually releasing dozens. Our APIs are helping us create smoother experiences for customers. We mentioned that when processing a batch of payment files, one mistake could cause the entire batch to get rejected. Our APIs now enable businesses to validate all payment information before submitting a batch file, dramatically reducing the chances of errors. They can even use our APIs to automatically generate batch files in the right format for different banks.

While APIs improve service for current customers, they also open the doors for new areas of business. Buy now, pay later (BNPL) services, which enable customers to spread out payments across weeks or months, are already popular in retail spaces. After releasing our first APIs, we connected with two BNPL billing services. These companies use our APIs to validate customers’ bill payment information and then pay the bill in full on behalf of customers. This was a completely new use case for us, one that could not have been implemented without our APIs.

The payment service NoahPay also adopted our APIs to validate payment information and let customers pay bills using funds from their WeChat accounts. This is an exciting new market for us, as it’s one of the first examples of how we can connect to international digital wallets through our new APIs. It’s also a great way to introduce users of WeChat, a messaging app used by more than 1 billion people in China, to the BPAY brand.

Planning for the future of bill pay
We have big plans for APIs in the future, and Apigee helps make these plans a reality. We plan to establish a generous freemium monetization model that will allow customers to make up to 200,000 API calls for free each month, with tiered payment plans above that. This will enable us to open the doors for smaller organizations while providing optimal support for larger businesses and banks that might need to make millions of calls. Having powerful end-to-end monetization features built into Apigee means that we can process monetized transactions with ease.

Built-in reporting functionality will also help us make sure that we understand the market’s need for APIs and always provide our customers with valuable services and support. Apigee greatly streamlines creating self-service API environments. Even as we grow our business, our internal teams will be able to continue providing excellent customer service without needing extra staff to answer questions, help with integration support, and constantly check API security. APIs are the way of the future, and Apigee prepares us to meet the challenges that come along with it.
Source: Google Cloud Platform

Simplifying OpenShift Case Information Gathering Workflow: Must-Gather Operator

Introduction
Collecting debugging information from a large set of nodes (such as when creating SOS reports) can be a time-consuming task to perform manually. Additionally, in the context of Red Hat OpenShift 4.x and Kubernetes, it is considered a bad practice to SSH into a node and perform debugging actions. To better accomplish this type of operation in OpenShift Container Platform 4, there is a new command, oc adm must-gather, which collects debugging information across the entire cluster (nodes and control plane). More detailed information on the must-gather command can be found in the platform documentation.
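For reference, a typical invocation looks like the following; the destination paths and archive name here are illustrative, not mandated by the tool.

```shell
# Collect cluster-wide debugging data using the default must-gather image;
# without --dest-dir, output lands in a local must-gather.local.* directory.
oc adm must-gather

# Optionally choose an explicit destination directory.
oc adm must-gather --dest-dir=/tmp/debug-data

# Compress the results before attaching them to a support case.
tar -czf must-gather.tar.gz -C /tmp/debug-data .
```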
While using the must-gather command is fairly straightforward, the full end-to-end support workflow can be time-consuming. This process involves issuing the command, waiting for the associated tasks to complete, and then uploading the resulting information to the Red Hat case management system.
A way to further streamline the process is to automate these actions.
Must-Gather Operator
The must-gather operator streamlines running the must-gather command and uploading the results to the Red Hat case management system. The must-gather operator is intended to be used only by cluster administrators, as it requires elevated permissions on the cluster. A must-gather run can be started by creating a MustGather custom resource (CR) similar to the following:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: MustGather
metadata:
  name: example
spec:
  caseID: 'XXXXXXXX'
  caseManagementAccountSecretRef:
    name: case-management-creds
  serviceAccountRef:
    name: must-gather-admin

Within the MustGather CR, three parameters can be defined:

caseID: the Red Hat Support case to which the resulting output will be attached.
caseManagementAccountSecretRef: the secret containing the credentials needed to log in and upload files to the Red Hat case management system.
serviceAccountRef: the service account with the cluster-admin role that is used to run the must-gather command. Running as cluster-admin is a must-gather requirement.
When this CR is created, the operator creates a job that runs the must-gather operations and uploads the resulting information as a compressed file.
The must-gather operator watches only the namespace in which it is deployed, which makes it easier for a cluster administrator to configure limited access to that namespace. This is recommended because that namespace must contain a service account with cluster-admin privileges, as noted above, and therefore needs to be properly protected.
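Putting this together, a run can be kicked off and observed with standard oc commands; the manifest file name and namespace below are illustrative.

```shell
# Create the MustGather resource from a manifest like the one shown above
# (file name and namespace here are illustrative).
oc create -f mustgather.yaml -n must-gather-operator

# Watch the job that the operator spawns in response.
oc get jobs -n must-gather-operator -w

# Inspect the MustGather resource status once the job completes.
oc get mustgather example -n must-gather-operator -o yaml
```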
Running Additional Must-Gather Images
The must-gather command supports the option of running multiple must-gather-compatible images that can be used for collecting additional information. This option is typically limited to OpenShift add-ons, such as KubeVirt and OpenShift Container Storage (OCS). The must-gather operator supports this functionality by allowing these images to be specified as in the following example:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: MustGather
metadata:
  name: example-more-images
spec:
  caseID: 'XXXXXXX'
  caseManagementAccountSecretRef:
    name: case-management-creds
  serviceAccountRef:
    name: must-gather-admin
  mustGatherImages:
  - quay.io/kubevirt/must-gather:latest
  - quay.io/ocs-dev/ocs-must-gather

As you can see, the mustGatherImages property is an array of strings representing images. When it is added to a MustGather CR, all of the specified images are run in addition to the default must-gather image.
Installation
The must-gather operator can be installed via OperatorHub or with a Helm chart.
The project GitHub repository contains detailed information on how to install the must-gather operator.
Conclusions
Being able to provide diagnostic information in a consistent fashion makes it easier for Red Hat support to aid in the resolution of issues. A more streamlined and automated information-collecting process makes it more likely that customers can provide timely debugging information to Red Hat support. The must-gather operator aims to help in this space.
The post Simplifying OpenShift Case Information Gathering Workflow: Must-Gather Operator appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Amazon QuickSight now offers new analytics functions as well as support for Athena workgroups and Presto connectivity over VPC

New mathematical functions for performing complex statistical calculations are now available in Amazon QuickSight. These include logarithms (log()) and natural logarithms (ln()), as well as exponential (exp()), square root (sqrt()), and absolute value (abs()) functions. In addition, QuickSight now supports level-aware aggregations for the RANK, DENSE_RANK, and PERCENT_RANK functions, which lets you apply these functions to your business metrics regardless of the filters and aggregations used to build your visuals. More information is available here.
Source: aws.amazon.com

Connect to your VPC and managed Redis from App Engine and Cloud Functions

Do you wish you could access resources in your Virtual Private Cloud (VPC) from serverless applications running on App Engine or Cloud Functions? Now you can, with the new Serverless VPC Access service.

Available now, Serverless VPC Access lets you access virtual machines, Cloud Memorystore Redis instances, and other VPC resources from both Cloud Functions and App Engine (standard environment), with support for Cloud Run coming soon.

How it works
App Engine and Cloud Functions services exist on a different logical network from Compute Engine, where VPCs run. Under the covers, Serverless VPC Access connectors bridge these networks. These resources are fully managed by Google Cloud, requiring no management on your part. The connectors also provide complete customer- and project-level isolation for consistent bandwidth and security.

Serverless VPC Access connectors allow you to choose a minimum and maximum bandwidth for the connection, ranging from 200–1,000 Mbps. The capacity of the connector is scaled to meet the needs of your service, up to the configured maximum (note that you can obtain higher maximum throughput if needed by reaching out to your account representative).

While Serverless VPC Access allows connections to resources in a VPC, it does not place your App Engine service or Cloud Functions inside the VPC. You should still shield App Engine services from public internet access via firewall rules, and secure Cloud Functions via IAM. Also note that a Serverless VPC Access connector can only operate with a single VPC network; support for Shared VPC is coming in 2020.

You can, however, share a single connector between multiple apps and functions, provided that they are in the same region and that the Serverless VPC Access connector was created in the same region as the app or function that uses it.
Using Serverless VPC Access
You can provision and use a Serverless VPC Access connector alongside an existing VPC network by using the Cloud SDK command line. First create the connector; then, for App Engine, reference it in your app.yaml and redeploy your application, or, for Cloud Functions, set the appropriate permissions and redeploy the function with the vpc-connector flag. Once you’ve created and configured a VPC connector for an app or function, you can access VMs and Redis instances via their private network IP addresses (e.g., 10.0.0.123).

Get started
Serverless VPC Access is currently available in Iowa, South Carolina, Belgium, London, and Tokyo, with more regions in the works. To learn more about using Serverless VPC Access connectors, check out the documentation and the usage guides for Cloud Functions and App Engine.
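As a rough sketch of the steps above — the connector name, project, region, IP range, function name, and runtime are all illustrative placeholders; consult the gcloud reference for the authoritative flags:

```shell
# 1) Create a connector attached to an existing VPC network
#    (connector name, region, and IP range are illustrative).
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28

# 2) App Engine: reference the connector in app.yaml, then redeploy.
#    app.yaml fragment:
#      vpc_access_connector:
#        name: projects/MY_PROJECT/locations/us-central1/connectors/my-connector
gcloud app deploy

# 3) Cloud Functions: redeploy the function with the --vpc-connector flag.
gcloud functions deploy my-function \
  --runtime python37 \
  --trigger-http \
  --vpc-connector my-connector
```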
Source: Google Cloud Platform