The Speed Read with Quentin Hardy: Keep it simple

Editor’s note: The Speed Read is a column authored by Google Cloud’s Quentin Hardy, examining important themes and hot topics in cloud computing. It previously existed as an email newsletter. Today, we’re thrilled to welcome it to its new home on the Cloud blog.

Some things in modern enterprise technology are a good deal harder to understand than they need to be. It is a great moment when we’re able to change that. Take cloud services, for example. Microservices and service meshes are cloud technologies that will be important in your business life, and they are not all that strange. In fact, the mere concept of them should be familiar. They are really, really powerful simplifiers that make innovation at scale possible. Welcome to The Speed Read, “positive simplifier” edition.

As with many things in business, the secret to understanding these cloud computing technologies and techniques lies in establishing how their rise relates to supply and demand, the most fundamental elements of any market. With business technology, it’s also good to search for ways that an expensive and cumbersome process is being automated to hasten the delivery of value.

But what does this have to do with cloud services? At the first technology level, microservices are parts of a larger software application that can be decoupled from the whole and updated without having to break out and then redeploy the whole thing. Service meshes control how these parts interact, both with each other and with other services. These complex tools exist with a single great business purpose in mind: to create reusable efficiency.

Think of each microservice as a tool from a toolbox. At one time, tools were custom made, and were used to custom make machines. For the most part, these machines were relatively simple, because they were single devices, no two alike, and that limited the building and the fixing of them. Then, with standardized measurement and industrial expansion, we got precision-made machine tools, capable of much more reuse and wider deployment. Those standardized machine tools were more complex than their predecessors. And they enabled a boom in standardized reuse, a simpler model overall.

The same goes for microservices: the piece parts are often more complex, but overall the process allows for standardized reuse through the management of service meshes. The “tool” in this case is software that carries out a function, such as handling online payments or creating security verifications.

Extrapolating from this analogy, does the boom in microservices tell us that the computational equivalent of the Industrial Revolution is underway? Is this an indication of standardization that makes it vastly easier to create objects and experiences, revolutionizes cost models, and shifts industries and fortunes? Without getting too grandiose about it, yeah.

You see it around you, in the creation of companies that come out of nowhere to invent and capture big markets, or in workforce transformations that allow work and product creation to be decoupled, much the way microservices are decoupled from larger applications. Since change is easier, you see it in the importance of data in determining how things are consumed, and in rapidly reconfiguring how things are made and what is offered. Perhaps most important for readers like you, you see it in the way businesses are re-evaluating how they apportion and manage work. Nothing weird about that; we do it all the time.

It is understandable how the complexity of tech generates anxiety among many of its most promising consumers. Typically, a feature of business computing evolves from scarce and difficult knowledge. Its strength and utility make it powerful, often spreading faster than software developers can socialize it or the general public can learn it. Not that long ago, spreadsheets and email were weird too, for the same reasons. To move ahead, though, it’s important to recognize big, meaningful changes and abstract their meaning into something logical and familiar. At a granular level, microservices may be complex, but their function is very straightforward: standardize in order to clear space for innovation.
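To make the “tool from a toolbox” idea concrete, here is a minimal sketch of what one such tool might look like: a toy security-verification microservice. It is purely illustrative; Flask, the endpoint name, and the token check are our assumptions, not anything from a real payments or identity platform.

```python
# A minimal, hypothetical microservice: one small tool that does one job
# (verifying a security token) and exposes it over HTTP so other services
# can reuse it. Endpoint and field names are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tokens this toy service considers valid; a real service would verify
# a signature or call an identity provider instead.
KNOWN_TOKENS = {"alice-token", "bob-token"}

@app.route("/verify", methods=["POST"])
def verify():
    token = (request.get_json(silent=True) or {}).get("token", "")
    return jsonify({"valid": token in KNOWN_TOKENS})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the service does exactly one job behind a stable HTTP contract, it can be updated and redeployed on its own, while a service mesh handles how it and its callers find and talk to each other.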
Source: Google Cloud Platform

How Worldline puts APIs at the heart of payments services

Editor’s note: Today we hear from Worldline, a financial services organization that creates and operates digital platforms handling billions of critical transactions between companies, partners, and customers every year. In this post, Worldline head of alliances and partnerships Michaël Petiot and head of API platform support Tanja Foing explain how APIs and API management enable this €2.3 billion enterprise to offer its services to partners in a wide variety of industries.

Worldline is the European leader in the payment and transactional services industry, with activities organized around three axes: merchant services; financial services, including equensWorldline; and mobility and e-transactional services. In order to be more agile, we’re undergoing a transformation in how we work internally and with our partners, putting APIs at the heart of how we connect with everyone.

Leveraging APIs for third-party collaboration

Like most companies, Worldline collaborates more and more with third parties to deliver the products and services our customers expect. We want to move faster and open up our platforms to partners who can develop new use cases in payments and customer engagement. To meet evolving technology, business, and regulatory demands for connecting our ecosystem of partners and developers, we needed a robust API platform. It was especially important to us that third parties could connect easily and securely to our platform.

We chose Google Cloud’s Apigee API management platform as our company-wide standard. Initially, we leaned toward an open source tool, but Apigee won us over thanks to its complete feature set, available right out of the box. The Apigee security and analytics features are particularly important to us because of our collaboration with banking and fintech customers and partners.

Developing bespoke customer solutions

Our first three API use cases are digital banking, connected cars, and an internal developer platform.

Banks need their data to be properly categorized and highly secure, and Apigee gives us the tools to provide the right environment for them. Leveraging Apigee, our digital banking solution offers a dedicated developer portal for our customers in a separate environment, with its own architecture for accessing back-end services. With functionality ranging from trusted authentication to contract completion, payments, and contact management, Worldline digital banking customers can tap into APIs to interact with us at every stage.

An important trend in transport and logistics is the integration of real-time data with third parties. Our Connected Car offering is a white-label solution that provides APIs for a car manufacturer’s fleet of cars, enabling fleet owners to exchange data with their entire ecosystem. It is a relatively closed environment with a limited number of developers accessing it, and we expose these APIs via the Apigee gateway. We use Apigee analytics features to track how the APIs are used and how they’re performing, and then make changes as needed.

Our third use case is internal: we’re building a developer portal to make APIs easier to access and quicker to deploy.

Our partner ecosystem includes lessors, insurance companies, repair shops, logistics companies, and end users. Everyone benefits from advanced APIs for real-time, secure exchanges, combined with open-exchange protocols such as the Remote Fleet Management Systems standard (used by truck manufacturers), in order to provide the best service to customers.

We recently presented to the Worldline product management community how we can scale up to a large portfolio of API solutions using Apigee as an accelerator. The presentation was a success, and it illustrates how we can leverage the platform as a tool for driving innovation throughout Worldline, and throughout our growing ecosystem of automotive and financial services customers.
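To give a feel for what “connecting easily and securely” looks like from a partner’s side, here is a minimal, hypothetical sketch of calling an API exposed through an Apigee gateway. The base URL, path, and header name are invented for illustration; the actual key location and endpoints are defined by the proxy configuration and the developer portal, not by anything shown here.

```python
# A sketch of a partner calling an API fronted by an API gateway.
# The URL, path, and "x-apikey" header are hypothetical; where the key
# goes depends on how the gateway's key-verification policy is set up.
import requests

BASE_URL = "https://api.example-gateway.com/digital-banking/v1"  # hypothetical
API_KEY = "partner-api-key"  # a key issued via the developer portal

def get_payment_status(payment_id: str) -> dict:
    """Fetch the status of a payment through the gateway."""
    resp = requests.get(
        f"{BASE_URL}/payments/{payment_id}",
        headers={"x-apikey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_payment_status("pay-12345"))
```

The gateway sits between this caller and the back end, verifying the key, enforcing quotas, and recording the analytics data described above.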
Source: Google Cloud Platform

OpenShift Scale-CI: Part 1 – Evolution

If you’ve played around with Kubernetes or Red Hat OpenShift, an enterprise-ready version of Kubernetes for production environments, the following questions may have occurred to you:

How large can we scale our OpenShift-based applications in our environments?
What are the cluster limits? How can we plan our environment according to object limits?
How can we tune our cluster to get maximum performance?
What are the challenges of running and maintaining a large and/or dense cluster?
How can we make sure each OpenShift release is stable and performant, and satisfies our requirements in our own environments?

We, the OpenShift Scalability team at Red Hat, created an automation pipeline and tooling called OpenShift Scale-CI to help answer all of these questions. OpenShift Scale-CI automates the installation, configuration, and running of various performance and scale tests on OpenShift across multiple cloud providers.
Motivation behind building Scale-CI
Two areas led us to build Scale-CI:

Providing a green signal for every OpenShift product release and for all product changes that need to support scale, and shipping our Scalability and Performance guide with the product.
Onboarding workloads to see how well they perform at scales of thousands of nodes per cluster.

It is important to find out at what point any system starts to slow down or fall apart completely. This can happen for various reasons:

Your cluster’s master API server and kubelet QPS and burst values are set too low.
The etcd backend quota size might be too low for large and dense clusters.
The number of objects running on the cluster is beyond the supported cluster limits (see the sketch after this list for one way to check).
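As a rough illustration of that last point, here is a minimal sketch (not part of Scale-CI itself) that compares object counts in a cluster against documented limits, using the official Kubernetes Python client. The limit values below are placeholders; the actual numbers come from the Scalability and Performance guide for your release.

```python
# Compare live object counts against documented cluster limits.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

PLACEHOLDER_LIMITS = {"namespaces": 10000, "pods": 150000}  # illustrative only

def main():
    config.load_kube_config()  # uses your current kubeconfig context
    core = client.CoreV1Api()

    counts = {
        "namespaces": len(core.list_namespace().items),
        "pods": len(core.list_pod_for_all_namespaces().items),
    }
    for kind, count in counts.items():
        limit = PLACEHOLDER_LIMITS[kind]
        status = "OK" if count < limit else "OVER LIMIT"
        print(f"{kind}: {count}/{limit} {status}")

if __name__ == "__main__":
    main()
```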

This motivated us to scale test each and every release of OpenShift and to ship the Scalability and Performance guide with each release, helping users plan and tune their environments accordingly.
We also needed to make efficient use of lab hardware, and of hourly billed compute and storage in the public cloud, which can get very expensive at large scale; automation does a better job at that optimization than humans do at the endless wash, rinse, and repeat cycle of CI-based testing. This led us to create automation and tooling that works on any cloud provider and runs performance and scale tests covering various components of OpenShift: Kubelet, control plane, SDN, monitoring with Prometheus, router, logging, cluster limits, and storage can all be tested with the click of a button.
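As a toy illustration of the kind of control-plane test the pipeline automates, the following sketch creates a batch of namespaces, times the operation, and cleans up. Real Scale-CI workloads are far more thorough; the object counts and the single metric here are simplified assumptions for demonstration.

```python
# Toy control-plane scale test: time how fast the API server can
# create a batch of namespaces, then clean up.
import time
from kubernetes import client, config

NAMESPACE_COUNT = 50  # real scale tests push into the thousands

def main():
    config.load_kube_config()
    core = client.CoreV1Api()
    names = [f"scale-test-{i}" for i in range(NAMESPACE_COUNT)]

    start = time.time()
    for name in names:
        body = client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
        core.create_namespace(body)
    elapsed = time.time() - start
    print(f"Created {NAMESPACE_COUNT} namespaces in {elapsed:.1f}s "
          f"({NAMESPACE_COUNT / elapsed:.1f}/s)")

    for name in names:  # clean up after the run
        core.delete_namespace(name)

if __name__ == "__main__":
    main()
```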
We used to spend weeks running tests and capturing data. Scale-CI speeds up the process, saving lots of time and money on compute and storage resources. Most importantly, it gives us time to work on creative tasks like tooling and designing new scale tests to add to the framework.
Not every team or user has the luxury of building automation and tooling, or of access to the hardware, to test how well their application or OpenShift component works at scales above 2,000 nodes. As part of the Performance and Scalability team, we have access to a huge amount of hardware, and this motivated us to build Scale-CI in such a way that anyone can use it and participate in the community around it. Users can submit a pull request on GitHub with a set of templates to get their workload onboarded into the pipeline. Onboarded workloads are automatically tested at scale on an OpenShift cluster built with the latest and greatest builds. It doesn’t hurt that the entire process is managed and maintained by the OpenShift Scalability team.
You can find us online in the openshift-scale GitHub organization; any feedback or contributions are most welcome. Keep an eye out for our next blog, OpenShift Scale-CI Deep Dive: Part 2, which will cover the various Scale-CI components, including the workloads, pipeline, and tooling we use to test OpenShift at scale.
Source: OpenShift