Simplify the digital enterprise journey with hybrid multicloud

Organizations are adopting hybrid multicloud environments to accelerate their journey to becoming digital enterprises. IT leaders face the challenge of demystifying the hybrid multicloud environment to unlock the true value of digital transformation.
According to the IBM Institute for Business Value, by 2021, 90 percent of organizations already on cloud plan to adopt multiple hybrid clouds. However, only 30 percent have the required procedures and tools in place, and just 30 percent have a multicloud orchestrator or other multicloud management platform.
Build for variety, velocity and volume
Moving IT functions to the cloud can give organizations many benefits, but orchestration and automation across multiple technologies, cloud environments and service providers can be complex and expensive. To ensure success, enterprises must seek answers to the following:

How to build cloud native and DevOps capabilities in a safe, secure and cost-effective manner
How to avoid vendor lock-in and leverage the benefits of open architectures
How to orchestrate across multiple technologies and clouds
How to quickly build a virtualized or containerized platform for faster application development and deployment
How to enable development teams to provision or deprovision environments efficiently
How to build infrastructure services for a multicloud environment

To build on the success of cloud initiatives in the digital era, businesses today must build their cloud for variety, velocity and volume.

Variety – Build the cloud to manage heterogeneous technologies, both container and virtual workloads, and the topologies of different cloud deployment models.
Velocity – Build the cloud to manage the speed of change and reduce the time needed to incorporate changes across multiple cloud endpoints.
Volume – Build the cloud to scale capacity as required without disrupting efficiency.

Address hybrid multicloud orchestration challenges
Businesses are grappling with cloud orchestration challenges owing to complexities of multiple technologies, cloud platforms and service provider environments. IBM Cloud Deployment Services (ICDS) offers a multicloud orchestration and automation platform for both virtualization and container workloads powered by enterprise-ready standard blueprints.
IBM Cloud Deployment Services is technology agnostic, supports open architecture and can help businesses:

Automate delivery of infrastructure, applications and custom IT services
Deploy application workloads across on-premises and off-premises environments (for example, public and private clouds)
Integrate with all leading public cloud providers, such as Amazon Web Services (AWS) and Microsoft Azure
Integrate with ServiceNow, resiliency offerings, managed security services and more
Choose between single-tenant and multitenant architectures
Use Red Hat OpenShift Container Platform in addition to IBM and VMware orchestration capabilities
Design and build solution blueprints
Consume build and deployment services (with required hardware and software licenses and delivery services of the platform)

IBM Cloud Deployment Services simplifies the journey to cloud by building the cloud for variety, velocity and volume. To learn more, visit us at https://ibm.co/2He4rDJ.
Source: Thoughts on Cloud

Going to VMworld? Learn to help data scientists and application developers accelerate AI/ML initiatives

IT experts from around the world are headed to VMworld 2019 in San Francisco to learn how they can leverage emerging technologies from VMware and ecosystem partners (such as Red Hat and NVIDIA) to help achieve digital transformation for their organizations. Artificial intelligence (AI) and machine learning (ML) are a major technology trend, with Red Hat OpenShift customers like HCA Healthcare, BMW, Emirates NBD and several more offering differentiated value to their customers. Investments are ramping up across many industries to develop intelligent digital services that help improve customer satisfaction and gain competitive business advantage. Early deployment trends indicate that AI/ML solution architectures span edge, data center and public clouds.
If you are part of the IT group, you may have already been asked to support the data scientists and software developers in your organization that are driving the development of machine learning models and the associated intelligent applications. 
Data scientists play a vital role in the success of AI/ML projects. They are primarily responsible for ML model selection, training, and testing. They also need to collaborate with data engineers and software developers to make sure the source data is credible and the machine learning models are successfully deployed in application development processes.
Here are some of the key challenges faced by data scientists as they strive to efficiently build the ML models: 

Selecting & deploying the right ML tooling or framework
Complexities and time required to train, test, and select the ML model providing the highest prediction accuracy
Slow execution of computationally intensive ML modeling tasks due to a lack of sufficiently powerful IT infrastructure
Dependency on IT to provision and manage infrastructure 
Collaboration with other key contributors, such as data engineers and application developers

If I were a data scientist, I would want a self-service, cloud-like experience for my ML projects. This experience should give me access to a rich set of ML modeling frameworks, data, and computational resources across edge, data center, and public clouds. I should be able to share work and collaborate with my colleagues, and deliver my work into production with the agility and repeatability needed to achieve business value.
This is where containers and Kubernetes-based hybrid cloud solutions come into play: Red Hat OpenShift Container Platform with NVIDIA GPUs on VMware vSphere can help extend the value of your vSphere investments and drive the mainstream adoption of AI/ML-powered intelligent apps.
There are several benefits that can be achieved with this solution, including:

Agility across the ML pipeline by automating the installation, provisioning, and autoscaling of container-based ML models and frameworks. NVIDIA GPUs can help speed up the massive computational tasks required to train, test, and fine-tune ML models without having to buy more compute and storage resources, with Red Hat OpenShift serving as the container- and Kubernetes-based “self-service cloud” (see the sketch just after this list).
Portability and flexibility for ML-powered apps to be developed and delivered across data center, edge, and public clouds. OpenShift also provides the flexibility to offer ML-as-a-service to apps without having to embed the ML models directly in the application code for production use.
Efficient operations and lifecycle management for ML-powered intelligent applications with automation of the CI/CD process, enabling more efficient collaboration and helping to boost productivity.
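As a concrete illustration of the GPU point above: on a cluster where the NVIDIA device plugin is installed, a workload asks for a GPU through the nvidia.com/gpu extended resource. This is a generic Kubernetes sketch with a hypothetical image name, not the exact OpenShift-on-vSphere configuration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: train-model
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: registry.example.com/ml/trainer:latest  # hypothetical training image
        resources:
          limits:
            nvidia.com/gpu: 1  # schedule onto a node with a free GPU

The scheduler places the pod only on nodes advertising the nvidia.com/gpu resource, which is what lets a shared cluster run GPU-hungry training jobs alongside ordinary workloads.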

While you are at VMworld, don’t miss your chance to learn more on this topic. Come check out the mini-theatre session from Red Hat’s Andrew Sullivan at the NVIDIA booth in the expo center at 12:45pm on Monday, August 26th, 2019.
Please also check out the Red Hat AI/ML blog, as well as our announcement with NVIDIA, to learn more about the strategic partnership between Red Hat and NVIDIA to accelerate and scale AI/ML across the hybrid cloud.
Source: OpenShift

From the Enterprisers Project: What Are Kubernetes Secrets?

The Enterprisers Project always has terrific information that can help you and your team communicate those complex cloud computing concepts to the C-levels. This past week, they published an excellent article describing what exactly secrets are in Kubernetes, how to manage them and what security benefits they provide. From the article:
Kubernetes Secrets defined, three ways
Let’s add a few more clear-cut definitions of Secrets to your arsenal that should help you either get up to speed as necessary or explain the concept to others on the team.
1. “As applications run in Kubernetes, apps need credentials to interact with the surrounding infrastructure or another application. Those credentials are kept in Kubernetes, and your applications can use a credential by specifying the name of a Secret as opposed to having the application keep the contents of the Secret.” –Eric Han, VP of product management at Portworx. (Han was also the first Kubernetes product manager when it was still an internal system at Google.)
2. “Kubernetes Secrets provide a means to protect sensitive information in a way that limits accidental exposure and provides flexibility in how the information is utilized. Secrets are only accessible to Pods if they are explicitly part of a mounted volume or at the time when the Kubelet is pulling the image to be used for the Pod. This prevents the need to store sensitive information in a Pod image, which mitigates the risk that data is compromised and makes it easier to vary things like credentials, cryptographic keys, etc. for different pods.” –Jonathan Katz, director of customer success & communications at Crunchy Data
3. “Kubernetes Secrets are a way to store and distribute sensitive information – think passwords, or an SSL certificate – that are used by applications in your Kubernetes cluster. Importantly, the declarative nature of Kubernetes definitions allows third-party solutions to be integrated with the Secret management.” –Gary Duan, CTO at NeuVector
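To make these definitions concrete, here is a minimal sketch (all names and values hypothetical) of a Secret and a Pod that consumes it through an environment variable:

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials
    type: Opaque
    stringData:              # convenience field; the API server stores values base64-encoded
      username: app
      password: s3cr3t
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      containers:
      - name: app
        image: nginx         # stand-in image
        env:
        - name: DB_PASSWORD  # injected at runtime, never baked into the image
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password

Applying both manifests with kubectl apply hands the container its credential at runtime without the value ever appearing in the image, which is exactly the exposure-limiting behavior described above.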
Read the full article here.
Source: OpenShift

The state of hybrid cloud and multicloud enterprise adoption: 451 analyst interview

In today’s enterprise, hybrid cloud and multicloud have become an industry standard in the IT landscape. Enterprises are quickly modernizing their IT infrastructures to embrace the elevated agility that comes with hybrid multicloud and adopting a unified approach to maximize cloud performance. In the interviews below, Melanie Posey, Research Vice President for 451 Research’s Voice of the Enterprise and Cloud Transformation Programs, gives an overview of the current state, benefits, challenges and best practices of hybrid cloud and multicloud.
More than half of all enterprises have chosen hybrid cloud and multicloud as their ideal enterprise IT architecture, and about 63 percent of organizations are using two or more separate clouds. This allows organizations to keep established IT in place while also benefiting from the flexibility and agility of new cloud capabilities. In the video below, Posey details the industry-wide migration toward hybrid cloud and multicloud adoption and explains why it is non-negotiable for the IT of the future.
Watch the video: 451 Research: Multicloud and Hybrid Cloud: What is the state of enterprise adoption?
Hybrid and multicloud benefits for enterprise
With a solid hybrid and multicloud architecture and strategy, enterprises can significantly lower costs and enjoy the increased performance that comes from a unified IT environment, specifically with applications and workloads. One critical benefit of hybrid multicloud is the marked improvement to data integration and application delivery, because applications have different specifications and may depend upon other workloads or data sets to function properly. Another benefit of hybrid multicloud environments is the seamless link between environments, and, as a result, between applications. Watch the video below for a deeper look.
Watch the video: 451 Research: Multicloud and Hybrid Cloud: What benefits can organizations achieve?
The key challenges organizations encounter in hybrid and multicloud environments
Today’s IT world is becoming increasingly complex, and the challenges posed by shifting to hybrid and multicloud environments can seem daunting. How do enterprises ensure that their cloud is secure, integrated and well-orchestrated? Organizations today deal with complex IT environments that have data and workloads scattered across clouds, and applications that range from traditional monolithic to cloud-native to containerized. New emerging technologies like Internet of Things (IoT), which collects data from various endpoints, add another layer of complexity. Watch the following video to see why enterprises need solutions to ensure that cloud works as advertised—with true agility, flexibility and scalability, without sacrificing security.
Watch the video: 451 Research: Multicloud and Hybrid Cloud: What key challenges do organizations encounter?
Best practices for cloud adoption success
The best approach for multicloud and hybrid cloud environments depends on a number of factors including security, governance and orchestration. The best approach to workload deployment is on an application-by-application basis, and enterprises have to consider unique security requirements, application characteristics and dependencies in addition to the users, targets and audiences. Modernizing the IT infrastructure that applications run on is also an important piece of the puzzle. Instead of simply moving applications from one environment to another, modernizing infrastructure and application architecture can create better access, control and communication between workloads.
A successful digital transformation means that your mission-critical applications must be running and fully integrated with newer cloud-native applications, utilizing technology like machine learning and AI to streamline processes and deliver real business value. In the video below, Posey details how to build a fully harmonized and integrated approach to ensure communication among applications.
Watch the video: 451 Research: Multicloud and Hybrid Cloud: What are best practices for success?
Additional insights and improved strategies are required to reap the full range of benefits of a true hybrid IT environment. Learn more about hybrid and multicloud strategy for the enterprise.
Source: Thoughts on Cloud

Automating Low Code App Deployment on Red Hat OpenShift with the Joget Operator

This is a guest post by Julian Khoo, VP Product Development and Co-Founder at Joget Inc.  Julian has almost 20 years of experience in the IT industry, specifically in enterprise software development. He has been involved in the development of various products and platforms in application development, workflow management, content management, collaboration and e-commerce.  As […]
Source: OpenShift

How to modernize the enterprise’s integration landscape in the hybrid cloud era

Application integrations are key to streamlining an enterprise's business processes and enabling data movement across systems. Be it real-time payments in the banking industry, distributing vehicle inventory information from dealerships to an original equipment manufacturer (OEM), retrieving product information while servicing a phone, or supporting the checkout feature of an ecommerce site, there are multiple integrations between the systems that support these processes.
As part of digital transformation initiatives, enterprises are adopting cloud computing to take advantage of the optimization and flexibility that cloud platforms and providers bring to the table. Application workloads are moving to cloud platforms, which often results in a hybrid cloud target state for enterprises. Public clouds (such as IBM Cloud, AWS, Azure or Google Cloud), SaaS solutions, private clouds, in-house container platforms and traditional data centers are all part of this mix.
A hybrid cloud target introduces the following new macro-level integration patterns:

Intra-cloud: Integrations between applications in the same cloud platform
Inter-cloud: Integrations between applications deployed in different cloud platforms, as well as between applications in the cloud and SaaS solutions
Cloud to on-premises: Integrations between core systems of record (SORs) that remain on-premises and applications deployed on a cloud, through integration platforms like an enterprise service bus (ESB)

These newer aspects of integration often get ignored when defining the application transformation roadmap to cloud, but ignoring these distinctions upfront often introduces added complexity later in the cloud journey.
Transforming the integration landscape should be an essential part of any enterprise's cloud journey. The focus should be on finding and removing redundant integrations, modernizing integrations by adopting modern API and event-driven architectures, and setting up an integration platform that is best suited to the hybrid cloud: a hybrid integration platform (HIP). Per Gartner, 65 percent of large organizations will have implemented a hybrid integration platform by 2022 to drive their digital transformation.
Evolution of the enterprise integration landscape
Integration landscapes have evolved over the years as newer architectures and technologies came into play. Point-to-point (P2P) integrations, enterprise application integration (EAI) middleware and service-oriented architecture (SOA) integrations were all part of this evolutionary journey, and many enterprises have integrations realized through one or more of these patterns in their landscape. Modern architectures like APIs/microservices and event-driven architectures are ideal for the hybrid cloud target. Enterprises are aiming to reach a higher level of maturity and realize an optimized integration landscape by adopting these newer architecture patterns.

How to define a modernization roadmap for the integration landscape in three steps
A holistic view of the current integration landscape, as well as its complexity, is critical to define a transformation roadmap that is in line with the applications transformation journey to cloud. IBM recommends a three-step approach to define the enterprise integration transformation roadmap.

Assess and analyze. Collect information about the company's existing integrations, along with details about source and target applications, for analysis. Understand the overall integration architecture and any security and compliance needs. Use the data to assess the criticality and usage of the integrations and determine their target state. Recommended target integration patterns (REST API, SOA service, event-driven, message-driven, FTP, P2P and so on), consolidation possibilities, and other key inputs for defining the target integration state come out of this analysis.
Envision the target state. The output from the earlier step will help define the target integration architecture and the deployment model. While adopting newer architecture patterns like microservices and event-driven architecture is a key consideration for the target architecture, ensure any enterprise-specific integration requirements are part of this step too. A reference architecture is usually the best starting point for creating a customized target architecture; the IBM Hybrid Integration Architecture published in the Architecture Center is a good example of a reference architecture that can be adopted.
Define the integration portfolio roadmap. With the target architecture, implementation patterns and consolidated list of integrations in place, the next step is to create a wave plan to execute the modernization. Confirm the business case in this step before kickstarting modernization. Identify a minimum viable product (MVP) and realize it to surface any risks before beginning larger modernization programs. The MVP could include a few integrations that cover the critical implementation patterns.

Now that the plan to modernize the integration landscape is in place, the next important step is to establish the hybrid integration platform aligned to the target architecture. There are many hybrid integration platform solutions on the market that enterprises can adopt; IBM Cloud Pak for Integration is a robust platform that can help realize a hybrid integration platform and drive enterprise digital transformation in an accelerated fashion.
IBM has the end-to-end capability to help enterprises modernize their integration landscape for hybrid cloud. Visit IBM Cloud Integration and IBM Services for Cloud to learn more about how IBM can optimize methods, tools and assets to help in your integration modernization journey.
Source: Thoughts on Cloud

KaaS vs PaaS: Mirantis Kubernetes-as-a-Service vs OpenShift

Many companies who use Kubernetes today do it using Red Hat’s OpenShift distribution, so one question we often hear from users asking about the Mirantis Kubernetes as a Service beta is “How is KaaS different from OpenShift?”
The short answer is that OpenShift is a Platform as a Service (PaaS) and Mirantis KaaS is…well…a KaaS. These two concepts are different. Let me explain.
OpenShift is a Platform as a Service, or PaaS, that just happens to use Kubernetes as its underlying substrate. But just because a PaaS uses K8s, that doesn’t automatically make it a KaaS.
PaaS enables developers to easily create applications without having to worry about setting up the underlying platform components. So if a developer wants a database, they just check a box, or make an API call, or use whatever mechanism that the PaaS provides to get one. They don’t have to install the database, they can just use it. OpenShift, Cloud Foundry, and Heroku are examples of a PaaS. 
KaaS systems, on the other hand, assume that the K8s API is the highest level of abstraction exposed to the developer. As such, its focus is to make it easy to create, scale, and manage many distributed Kubernetes clusters, whether on premises or across multiple cloud environments. 
OpenShift has implemented some KaaS functionality in its new version, OpenShift 4, but most folks using it are still on the more PaaS-y OpenShift 3 version, so to get very specific about how KaaS differs from PaaS, let’s compare KaaS with the most commonly used OpenShift (version 3.x), with respect to key use cases and implementation approaches.  
K8s Cluster Upgrades
Because the emphasis of a PaaS is on application development vs. Kubernetes cluster lifecycle management, the process to upgrade an OpenShift PaaS instance (and its embedded K8s version) is not necessarily straightforward. It is generally assumed that a PaaS is used by developers, but is operated by a trained operations team. Because OpenShift consists of multiple frameworks on top of Kubernetes, upgrading an OpenShift cluster is a bespoke procedure consisting of a series of manual Ansible script runs.
Conversely, under the hood Mirantis KaaS partially relies on KubeSpray – a set of validated Ansible scripts maintained by the K8s community – to perform work required for a cluster upgrade. However, from the end-user standpoint, all of the complexity is hidden behind Mirantis’ implementation of ClusterAPI – a Kubernetes-native API standard for cluster lifecycle management – to perform any procedure (such as an upgrade) on a cluster. As such, from an end user standpoint, a cluster upgrade is a single ClusterAPI-compliant API call. The process is similar for other KaaS implementations, such as Rancher and Platform9.
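To illustrate what a single ClusterAPI-compliant call can look like, the upstream Cluster API convention is to bump the Kubernetes version field on a MachineDeployment and let the controllers roll the nodes. This is a generic upstream sketch, not Mirantis' exact schema; the API group/version and field layout vary across Cluster API releases:

    apiVersion: cluster.x-k8s.io/v1alpha2  # version string varies by Cluster API release
    kind: MachineDeployment
    metadata:
      name: workers
    spec:
      replicas: 3
      template:
        spec:
          version: v1.15.3  # changing this one field triggers a rolling node upgrade

An operator, or a KaaS UI acting on the user's behalf, applies the change with kubectl apply, and the controllers replace old nodes with upgraded ones.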
Ability to Scale Kubernetes Clusters
Continuing on the cluster lifecycle management thread, another area in which OpenShift and Kubernetes as a Service differ is when it comes to scaling Kubernetes clusters.
For OpenShift, the process consists of four basic steps, which are similar to those used for installing OpenShift in the first place:

Create VMs with Red Hat Enterprise Linux installed on them
Add access to Red Hat subscription manager repositories on those nodes
Add these nodes to the OpenShift deployment Ansible inventory (listing their IP addresses / DNS entries) and tweak any other settings as necessary
Run an Ansible playbook and wait for execution to complete

Note that these are steps only an operator can take.
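For flavor, step 3 typically amounts to editing the openshift-ansible inventory along these lines, and step 4 to running the scaleup playbook. Hostnames here are hypothetical, and exact group names and playbook paths vary across 3.x minor releases:

    # inventory snippet (illustrative)
    [OSEv3:children]
    masters
    nodes
    new_nodes

    [new_nodes]
    node03.example.com openshift_node_group_name='node-config-compute'

    # then, from the cluster's Ansible host:
    # ansible-playbook -i inventory \
    #     /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml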
If developers need the ability to scale their own clusters, they will need a KaaS environment, which enables them to scale a cluster with an API call or through the UI; OpenShift is fine as long as developers are not going to need to scale the cluster on which they’re doing their work.
Multi-tenancy
KaaS and OpenShift achieve multi-tenancy in two different ways, and much of that difference has to do with the inherent logic and resulting architecture behind the two solutions. 
An OpenShift PaaS instance is a single instance of K8s running in a single location. 
A KaaS instance is many K8s instances running across many locations, but centrally controlled.  
Because in OpenShift all developers use a single Kubernetes instance, OpenShift has created the additional concept of a “project”, which uses Kubernetes namespaces to isolate users from each others’ resources within a single K8s cluster. Keep in mind, however, that when K8s was initially built, it wasn’t designed to be inherently multi-tenant, and its architecture mostly assumes that a single cluster represents a single tenant. While there are many efforts in the community to implement multi-tenancy, there isn’t an agreement on a single “correct” approach, and upstream is far from an ideal solution here.  
KaaS, on the other hand, doesn’t add any additional hacks to implement multi-tenancy, because resources can be isolated by cluster, with multiple clusters per user or per project. (Note that this is also the approach currently implemented in public cloud implementations such as Google’s GKE.)
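For reference, the namespace-per-tenant pattern that OpenShift projects formalize boils down to plain Kubernetes objects like these (names and limits hypothetical):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "8"      # cap the tenant's aggregate CPU requests
        requests.memory: 16Gi
        pods: "50"

Namespaces give name scoping and quota enforcement, but tenants still share one control plane and kernel, which is why per-cluster isolation is the stronger tenancy boundary.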
IAM Authentication
While both KaaS and OpenShift can integrate with authentication systems such as LDAP and Active Directory, the key difference is in just what you’re controlling access TO. 
OpenShift PaaS (as with most other PaaSes) is a self-contained, all-inclusive and opinionated implementation of all things necessary for developers to build an app. As such, it comes with its own pre-integrated artifact repository, continuous delivery engine and even an SDN. Therefore, once a user is authenticated into an OpenShift instance, there is limited need to interact with any services outside. An OpenShift end user is not going to be deploying K8s embedded in OpenShift on more nodes in the cloud and, therefore, doesn’t need access for it. Similarly, they won’t be using an external artifact repository (such as JFrog) or CD engine (such as Spinnaker or Argo).  Because of this, in OpenShift users authenticate via LDAP or Active Directory to get access to their own “projects” mapped to Kubernetes namespaces, but if they also want to include access to other external resources, such as artifact repositories or external storage devices, operators can configure that access via a bespoke manual procedure. 
Conversely, KaaS generally assumes that an enterprise will have a diverse ecosystem of external “best-of-breed” systems that a Kubernetes cluster (and consequently its end users) needs to interface with, so the Mirantis KaaS beta is implemented as a Single Sign On system using the community KeyCloak project. This approach enables a single user to have access to multiple clusters, artifact repositories, physical machines, and so on, through a single configuration. 
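Under the hood, pointing a cluster at an external identity provider such as Keycloak is typically done with the Kubernetes API server's standard OIDC flags. A rough sketch, with a hypothetical realm URL:

    # issuer URL below is hypothetical; Keycloak realms expose /auth/realms/<realm>
    kube-apiserver \
      --oidc-issuer-url=https://keycloak.example.com/auth/realms/k8s \
      --oidc-client-id=kubernetes \
      --oidc-username-claim=preferred_username \
      --oidc-groups-claim=groups

A KaaS applies equivalent settings to every cluster it creates, which is what makes one Keycloak login valid across many clusters and external tools.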
Plug-Ins: CNI, Artifact Repository, CI/CD 
As outlined in the section above, PaaS and KaaS differ greatly in their philosophy as to what's in and out of scope. While it's not entirely black and white, OpenShift mostly follows the “all-inclusive, full-stack” approach in which everything is defined, whereas KaaS mostly follows the “batteries included, but optional” approach, in which almost everything can be defined, but can also be changed if necessary. Let's take a closer look to make things more concrete.
For artifact repositories, OpenShift comes with its own, fairly sophisticated system, which is an augmented implementation of Docker Registry. It gets installed automatically via the same Ansible playbook as the rest of OpenShift, and is designed to interact with a single instance / K8s cluster. This option is best when development is centralized to a single (OpenShift-based) Kubernetes cluster. With some minimal work, it is also possible to bridge multiple OpenShift instances to a third party artifact repository such as JFrog.
The Mirantis KaaS beta does not implement its own registry, but assumes that a user already has a registry in mind. If not, KaaS offers the option to co-deploy a Harbor registry with the Kubernetes cluster. Note that in KaaS, this registry is not typically tied to a single cluster, so you can use a single instance of Harbor to store artifacts from across multiple, geo-distributed K8s clusters. This method is a good choice when development spans multiple Kubernetes clusters.
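Whichever registry you choose, the cross-cluster workflow looks the same. A sketch with a hypothetical Harbor hostname and project:

    # build once, push to the shared registry
    docker tag myapp:1.2.0 harbor.example.com/team-a/myapp:1.2.0
    docker push harbor.example.com/team-a/myapp:1.2.0

    # any cluster with pull access can then run the same artifact
    kubectl run myapp --image=harbor.example.com/team-a/myapp:1.2.0

Because the registry lives outside any single cluster, geo-distributed clusters all pull from one source of truth for artifacts.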
For CI/CD, OpenShift implements its own system based on Jenkins, enabling developers to build applications with a proper CI/CD workflow. Mirantis KaaS doesn’t specify a CI/CD engine, and is designed to integrate with whatever the customer is already using. 
Application Catalog
Perhaps the most significant thing OpenShift has that KaaS doesn’t is the Application Catalog.  After all, that’s what makes it a PaaS in the first place! This catalog comes populated with over a hundred services, such as MySQL, MongoDB, JBoss, RabbitMQ, and so on, but there’s one very important caveat to keep in mind: these pre-packaged services are there just to get your developers started. They don’t come with support, most won’t be upgraded, and most important to keep in mind, they’re not meant to be used in production.  
That doesn’t mean the catalog can’t be useful, but most large enterprises end up turning off most of these services and re-populating it with their approved, validated, and tested set of services.
Multi-Cloud K8s Management
Just as the application catalog is key to OpenShift’s identity as a PaaS, the ability to deploy multiple Kubernetes clusters is primary to KaaS. 
While OpenShift focuses on making life easier for developers, KaaS focuses on making it simple for operators (and by extension, the developers they support) by exposing a self-service, developer-facing interface for deploying, scaling and upgrading multiple K8s clusters, distributed across public and private clouds. 
Summary
Bringing it all together side-by-side, we get the following picture: 

                           | Typical KaaS           | OpenShift 3.x
K8s Cluster Upgrade        | Automated              | Manual
K8s Cluster Scaling        | ClusterAPI or UI       | Bespoke Ansible playbooks
Multi-tenancy              | Per K8s cluster        | Per K8s namespace
IAM Authentication         | Keycloak               | Proprietary
Artifact Repository        | Harbor plug-in option  | Docker Registry implementation with external plug-in option
Access to CNI              | Calico                 | OpenShift SDN
Built-in CI/CD             | No                     | Jenkins
Application catalog        | No                     | Yes, 100+ apps
Multi-Cloud K8s Management | OpenStack, AWS         | No

Ultimately, the choice between PaaS and KaaS is going to depend on where you want to draw the line between devs and ops in your organization. If you (like most enterprises today) need substantial, strict guardrails and training wheels for your dev teams, with an experienced third-party vendor providing and managing the complete catalog of application building blocks, you should opt for OpenShift and the PaaS route. On the other hand, if you believe that the Kubernetes API is that line of separation, you should go with KaaS, particularly if you are engaged in multi-cluster or multi-cloud development.
Is Kubernetes as a Service the right solution for you? Request access to the private beta of Mirantis KaaS and find out.
Source: Mirantis

Asteria builds liquidity management solution for SMBs on IBM Cloud

Small businesses operating with lean teams face many challenges, including being distracted by paperwork and planning. These tasks are necessary, but they shift focus away from the actual work of the business. Company owners might worry about questions such as:

What products will sell next month?
When will I receive my payments?
Do I have enough capital on hand to pay salaries?

These questions are especially relevant for working-capital-intensive businesses, such as construction companies or any company with a good deal of invoice handling, because they need a high volume of production to provide an adequate return on investment.
To simplify business administration and financial planning processes for small and medium businesses (SMBs), Asteria developed Smart Cashflow, a cloud-based liquidity management solution that we sell to financial institutions and they, in turn, integrate it into their online offerings for entrepreneurs.
Smart Cashflow employs APIs to fetch information from accounting software (including taxes and salaries), records from invoicing software, and data from the bank account. We apply machine learning and artificial intelligence (AI) to analyze this data and forecast cash flow.
The dashboard uses color schemes to help people visually understand business spending, how income is derived and their account balance. Clients can see their future cash flow as well as get suggestions about banking products and visualize how they might impact liquidity management.

Creating the solution for liquidity management
Our team was introduced to IBM through an accelerator program in the Nordics. The IBM Global Entrepreneur Program helped give us a platform to start the liquidity management solution and grow the business.
We are a cloud-native software development company, not a hosting company. We never considered any other option for hosting and data storage than the cloud, and we couldn’t have created such a robust solution for our clients without the IBM Cloud.
We need the scalability that the IBM Cloud offers, and, in turn, we use the IBM Cloud Object Storage service to scale data storage. We run everything in a Kubernetes cluster using a microservices architecture and Docker containers for version management. The IBM Cloud Kubernetes Service framework lends flexibility and enables us to work from remote locations and across multiple hybrid clouds.
Asteria believes strongly in hybrid clouds. It is especially important for companies like ours who service financial institutions because a hybrid cloud architecture better enables us to maintain geographic control of data and keep it secure.
Additionally, the reputation that IBM has as a company is important for working with financial institutions: the IBM reputation for trustworthiness is unparalleled.
Preparing for future financial services with AI
AI already plays a significant role in the financial industry, and that significance will continue to grow. In the next five to 10 years, we anticipate seeing changes created by AI in everything from the public sector and small business owners to governments and large enterprises. Everyone will start exploring AI opportunities to make better decisions, to make more money and to save more time, ultimately with the goal of making people's lives easier.
As we look to this AI-driven future, we’re evaluating additional machine learning and AI technologies to further our offerings.
Read the case study for more details.
Source: Thoughts on Cloud

Self-Serviced, End-to-End Encryption for Kubernetes Applications, Part 2:  a Practical Example

Introduction In part one of this series, we saw three approaches to fully automate the provisioning of certificates and create end-to-end encryption. Based on feedback from the community suggesting the post was a bit too theoretical and not immediately actionable, this article will illustrate a practical example. You can see a recording of the demo […]
Source: OpenShift

Community Blog Round Up 13 August 2019

Making Host and OpenStack iSCSI devices play nice together by geguileo

OpenStack services assume that they are the sole owners of the iSCSI connections to the iSCSI portal-targets generated by the Cinder driver, and that is fine 98% of the time, but what happens when we also want to have other non-OpenStack iSCSI volumes from that same storage system present on boot? In OpenStack the OS-Brick […]

Read more at https://gorka.eguileor.com/host-iscsi-devices/
Service Assurance on small OpenShift Cluster by mrunge

This article is intended to give an overview on how to test the

Read more at http://www.matthias-runge.de/2019/07/09/Service-Assurance-on-ocp/
Notes on testing a tripleo-common mistral patch by JohnLikesOpenStack

I recently ran into bug 1834094 and wanted to test the proposed fix. These are my notes if I have to do this again.

Read more at http://blog.johnlikesopenstack.com/2019/07/notes-on-testing-tripleo-common-mistral.html
Developer workflow with TripleO by Emilien

In this post we’ll see how one can use TripleO for developing & testing changes into OpenStack Python-based projects (e.g. Keystone).

Read more at https://my1.fr/blog/developer-workflow-with-tripleo/
Avoid rebase hell: squashing without rebasing by OddBit

You’re working on a pull request. You’ve been working on a pull request for a while, and due to lack of sleep or inebriation you’ve been merging changes into your feature branch rather than rebasing. You now have a pull request that looks like this (I’ve marked merge commits with the text [merge]):

Read more at https://blog.oddbit.com/post/2019-06-17-avoid-rebase-hell-squashing-wi/
Git Etiquette: Commit messages and pull requests by OddBit

Always work on a branch (never commit on master) When working with an upstream codebase, always make your changes on a feature branch rather than your local master branch. This will make it easier to keep your local master branch current with respect to upstream, and can help avoid situations in which you accidentally overwrite your local changes or introduce unnecessary merge commits into your history.

Read more at https://blog.oddbit.com/post/2019-06-14-git-etiquette-commit-messages/
Running Keystone with Docker Compose by OddBit

In this article, we will look at what is necessary to run OpenStack’s Keystone service (and the requisite database server) in containers using Docker Compose.

Read more at https://blog.oddbit.com/post/2019-06-07-running-keystone-with-docker-c/
The Kubernetes in a box project by Carlos Camacho

Implementing cloud computing solutions that run in hybrid environments might be the final solution when it comes to finding the best benefit/cost ratio.

Read more at https://www.anstack.com/blog/2019/05/21/kubebox.html
Running Relax-and-Recover to save your OpenStack deployment by Carlos Camacho

ReaR is a pretty impressive disaster recovery solution for Linux. Relax-and-Recover creates both a bootable rescue image and a backup of the associated files you choose.

Read more at https://www.anstack.com/blog/2019/05/20/relax-and-recover-backups.html
Source: RDO