How to Automatically Scale Low Code Apps with Joget and JBoss EAP on OpenShift

This is a guest post by Julian Khoo, VP Product Development and Co-Founder at Joget Inc. Julian has almost 20 years of experience in the IT industry, specifically in enterprise software development. He has been involved in the development of various products and platforms in application development, workflow management, content management, collaboration and e-commerce.
The post How to Automatically Scale Low Code Apps with Joget and JBoss EAP on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Continuous delivery and the DevOps approach to modernization

Businesses are working to increase agility, deliver innovative and engaging experiences to clients, and stay ahead of competition. Increasingly, companies are modernizing business applications to make these business goals a reality.
Modernizing applications generally involves three transformations: cloud-native architecture, continuous delivery and infrastructure automation. These typically occur concurrently, but each has distinct characteristics. For example, the cloud-native architecture journey transforms organizations from monolithic applications to containerized microservices applications, where lightweight data collectors help teams track their success.
Continuous delivery is also critical to business transformation success. Teams may be responding to industry pressure to keep up with competitors, often cloud-native companies, who are pushing updates out faster. The bar is set increasingly higher as users become accustomed to applications that are highly reliable and frequently updated.
To reach continuous delivery, teams must implement two additional transformations: adopting agile development and a DevOps approach. These approaches are practiced and honed over time as teams constantly refine and discover what works best for them.
How to start incorporating continuous delivery
The best way to begin the continuous delivery journey is to adopt an agile approach to development. However, agile by itself is not sufficient; incorporating a DevOps approach adds another important component to this journey.
DevOps involves increasing collaboration and implementing a tighter feedback loop between the development and operations teams. This enhanced connectivity between the teams translates to increased speed of delivery, along with increased reliability and stability in production.
Another component of DevOps is the Site Reliability Engineering (SRE) approach, which involves automating as many repetitive tasks as possible and spending at least 50 percent of team time improving application reliability, instead of simply maintaining it.
Once a team has organized around DevOps and SRE principles, they can continue refining their process and culture to increase the frequency and reliability of updates.
How to overcome continuous delivery challenges

Ensure seamless communication between developers and operations teams. Development teams often instrument their own tools, such as lightweight and open source solutions, but these don’t necessarily translate into production environments. Particularly for continuous delivery, development teams need to ensure their code is ready for production, and operations teams need to trust that this is the case. If the two teams use separate tools, visibility is limited, which can result in delays. IBM Cloud App Management offers a solution by making it easy for developers to add lightweight data collectors, which also work seamlessly in the production environment, to their code. Now when a code change occurs, the operations team can see it and developers can quickly grasp how their code is working in production. This feedback loop is critical to accelerating delivery.
Identify code bugs early with lightweight data collectors. Another impediment to accelerating application delivery is that bugs are often not discovered during development. This can lead to costly fixes once the code is deployed. Again, by easily instrumenting lightweight data collectors early in the development process, the dev team can find and fix bugs before going into production. This is essential for the continuous delivery process.
Automate application processes. Seeing how changes correlate to performance can be another challenge. IBM UrbanCode Deploy automates application deployment by promoting code through the pipeline. It can also roll back or uninstall applications. Automating these processes is a key component of continuous delivery. To make it even more useful, companies can connect UrbanCode events directly into IBM Cloud App Management to see how a deployment correlates with application performance.
Support multiple development pipelines. To successfully implement continuous delivery, it’s important that enterprises support multiple development pipelines for dev, staging, test and production. This enables work to continue in each pipeline without affecting the others. With this agile approach, the same data collectors run in your services whether they are running in dev, staging, test or production.
Continuously monitor key metrics. Lastly, it can be difficult to quickly find the root cause of a problem if and when it does occur. This is due, in no small part, to the wide range of specialized technologies that make up the distributed network of microservices comprising an application. There’s no time to have the experts in each technology look through their logs and determine if their service is causing the problem. Dashboard monitoring tools can help. IBM Cloud App Management monitors the four SRE golden signals, which are latency, errors, traffic and saturation. Teams can immediately see these four golden signals, so root causes can be quickly identified and fixed.
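For teams starting out, the four golden signals can be derived from plain request records before dedicated tooling is in place. The sketch below is illustrative only; the record format, window and capacity figures are assumptions, not an IBM Cloud App Management API:

```python
# Illustrative only: derive the four SRE golden signals from raw request
# records. The record format and capacity figure are assumptions.
requests = [
    (120, 200), (95, 200), (310, 500), (88, 200),
    (450, 200), (102, 200), (77, 503), (130, 200),
]  # (latency in ms, HTTP status) observed in one window
window_seconds = 60
capacity_rps = 1.0  # assumed capacity: 1 request per second

def golden_signals(records, window, capacity):
    """Compute latency, errors, traffic and saturation from request records."""
    latencies = sorted(r[0] for r in records)
    errors = sum(1 for r in records if r[1] >= 500)
    traffic = len(records) / window  # requests per second
    return {
        "latency_p50_ms": latencies[len(latencies) // 2],
        "errors_pct": 100.0 * errors / len(records),
        "traffic_rps": traffic,
        "saturation_pct": 100.0 * traffic / capacity,
    }

signals = golden_signals(requests, window_seconds, capacity_rps)
```

With this sample window the error rate works out to 25 percent and saturation to roughly 13 percent, the kind of at-a-glance numbers a monitoring dashboard surfaces.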

While the path to continuous delivery can be long, implementing the best practices above can ease the transition and help teams incorporate the core components of agile, DevOps and SRE.
Learn more about continuous delivery and a DevOps approach.
The post Continuous delivery and the DevOps approach to modernization appeared first on Cloud computing news.
Source: Thoughts on Cloud

CloudForms at Summit – Everything you need to know

Download this Summit Quick Reference Guide for CloudForms for a complete list of CloudForms sessions and labs at Red Hat Summit 2019.
A few highlights:
Hybrid cloud management strategies and best practices
Learn how Red Hat’s hybrid cloud management strategy is evolving to address customer needs. Hear how traditional management products, such as Red Hat Satellite and Red Hat CloudForms are being connected with Red Hat Ansible Tower and new SaaS solutions to provide customers with comprehensive and flexible hybrid cloud management options.
Hybrid cloud cost management challenges, opportunities, and best practices
In this session, we’ll talk about tools and best practices to discover, report, and analyze the cost and consumption of public and private cloud resources today and lay the groundwork for automated optimization of resource consumption and application priorities.
World Wide Technology (WWT) bare metal provisioning solution
Learn how to control bare metal resources by leveraging Redfish in Ansible playbooks.
LAB: Private cloud lab with OpenStack, Ansible, and CloudForms
Get some hands-on experience with a private cloud based on Red Hat OpenStack, Ansible Tower and CloudForms.
Hope to see you in Boston!
Source: CloudForms

Performance for Assets uses cloud to increase wind farm output

Demand for wind energy is soaring. The European Union (EU) has set a target to increase the share of energy consumed from renewable sources to 20 percent by 2020. Wind energy is now the second leading form of power generation capacity in the region.
To capitalize on this opportunity, wind turbine operators must maintain a complex network of assets to keep production levels high. Waiting for components to fail before tending to them can reduce output significantly.
Spotting an opportunity, Performance for Assets (P4A) teamed up with IBM Cloud Garage to create an advanced asset management monitoring system for wind turbines that enables predictive maintenance and boosts asset performance.
Building a strong idea
Lots of energy companies have asset management monitoring systems that produce vast amounts of data, but teams often don’t know how to use it effectively. Without clear alerts about when an asset is suddenly underperforming, and information about what actions to take and what to prioritize, it is difficult for technicians to detect problems before they become serious.
To help wind turbine owners extend the lives and value of their assets, P4A set out to create a new monitoring solution. We aimed to unite sophisticated analytics with field knowledge, providing technicians with actionable alerts ahead of asset failure and performance drift.
We knew we had a strong concept. The next challenge was how to bring it to market successfully.
Making development a breeze
We heard about IBM Cloud Garage, a center of innovation where we could try out the latest technology. Alongside our partner I-Pulses, we set up a session with the IBM Cloud Garage in Nice, France. The IBM team led us through a three-day Design Thinking workshop to evaluate our idea from every angle, with a strong focus on the customer.
We took what we learned from the experience and worked with technical experts from IBM and I-Pulses to create a minimum viable product (MVP) for our new solution in an IBM Cloud Foundry environment.
Using IBM Watson Internet of Things (IoT) Platform hosted in the IBM Cloud, our app gathers and processes data from wind turbine sensors and merges it with weather forecast information. We rely on IBM Informix on Cloud to provide a high-performance, scalable database.
Our data scientists built hybrid machine learning models that can be put into production at scale with the IBM Watson Machine Learning service. The resulting algorithms detect problems with wind turbine components and provide diagnosis and prognosis. This provides a full picture of asset performance and health.
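The prognosis models themselves are proprietary, but the underlying idea of flagging performance drift can be sketched with a trailing-window deviation check. Everything here, from the readings to the threshold, is a hypothetical stand-in for the real Watson Machine Learning models:

```python
from statistics import mean, stdev

# Hypothetical power readings (kW) from one turbine; the dip near the end
# is the kind of performance drift a model is meant to flag.
readings = [1500, 1490, 1510, 1505, 1495, 1500, 1200, 1180]

def drift_alerts(values, window=5, threshold=2.5):
    """Flag indices whose reading deviates sharply from the trailing window."""
    alerts = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(values[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

alerts = drift_alerts(readings)
```

Only the sharp drop at index 6 trips this simple threshold; the second low reading widens the trailing window's spread enough that it no longer stands out, which is one reason real models combine more than a single statistic.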
With the app’s user interface, field technicians and managers can explore and visualize asset performance data. They can also interact with an integrated virtual assistant based on IBM Watson Assistant to optimize maintenance workload and machine output.
Powering greater output at lower cost
Thanks to the support from IBM and I-Pulses, the MVP for our app was ready in eight weeks. We presented it to leading asset management owners at the Euromaintenance 4.0 event. It was received very well, and we’re seeing real interest in the solution.
Our app gives asset management owners the visibility they need to boost output and efficiency. Users can tackle underperformance of assets proactively, before they start to drag down revenues. We also help asset management owners to deploy their maintenance teams more effectively, making the best use of limited resources.
P4A plans to make the app available for commercial use within three months. When we do, we’ll be able to scale up quickly and easily using IBM Cloud solutions. With help from IBM Cloud Garage, we’ve created an application that’s going to help ignite the energy sector.
Read the case study for more details.
The post Performance for Assets uses cloud to increase wind farm output appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mirantis Announces New SaaS Portal to Configure On-Premise Cloud Environments in Minutes

An online tool will enable IT operators to experience Mirantis Cloud Platform’s flexible, model-driven approach to cloud lifecycle management
Open Infrastructure Summit, Denver, CO — April 29, 2019 – Today, Mirantis announced a web-based SaaS application that enables users to quickly deploy a compact cloud and experience the flexibility and agility of Infrastructure-as-Code. Available next month, Model Designer for Mirantis Cloud Platform (MCP) helps infrastructure operators build customized, curated, exclusively open source configurations for on-premise cloud environments.
Mirantis Cloud Platform employs a unique approach to deployment and management of on-premise cloud environments, where the entire cloud configuration is expressed as code in a highly granular fashion. That configuration is then provided as input to a deployment tool, called DriveTrain, which validates the configuration data and deploys the cloud accordingly.
“Our customers love the flexibility and granular infrastructure control that MCP offers, but for many, the learning curve associated with building an initial cluster model using YAML files is simply too steep,” said Boris Renski, co-founder and CMO, Mirantis. “Model Designer provides the necessary guardrails, making it easier for anyone to get started with MCP, without compromising on the flexibility they may require down the road as they expand their cloud footprint.”
Model Designer will enable users to specify the degree of configurability they require for their on-premise cloud use case. With the basic configurability level (humorously called “I am too young to die”), Model Designer automatically pre-populates most of the cluster settings with default, pre-tested values. On the other end of the spectrum, Model Designer will offer the “Ultraviolence” configurability setting, where users can tweak virtually every aspect of their on-premise cloud. The resulting configuration models generated by Model Designer are then fed into DriveTrain, which combines them with security-tested OpenStack and Kubernetes software binary packages to deploy or update the end user's cloud environment.
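The configurability-level idea can be illustrated with a toy model builder. The settings below are hypothetical, and the real Model Designer emits YAML cluster models consumed by DriveTrain rather than Python dictionaries:

```python
# Hypothetical pre-tested defaults a basic configurability level would pin.
DEFAULTS = {
    "openstack_version": "queens",
    "kubernetes_enabled": True,
    "storage_backend": "ceph",
    "networking": "ovs",
}

def build_cluster_model(level, overrides=None):
    """Merge user overrides into defaults according to configurability level."""
    model = dict(DEFAULTS)
    if level == "basic":
        # Basic level: every setting stays at its pre-tested default.
        return model
    model.update(overrides or {})  # advanced levels may tweak any setting
    return model

basic = build_cluster_model("basic", {"networking": "ovn"})
expert = build_cluster_model("ultraviolence", {"networking": "ovn"})
```

At the basic level user overrides are ignored in favor of pre-tested values, while the expert level merges user choices over the same baseline, keeping untouched settings consistent.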
“A declarative approach to operating hybrid cloud infrastructure is a pattern that the community contributed to significantly with projects like Airship,” said Jonathan Bryce, executive director of the OpenStack Foundation. “Model Designer aims to make declarative infrastructure operations accessible to the masses, and it’s a positive sign to see vendors driving adoption for design concepts pioneered by open source communities.”
Model Designer is expected to be generally available in May 2019. Businesses interested in participating in a private beta can sign up here.
About Mirantis
Mirantis helps enterprises and telcos address key challenges with running Kubernetes on-premises with pure open source software. The company employs a unique build-operate-transfer delivery model to bring its flagship product, Mirantis Cloud Platform (MCP), to customers. MCP features full-stack enterprise support for Kubernetes and OpenStack and helps companies run optimized hybrid environments supporting traditional and distributed microservices-based applications in production at scale.
To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.
###
Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
The post Mirantis Announces New SaaS Portal to Configure On-Premise Cloud Environments in Minutes appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Cloud migration checklist: 5 steps to a successful journey

As organizations undergo digital transformation to improve the efficiency and effectiveness of their IT environments, cloud migration is often a high priority.
Yet migrating workloads and applications can often be a challenging process. To help organizations efficiently make the move, here is a five-step cloud migration checklist gleaned from industry best practices.
1. Determine your cloud strategy
Before making the leap to the cloud, determine what you want to accomplish. This starts with capturing baseline metrics of your IT infrastructure and mapping workloads to your assets and applications. A baseline understanding of where you stand will help you establish cloud migration key performance indicators (KPIs), identify issues during the migration, and determine when your migration has completed successfully.
Some cloud migration KPIs you might select include:

Page load times
Response times
Availability
CPU usage
Memory usage
Conversion rates

These types of metrics enable measurement across a number of categories such as engagement, infrastructure, application performance and user experience to determine how cloud applications are performing.
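Once baseline and post-migration numbers exist, comparing them is mechanical. The figures and tolerance below are invented purely for illustration:

```python
# Hypothetical baseline vs. post-migration KPI measurements.
baseline = {"page_load_ms": 1800, "response_ms": 250, "availability_pct": 99.5}
post_migration = {"page_load_ms": 1500, "response_ms": 320, "availability_pct": 99.9}

# For availability, higher is better; for the latency KPIs, lower is better.
higher_is_better = {"availability_pct"}

def kpi_regressions(before, after, tolerance_pct=5.0):
    """Return KPIs that got worse by more than tolerance_pct after migration."""
    worse = {}
    for name, old in before.items():
        new = after[name]
        change_pct = 100.0 * (new - old) / old
        if name in higher_is_better:
            change_pct = -change_pct  # a drop in availability is the regression
        if change_pct > tolerance_pct:
            worse[name] = round(change_pct, 1)
    return worse

regressions = kpi_regressions(baseline, post_migration)
```

In this invented sample, page load time and availability improved, while response time regressed by 28 percent and would warrant investigation before declaring the migration done.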
2. Evaluate which applications to migrate
Not every application belongs in the cloud. Cost and security are major components to take into consideration. Writing for Network Computing, Uptime Institute Chief Technical Officer Chris Brown encourages companies to ensure they understand the business and IT impacts of moving specific workloads and applications to the cloud well before they actually get the migration process underway.
As you evaluate which applications to move to the cloud, keep these questions in mind:

Which applications can be moved as-is, and which will require a redesign?
If a redesign is necessary, what is the level of complexity required?
Does the cloud provider have any services that allow migration without reconfiguring workloads?
What is the return on investment for each application you will be moving, and how long will it take to achieve it?
For applications where moving to the cloud is deemed cost-effective and secure, which type of cloud environment is best — public, private or multicloud?

An analysis of your architecture and a careful look at your applications can help determine what makes sense to migrate. Some applications that use established enterprise hardware could be more expensive to operate in the cloud, or there may be hidden network or bandwidth costs that could make the cloud more expensive than expected.
Additionally, each type of cloud environment has different benefits. Private clouds are more secure and offer more control than public clouds, but also require your organization to manage security and performance. Public clouds provide a highly scalable, pay-per-use model, but are multitenant and offer less control. Often, a hybrid cloud option, which offers a mixture of private and public cloud services, can provide a good balance between the benefits and risks of private and public.
3. Create your data migration plan
Once you’ve evaluated which applications and services make sense to migrate, determine how you want to move each asset to the cloud. Also consider how you can maintain data integrity and operational continuity while doing so. Additional considerations include understanding what dependencies there are, and, with these in mind, determining the order of migrating applications.
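Dependency ordering is the part of the plan that lends itself to automation. As a sketch, the standard-library topological sort can turn a dependency map (the applications below are hypothetical) into a migration order:

```python
from graphlib import TopologicalSorter

# Hypothetical application dependency map: app -> apps it depends on.
# Dependencies should be migrated before their dependents.
dependencies = {
    "web-frontend": {"order-service", "auth-service"},
    "order-service": {"inventory-db"},
    "auth-service": {"user-db"},
    "inventory-db": set(),
    "user-db": set(),
}

def migration_order(deps):
    """Return a migration order in which every app follows its dependencies."""
    return list(TopologicalSorter(deps).static_order())

plan = migration_order(dependencies)
```

The sorter also raises an error on circular dependencies, which is itself useful: a cycle in the map means two applications must be migrated together rather than in sequence.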
4. Secure the right cloud provider
A key part of your data migration will involve picking a cloud provider that can work with you throughout and after the migration process. Here are some questions to ask as you evaluate providers:

What tools, including third-party, does it have available to help make the process easier?
What is its level of experience?
Can it support public, private and multicloud environments at any scale?
How can it help you deal with complex interdependencies, inflexible architectures, or redundant and out-of-date technology?
What level of support can it provide throughout the migration process?

Moving to the cloud is not simple. Consequently, the service provider you select should have proven experience managing the complex tasks a cloud migration requires at global scale. This includes providing service-level agreements that include milestone-based progress and results.
5. Execute your cloud migration
If you’ve followed the first four steps carefully, this last step should be relatively easy. However, how you migrate to the cloud will partially depend on the complexity and architecture of your application(s) and the architecture of your data. You can move your entire application over, run a test to see that it works and then switch over your on-premises traffic. Alternatively, you can take a more piecemeal approach, slowly moving customers over, validating and then continuing this process until all customers are moved to the cloud.
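The piecemeal approach can be sketched as a loop that raises the cloud's traffic share step by step and validates after each increase. The step sizes and the health check below are placeholders for real KPI validation:

```python
# A minimal sketch of a piecemeal cutover: shift traffic to the cloud in
# steps, validating after each step before proceeding. The health check
# is a stand-in for real KPI validation against your baseline.
def phased_cutover(steps, is_healthy):
    """Raise the cloud traffic share in steps; roll back to 0 on failure."""
    cloud_share = 0
    for target in steps:
        cloud_share = target
        if not is_healthy(cloud_share):
            return 0, False  # roll back: all traffic stays on-premises
    return cloud_share, True

# With an always-healthy check, the migration completes at 100 percent.
final_share, ok = phased_cutover([10, 25, 50, 100], lambda share: True)
```

A failed check at any step rolls all traffic back to on-premises, which keeps each increment individually reversible, one of the main attractions of the gradual approach over a single switchover.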
Following a cloud migration checklist helps ensure that you have strategically planned for possible issues, helping you to avoid challenges and ensuring a smoother journey. Ultimately, this allows you to enjoy the benefits that attracted you to move to the cloud in the first place.
Learn more about creating a non-disruptive migration path to the public cloud.
The post Cloud migration checklist: 5 steps to a successful journey appeared first on Cloud computing news.
Source: Thoughts on Cloud

RDO Stein Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Stein for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Stein is the 19th release from the OpenStack project, which is the work of more than 1200 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-stein/.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
New and Improved
Interesting things in the Stein release include:

Ceph Nautilus is now the default version of Ceph within RDO. Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Within Nautilus, the Ceph Dashboard has gained a lot of new functionality, like support for multiple users and roles, SSO (SAMLv2) for user authentication, auditing support, a new landing page showing more metrics and health info, I18N support, and REST API documentation with Swagger API.

The extracted Placement service, used to track cloud resource inventories and usages to help other services effectively manage and allocate their resources, is now packaged as part of RDO. Placement has added the ability to target a candidate resource provider, making it easier to specify a host for workload migration; increased API performance by 50 percent for common scheduling operations; and simplified the code by removing unneeded complexity, easing future maintenance.
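The inventory-and-allocation bookkeeping Placement performs can be illustrated with a toy ledger. This is a conceptual sketch only, not the Placement REST API, though the resource class names follow Placement's convention:

```python
# A toy resource-provider ledger illustrating what Placement tracks:
# inventories of resource classes and allocations consumed against them.
class ResourceProvider:
    def __init__(self, name, inventory):
        self.name = name
        self.inventory = dict(inventory)  # e.g. {"VCPU": 4, "MEMORY_MB": 8192}
        self.allocations = {}             # consumer id -> resources consumed

    def usage(self, resource_class):
        return sum(a.get(resource_class, 0) for a in self.allocations.values())

    def can_fit(self, request):
        return all(self.usage(rc) + amt <= self.inventory.get(rc, 0)
                   for rc, amt in request.items())

    def allocate(self, consumer, request):
        if not self.can_fit(request):
            return False
        self.allocations[consumer] = request
        return True

host = ResourceProvider("compute-1", {"VCPU": 4, "MEMORY_MB": 8192})
ok_first = host.allocate("vm-1", {"VCPU": 2, "MEMORY_MB": 4096})
ok_second = host.allocate("vm-2", {"VCPU": 4, "MEMORY_MB": 1024})  # over capacity
```

The second request is rejected because it would exceed the VCPU inventory; in the real service, schedulers query Placement for candidate providers whose inventories can satisfy a request before placing a workload.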

Other improvements include:

The TripleO deployment service, used to develop and maintain tooling and infrastructure able to deploy OpenStack in production, using OpenStack itself wherever possible, added support for podman and buildah for containers and container images. Open Virtual Network (OVN) is now the default network configuration and TripleO now has improved composable network support for creating L3 routed networks and IPV6 network support.

Contributors
During the Stein cycle, we saw the following new RDO contributors:

Sławek Kapłoński
Tobias Urdin
Lee Yarwood
Quique Llorente
Arx Cruz
Natal Ngétal
Sorin Sbarnea
Aditya Vaja
Panda
Spyros Trigazis
Cyril Roelandt
Pranali Deore
Grzegorz Grasza
Adam Kimball
Brian Rosmaita
Miguel Duarte Barroso
Gauvain Pocentek
Akhila Kishore
Martin Mágr
Michele Baldessari
Chuck Short
Gorka Eguileor

Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 74 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

yatin
Sagi Shnaidman
Wes Hayutin
Rlandy
Javier Peña
Alfredo Moralejo
Bogdan Dobrelya
Sławek Kapłoński
Alex Schultz
Emilien Macchi
Lon
Jon Schlueter
Luigi Toscano
Eric Harney
Tobias Urdin
Chandan Kumar
Nate Johnston
Lee Yarwood
rabi
Quique Llorente
Chandan Kumar
Luka Peschke
Carlos Goncalves
Arx Cruz
Kashyap Chamarthy
Cédric Jeanneret
Victoria Martinez de la Cruz
Bernard Cafarelli
Natal Ngétal
hjensas
Tristan de Cacqueray
Marc Dequènes (Duck)
Juan Antonio Osorio Robles
Sorin Sbarnea
Rafael Folco
Nicolas Hicher
Michael Turek
Matthias Runge
Giulio Fidente
Juan Badia Payno
Zoltan Caplovic
agopi
marios
Ilya Etingof
Steve Baker
Aditya Vaja
Panda
Florian Fuchs
Martin André
Dmitry Tantsur
Sylvain Baubeau
Jakub Ružička
Dan Radez
Honza Pokorny
Spyros Trigazis
Cyril Roelandt
Pranali Deore
Grzegorz Grasza
Bnemec
Adam Kimball
Haikel Guemar
Daniel Mellado
Bob Fournier
Nmagnezi
Brian Rosmaita
Ade Lee
Miguel Duarte Barroso
Alan Bishop
Gauvain Pocentek
Akhila Kishore
Martin Mágr
Michele Baldessari
Chuck Short
Gorka Eguileor

The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Train, which has an estimated GA the week of 14-18 October 2019. The full schedule is available at https://releases.openstack.org/train/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 13-14 June 2019 for Milestone One and 16-20 September 2019 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To spin up a proof-of-concept cloud quickly on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Source: RDO

NexJ delivers CRM for wealth management as a service with IBM Cloud

In the financial services industry, a firm that uses technology to differentiate its business most often has an advantage over the competition. Today, firms are increasingly focused on cognitive services and intelligent analytics that can be used to deliver better customer service.
It’s not just about dashboards, though seemingly everybody wants a dashboard. Firms want services that give them insight and understanding into what’s going on and how to delight their customers.
A challenge that has become increasingly common in recent years is that firms don’t want to worry about the infrastructure required to benefit from this technology. They’d instead prefer to have it managed and hosted by someone else.
This is why NexJ Systems, a Toronto-based software company that offers intelligent customer management solutions to some of the biggest banks in the world, has partnered with IBM.
Augmenting bankers with better information
The NexJ Intelligent Customer Management solution advances traditional customer relationship management (CRM), adding data management, process management and artificial intelligence to increase user productivity and adoption, improve customer engagement and relationships, and ultimately increase profits.
The solution is installed on premises, behind the firewall at the bank and on secure, private instances of IBM Cloud.
Enabling a SaaS-managed services solution with IBM Cloud
NexJ moved to the cloud when the financial services industry overall became more accepting of cloud infrastructure.
Company leaders chose to partner with IBM because they viewed IBM as having a very strong hybrid cloud offering focused around data integration.
Because the NexJ solution is tightly integrated with a bank’s customer data, using the IBM Cloud secure private offering, as opposed to a multitenant cloud offering, gives NexJ customers peace of mind.
The infrastructure is consistent in data centers worldwide, enabling NexJ to comply with its customers’ data residency requirements.
IBM Cloud enables faster implementation times and makes it easier for NexJ to offer CRM as a managed service.
Reducing time to implementation
NexJ measures its success in terms of being able to provide vertical-specific CRM faster at a lower cost.
By offering the CRM solution as a service, NexJ can deploy it to customers in about 15 to 20 percent of the time it would take if the solution were not cloud-based.
Additionally, the ongoing maintenance of the application, in terms of total cost of ownership, has dropped as well.
Working with IBM, NexJ has been able to provide a better solution and remain competitive in a high-pressure marketplace.
Watch the video for more details.
The post NexJ delivers CRM for wealth management as a service with IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud