Changes to OpenShift Online Starter Tier

For some time now, we have offered our OpenShift Online service in a few service tiers. This hosted service has been available since 2011, and to date, well over 4 million applications have been launched on OpenShift Online. One of the key features of this hosted form of Red Hat OpenShift has been our Starter […]
The post Changes to OpenShift Online Starter Tier appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Combining the best pieces of public and private Cloud Foundry

Jason McGee, vice president and chief technology officer for IBM Cloud Platform, appeared on an Enterprise Times podcast this week to discuss IBM Cloud Foundry Enterprise Environment, among other topics.
McGee called Enterprise Environment “an isolated and fully dedicated version of Cloud Foundry in the public cloud” and an evolution of the existing offering.
The Enterprise Times explains further: It’s “an isolated version that allows customers to blend the best of public and private Cloud Foundry. This is about speeding up the provisioning of Cloud Foundry instances. It is fully elastic, priced by the hour and can be deployed through a self-service portal.”
McGee also discussed the Eirini project, which will enable companies to plug Kubernetes into their Cloud Foundry deployments.
To listen to the full podcast episode, check out the post at Enterprise Times.
The post Combining the best pieces of public and private Cloud Foundry appeared first on Cloud computing news.
Source: Thoughts on Cloud

CloudForms 4.7 Beta

The CloudForms team is proud to announce the release of CloudForms 4.7 Beta1. Based on the ManageIQ Hammer release, it contains several enhancements and bug fixes and is the result of many months of upstream development.
Some of the notable enhancements include:

Ansible improvements, including integration with external Ansible Tower workflows.
Enhancements in OpenStack: a new dashboard and the ability to discover OpenStack Director providers
New provider for Nuage, which adds the ability to manage Nuage services from CloudForms
New provider for Redfish (Tech Preview), which allows managing physical servers in a similar way to virtual machines
Improvements to the Lenovo provider, such as rack topology views and Ansible playbook execution against physical resources

For more details and a list of all other improvements see the release notes.
We’d like to encourage all of our customers to download the beta from access.redhat.com and to try it out. Any issues can be reported by opening a case with Red Hat support.
Source: CloudForms

Profility taps cloud to help transform patient recovery with personalized post-acute care

Post-acute care is becoming a hot topic as the healthcare industry struggles to adapt to the challenges posed by aging populations and spiraling costs.
When a patient is discharged from a hospital after surgery, they often require a period of rehabilitation. In some cases, they can recover at home. In other instances, they may need specialist treatment at a post-acute care facility for several days, weeks or even months.
Finding the right facility for each patient is a time-consuming process for healthcare providers. And, if a suitable facility can’t be found quickly, it can delay the discharge process. Moreover, if the selected facility or treatment plan turns out to be a bad fit for a particular patient, it may take them longer to recover.
A longer stay doesn’t just mean increased costs for payers and providers. It also keeps patients away from their family and out of their home environment for longer, which can have a significant physical, mental and emotional cost.
Finding a better way
At Profility, we’ve found a better way to help healthcare organizations manage post-acute care. Instead of relying on care plans designed for the “average patient,” we use sophisticated algorithms to profile each individual patient and match them against a vast data set of post-acute care information.
Our solution identifies the treatment pathways that have been most successful for similar patients in the past and recommends facilities that can offer the most appropriate care for each person while keeping them as close to their home and family as possible.
As we were building the solution, we realized that a cloud-based architecture would be the most cost-effective and low-maintenance option. However, since we need to store highly sensitive patient information, we needed a cloud platform that was built from the ground up for security and compliance.
We took our ideas to the IBM Alpha Zone, and worked with senior IBM architects to design a solution using IBM Cloud technologies. The result was a solution that has helped us achieve full compliance with HIPAA and other relevant data privacy and security regulations.
To store the medical records of the patients we need to match, we use IBM Cloudant. As a NoSQL document store, it’s an ideal platform for the semi-structured, variable format of these records, and it scales seamlessly across multiple nodes to provide all the performance, scalability and resilience we need.
Meanwhile, to run our core back-end applications, we use IBM WebSphere Application Server, a slim Java runtime that is perfect for cloud deployments.
Personalizing post-acute care
With support from IBM, we were able to make the leap from an initial prototype to a full production system in the IBM Cloud within just a few weeks, enabling us to start building relationships with clients and delivering value almost immediately.
Our first clients have been using the Profility platform for six or seven months, and they are already seeing very promising results. We’re confident that the solution will empower healthcare professionals to accelerate the post-acute care planning process, helping hospitals avoid delays in the discharge process.
Moreover, by recommending the most appropriate care pathways, our solution helps patients rehabilitate in facilities that offer the best care for their individual needs. As a result, payers and providers are starting to see a reduction in the average length of stay in post-acute care facilities, and a lower rate of readmission, which should ultimately result in significant cost savings.
Above all, our solution helps patients get the support they need to regain their health faster and get home to their families sooner.
Read the case study for more details.
The post Profility taps cloud to help transform patient recovery with personalized post-acute care appeared first on Cloud computing news.
Source: Thoughts on Cloud

Updating your application with MCP DriveTrain

Providing complete lifecycle management for an application involves more than simple Continuous Integration and Continuous Delivery (CI/CD). When done right, there’s orchestration involved, as well as oversight, auditability, and even, on occasion, manual approvals. Here at Mirantis, we find that the Mirantis Application Platform, based on Spinnaker, provides a way to take advantage of all of those capabilities for a complete application lifecycle management solution.
But what if you don’t have a complete solution?  What if you only have the CI/CD portion? What if your application is not cloud native?
Can you still do lifecycle management?
For example, if you have Mirantis Cloud Platform, you already have the ability to do CI/CD; in fact, MCP itself is built and maintained with it. So why not dip your toe into the water and get started with that?
Today, I am going to show you how you can do CI/CD for a sample application deployed to an OpenStack-based cloud using the Jenkins component in MCP DriveTrain. If you're running a Kubernetes-based cloud instead, don't worry; the principles are essentially the same, at least from the Jenkins standpoint.
First, a little background on DriveTrain.

DriveTrain is the heart of Mirantis Cloud Platform (MCP), giving you the flexibility to adapt your private cloud to on-the-ground application reality without significant investment in professional services. You can continuously tune the services, versions, configuration, and topology of your clouds to suit the changing needs of your business – all while delivering critical updates with minimal interruption.

Just like the public cloud providers do with their own clouds, DriveTrain uses an infrastructure-as-code approach for your private clouds, helping you to reduce the cost of manual steps and the risk of delivering change quickly.
In this way, you can fully automate the delivery, testing and maintenance of your infrastructure so you can achieve efficient, day-2 operations, zero-downtime upgrades, and stay current with the latest open source features and fixes.
With DriveTrain, you can manage your private clouds with:

Code – programmatic deployment/changes
Automation – continuous integration, delivery, and validation
Control – configuration change auditing

In the following example, we have a simple web application that is being maintained on a git repository. The master branch is currently deployed into production, and we will demonstrate how to orchestrate a test environment to build, deploy, and test a development version of that same application. Once the tests have passed, the production environment can be upgraded using those same tools and mechanisms.
All of the above is achieved via automation, which not only provides extreme agility, but also minimizes errors and failures.
It is assumed that the reader is familiar with CI/CD and orchestration tools such as Git, Jenkins, and Heat; links are provided at the end for your convenience.

The staging view
Of course you’d never deploy right into production, so the first step is to build the application, deploy it to a dev/test or staging environment, and test it. In the Jenkins component of MCP, that process looks something like this:

Let’s take a brief look at how that works.  (If you want the details, you can scroll down to the embedded video which shows the step-by-step process.)
Here is how the typical stages would be defined and executed by using the Jenkins component in MCP DriveTrain.

Build

To build your application, typically you integrate a build system (such as Maven) with a repository (such as Git). Jenkins provides capabilities for source code management, build triggers, build environments, and so on, and you can easily configure them.

You typically have the prod/master branch running in production, so in this example, you would choose a dev/feature branch of your application in the code repository to start with.
Once the build process triggered by Jenkins completes, you're ready to deploy the application.

Deploy

In this example, we’re deploying the application to OpenStack, so we’ll do it using Heat Orchestration Templates (H.O.T.) that triggered from the Jenkins pipeline. (If you were deploying to Kubernetes, you might use Helm instead, for example.)
You can perform the initial configuration of Jenkins for HOT by specifying the connection parameters for the OpenStack environment:

You specify the actual Heat template that you want to use for orchestration in the deploy stage of the pipeline. You configure the environment details within the Heat template itself, and pass any parameters (such as the branch or floating IP addresses) as inputs or outputs.
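For reference, a minimal HOT template along these lines might accept the application branch as a parameter and publish the server's address as an output. This is only a sketch; the image, flavor, network, and repository names are invented for illustration:

```yaml
heat_template_version: 2016-04-08

description: Illustrative stack for a single-server test deployment

parameters:
  app_branch:
    type: string
    default: dev
    description: Git branch of the application to deploy

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04          # assumed image name
      flavor: m1.small             # assumed flavor
      networks:
        - network: private         # assumed network
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            git clone -b $branch https://example.com/myorg/myapp.git /opt/myapp
          params:
            $branch: { get_param: app_branch }

outputs:
  app_address:
    description: Address the test stage can use for smoke tests
    value: { get_attr: [app_server, first_address] }
```

The deploy stage passes the branch in as a parameter, and the output becomes an input to the test stage.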

Test

Once the application has been deployed, you can test it by integrating various automation tools with Jenkins. In this example, we have just used curl commands to validate the http output from the test application. You can also use the outputs from the heat templates in the previous step as inputs to drive the test automation in this stage as well.
This is what the dev application looks like in this test environment.

 
Note that because the Heat template was directed at the dev/test environment, the production environment is not impacted at all, and you can now plan and schedule the upgrade to the new version.
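Putting the build, deploy, and test stages together, the pipeline might be expressed as a declarative Jenkinsfile along the following lines. This is a sketch rather than the exact pipeline from the demo; the repository URL, stack name, template path, and expected page content are all assumptions:

```groovy
pipeline {
    agent any
    parameters {
        // Branch to build and deploy; a dev/feature branch by default
        string(name: 'APP_BRANCH', defaultValue: 'dev')
    }
    stages {
        stage('Build') {
            steps {
                // Check out the chosen branch and build it with Maven
                git branch: params.APP_BRANCH, url: 'https://example.com/myorg/myapp.git'
                sh 'mvn -B clean package'
            }
        }
        stage('Deploy') {
            steps {
                // Create a dev/test stack from the Heat template (HOT)
                sh "openstack stack create --wait -t heat/app-stack.yaml " +
                   "--parameter app_branch=${params.APP_BRANCH} myapp-test"
            }
        }
        stage('Test') {
            steps {
                // Smoke-test the deployed app via the address the stack publishes
                sh '''
                    APP_IP=$(openstack stack output show myapp-test app_address -f value -c output_value)
                    curl -fsS "http://$APP_IP/" | grep -q "myapp"
                '''
            }
        }
    }
}
```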

Promoting to production

At its simplest level, promoting these changes to production is a straightforward matter of performing the same steps, but with a Heat template that deploys to production rather than dev/test or staging.
But is that really how it should be?
In an ideal world, the process of promoting code to production would be integrated into a more complex pipeline that includes not just deploying to and testing on staging, but also automated or even manual approvals. You can do this manually, of course, or you can use a tool that provides that level of orchestration and auditability support, such as Mirantis Application Platform’s Spinnaker component.

Why bother with CI/CD?
Of course, in some ways it seems like CI/CD makes things more complicated, but continuous integration and delivery provide benefits that make it worth the effort, such as:

Integration bugs are detected early and are easy to track down due to small change sets. This saves time, money, and countless hours of aggravation over the lifespan of a project.
CI/CD avoids last-minute chaos at release dates, when everyone tries to check in their slightly incompatible versions of the overall code.
When unit tests fail or a bug emerges, if developers need to revert the codebase to a bug-free state without debugging, only a small number of changes are lost because integration happens frequently.
Continuous delivery means constant availability of a “current” build for testing, demo, or release purposes.
Frequent code check-in pushes developers to create modular, less complex code.
It enforces the discipline of frequent automated testing.

Additional benefits of continuous automated testing include:

It provides immediate feedback on the system-wide impact of local changes.
It allows for efficient use of resources. For example, a small dev/test environment can be created and destroyed on demand.
Software metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and feature completeness) focus developers on developing functional, quality code, and help develop momentum in a team.

In summary, CI/CD delivers extreme agility and efficient use of resources via automation, whether you use the basic version built into MCP or build more complex pipelines using MAP and Spinnaker.
Interested in learning more about how to get the best from your Mirantis Cloud Platform? Check out our 3-day MCP training course.
Additional resources

What OpenStack is and how it works
What Heat is and how it works
What Jenkins is and how it works
Mirantis DriveTrain lifecycle management
Link to demo

The post Updating your application with MCP DriveTrain appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

How to drive a successful decision automation project

An IBM Institute for Business Value report points out how automation is shaping the future of organizations, evolving from basic to intelligent operations.

More than 90 percent of C-level executives who use intelligent automation say their organization performs above average in managing organizational change in response to emerging business trends.
More than 50 percent of C-level executives who use intelligent automation have identified key operational processes that can be augmented or automated using AI capabilities.

Operational decisions are fundamental to organizations and their automation technologies. Operational decisions are everywhere. Just think of the thousands of decisions made in day-to-day operations: managing risk, assessing credit and loan applications, defining or adjusting pricing, offering promotions to loyal clients, detecting fraud. There are operational decisions behind pretty much every aspect of most organizations across industries.
How to overcome decision automation challenges
As automation becomes pervasive, companies face plentiful challenges in implementing these decisions. Business stakeholders need to easily and consistently implement, test and deploy decision changes to reach markets quickly, and organizations need to eliminate inconsistency in how decisions are made.
Challenges such as these are addressed through a robust decision automation platform. Whether you are considering starting your journey or you are in the middle of your automation project, the following three things are important to drive success with your decision automation project.

Business-oriented methodology. Let your business experts take control by discovering business decisions and validating them with no need to write code. For example, consider a collaborative platform where they can directly author, change and validate business policy decisions and share with team members.
Ability to grow with changing needs. Decisions are the key assets driving business, and you will need to manage them professionally. Imagine if you have a single platform that scales from your first project to an enterprise-wide program without the need for huge incremental investment.
Intelligent operations with robotic process automation. Bots can help you automate repetitive tasks, but you can make them much more intelligent by allowing them to make decisions, such as those involving eligibility, compliance, pricing and tax decisions.

How to ease the transition
You can make implementing automation projects simpler by considering how company employees will interact with the solution. Below are three tips culled from our many client interactions.
Model decisions visually. Simplify the discovery and the authoring of decisions through visual models that you can validate in a few clicks. These models help business users decompose a decision into small pieces that contribute to the final decision visually. This makes life easier and doesn’t require writing code.
Prototype solutions. Add value by creating more sophisticated decisions, validate your decisions in complex scenarios, simulate the impact of new decisions, and manage user access and change management.
Make your bots smarter. Make it easier for non-technical people to create intelligent bots, without programming skills. Expose key decisions through APIs or interactive user interfaces that robotic process automation bots can operate.
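As a toy illustration of that last tip, a decision such as eligibility can be exposed as a small, self-contained piece of logic that a bot or API can call. This sketch is generic and does not use IBM ODM's actual API; the rules and thresholds are invented:

```python
# Hypothetical eligibility decision, expressed as a plain function.
# In a real system this logic would live in a decision service authored
# by business users rather than hand-coded; the thresholds are invented.
def loan_eligibility(age, annual_income, credit_score):
    """Return an eligibility decision for a loan application."""
    if age < 18:
        return "ineligible"      # applicants must be adults
    if credit_score < 580:
        return "ineligible"      # credit score below minimum threshold
    if annual_income < 20000:
        return "manual-review"   # low income: route to a human reviewer
    return "eligible"
```

A robotic process automation bot could then call such a decision for each case it processes, instead of embedding the rules in the bot itself.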
How IBM Operational Decision Manager can help
The recently released IBM Operational Decision Manager 8.10 and the cloud-based service IBM Decision Composer greatly simplify how decisions can be authored, shared, tested and then executed within enterprise applications. This new methodology blends well-defined decisions using Decision Model and Notation (DMN) visuals with the power of the IBM Operational Decision Manager solution. It also puts business experts in the driver's seat so that they can author powerful executable decisions without typing a single line of code.
You can now try Decision Composer for free to experience the power of decision modeling.
The post How to drive a successful decision automation project appeared first on Cloud computing news.
Source: Thoughts on Cloud

Re-Imagining Virtualization with Kubernetes and KubeVirt

The Kubernetes platform’s evolution allows organizations to revisit how they develop new applications using microservices and containers. As with any new technology there can be the temptation to “move everything to containers”, yet history shows the length of such transitions is measured in years, or even decades. With a major part of the current application […]
The post Re-Imagining Virtualization with Kubernetes and KubeVirt appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Getting Started with KubeVirt Containers and Virtual Machines Together

Virtual machines have existed for a long time and have filled a critical role providing infrastructure on demand to application administrators and developer teams who need to be able to code, test and deploy faster than can be done with physical infrastructure. However, container technology–which isn’t new but has seen a rise in popularity in […]
The post Getting Started with KubeVirt Containers and Virtual Machines Together appeared first on Red Hat OpenShift Blog.
Source: OpenShift

The silver lining: Why businesses should invest in cloud transformation

Cloud computing is nearing its second decade of existence. Since 2000, the industry and technology landscape has matured greatly, with organizations evolving from using cloud experimentally to cloud being a platform for innovation and running entire businesses.
However, despite the growth in acceptance, enterprise cloud adoption and the rate at which enterprises actively run workloads on cloud is still low. A 2017 survey by 451 Research indicated that only 45 percent of workloads are deployed to some type of cloud. Even this may be optimistic. A more recent McKinsey survey estimated that the median cloud adoption rate for enterprises may be around 19 percent.
Gathering cloud adoption business inputs
One of the most important considerations enterprises face is the sustained business benefits or measurable value that cloud provides against the investments required. This is not only a question from those just starting their cloud journey, but also those transforming their entire enterprise using cloud, moving from “cloud 1.0” to “cloud 2.0”.
A common tactic to simplify decision making around cloud adoption and quickly increase usage is focusing on activities such as workload migration or application modernization to help transition applications from an established platform to one that is cloud-based. These are technical initiatives that focus on the nature of the application itself, the type of cloud environment it runs in, and the overall plan to modernize applications or migrate whole workloads to the cloud. While this is important and necessary, this perspective is only part of the equation. Organizational financial and cultural inputs must also successfully guide decisions around achieving long-term successful and business-value-based cloud transformation objectives.
For instance, there is a perception that all applications or workloads will have long-term transformational impact and cost benefit when moved to cloud. However, this may not always be the case. Fortunately, there is a silver lining in the cloud discussion: these insights can be derived and definitively validated using The Cloud Adoption and Transformation Framework.
Understanding the business value of cloud adoption and transformation
Initiatives such as workload migration or application modernization help facilitate the move to cloud, but these initiatives alone do not create the holistic perspective required to achieve value from cloud transformation. What’s missing is the more complete context summarized in the picture below.

As part of moving to cloud, enterprises seek to transition from their current state to a future state that includes the actualization of new cloud-based capabilities and practices. Considerations for the current state include talent and skills, the nature of services currently consumed or delivered, hardware, software, and communications services.
To bridge the gap between the current state and the desired future state, enterprises typically employ application transformation techniques. A more holistic view extends this application-focused view with additional inputs as depicted above.
Future-state capabilities can be organized by dimension (architecture and technology, culture and organization, security and compliance, methodology, and so on). Cloud adoption accelerators may include reskilling, automation and reimagining parts of the organization's design, such as introducing a center of competency to nurture, deploy and scale new ways of working.
These cloud adoption accelerators become the levers for driving the rate and pace of cloud transformation, the key elements in which enterprises should invest for the desired business outcomes in the timeframe planned. An enterprise may choose to alter the pace at which change will take place. For example, a lower adoption rate of 40 percent takes a more cautious approach that extends the time required for key milestones to be attained, in exchange for lower risk. A rate of 80 percent may decrease the time needed to realize key milestones, but may require greater investment to support required changes.
Defining KPIs and measuring success
Key performance indicators (KPIs) measure effectiveness and can help an enterprise continuously calibrate the cloud decisions made to assure alignment to interim milestones, and, importantly, strategic intent. This may include a minimum set of operational metrics supporting business and technical objectives and aligned to the expectations for cloud and for sustained transformation goals. These metrics can guide the implementation of a business-value-driven case since they serve as milestones in the transformation journey.
Example categories and KPIs include:

Platform and service performance, including service availability percentage, responsiveness rate and service capacity rate.
Customer fulfillment and provisioning, including lead time to fulfill and provision, plus demand backlog size.
Service quality, including deploy success percentage, failure rate percentage and incident rate.

These metrics are important for keeping the focus on what matters to the business supported by information technology. There are also useful references available on the total cost and value of cloud ownership, and on what-if analysis given the various considerations.
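Several of the KPIs above are simple ratios, which makes them straightforward to automate. As a minimal sketch (the sample figures are invented for illustration):

```python
# Two of the KPI ratios mentioned above, as plain functions.
def availability_pct(uptime_minutes, total_minutes):
    """Service availability percentage over a reporting period."""
    return 100.0 * uptime_minutes / total_minutes

def deploy_success_pct(successful_deploys, total_deploys):
    """Deploy success percentage, from the service-quality category."""
    return 100.0 * successful_deploys / total_deploys

# Example: 100 minutes of downtime in a 30-day month (43,200 minutes)
print(round(availability_pct(43100, 43200), 2))  # 99.77
print(round(deploy_success_pct(47, 50), 2))      # 94.0
```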
The move to cloud may be challenging. There are no silver bullets given the complexity of these systems. However, there is a systematic method to help you navigate through these decisions. This is the silver lining. It is the integrated set of decisions that need to be made together that assures long-term success even in the face of complexity.
To start or expand this essential conversation, you can schedule a complimentary cloud adoption briefing to discuss how your organization can use cloud adoption transformation to get on track to think, transform and thrive, ultimately realizing significant business outcomes with cloud.
The post The silver lining: Why businesses should invest in cloud transformation appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 2

The growth and innovation in the Kubernetes project, since it first launched just over four years ago, has been tremendous to see. In part 1 of my blog, I talked about how Red Hat has been a key contributor to Kubernetes since the launch of the project, detailed where we invested our resources and what […]
The post OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 2 appeared first on Red Hat OpenShift Blog.
Source: OpenShift