Exploring a Metrics-Driven Approach to Transformation

My team has been working with organizations adopting containers, Kubernetes, and Red Hat OpenShift for more than three years now. When we started providing professional services around enterprise Kubernetes, it became clear we needed a program-level framework for adopting containers that spelled out the activities of multiple project teams. Some participants would be focused on […]
The post Exploring a Metrics-Driven Approach to Transformation appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Save the Date: OpenShift Commons Gathering at Red Hat Summit announces speakers from NASA, Volkswagen, Microsoft Azure and Eli Lilly

Check out the packed agenda for the OpenShift Commons Gathering in Boston on May 6th! The OpenShift Commons Gathering will feature speakers from NASA, Volkswagen, Microsoft Azure and Red Hat’s CEO Jim Whitehurst. The OpenShift Commons Gathering at Red Hat Summit brings together experts from all over the world to discuss container technologies, operators, the […]
The post Save the Date: OpenShift Commons Gathering at Red Hat Summit announces speakers from NASA, Volkswagen, Microsoft Azure and Eli Lilly appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Why organizations should make strategy central to cloud transformation

In his conversation with General Manager of IBM Hybrid Cloud Services Jim Comfort, InformationWeek’s Joao-Pierre S. Ruth asked a big question: what sets IBM apart from other cloud services?
Throughout the interview, Comfort stressed that one of the key components is strategy.
“All of the large deals that we’ve done have had a business strategy, a recognition of the degree of the transformation, and a systemic approach to how to make it more of a business transformation than a technology exploitation,” Comfort said.
Other cloud providers offer services and products that don’t necessarily align with business goals that matter most, he added. With IBM, “Rather than a point product sale or massive project, you will recognize the problem that needs to be solved.”
Comfort went on to describe several areas in which cloud is changing how businesses operate: infrastructure architecture, application architecture, DevOps, development processes, capabilities, tools, culture, security and compliance.
“Each of those areas is a massive change, and for most clients, they don’t come together until they reach the CEO,” he said.
Because of the scope of those changes, “It doesn’t make sense to change everything overnight,” Comfort said. Organizations should focus on areas that drive topline performance first. He predicted that most organizations will likely keep about half of their operations on premises through 2020, with 30 percent on external infrastructure and 20 percent in software as a service (SaaS).
Comfort also discussed the planned IBM acquisition of Red Hat: “We want it to continue to be the foundation for hardening open source, the innovation around open source and the massive developer ecosystem they have. We don’t want to break or change the model.”
For more about cloud strategy, multicloud and Red Hat, read the full interview at InformationWeek.
The post Why organizations should make strategy central to cloud transformation appeared first on Cloud computing news.
Source: Thoughts on Cloud

Considerations on OpenShift PKIs and Certificates

OpenShift features several Public Key Infrastructures (PKIs) that manage certificates for specific purposes. To help deploy OpenShift more securely, it’s necessary to know what each of these infrastructures does and how to best configure them. Note that the information discussed in this article refers to OpenShift 3.x and it is subject to change in the […]
The post Considerations on OpenShift PKIs and Certificates appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Keep the honeymoon going with the right DevOps environment

By marrying a company’s software development processes and IT operations, DevOps breaks down organizational silos, improves business performance and lowers costs. When done right, the DevOps honeymoon never ends.
But chemistry only happens when an enterprise shapes the goals of its DevOps environment around the proper IT infrastructure. Here are a few considerations to keep in mind as your organization meshes development and operations to become more agile and responsive to the changing demands of customers.
“Shift left” with development and testing
If inefficiencies and department isolation cause a consistent backlog in software development, your products will never get off the ground. Development and testing teams should work more quickly and efficiently together, since they’ll be required to create and test applications in just a few days and often in a matter of hours or minutes.
Speed and efficiency will happen naturally as testing and development remove wasteful coding and testing processes. That comes with a “shift left,” the process that moves testing to an earlier stage of the software development cycle. By combining service virtualization with test automation, teams can test sooner in the process. That gives them ample time to offer feedback to the development team, which means issues can be resolved sooner, when they’re less costly to fix.
Automation supports continuous delivery
Automation lays the groundwork for continuous testing, a process that ensures software can be developed to high standards while getting the job done quicker and for less money than ever before. By continuously testing, your organization can follow a standard of continuous delivery, helping you validate the quality of software even after it goes out the door.
Continuous delivery connects development and testing with users so feedback can be rapidly implemented to improve the product. Repetition and exactness allow every version of software to be released with confidence and speed.
Application management runs the DevOps engine
Automated tools let you shift left and continuously test and deliver, but it’s also critical for all of your DevOps applications to be aligned. Applications constantly change for the better, so syncing them with every step of development and delivery gives developers and IT the insight they need to see what they’re working with.
With the support of application management tools, teams can detect, isolate, diagnose and solve any problems that come down the line. With the clarity to measure application availability and performance, your continuous processes will always be ready for the demands of DevOps.
PaaS packs the right punch for cloud DevOps
While cloud technology isn’t an absolute requirement of DevOps, it does offer the necessary provisioning resources that developers and testers need to quickly create the test environments for software development.
The type of service level — not whether it’s a public or private cloud — is what powers a DevOps environment. The higher the service level, the more responsive, agile and reliable your pipeline will be. A platform-as-a-service (PaaS) deployment is an ideal model because it offers application management tools and supports DevOps technologies such as containers. PaaS also consistently helps development teams create and migrate applications to the cloud more quickly and easily.
Make sure your commitment to DevOps starts on the right foot. Learn how to make the method work by registering for “DevOps for Dummies,” which can help you discover how to fulfill your DevOps needs in the cloud.
The post Keep the honeymoon going with the right DevOps environment appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift Commons Briefing: Introduction to MLFlow on OpenShift – Mani Parkhe (Databricks) and Zak Hassan (Red Hat)

In this OpenShift Commons Briefing, Databricks’ Mani Parkhe gave an excellent introduction to MLflow, an open source platform to manage the machine learning lifecycle, including experimentation, reproducibility and deployment. MLflow currently has three components: MLflow Tracking - record and query experiments: code, data, config, and results. MLflow Projects - packaging format for reproducible runs on […]
The post OpenShift Commons Briefing: Introduction to MLFlow on OpenShift – Mani Parkhe (Databricks) and Zak Hassan (Red Hat) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

How and Why We’re Changing Deployment Topology in OpenShift 4.0

Red Hat OpenShift Container Platform is changing the way that clusters are installed, and the way those resulting clusters are structured. When the Kubernetes project began, there were no extension mechanisms. Over the last four-plus years, we have devoted significant effort to producing extension mechanisms, and they are now mature enough for us to build systems […]
The post How and Why We’re Changing Deployment Topology in OpenShift 4.0 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

3 reasons most companies are only 20 percent to cloud transformation

The way some providers seem to describe it, cloud transformation is easy. All companies have to do is migrate all workloads to one cloud, modernize their applications and enjoy the full benefits of digital transformation.
Of course, if it were really so easy, the majority of businesses would already have done it.
The reality, though, is that an IBM-commissioned study by McKinsey & Company shows that most enterprises are only 20 percent of the way into their cloud journeys. The simplest workloads are in the process of migration, but according to the study, the remaining 80 percent of workloads are still on premises. Why?

To get through the next 80 percent of the cloud journey, we think teams should manage complexities that the basic cloud model doesn’t address. This means tackling three major challenges:
1. Unique workload needs.
Crucial issues such as security, compliance and location mean that many enterprises can’t simply move data or workloads into the public cloud.
Open technology standards offer benefits including increased flexibility: companies can choose from more vendors, and technologies work better together. Open standards are designed to give companies the flexibility to adapt quickly to changing business needs. IBM Cloud is built on open standards, with a choice of many cloud models (public, dedicated, private and managed), so you can run the right workload on the right cloud model without vendor lock-in.
For example, a financial organization can keep its most sensitive customer information on premises, its marketing data on a separate public cloud vendor and a key cloud-native application on the IBM Cloud, with reliable security and integration between all three. Having these options can help accommodate unique security or compliance needs.
2. Multiple clouds and vendors.
According to the McKinsey & Company study, 94 percent of enterprises surveyed rely on multiple cloud providers to accomplish their business goals. We are living in a multicloud world. Enterprises need help managing some of the challenges that come from this reality.
Multicloud management solutions can provide full visibility and control across a company’s preferred mix of cloud vendors and models. IBM provides hybrid integration to connect all enterprise clouds with their existing applications and workloads, in the cloud or on premises. Additionally, IBM creates its solutions with an understanding that many clients rely on some solutions outside of IBM.
3. Lack of relevant skills.
Some clients may not have the in-house expertise to take on issues such as application modernization.
Consider working with a technology vendor that has expertise across industry and solution challenges and opportunities. IBM specialists are available to help companies master their cloud journey. Our teams have a deep understanding of various industries and a history of helping enterprises prioritize and modernize what matters most so companies can move more to the cloud.
IBM has built its cloud strategy around meeting enterprises where they are in their cloud journeys. The solution is an open, multicloud approach that provides flexibility, agility and deep industry expertise without sacrificing rigorous security.
Learn more about IBM, Red Hat and how to help your organization achieve cloud transformation by visiting the official website.
The post 3 reasons most companies are only 20 percent to cloud transformation appeared first on Cloud computing news.
Source: Thoughts on Cloud

How Travelport uses cloud to meet changing demand in the digital travel marketplace

The travel industry is transforming with emerging technologies, new business models and changing consumer expectations.
Travelers now have a wide range of options when it comes to planning their trips. While it might once have been common to visit your local travel agent to plan a trip after seeing a neighbor’s vacation snaps, it’s now more likely that vacation pictures on social media lead someone to browse for a getaway online. People can perform their own search and use a travel-booking website to compare the vast number of options available.
In turn, travel suppliers and sellers must work hard to keep travelers engaged throughout their trip.
That is why Travelport, a global distribution service (GDS) that connects thousands of travel suppliers with even more travel sellers, is modernizing its travel commerce capabilities and offerings. We’re driving this transformation with a focus on the digital disruptors enabling the experiences travelers now prefer and expect.
The five forces of digital
There are five major digital forces disrupting the travel industry. These are the same technologies that are impacting all areas of our lives: the Internet of Things (IoT), mobile devices, artificial intelligence (AI), big data and access to mass compute, particularly through cloud infrastructure. Together, these forces are improving travel experiences and transforming consumer expectations.
With the cloud, organizations can take advantage of technology capabilities such as analysis of big data, AI and machine learning, not just in central databases, but spread around the world within systems of intelligence. A seller can then use this relevant, more accurate information to personalize the traveler’s journey. For example, as consumers continue depending on their mobile devices with real-time connectivity enabled by IoT, they want real-time flight alerts and access to their trip itineraries on the go. Our 2018 digital traveler research revealed that these were two of the most important travel app features identified by leisure travelers.
Driving business forward with data
With data and AI capabilities, enterprises can improve the efficiency of their travel spending and the experiences of their employees during business travel. After identifying some of the major pain points for business travelers, Travelport and IBM worked together to find a solution and build the technology to make the experience of work travel better.
The IBM Travel Manager is a solution for corporations. Delivered on the IBM Cloud, the AI platform combines IBM employee and expense data with Travelport’s global travel content on various routes, using AI to determine the best time to buy, the optimal routes and the best-fit hotels, and to offer more personalized, relevant options. It also looks at what-if scenarios. Instead of putting the traveler in an in-policy hotel five miles from the meeting, for example, it might recommend an exception that will save time and transportation costs.
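The in-policy-versus-exception trade-off can be illustrated with a toy cost model. Everything here (the hotel names, numbers, and cost formula) is invented for this sketch and is not IBM Travel Manager’s actual logic:

```python
# Toy what-if comparison: total cost = room rate + ground transport
# + the value of the employee's travel time. All values are invented
# for illustration only.
def total_trip_cost(hotel, hourly_rate=75.0):
    return hotel["rate"] + hotel["transport"] + hotel["travel_hours"] * hourly_rate

in_policy = {"name": "In-policy hotel (5 miles out)",
             "rate": 150.0, "transport": 60.0, "travel_hours": 1.5}
exception = {"name": "Near-venue hotel (exception)",
             "rate": 185.0, "transport": 0.0, "travel_hours": 0.2}

# The pricier near-venue hotel wins once transport and time are counted:
# 150 + 60 + 112.50 = 322.50 versus 185 + 0 + 15 = 200.00.
best = min([in_policy, exception], key=total_trip_cost)
assert best["name"] == "Near-venue hotel (exception)"
```

The point of the sketch is simply that a recommendation engine optimizing total cost, rather than room rate alone, can justify an out-of-policy exception.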
The IBM Travel Manager actually goes beyond the efficiency of cost savings. It’s driving businesses forward by getting the right employee to the right place with the right mindset.
Travelport and IBM Cloud
In some ways, cloud is simply distributed compute in data centers that you can turn on and off to access services as needed. It has played a great part in the success of startups and new companies. But, for a more traditional company with a large data center like Travelport, cloud must complement existing infrastructure, not replace it.
IBM is a major supplier of large mainframe compute within our data centers, which we still believe are very important for security, speed and resilience. Combining that with the IBM Cloud gives us the ability to burst processing capacity and distribute content beyond our data centers, reducing latency with a more localized approach.
Ways to use data analytics and AI
A company can elevate conversations by informing its customers or stakeholders about the messages that matter most to them. It can also use these technologies to improve processes and achieve greater efficiency from machines. As a result, companies can personalize the experience for travelers and generate new business models.
When I joined Travelport five years ago, we were performing about 1.5 billion searches per month. In those search results, we would provide 300 to 400 different options for the traveler. We’ve now grown to more than 11 billion searches per month and we’re now pricing around six billion itineraries each day.
What has changed is the demand cycle with the mobile applications and meta searches through comparison sites. Our engine must manage the massive growth in consumer demand, processing more and more searches at faster speeds. Working with data analytics and AI tools, along with data models, we’ve used our own search algorithms to think differently about what we serve up.
We can now eliminate waste from those processes and get feedback loops into search, allowing us to provide highly relevant options. We can intelligently cache at the edge of the cloud, so that we have the results stored and ready to go rather than recalculating them every time, which is easier said than done when you’re working with volatile inventory such as airfare.
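One way to sketch caching of volatile inventory is a cache with a short time-to-live, so stale fares expire quickly. The class, key names, and 30-second TTL below are illustrative assumptions, not Travelport’s actual design:

```python
import time

# Minimal TTL cache sketch: entries expire after ttl_seconds because
# airfare inventory is volatile. The explicit `now` parameter makes the
# behavior deterministic for testing; in production you'd rely on the
# time.monotonic() default.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:
            del self._store[key]  # evict the stale fare
            return None
        return value

cache = TTLCache(ttl_seconds=30)
cache.put(("BOS", "SFO"), [389.0, 420.0], now=0.0)
assert cache.get(("BOS", "SFO"), now=10.0) == [389.0, 420.0]  # fresh hit
assert cache.get(("BOS", "SFO"), now=40.0) is None            # expired
```

A real system would tune the TTL per route and fare class, since some inventory changes far faster than other inventory, but the core trade-off (serve stored results versus recalculate) is the same.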
Putting these technologies to work helps us scale while reducing the cost structure of our business, and Travelport is able to provide increasingly personalized and engaging search results for travelers.
Learn more about data analytics and artificial intelligence capabilities available on the IBM Cloud.
The post How Travelport uses cloud to meet changing demand in the digital travel marketplace appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift 4: Install Experience

In the previous post, we described our goal of making day-to-day software operations effortless for both operations and development teams. How have we changed our install experience in Red Hat OpenShift to reduce friction and achieve this goal? In this post, we will provide an overview of our new installation tool, its usage of the […]
The post OpenShift 4: Install Experience appeared first on Red Hat OpenShift Blog.
Source: OpenShift