Building Your Own Monitoring Solution – A Look at openshift-tools

OpenShift Online and OpenShift Dedicated are monitored and managed with tooling developed and deployed by Red Hat, an approach refined through the experience of running OpenShift Online (v2) and now OpenShift Dedicated and Online (v3). This monitoring is currently based on Zabbix. Watch for future posts on the future of CloudForms and on the cm-ops project, which is underway to add alerting and thresholds to CloudForms as well.
Source: OpenShift

Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment

In past posts about multi-cloud strategy, I've focused on two principles for getting it right (governance, and applications and data) and their importance when working with a cloud services provider (CSP).
The third and final element of your multi-cloud strategy is perhaps most crucial: platform and infrastructure effectiveness to support your application needs.
Deployment flexibility
When managing multiple clouds, you want to deploy applications on platforms that satisfy business, technical, security and compliance requirements. When those platforms come from a CSP, keep these factors in mind:

The platforms should be flexible and adaptable to your ever-changing business needs.
Your CSP should allow you to provision workloads on bare metal servers where performance or strict compliance demands it, and should also support virtual servers and containers.
The CSP should be able to build and support a private cloud on your premises. That cloud must fulfill your strictest compliance and security needs, as well as support a hybrid cloud model.
The CSP must provide capabilities that help you build applications by stitching together various platform-as-a-service (PaaS) services.
Many customers use containers to port applications. Find out whether your CSP provides container services backed by industry standards. Understand any customization to the standard container service that might create problems.

Seamless connectivity and networking
Applications, APIs and data must travel along networks. Seamless network connectivity across various cloud and on-premises environments is vital to success. Your CSP should be able to integrate with carrier hotels that enable on-demand, direct network connectivity to multiple cloud providers.
Interconnecting through carrier hotels enables automated, near-real-time provisioning of cloud services from multiple providers. It also provides enhanced service orchestration and management capabilities, along with shorter time to market.
Your CSP must also support software-defined and account-defined networks. This helps you maintain network abstraction standards that segregate customers as well as implement network segmentation and isolation.
The CSP should also control network usage with predefined policies. It must intelligently work with cloud-security solutions such as federated and identity-based security systems. Make sure the CSP isolates your data from other clients’ and segments it to meet security and compliance requirements.
Storage interoperability and resiliency
Extracting data from a CSP to migrate applications in-house or to another CSP is the most challenging part in a multi-cloud deployment. In certain cases, such as software-as-a-service (SaaS) platforms, you may not have access to all the data. One reason: there are no standards for cloud storage interoperability. It only gets more complex when you maintain applications across multiple clouds for resiliency.
The solution is to demand that your data can move between clouds and that both open-standard and native APIs are supported. Ask your CSP whether it supports “direct link” co-location partnerships that can “hold” customer-owned storage devices for data egress or legacy workload migrations.
With a sound storage strategy, you’ll have good resiliency in case of disaster. Again, questions matter. Does your CSP provide object storage in multi-tenant, single-tenant or on-premises “flavors”?
As with everything else involving a CSP, look carefully under the hood. Find out whether the CSP’s storage solution is true hybrid; that is, an on- or off-premises solution that simplifies multi-cloud governance and compliance.
For more information, read “IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.”
The post Enterprise cloud strategy: Platforms and infrastructure in a multi-cloud environment appeared first on news.
Source: Thoughts on Cloud

Lifecycle support changes for Red Hat OpenStack Platform 10 and beyond

OpenStack continues to evolve
During the past six years, OpenStack has evolved rapidly. The OpenStack community itself has grown to more than 60,000 strong, with support from a wide array of technology vendors across the globe. Customers are pushing OpenStack into production and starting to realize the many benefits OpenStack has been promising them.
And as more and more customers push OpenStack into production, the ways they want to consume it have evolved as well.
Customers need choice for consumption
The needs for different ways to consume Red Hat OpenStack Platform came about from our constant interaction with customers and potential customers in the market. In listening to them, we realized that offering one standard lifecycle was not going to meet everyone’s needs going forward. During our discussions, which we expect to continue, we saw two types of customer emerge.
One type of customer needs a “long life” version. These customers typically are reluctant to change what’s already in production. To them, upgrading to a newer version of software can be a major disruption. Downtime is not something they can afford or work around. Validations during an upgrade are often done manually. Sometimes they are constrained by complex regulations within their industry. Also, they may not always need or desire the latest features of a product, especially if that product is mature enough that its current version serves them well.
On the other side, there are customers who want the latest features to stay on the cutting edge of technology and stay ahead of (or keep up with) their competition. They want new features as soon as they are available. They work in a fast-paced environment and may even be asked to deliver applications to their internal and external clients rapidly as well. These organizations are well versed in continuous delivery concepts and have automated ways of performing validations during upgrades. They also have the ability to continuously upgrade their hardware to support a rapidly scaling infrastructure.
Our surveys and customer interactions even show that within a single large customer we often encounter a diversity of deployments with different needs, some needing the latest features and some craving a long life version.

Figure 1. A breakdown of the two types of customers.
Thus, with the release of Red Hat OpenStack Platform 10, based upon the upstream community’s Newton release, we are announcing new options for lifecycle support: one that will support those customers desiring long-life versions, and another that will allow customers to stay up-to-date with the latest releases.
The two new lifecycle support options
The new lifecycle support options will address the needs of the two different customer types outlined above. We will be able to do this by designating different support lengths for different versions of Red Hat OpenStack Platform. Some versions will be long life and have support for three years with an option for longer, and other versions will be supported for one year.

Figure 2. Meeting the needs of customers with two types of support.
Red Hat OpenStack Platform 10 will be the first “long life” version. It will be supported by Red Hat for three years. Customers will also have the option to purchase fourth, and even fifth, years of support. Then, approximately 18 months after version 10, we plan to release another long life version, Red Hat OpenStack Platform 13. And we expect this cadence to continue with every third release. This length of support, up to five years on every third version, is exclusive to Red Hat.
Customers who want to standardize can then do so on Red Hat OpenStack Platform 10, Red Hat OpenStack Platform 13, etc. for up to five years (with the extended lifecycle support option for those fourth and fifth years). They will be able to skip the versions in between.
For the regular versions (those in between these long life ones), we will provide one year of support. So for Red Hat OpenStack Platform versions 11 and 12, support will be available for one year each. Customers on this path still will be able to use Red Hat OpenStack Platform director for automatic upgrades and updates to take them from version 10 to 11 to 12 to 13 and so forth, offering them a new version to upgrade to approximately every six months.

Figure 3. Long life vs. standard life support by release.
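The cadence described above reduces to simple arithmetic. Here is an illustrative sketch (offsets are in months from the version 10 release; the exact six-month spacing is an approximation, as the post notes):

```python
# Sketch of the support cadence: a release roughly every six months,
# with every third release (10, 13, 16, ...) designated "long life".
LONG_LIFE_MONTHS = 36   # three years, extendable to five with add-on support
STANDARD_MONTHS = 12    # one year for the in-between releases

def support_window(version, base_version=10, release_gap=6):
    """Return (release, end_of_support) as month offsets from version 10."""
    steps = version - base_version
    release = steps * release_gap
    is_long_life = steps % 3 == 0
    length = LONG_LIFE_MONTHS if is_long_life else STANDARD_MONTHS
    return release, release + length

for v in range(10, 14):
    release, end = support_window(v)
    print(f"OSP {v}: released at month {release}, supported until month {end}")
```

Under this sketch, version 11 ships around month 6 and leaves support at month 18, while version 13 ships around month 18 and stays supported until at least month 54, which is what makes the skip-release path (10 to 13) viable.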
What else should you know about these changes?
These new lifecycle support changes are only effective for Red Hat OpenStack Platform 10 and later. Support for versions 9, 8, 7, etc. does not change. By policy, Red Hat does not modify (or, more precisely, reduce) the lifecycle of a version after it has been released. Those versions will still receive three years of support, with no option to purchase a fourth or fifth year.
If you need help deciding which path is the right one for you, please contact your Red Hat account representative, or contact us if you aren’t a customer yet. We will work with you so that you can decide which option is right for your needs.
Also know that we won’t lock you into one path or the other. If you start out thinking you should be on the latest features path, upgrading every six months or so, and decide that standardizing on the long life version would be the better option, we will work with you to set you on that other path.
Finally, if you want more information about these changes, visit our customer portal page or contact your Red Hat account representative.
Source: RedHat Stack

Open Service Broker API and Platform Evolution

At Cloud Native Day in Toronto in August, a presentation was given about a proof-of-concept project that would provide a consistent API for platforms to access third-party services. The news of that presentation went mostly under the radar, but it had the potential to significantly impact the future of container-based, cloud-native platforms (such as Red Hat OpenShift). Fast forward four months, and that significant impact has become a reality with the formal announcement of the Open Service Broker API [code on GitHub].
Source: OpenShift

Analytics on cloud opens a new world of possibilities

Data has become the most valuable currency and the common thread that binds every function in today’s enterprise. The more an organization puts data to work, the better the outcomes.
How can we harness data in a way that makes lives easier, more efficient and more productive? Where can we find the insight from data that will give a business the edge it needs?
Data intelligence is the result of applying analytics to data. Data intelligence creates more insight, context and understanding, which enable better decisions. Digital intelligence with cloud empowers organizations to pursue game-changing opportunities.
In a Harvard Business Review Analytic Services’ study of business executives, respondents who said they effectively innovate new business models were almost twice as likely to use cloud.

Connecting more roles with more data
Cloud analytics facilitates the connection of all data and insights to all users. This helps lay the foundation for a cognitive business. Trusted access to data is essential for organizations, whether that data is in motion or at rest, internal or external, structured or unstructured.
Besides their own data, companies have many more data sources that can provide insights into their business. Some popular examples include:

social media data
weather data
Thomson Reuters data
public sources such as the Centers for Disease Control and Prevention
Internet of Things (IoT) sensor data

Cloud democratizes analytics by enabling companies to deliver more tools and data to more roles. Compared to on-premises solutions, cloud analytics deploys faster and offers a wider variety of analytics tools, including simple, natural language-based options.  With cloud’s scalability and flexibility, data volume and diversity have become almost limitless.
More accessible data and tools have created data-hungry professionals.
Application developers must turn creative ideas into powerful mobile, web and enterprise applications. Data scientists must discover hidden insights in data. Business professionals must create and act on insights faster. Data engineers must wrangle, mine and integrate relevant data to harness its power. The collaboration between these roles helps to extract more value from complex data.
Discovering more opportunities
Cloud-based analytics enables organizations to discover new opportunities, with data intelligence at the core. Organizations can uncover more insights by leveraging new technologies and approaches. A cloud platform provides faster, simplified access to the latest technologies, along with the ability to mix and match them, try things out, use what you want and put them back when you’re done.
Data science, machine learning and open source let organizations extract insights from large volumes of data in new, iterative ways:

Data science tools enable quick prototyping and design of predictive models.
Machine learning has advanced fraud detection, increased sales forecast accuracy and improved customer segmentation.
Open source tools, such as Apache Spark and Hadoop, help teams conduct complex analytics at high speeds.
More and more, new products and services are built on the cloud. It provides the ideal platform for users to fail fast and innovate quickly.
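To make the fraud-detection bullet concrete, here is a deliberately minimal sketch of the underlying idea, scoring transactions by how far they sit from the historical mean. The data and threshold are invented for illustration; production systems use far richer models than a z-score.

```python
# Toy anomaly scoring: flag transactions whose z-score against the
# account's history exceeds a threshold (illustrative only).
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.5):
    """Return the amounts that deviate unusually far from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [20, 25, 19, 22, 30, 21, 24, 23, 26, 5000]
print(flag_outliers(history))  # the 5000 charge stands out
```

The same iterate-and-refine loop applies at cloud scale: swap the toy scorer for a trained model, and the surrounding plumbing (score, flag, act) stays the same.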

Accelerating insights with cloud
Organizations with cloud-based analytics speed up outcomes. They iterate, improve business models and release new offerings into the marketplace rapidly.
Cloud underpins this in three ways:

Providing easier access to new technologies sooner
Deploying new data models faster
Enabling quick embedding of insights into process, applications and services

Putting insights into production in real time has become easy and expected. For example, when a retailer wants to trigger the right offer for a customer shopping online, it should be immediate. Speed is essential in offering this personalized experience.
The cloud has helped companies use analytics to respond to volatile market dynamics, establish competitive differentiation and create new business paradigms.
Learn how innovative organizations have harnessed analytics on the cloud in the Harvard Business Review Analytic Services whitepaper, “Powering Digital Intelligence with Cloud Analytics.”
The post Analytics on cloud opens a new world of possibilities appeared first on news.
Source: Thoughts on Cloud

IDC stacks up top object storage vendors

If you’ve been thinking about object storage for just backup and archive, you’ve missed a turn. In a digital transformation journey, like many that I’ve seen in enterprises, managing unstructured content is key.
The latest “MarketScape: Worldwide Object-Based Storage 2016 Vendor Assessment” from IDC reminds us that:
Digital assets are the new IP and many businesses are actively trying to create new sources of revenue streams through it. For example, media streaming, the Internet of Things (IoT), and web 2.0 are some of the ways businesses are generating revenue in today’s digitized world. IT buyers are looking for newer storage technologies that are built not just for unprecedented scale while reducing complexities and costs but also to support traditional (current-generation) and next-generation workloads.

Businesses need not just to store and access data, but also to do something with that data to create value. The type and volume of stored data are rapidly changing, and businesses must look at storage approaches that support today’s storage needs and offer the flexibility needed for future requirements.
In its assessment, IDC placed IBM and IBM Cloud Object Storage (featuring technology from the acquisition of Cleversafe in 2015) in the “leader” category.
As a vendor, I personally could not be happier or prouder.
Object storage solutions provide the scale and resiliency necessary to efficiently support a body of unstructured content (audio, video, images, scans, documents and so forth) that is ever-growing in size and volume. Yet not all object storage solutions are the same. One key consideration is the platform that the vendor employs and the flexibility a vendor offers when it comes to deployment options.
Business processes are increasingly hybrid. There will be processes and applications that must run inside your data center, managed by your staff and on your servers. Others can run in the public cloud and even be optimized for pure public cloud deployment, while still other elements might be a mix of the two.
If you look at the vendors in the leader category, IBM Cloud Object Storage is the solution that provides proven deployment dexterity: on premises, on the public cloud and in any mix of the two. The public cloud we run on is designed from the ground up with the enterprise in mind. With over 50 IBM Cloud data centers around the world, support for open and industry standards, and the innovation that IBM Watson and IBM Bluemix enable, IBM Cloud Object Storage stands out from the pack. That’s not to say that the other leaders aren’t worth considering, as IDC makes clear.
With data slated to hit 44 zettabytes by 2020, and 80 percent of that unstructured, according to IDC’s object storage forecast for 2016 to 2020, getting ahead of this dynamic is imperative. Doing it with a leader in object storage just makes business sense.
Try it for yourself. Provision your free tier of object storage on IBM Bluemix, learn more about the overall IBM Cloud Object Storage family and read the full IDC report on IBM.
Read the press release.
Learn more about IBM Cloud Object Storage.
The post IDC stacks up top object storage vendors appeared first on news.
Source: Thoughts on Cloud

Modeling complex applications with Kubernetes AppController

When you’re first looking at Kubernetes applications, it’s common to see a simple scenario that may include several pieces, but no explicit dependencies. But what happens when you have an application that does include dependencies? For example, what happens if the database must always be configured before the web servers, and so on? It’s common for situations to arise in which resources need to be created in a specific order, which isn’t easily accommodated with today’s templates.
To solve this problem, Mirantis Development Manager for Kubernetes projects Piotr Siwczak explained the concept and implementation of the Kubernetes AppController, which enables you to orchestrate and manage the creation of dependencies for a multi-part application as part of the deployment process.
You can see the entire presentation below:

The post Modeling complex applications with Kubernetes AppController appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Be ready for disaster recovery with IBM Cloud for VMware Solutions

Companies large and small are looking for the best way to use cloud service provider (CSP) capabilities, including disaster recovery, as part of an emerging hybrid cloud strategy.
Many of those same enterprises have made significant investments in VMware-based technology over the past few years, deriving substantial business value.
IBM Cloud for VMware Solutions are designed to help those organizations enable this strategy.

For hybrid cloud to truly fulfill its potential, all workloads, including production, must be supported. This means being prepared for disaster recovery events with confidence.
Building on its VMware Cloud Foundation offering, IBM Cloud is adding a new solution for disaster recovery based on Zerto Virtual Replication (ZVR) and its intra-cloud architecture. ZVR enables hypervisor-based replication and supports multiple hypervisors, providing tremendous flexibility and use case coverage.
Zerto on IBM Cloud is designed to provide a security-rich, flexible and scalable disaster recovery solution. It is a single-tenant architecture that is integrated with IBM Cloud for VMware Solutions on bare metal. This approach grants organizations full access and control of their VMware vCenter deployments and associated disaster recovery solutions.
This latest offering is usage-based and supports several use cases, including replication from on premises to the cloud, replication from one IBM Cloud data center to another, high-speed cloud migrations and cloud bursting.
Organizations also can use IBM’s global network backbone for replication between IBM Cloud data centers at no additional cost.
Disaster recovery and intra-cloud architecture

Zerto on IBM Cloud is attractive to mid-size and larger enterprise companies that are currently running and managing their production VMware or Hyper-V environments on premises, in IBM Cloud or both.
This solution also appeals to organizations that are looking for a standard approach with a multi-CSP strategy.
Companies using the service can realize the following benefits:

Automation: Fully automated Zerto Virtual Replication deployment
Cost effectiveness: No cost to use IBM Cloud’s private network to replicate data
Security: Workloads are protected through physical isolation
Scalability: Start small and scale up or down as needed
Performance: Ability to choose from multiple storage and dedicated compute options
Point-in-time recovery: Recovery to any point in time within seconds of data loss
Non-disruptive testing: Testing with no impact to production
Application consistency: Recovery of complex applications running across multiple virtual machines and restoration to any point in time in minutes

What’s been missing up to now is a true, production-ready cloud that provides client access and control at the hypervisor layer, along with disaster recovery capabilities that meet a rigorous set of production RPO and RTO requirements. Organizations can now have that capability and be confident that they are ready for disaster recovery.
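The two metrics mentioned above are simple time deltas: the recovery point objective (RPO) bounds how much data you can lose (time since the last replicated point), and the recovery time objective (RTO) bounds how long recovery may take. A small sketch, with timestamps invented for illustration:

```python
# RPO: failure time minus the last successfully replicated point.
# RTO: service-restored time minus failure time. (Timestamps invented.)
from datetime import datetime, timedelta

def rpo(last_replicated, failure):
    """Worst-case data loss window at the moment of failure."""
    return failure - last_replicated

def rto(failure, restored):
    """Elapsed outage time from failure to restored service."""
    return restored - failure

failure = datetime(2017, 1, 10, 12, 0, 0)
print("RPO:", rpo(datetime(2017, 1, 10, 11, 59, 52), failure))  # seconds of data at risk
print("RTO:", rto(failure, datetime(2017, 1, 10, 12, 4, 0)))    # minutes to recover
```

Continuous, hypervisor-based replication is what keeps the first delta small; automated failover keeps the second one small.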
Learn more about IBM Cloud for VMware Solutions.
The post Be ready for disaster recovery with IBM Cloud for VMware Solutions appeared first on news.
Source: Thoughts on Cloud

Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5)

There’s Linux, and there’s Windows. Windows apps don’t run on Linux. Linux apps don’t run on Windows. We’re told that. A lot. In fact, when Docker brought containers into prominence as a way to pack up your application’s dependencies and ship it “anywhere”, the definition of “anywhere” quickly came to mean “Linux”. Sure, there were Windows containers, but getting everything to work together was not particularly practical.
With today&8217;s release of Kubernetes 1.5, that all changes.
Kubernetes 1.5 includes alpha support for both Windows Server Containers, a shared kernel model similar to Docker, and Hyper-V Containers, a single-kernel model that provides better isolation for multi-tenant environments (at the cost of greater latency). The end result is the ability to create a single Kubernetes cluster that includes not just Linux nodes running Linux containers or Windows nodes running Windows containers, but both side by side, for a truly hybrid experience. For example, a single service can have pods using Windows Server Containers and other pods using Linux containers.
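In practice, the hybrid scheduling described above comes down to labeling nodes by operating system and selecting on that label in the pod spec. A minimal sketch, building the manifests as Python dicts (the `beta.kubernetes.io/os` label key and the container images are assumptions; check the labels your cluster actually applies in this alpha release):

```python
# Build pod manifests that pin a pod to Linux or Windows nodes via a
# nodeSelector on an OS label (label key is an assumption; verify it
# against your cluster's node labels).
def pod_spec(name, image, os_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "nodeSelector": {"beta.kubernetes.io/os": os_name},
            "containers": [{"name": name, "image": image}],
        },
    }

linux_pod = pod_spec("web-linux", "nginx", "linux")
windows_pod = pod_spec("web-win", "microsoft/iis", "windows")
print(windows_pod["spec"]["nodeSelector"])
```

A single service can then select pods from both manifests by a shared label, which is exactly the side-by-side scenario described above.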
Though it appears fully functional, there are some limitations in this early release, including:

The Kubernetes master must still run on Linux due to dependencies in how it’s written. It’s possible to port it to Windows, but for the moment the team feels it’s better to focus their efforts on the client components.
There is no native support for network overlays for containers in Windows, so networking is limited to L3. (There are other solutions, but they’re not natively available.) The Kubernetes Windows SIG is working with Microsoft to solve these problems, however, and they hope to have made progress by Kubernetes 1.6’s release early next year.
Networking between Windows containers is more complicated because each container gets its own network namespace, so it’s recommended that you use single-container pods for now.
Applications running in Windows Server Containers can run in any language supported by Windows. You CAN run .NET applications in Linux containers, but only if they’re written in .NET Core. .NET Core is also supported by the Nano Server operating system, which can be deployed on Windows Server Containers.

This release also includes support for IIS (which still runs 11.4% of the internet) and ASP.NET.
The development effort, which was led by Apprenda, was aimed at providing enterprises the means to make use of their existing Windows investments while still getting the advantages of Kubernetes. “Our strategy is to give our customers an enterprise hardened, broad Kubernetes solution. That isn’t possible without Windows support. We promised that we would drive support for Kubernetes on Windows Server 2016 in March, and now we have reached the first milestone with the 1.5 release,” said Sinclair Schuller, CEO of Apprenda. “We will deliver full parity to Linux in orchestrating Windows Server Containers and Hyper-V containers so that organizations get a single control plane for their distributed apps.”
You can see a demo of Apprenda’s Senior Director of Products, Michael Michael, explaining the functionality here: https://www.youtube.com/embed/Tbrckccvxwg
Other features in Kubernetes 1.5
Kubernetes 1.5 also includes beta support for StatefulSets (formerly known as PetSets). Most of the objects that Kubernetes manages, such as ReplicaSets and Pods, are meant to be stateless, and thus “disposable” if they go down or become otherwise unreachable. In some situations, however, such as databases, cluster software (such as RabbitMQ clusters), or other traditionally stateful objects, this might not be feasible. StatefulSets provide a means for more concretely identifying resources so that connections can be maintained.
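The stable identity StatefulSets provide can be illustrated with the naming convention alone: pods get predictable ordinal names and, via a governing headless service, predictable per-pod DNS entries. A sketch (the `rabbitmq` service and `default` namespace are hypothetical; the DNS pattern follows the upstream convention):

```python
# StatefulSet pods get stable ordinal names (name-0, name-1, ...) and
# stable per-pod DNS entries, unlike disposable ReplicaSet pods whose
# names are randomized on every restart.
def statefulset_pod_names(set_name, replicas):
    return [f"{set_name}-{i}" for i in range(replicas)]

def pod_dns(pod, service, namespace="default"):
    return f"{pod}.{service}.{namespace}.svc.cluster.local"

pods = statefulset_pod_names("rabbitmq", 3)
print(pods)
print(pod_dns(pods[0], "rabbitmq"))
```

Because `rabbitmq-0` keeps its name and DNS entry across restarts, peers in a cluster can reconnect to the same member, which is exactly the connection-maintenance property described above.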
Kubernetes 1.5 also includes early work on making it possible for Kubernetes to deploy OCI-compliant containers.
The post Linux and Windows, living together, total chaos! (OK, Kubernetes 1.5) appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

With DevOps, NBCUniversal massively reduces app release times

When you think about companies that employ DevOps practices, what comes to mind? Many people think of born-on-the-cloud startups built for market disruption and innovation, such as Uber.
Increasingly, long-standing enterprises are eyeing DevOps, too. Take, for example, NBCUniversal, which uses DevOps to streamline time to market for new applications, increase application code quality, and reduce development, testing and deployment costs.
The media conglomerate is known for leading the entertainment industry with network, mobile and web content, but the journey to consistent application delivery on a variety of devices hasn’t always been easy, due to the many different tooling and process practices in use.
This is why NBCUniversal sought to engage a DevOps approach to application development across its complex enterprise of 17 business units. The goals were to improve code quality, streamline development and lower costs.
Scaling DevOps across a large, multi-speed enterprise
Embracing DevOps and supporting it with automation using the IBM UrbanCode tool suite has enabled NBCUniversal to shift from a previously chaotic culture toward a standardized, unified way of working.
Today, NBCUniversal uses IBM UrbanCode Build, UrbanCode Deploy, and IBM Development and Test Environment Services as the engine of DevOps, combining continuous integration, delivery, testing, feedback and monitoring into one automated workflow. By doing so, the company bridges process, culture and technology across the organization.
NBCUniversal Platform DevOps Manager John Comas explains in this recent webinar how introducing the concept of DevOps and supporting it with automation through the IBM UrbanCode tool suite has helped his company achieve benefits such as a 75 percent reduction in time required for new application releases. Being able to satisfy business requirements also helps the company more rapidly improve code quality.
“Because we’ve shown that the capabilities are flexible and expandable, and that they allow us to deliver on a very tight timeline, we’ve seen a six-fold increase in project volume, from 10 to more than 60 applications,” Comas said. “We’re providing a path to application production that engenders a level of confidence we’ve never had before.”
Want more details about NBCUniversal’s dramatic DevOps transformation? Download this case study to learn more.
The post With DevOps, NBCUniversal massively reduces app release times appeared first on news.
Source: Thoughts on Cloud