IBM Design Thinking workshop: Your journey to the cloud

A few years ago, cloud computing was seen as a mechanism to save cost. Companies didn’t have to own and run their own data centers. They didn’t have to own infrastructure or buy software. As cloud has matured, so has this perception. Organizations are seeing cloud as a platform for increased agility. Some are now using it as a platform for innovation. It’s a core part of their business, driving productivity, innovation and competitiveness. It’s not only about reducing cost.
I recently ran a Design Thinking workshop at the Cloud and Infrastructure Summit. At this event — run by Computing magazine — attendees from a wide range of companies discussed their cloud experiences. Each team had an IBM facilitator to kick off the topics and feed back to the whole group to ensure a lively debate. We started by creating personas for each team, which included:

Line of Business executive
Application Developer
Head of IT

Each group then discussed the challenges and inhibitors in moving to cloud.

Before adopting cloud
Some groups had the view that their users might not like this new solution. They were happy with the “as-is” and didn’t see a need to change technology or practices. Other concerns included the worry that cloud wouldn’t deliver what the business needed. There was the perennial fear that cloud reduces the need for IT staff. Finally, concerns were expressed over a lack of involvement in the adoption of cloud.
On the positive side, the groups saw cloud as a way to allow IT to take on new business opportunities. There was also a desire to save money and to be able to do more with less.
Concerns and positives
A few key themes emerged here. Who can see my data, and is it secure in the cloud? Will it help us get to market quicker and improve customer engagement? Other themes included the impact on day-to-day operations and whether cloud was mature enough. Access to knowledge and skills was a clear concern, as people skilled in the new methods and processes are in short supply. Questions also arose about how companies could become more creative with cloud.
After adoption
Having discussed the view before moving to cloud, we wanted to also look to the future.
Having moved to the cloud, users saw opportunities for improved collaboration. They saw increased responsiveness to take on new business opportunities. Cloud would give rise to increased freedom via access to data from anywhere and at any time.
The ability to react to user feedback and changing business demands would be key. This increased agility and a new mindset would drive better customer engagement.
Groups saw a chance to save costs by only paying for consumed services. This positive was counterbalanced by harder-to-predict, variable costs. There was also concern over the loss of staff.

A common theme was around data which in today’s digital world is a company’s most valuable asset. The value of data is in extracting insights to drive better decision making. Cloud helps clients gain competitive advantage from their data.
Alongside the workshop, we have a short video explaining how one client — BuzzRadar — has adopted cloud. The video speaks to these data challenges and shows how cloud solves them.
Personalized workshop at your location: Your cloud journey
If you’re interested in a Design Thinking workshop for your organization, sign up here.
The post IBM Design Thinking workshop: Your journey to the cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

CaaS Campfires Around The Wild Wild West

The post CaaS Campfires Around The Wild Wild West appeared first on Mirantis | Pure Play Open Cloud.
As more gas is continually thrown on the already exciting fire of adoption for all-things-containers, it seems to be the Wild Wild West out there in the race to provide services around the container ecosystem, specifically in the emerging market of Containers-as-a-Service (CaaS). There is no shortage of alternatives: new companies, old companies, big companies, small companies, public cloud companies, private cloud companies as well as managed services companies. Everyone is racing west to stake their claim in how developers can more easily adopt, deploy and best harness the flexibility of containers. So what is an enterprising container-user supposed to do? Seems best to join around the campfire with one of the prevalent camps of thought most suitable to you.

First, some context

Containers have been around for decades. My initial interaction with them was last century, no kidding. I was the product line manager for Solaris Resource Manager, a precursor to the kernel-based Solaris Containers in Solaris 10 (R.I.P. Solaris, how sad). Containers were terrific for running multiple applications within the same OS image while providing isolation and control of the various system resources. And then came VMware with virtualization, originally enabling server consolidation, flexibility in guest OS images, and over time the addition of advanced features along with other benefits that took the IT industry by storm and never looked back. Funny, what’s old is new, containers are in vogue again.

The main difference, in my opinion, between containers then and now comes down to the primary intended user. Back then containers were decidedly for IT operations, to allow multiple applications to run in the same OS image by throttling resources accordingly. They were designed and intended for use after applications were developed. Now containers are intended to be used by developers to be able to package applications and their dependencies for easier deployment. They are designed and intended for use while applications are being developed as well as during application deployment and lifecycle management.

But what about CaaS and the camps?

Containers-as-a-Service enables developers to create and control their own container-based clusters without having to get into the complications of containers and container management system details. In essence, CaaS allows developers to focus on the development and packaging of their applications while also allowing IT operations to easily provide containers to developers. To this end, in the Wild Wild West there are many approaches to CaaS, each with multiple companies circling their wagons around different camps of thought. Here are a few:

The “single public cloud” camp comprises all the main public cloud providers. This camp professes that your needs can be met with one brand of public cloud by using their CaaS offering together with other cloud services in their portfolio. If you intend to commit to your favorite public cloud of choice, you will be well served by this camp.

The “for emerging companies” camp focuses on the needs of smaller companies with commensurately small development teams, perhaps half a dozen to a dozen people. Whether through the technology, the business model, or both, these offerings tend to focus on an easy off-premises onramp for a limited number of cloud resources. Some are managed offerings while others are not. If you fit this scale profile, you’ll be happy in this camp.

The “private and proprietary” camp offers containers as an extension to a legacy environment based on proprietary software. CaaS in this camp may be focused on-premises with some off-premises public cloud capabilities, and is trying to bridge an older deployment model with that of the cloud-native container model. If you are committed to the legacy model and also want some CaaS, this camp may be a fit for you.

You are always welcome at our camp

Mirantis is moving westward with a very flexible approach to containers and CaaS. Mirantis Cloud Platform has supported OpenStack VMs, bare metal resources and Kubernetes on bare metal from day one. Bare metal K8s on MCP are operator-initiated clusters, and now MCP optionally adds CaaS to enable developers to self-manage Kubernetes clusters across AWS and MCP OpenStack.

If you like choice in container deployment, you will like our camp. Not only can you run on-premises bare metal containers, but now you can also run on-premises CaaS within MCP OpenStack VM instances. In addition, CaaS enables the use of K8s on AWS instances to round out a true multi-cloud environment managed by the same toolchain. That’s right, Mirantis, the company known for private cloud, is embracing the public cloud. And not just AWS either, more choice in public cloud is coming soon.

If you like modern cloud-native principles, you will like our camp. Kubernetes was designed for cloud-native deployments and DevOps deployment practices. So was MCP. Through DriveTrain, MCP was designed from the ground up for continuous delivery of incremental change. MCP CaaS utilizes the DriveTrain toolchain to allow developers to create, resize and destroy K8s clusters through a simplified Web user interface. Don’t run your containers on an antiquated environment that wasn’t designed for the cloud-native world.

If you like flexibility in delivery models, you will like our camp. We can manage MCP and your CaaS environments for you with OpsCare, or you can manage them yourself with enterprise support from Mirantis ProdCare or LabCare. To ensure you start off with the best possible chance of success, OpsCare allows you to focus on application development and deployment while Mirantis focuses on your multi-cloud environment with up to 99.99% SLAs. Through a Build, Operate, Transfer model, we also give you the flexibility to decide if and when you would prefer to take over operations.

The choices are yours. The software is 100% open source. Meanwhile, there’s no need to rough it, I’m going to put another log on the fire and hope you join our camp.

To learn more about CaaS and the new offering from Mirantis, join us on Tuesday, October 17 for a webinar, “Containers-as-a-Service: It’s not just a buzzword anymore.”

p.s. This seemed fitting…here I am in my “Dude Wagon” when I was about 4 or 5 years old. So long, partners!

Source: Mirantis

Mirantis Launches Multi-Cloud CaaS with AWS Support

The post Mirantis Launches Multi-Cloud CaaS with AWS Support appeared first on Mirantis | Pure Play Open Cloud.
Newly released Mirantis Cloud Platform expands beyond private clouds to run Kubernetes on the public cloud

SUNNYVALE, CA – September 26, 2017 – Mirantis today is making it easier than ever to manage hybrid clouds across Amazon Web Services (AWS), OpenStack, and even bare metal, launching the latest Mirantis Cloud Platform (MCP) with a new capability to enable multi-cloud self-service Kubernetes clusters through Containers-as-a-Service (CaaS), improving container deployment and adoption for developers and operators alike.

MCP CaaS supports the use of K8s on OpenStack instances on-premises, on AWS instances, or both, with additional public cloud options coming soon. The newly released MCP includes a web-based interface for managing Kubernetes clusters, making it easy for developers to immediately create and control their own Kubernetes-based clusters.

Like all other components within MCP, the new CaaS offering takes advantage of the DriveTrain lifecycle management (LCM) toolchain, enabling enterprises to standardize on a single open standards-based tool for both OpenStack and multi-cloud Kubernetes, improving ease of use across public and private clouds.

“With many new open source tools constantly being introduced into the vibrant container ecosystem every month, CaaS platforms are becoming increasingly complex to operate,” said Boris Renski, Mirantis CMO and co-founder. “Building on our experience operating OpenStack for customers like AT&T and VW, we plan to continue introducing new container services to our managed open cloud portfolio as open source projects behind them become more mature.”

The newest MCP release also includes enhancements to StackLight, its suite of Operations Support System (OSS) tools, and expanded update/upgrade capabilities for DriveTrain, its toolchain for Lifecycle Management (LCM).

StackLight now includes a new DevOps portal that provides a holistic view of the MCP environment. This new aggregated toolset significantly reduces the complexity of Day 2 cloud operations through services and dashboards around a high degree of automation, availability statistics, resource utilization, capacity utilization, continuous testing, logs, metrics and notifications.

DriveTrain enables upgrades to OpenStack Ocata and OpenContrail, and also supports the latest Kubernetes version.

As the leading provider of Managed Open Clouds, Mirantis works with iconic global brands that are asking for a CaaS solution based on open source software and free from vendor lock-in, to accelerate their ability to innovate. Mirantis is bringing this new offering to market as a crucial component of an enterprise’s hybrid cloud and digital transformation strategy.

As one of the fastest-growing open source projects, Kubernetes is expected to see exploding use as companies increasingly evolve toward cloud-native software development. Cloud computing skills have progressed from being niche to mainstream as the world’s most in-demand skill set. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

PRICING AND AVAILABILITY
Available now by contacting Mirantis, MCP CaaS pricing begins at $14,000 for a block of up to 1000 instances suitable for 20 users. For more information and to learn more about the newest MCP release, read our blog post covering the MCP enhancements.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Mirantis Cloud Platform releases new features: No need to rough it when you have the right set of tools

The post Mirantis Cloud Platform releases new features: No need to rough it when you have the right set of tools appeared first on Mirantis | Pure Play Open Cloud.
I’ve got two teenage boys who love reading survival manuals. On weekends they can’t wait to go out on camping excursions with the bare minimum of equipment to get by while defaulting to ingenuity and skills to overcome all challenges. They’re not doing it because they like to be miserable; they’re doing it because they like the challenge of making sure they have the right tools, knowledge and level of preparedness, and that they make the right decisions — in other words, all of the factors that make the difference between a great experience and a poor one when it comes to accomplishing your objectives in the most efficient way.

Things don’t change when you grow up and go into the office instead of the woods. As you prepare for your day in IT operations, keeping the infrastructure and operational environments that support application deployments running smoothly also requires both skills and the right set of tools, so you don’t spend your weekends and evenings fixing things that didn’t have to break in the first place.

Just as having a Swiss army knife, a flashlight, sunscreen, a hammock, a raincoat or a fishing rod available can make all the difference in the woods, at work you also need to think about the tools you have at your disposal. You’ve trained hard, you are a hard-core professional, and you deserve the best tools you can get for the job, which brings me to today’s topic, Mirantis Cloud Platform.

Here at Mirantis, we’re pretty excited about what’s emerging for those of us assisting IT professionals in their quest to support all the application needs of their customers, internal and external.

Today I wanted to cover a few new options from Mirantis you may want to consider if you are looking to enhance and expand your breadth of capabilities, including capacity monitoring, increased robustness, orchestration and devops, and overall cloud health.

Mirantis StackLight, our 100% open-source Operations Support System (OSS) for continuous monitoring and maximum availability, now includes a new DevOps Portal that provides a holistic view of your Mirantis Cloud Platform (MCP) environment.

New DevOps Portal

This new aggregated toolset significantly reduces the complexity of Day 2 cloud operations through services and dashboards around a high degree of automation, availability statistics, resource utilization, capacity utilization, continuous testing, logs, metrics, and notifications. What’s more, the new DevOps Portal enables cloud operators to manage larger clouds with greater uptime without having to convert their entire staff into open source developers.

The web UI offers services that include cloud intelligence, capacity management, and a subset of the tools made available within Simian Army.

Included services within the DevOps portal

Let’s take a look at each one of the components in detail.
Capacity Monitoring
MCP enables you to ensure available capacity by providing a live look at what’s going on inside your cloud using the Cloud Intelligence Service and Cloud Capacity Management.

Cloud Intelligence Service: This service collects and stores data from MCP services such as OpenStack, Kubernetes, bare metal, and so on. You can then query the data as part of use cases such as cost visibility, business insights, cost comparison, chargeback/showback, cloud efficiency optimization, and IT benchmarking. Operators can interact with the resource data using a wide range of queries, such as searching for the last VM rebooted, total memory consumed by the cloud, number of containers that are operational, and so on.

Cloud Capacity Management: This dashboard provides point in time resource consumption data for OpenStack by displaying parameters such as total CPU utilization, memory utilization, disk utilization, and number of hypervisors. This dashboard is based on data collected by the Cloud Intelligence Service, and can be used for cloud capacity management and other business optimization aspects.

Cloud Assurance
With this module you can evaluate security and improve utilization.

Security Monkey & Janitor Monkey: In this release, MCP includes Security Monkey and Janitor Monkey (and their respective dashboards), two of the multiple tools that compose Simian Army. Simian Army is a growing set of open source tools originally created by Netflix to run continuous tenant level tests on a production cloud to make it more antifragile. The closest traditional IT analogy to the Simian Army is online diagnostics. Security Monkey runs tests that track and evaluate security-related tenant changes and configurations. Janitor Monkey constantly looks to reclaim unused tenant resources for improved cloud utilization.

Orchestration and DevOps
Here we have a great set of tools to help you automate workflow of jobs in response to specific events among other things.

Runbooks Automation: Clouds are simply too complex to be managed using traditional manual processes. Instead, they require a high degree of automation in which events or time durations trigger the execution of specific jobs. The Runbooks Automation service, based on Rundeck, accomplishes this by enabling operators to create a workflow of jobs that get executed at specific time intervals or in response to specific events (such as policy-driven events). For example, operators can now automate periodic backups, weekly report creation, specific actions in response to a failed Cinder volume, and so on. Note, however, that Runbooks Automation is not a lifecycle management tool; it’s not appropriate for reconfiguring, scaling, or updating MCP itself. (LCM for an MCP cloud is exclusively performed with DriveTrain, see below).

DriveTrain: This toolchain provides access to relevant CI/CD LCM tooling such as Git, Gerrit, Jenkins, Artifactory, etc., to automate the delivery of change controls to the infrastructure and its services. This includes scaling the cloud, patching software packages, and full environment upgrades.

DriveTrain results

Cloud Health
You can find another great set of tools to gain broader monitoring capabilities, additional metrics and a higher level of alerts and notifications in the Cloud Health section.

Cloud Health Service: This service collects availability results for all OpenStack services and failed customer (tenant) interactions (FCI) for a subset of those services. These metrics are displayed so that operators can see both point-in-time health status and trends over time.

Metrics: All metrics collected by Prometheus (see below) are visualized through Grafana dashboards.

Logs: Logs for various MCP services are aggregated in Elasticsearch and visualized through Kibana dashboards.

Additionally, StackLight now expands monitoring coverage within Kubernetes, containers and Ceph, as well as deeper Kubernetes log processing. The architecture has undergone a major evolution with the inclusion of a monitoring and alerting solution built using the open source Prometheus project. Prometheus is a mature open source monitoring system, now maintained as an initiative of the Cloud Native Computing Foundation (CNCF), and approaches the age-old monitoring and alerting problem with a web-scale architecture utilizing a dimensional data model, powerful query engine, Grafana visualization integration, efficient storage, and precise alerting. Prometheus is also easy to operate and provides numerous third-party integrations. StackLight has evolved to use Telegraf to collect metrics and Prometheus Alertmanager for notifications/alerts. StackLight also provides InfluxDB, using it for long-term, resilient metrics storage and as a back-end for Ceilometer to enable Heat-based auto-scaling.
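To give a flavor of what the Prometheus-based architecture enables, here is a minimal alerting rule sketch. This uses the Prometheus 2.x YAML rule-group format and a hypothetical group name; the StackLight packaging of these rules may differ:

```yaml
# prometheus-rules.yml (sketch, assumed 2.x rule format)
groups:
- name: mcp-basic-availability   # hypothetical group name
  rules:
  - alert: InstanceDown
    # `up` is the built-in per-target scrape-health metric
    expr: up == 0
    for: 5m                      # avoid flapping on a single missed scrape
    labels:
      severity: critical
    annotations:
      summary: "Target {{ $labels.instance }} has been unreachable for 5 minutes"
```

Rules like this feed Alertmanager, which then routes the resulting notifications.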

Notifications Service: A notifications dashboard displays all alerts/notifications generated by Prometheus Alertmanager. This screen replaces the previous Nagios tool in StackLight. Alertmanager enables MCP customers to configure where alerts are going to be sent — support is provided for many kinds of endpoints, including email, SMS, PagerDuty, and others.
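As an illustration of endpoint configuration, a minimal Alertmanager receiver for email might look like the following sketch. Addresses are hypothetical, and SMTP settings such as `smarthost` and `from` would normally be supplied under the `global` section:

```yaml
# alertmanager.yml (sketch; hypothetical addresses, SMTP details omitted)
route:
  receiver: ops-email        # default route sends everything to the ops team
receivers:
- name: ops-email
  email_configs:
  - to: ops@example.com
```

Additional receivers (PagerDuty, webhooks for SMS gateways, and so on) can be added to the `receivers` list and selected via routing rules.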

Notifications Service

All of these new integrations and capabilities are specifically designed for MCP to provide a view into the open cloud with optimized collectors, dashboards, alarms, faults and event correlation.

In other words, even though open cloud may feel like the Wild, Wild, West, there’s no need to rough it when supporting the challenging needs of your business. Once you gain better insights that enable you to minimize unpredictability and better manage your work environment, you will be able to leave unplanned surprises to your weekend outings. Here at Mirantis, we aim to provide peace of mind with the right set of tools in support of your application deployment needs, leaving it up to you to decide how to spend your weekends.
Source: Mirantis

Moving from multicloud complexity to agility with process automation

I recently wrote about how tackling digital transformation in the multicloud world is a significant challenge for businesses. Enterprises need to integrate and manage their multicloud environments to deliver an agile, connected and secure IT infrastructure that supports rapid innovation. But the need for flexibility and agility doesn’t stop with IT infrastructure. It extends to the tasks and processes that impact customer satisfaction, service quality and — critically — the bottom line.
In some industries, such as healthcare, these challenges can mean the difference between life and death. Take the UK National Health Service Blood and Transplant (NHSBT). Their life-saving work facilitates 4,500 organ transplants a year in the United Kingdom. However, since 6,500 people are on the waiting list at any given time, an average of three people die every day while waiting for an organ. The stakes for improving and speeding up the allocation process couldn’t be higher.
The allocation decision process for each type of organ – known as an allocation scheme – is very complex. It needs to account for both the donor’s and the potential recipient’s physiology, clinical situation, and geographic location. Additionally, these decision rules are constantly changing as new medical insights are uncovered. NHSBT’s existing IT systems could not manage this evolving complexity. Deploying a new allocation scheme took more than two years, which added complexity at a time when expedience was essential.
NHSBT worked to transform their allocation process for greater flexibility and business agility. Using the IBM Digital Process Automation platform, it took NHSBT only six months to design and deploy a new heart allocation scheme, all in the cloud. This single, modern user interface sits above and integrates legacy on-premises systems and cloud services to automate more than 40 percent of its rigorous 96-step allocation process.
The process is now digitized all the way from the time a nurse discusses organ donation with family members to the time a donation is offered to a transplant center. By using cloud-based process mapping, business process management (BPM) and operation decision management (ODM) solutions, NHSBT is able to efficiently make future updates to the process. This takes the emphasis off of managing the IT infrastructure needed to run the automation platform.

IBM is well-positioned to help other organizations across diverse industries automate more of their work. While your own business processes may not be a literal matter of life and death, the task of allocating limited resources to achieve critical results is a universal challenge. It is an IBM goal to help organizations reduce the complexity of their multicloud and hybrid cloud environments by automating business processes and tasks at scale.
As we announced last month, we’re partnering with Automation Anywhere to deliver a robotic process automation (RPA) solution to help our clients automate at scale. The IBM RPA offering bundles Automation Anywhere RPA technology with IBM Business Process Manager (BPM) to deliver the joint, integrated value of both offerings. Specific work tasks get referred to automated RPA bots and use BPM to orchestrate multiple RPA activities. The advanced platform is designed to seamlessly integrate systems, people and bots across the widest assortment of processes running on premises or in the cloud.
Process automation, especially in the multicloud environment using the latest in RPA technology, presents a tremendous opportunity for the digital transformation of your business. I invite you to learn more about how IBM can help you with your automation initiatives by scheduling a no-cost consultation with one of our IBM experts.
The post Moving from multicloud complexity to agility with process automation appeared first on Cloud computing news.
Source: Thoughts on Cloud

What to expect in Kubernetes 1.8: an early look at where k8s is going

The post What to expect in Kubernetes 1.8: an early look at where k8s is going appeared first on Mirantis | Pure Play Open Cloud.
Kubernetes 1.8 was planned as a stabilization release, but that doesn’t mean there’s nothing interesting to look forward to.  The release includes early versions of a number of different developments that provide additional features and control, including a fundamental change to how Kubernetes runs.
Deployment and operations: Self-hosting Kubernetes on Kubernetes
Which came first, the chicken or the egg? How do you compile a compiler? What kind of infrastructure runs infrastructure software? That’s the question that’s been facing Kubernetes developers: Kubernetes is a great infrastructure on which to host robust applications, but Kubernetes itself can benefit from those advantages.
The solution is a “self-hosted” architecture, in which the Kubernetes control plane — that is, the pieces that make Kubernetes work — is itself hosted by Kubernetes. This software “inception” makes it possible to both operate and use a Kubernetes cluster using the same set of skills.
In Kubernetes 1.8, we have the first experimental version of a self-hosted cluster, easily created with the kubeadm tool. At this point you still have to enable the feature, but the community plans to make this the default for Kubernetes 1.9.
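As a sketch of how you might opt in, self-hosting in 1.8 is enabled through a kubeadm feature gate. The configuration schema below is the assumed 1.8-era v1alpha1 format; check your kubeadm version's documentation for the exact spelling:

```yaml
# kubeadm config sketch (assumed 1.8-era v1alpha1 schema)
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
featureGates:
  SelfHosting: true   # experimental in 1.8; off by default
```

The same gate can also be passed on the command line, e.g. `kubeadm init --feature-gates=SelfHosting=true`.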
New ways to take control
Kubernetes 1.8 includes a number of different alpha-level features that provide more control over your cluster.
Many of the changes in Kubernetes 1.8 involve storage. For example, you can increase the size of a volume, though this is currently implemented only in the Gluster backend — and at this stage, it only increases the size of the volume, and doesn’t resize the filesystem.  Also, you can now use the Kubernetes API to create a volume snapshot. This functionality is actually at the “prototype” level; for the moment, it doesn’t stop any processes currently running on the volume — a process called “quiescing” — so there’s a possibility that your snapshot may be inconsistent. Still, it’s a look at what’s to come.
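For illustration, a resize in 1.8 is expressed by simply editing the PVC's storage request upward. This sketch assumes the alpha `ExpandPersistentVolumes` feature gate is enabled and a Gluster-backed StorageClass; the claim and class names are hypothetical:

```yaml
# PVC sketch: raising `storage` from an original 10Gi triggers expansion
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim              # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: glusterfs   # hypothetical Gluster-backed class
  resources:
    requests:
      storage: 20Gi             # was 10Gi; only the volume grows, not the filesystem
```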
On the server side, NFV developers in particular will be glad to hear of the arrival of alternative container-level affinity policies, as well as the ability to request pre-allocated hugepages.
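A pod requesting pre-allocated hugepages might look like the following sketch, assuming the alpha HugePages feature gate is on and the node has 2Mi pages pre-allocated; the pod and image names are placeholders:

```yaml
# Hugepages pod sketch (alpha in 1.8; requests must equal limits)
apiVersion: v1
kind: Pod
metadata:
  name: hugepage-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    resources:
      limits:
        hugepages-2Mi: 100Mi    # pre-allocated 2Mi hugepages
        memory: 100Mi
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```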
Perhaps the biggest feature, however, is that you now have the ability to create your own binary extensions to the kubectl Kubernetes client. You do this by creating a plugin that provides a new subcommand for kubectl.
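In the 1.8 alpha plugin mechanism, a plugin is described by a small descriptor file placed under `~/.kube/plugins/<name>/`; a minimal hypothetical example:

```yaml
# ~/.kube/plugins/greet/plugin.yaml (hypothetical plugin descriptor)
name: "greet"
shortDesc: "Prints a greeting from kubectl"
command: "echo Hello from kubectl"   # any binary or script on disk
```

With this in place, the subcommand would be invoked as `kubectl plugin greet` (the alpha mechanism nests plugins under `kubectl plugin`, rather than as top-level subcommands).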
Easier security
On the security front, Kubernetes 1.8 makes it possible to figure out exactly what permissions apply to a particular command.  K8s uses Role Based Access Control (RBAC), which can make things complicated, but you can now feed a file of roles, rolebindings, clusterroles, or clusterrolebindings to the kubectl auth reconcile command and get back a proper list of rules that includes all of the appropriate implied permissions.
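For example, a file like the following (a hypothetical read-only role) could be fed to `kubectl auth reconcile -f pod-reader.yaml` to have the missing rules and implied permissions computed and applied:

```yaml
# pod-reader.yaml (hypothetical ClusterRole; rbac.authorization.k8s.io went GA in 1.8)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]               # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```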
Also, there’s a new SelfSubjectRulesReview API (now in beta), which provides a list of actions that a particular user can perform in a particular namespace, which will make it easier for UI developers to show the appropriate choices.
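A SelfSubjectRulesReview request is a small object submitted to the API server, which fills in the status with the caller's allowed actions. A sketch, assuming the beta API group version:

```yaml
# ssrr.yaml (sketch; assumed beta group version in 1.8)
apiVersion: authorization.k8s.io/v1beta1
kind: SelfSubjectRulesReview
spec:
  namespace: default   # namespace whose rules you want enumerated
```

Creating this with `kubectl create -f ssrr.yaml -o yaml` returns the object with its status populated, listing the resource and non-resource rules that apply to the current user.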
Networking and Storage improvements
Networking and storage have seen some major work this cycle as well; it’s now possible to specify network policies not just for what can come into a pod, but also what can go out of it. You can also specify rules by IP block. These changes are considered beta.
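An egress policy with an IP block might be sketched as follows; the pod label and CIDR values are hypothetical:

```yaml
# NetworkPolicy sketch: restrict outbound traffic from `app: web` pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-egress            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web                  # hypothetical label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24       # hypothetical destination range
        except:
        - 10.0.0.5/32           # carve-out within the range
    ports:
    - protocol: TCP
      port: 443
```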
Also in “early access” alpha state is support for a new IP Virtual Server (IPVS) mode for kube-proxy, which is designed to provide both better performance and more sophisticated load balancing algorithms than the current iptables-based architecture.
Meanwhile, StorageClass now provides the opportunity to configure the reclaim policy for dynamically provisioned volumes, rather than always defaulting to delete. You can also use the new VolumeMount.Propagation field (still in alpha) to share mounts between containers, or even between containers and the host.
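Setting the reclaim policy is a one-line addition to the StorageClass; a sketch with a hypothetical name and provisioner:

```yaml
# StorageClass sketch: keep dynamically provisioned volumes after PVC deletion
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd            # hypothetical name
provisioner: kubernetes.io/gce-pd   # example provisioner; yours may differ
reclaimPolicy: Retain           # new in 1.8; previously always Delete
parameters:
  type: pd-ssd
```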
Developers have also been working on improving the ability to automatically discover and initialize new driver files, called Flexvolume drivers.
Look before you leap
Of course, an upgrade always means changes in behavior that you need to be aware of before committing to the new software so nothing bites you. For example, the release notes point out that “kubectl delete no longer scales down workload API objects prior to deletion. Users who depend on ordered termination for the Pods of their StatefulSet’s must use kubectl scale to scale down the StatefulSet prior to deletion.”
In fact, the release notes specify a number of specific actions you should take before upgrading.  Some are simple, such as changing the version specifications for your objects, but others are more deliberate, such as the removal of the deprecated ThirdPartyResource (TPR) API (migrate to CustomResourceDefinition to keep your data) and the fact that the pod.alpha.kubernetes.io/initialized annotation for StatefulSets is now ignored, so dormant StatefulSets for which this value is false “might become active after upgrading”.
Just be sure to check the release notes before you upgrade.
Source: Mirantis