OpenShift 4.2: Expanded Tools and Services for Developers

With the release of Red Hat OpenShift 4.2, developers have a lot to be excited about. New developer-facing tools and enhancements in Red Hat OpenShift 4.2 help improve the developer experience, allowing developers to be more productive.
Developer Perspective in the Web Console
The addition of the new Developer Perspective aims to give developers an optimized experience in the web console with the features and workflows they’re most likely to need to be productive. Developers can focus on higher level abstractions like their application and components, and then drill down deeper to get to the OpenShift and Kubernetes resources that make up their application, if desired.
An interactive Topology view makes it easier for developers to deploy and visualize their applications, and provides quick access to important features such as pod and build logs.
The Developer Perspective has several built-in ways to streamline the process of deploying applications, services and databases. There are options to build and deploy from code in a git repository, to deploy a container image, to deploy from the developer catalog, or from a Dockerfile or YAML/JSON definitions. In addition, you can easily deploy databases for your application to use. 

Clicking on most of these options will give you a wizard-style experience that prompts you for the necessary information.
 

Learn more about the Developer Perspective in this blog post and video.
odo: A Developer-focused Command Line Interface
odo is a developer-focused CLI that helps users write, deploy and test source code faster with OpenShift. Using a few CLI commands and a “git push” style interaction, developers can turn their source code into a running container on OpenShift.
In addition to working with source code changes, odo allows developers to manage other aspects of their deployed source code, such as creating a url for the application, linking a deployed application component to other application components deployed on OpenShift, viewing logs of deployed applications and more. odo helps developers focus on the source code they are writing for applications, rather than all the details of deploying that application component on Kubernetes.
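A typical inner-loop session looks something like the following sketch (component and application names are illustrative, and exact flags may vary between odo versions):

odo create nodejs myapp      # create a Node.js component from the current directory
odo push                     # build and deploy the source to OpenShift
odo url create --port 8080   # expose the component at a URL
odo log                      # stream logs from the running component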

Learn more about odo here.
Red Hat CodeReady Containers
Red Hat CodeReady Containers brings developers the ability to install a pre-built OpenShift environment locally on a laptop or desktop. CodeReady Containers enables local development for OpenShift and helps developers get started with OpenShift quickly and easily.
Learn more about CodeReady Containers in this blog post and video.
Red Hat OpenShift Connector for Microsoft Visual Studio Code, JetBrains IDE (including IntelliJ) and Eclipse Desktop IDE
The Red Hat OpenShift Connector allows developers who work with Red Hat OpenShift to use their preferred development environment without interruption. The extension provides a quick, simple way for developers to work their “inner loop” process of coding, building and testing directly, using their IDE.

Red Hat OpenShift Deployment Extension for Microsoft Azure DevOps
Users of this DevOps toolchain can now deploy their built applications to Azure Red Hat OpenShift, or any other OpenShift cluster directly from Microsoft Azure DevOps. This extension can be downloaded here.

Start Developing on OpenShift 4.2
Developers can access Red Hat developer tools and resources for OpenShift 4.2 along with code repositories, videos and articles at https://developers.redhat.com/openshift/. To get started with OpenShift 4.2 today, go to https://try.openshift.com.
The post OpenShift 4.2: Expanded Tools and Services for Developers appeared first on Red Hat OpenShift Blog.
Source: OpenShift

53 Things to look for in OpenStack Train

It’s been a while since we did one of these articles marking the release of a new OpenStack release, but with last week’s announcement of the updated Certified OpenStack Administrator exam, we thought it was high time to bring back the tradition.
OpenStack Train was released last week, and includes more than 25,500 code changes by 1,125 developers from 150 different companies. Most components have received many bug fixes and performance improvements, and some are finishing their transition to Python 3 by announcing that this is the last release to support Python 2.7.
Here’s the list, excerpted from the OpenStack Train release notes, organized by project.
Cinder – Block Storage service
Cinder provides on-demand, self-service access to Block Storage resources.

A number of drivers have added support for new features such as multi-attach and consistency groups.
Cinder now has upgrade checks for possible compatibility issues when upgrading to Train.
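For example, the check can be run on an existing deployment before upgrading (output varies by environment):

$ cinder-status upgrade check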

Designate – DNS service
Designate provides scalable, on demand, self service access to authoritative DNS services in a technology-agnostic manner.

Designate now provides full IPv6 support for the API control plane and the DNS data plane.

Glance – Image service
Glance provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions.

The Glance multi-store feature is now considered to be stable.
Cache prefetching is now done as a periodic task by the glance-api, removing the requirement to run it from cron.

Horizon – Dashboard
Horizon provides an extensible unified web-based user interface for all OpenStack services.

Volume multi-attach is now supported.
Horizon now supports the optional automatic generation of a Kubernetes configuration file.

Ironic – Bare Metal service
Ironic consists of an OpenStack service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner.

Ironic now provides basic support for building software RAID.
Ironic includes a new tool for building ramdisk images: ironic-python-agent-builder.

Keystone – Identity service
Keystone facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

All keystone APIs now use the default reader, member, and admin roles in their default policies. This means that it is now possible to create a user with finer-grained access to keystone APIs than was previously possible with the default policies. For example, it is possible to create an “auditor” user that can only access keystone’s GET APIs. Please be aware that depending on the default and overridden policies of other OpenStack services, such a user may still be able to access creative or destructive APIs for other services.
All keystone APIs now support system scope as a policy target, where applicable. This means that it is now possible to set [oslo_policy]/enforce_scope to true in keystone.conf, which, with the default policies, will allow keystone to distinguish between project-specific requests and requests that operate on an entire deployment. This makes it safe to grant admin access to a specific keystone project without giving admin access to all of keystone’s APIs, but please be aware that depending on the default and overridden policies of other OpenStack services, a project admin may still have admin-level privileges outside of the project scope for other services.
Keystone domains can now be created with a user-provided ID, which allows for all IDs for users created within such a domain to be predictable. This makes scaling cloud deployments across multiple sites easier as domain and user IDs no longer need to be explicitly synced.
Application credentials now support access rules, a user-provided list of OpenStack API requests for which an application credential is permitted to be used. This level of access control is supplemental to traditional role-based access control managed through policy rules.
Keystone roles, projects, and domains may now be made immutable, so that certain important resources like the default roles or service projects cannot be accidentally modified or deleted. This is managed through resource options on roles, projects, and domains. The keystone-manage bootstrap command now allows the deployer to opt into creating the default roles as immutable at deployment time, which will become the default behavior in the future. Roles that existed prior to running keystone-manage bootstrap can be made immutable via resource update.
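As a minimal sketch (names are illustrative), enabling scope enforcement is a single keystone.conf setting, and the default reader role is granted like any other role:

# keystone.conf
[oslo_policy]
enforce_scope = true

# grant the default reader role to an auditor user
$ openstack role add --user auditor --project audit-reports reader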

Manila – Shared File Systems service
Manila provides a set of services for management of shared file systems in a multitenant cloud environment, similar to the way OpenStack provides for block-based storage management through the Cinder project.

Manila share networks can now be created with multiple subnets, which may be in different availability zones.
NetApp backend added support for replication when DHSS=True.
GlusterFS back end has added support for extend/shrink for directory layout.
The Infortrend driver with support for NFS and CIFS shares has been added.
The CephFS backend now supports IPv6 exports and access lists.
The Inspur Instorage driver with support for NFS and CIFS shares has been added.
Support for modifying share type name, description and/or public access fields has been added.

Neutron – Networking service
Neutron implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

OVN can now send ICMP “Fragmentation Needed” packets, allowing VMs on tenant networks using jumbo frames to access the external network without any extra routing configuration.
When different subnet pools participate in the same address scope, the constraints disallowing subnets to be allocated from different pools on the same network have been relaxed. As long as subnet pools participate in the same address scope, subnets can now be created from different subnet pools when multiple subnets are created on a network. When address scopes are not used, subnets with the same ip_version on the same network must still be allocated from the same subnet pool.
A new API extension, extraroute-atomic, has been implemented for Neutron routers. This extension enables users to add or delete individual entries in a router’s routing table, instead of having to update the entire table as a whole (see the sketch after this list).
Support for L3 conntrack helpers has been added. Users can now configure conntrack helper target rules to be set for a router. This is accomplished by associating a conntrack_helper sub-resource to a router.
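A request sketch for the extraroute-atomic extension (the addresses are illustrative):

PUT /v2.0/routers/{router_id}/add_extraroutes
{
  "router": {
    "routes": [
      {"destination": "10.0.10.0/24", "nexthop": "10.0.0.11"}
    ]
  }
}

The matching remove_extraroutes action deletes individual entries the same way.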

Nova – Compute service
Nova implements services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.

Nova now includes live migration support for servers with a NUMA topology, pinned CPUs and/or huge pages, and/or SR-IOV ports attached when using the libvirt compute driver.
Support for cold migrating and resizing servers with bandwidth-aware Quality of Service ports attached has been added.
This release includes improved multi-cell resilience with the ability to count quota usage using the Placement service and API database.
A new framework supporting hardware-based encryption of guest memory to protect users against attackers or rogue administrators snooping on their workloads when using the libvirt compute driver has been added. Currently this framework only has basic support for AMD SEV (Secure Encrypted Virtualization).
Nova now has improved operational tooling for tasks such as archiving the database and healing instance resource allocations in Placement.
Coordination with the baremetal service during external node power cycles has been improved.
Support for VPMEM (Virtual Persistent Memory) has been added when using the libvirt compute driver. This provides data persistence across power cycles at a lower cost and with much larger capacities than DRAM, especially benefiting HPC and in-memory databases such as Redis, RocksDB, Oracle, SAP HANA, and Aerospike.
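As a rough sketch (the label and namespace names below are assumptions), exposing virtual persistent memory involves mapping host pmem namespaces in nova.conf and requesting a label through a flavor extra spec:

# nova.conf on the compute host
[libvirt]
pmem_namespaces = 6GB:ns0|ns1

# request a 6GB virtual persistent memory region via a flavor
$ openstack flavor set vpmem.small --property hw:pmem=6GB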

Octavia – Load-balancer service
Octavia provides scalable, on demand, self service access to load-balancer services in a technology-agnostic manner.

You can now apply an Access Control List (ACL) to the load balancer listener. Each port can have a list of allowed source addresses.
Octavia now supports Amphora log offloading. Operators can define syslog targets for the Amphora administrative logs and for the tenant load balancer connection logs.
Amphorae can now be booted using Cinder volumes.
The Amphora images have been optimized to reduce image size and memory consumption.
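For example, a listener’s ACL is expressed as a list of allowed source CIDRs (a sketch; the addresses are illustrative):

$ openstack loadbalancer listener set my-listener \
    --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24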

Placement – Placement service
The Placement service tracks cloud resource inventories and usages to help other services effectively manage and allocate their resources.

Placement now includes support for forbidden aggregates, which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads.
Support has been added for a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.

Swift – Object Storage service
Swift provides software for storing and retrieving lots of data with a simple API. It is built for scale and optimized for durability, availability, and concurrency across the entire data set.

Log formats are now more configurable and include support for anonymization.
Swift-all-in-one Docker images are now built and published to https://hub.docker.com/r/openstackswift/saio.
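A quick way to try it locally (a sketch; the published port assumes Swift’s default proxy port):

$ docker pull openstackswift/saio
$ docker run -d -p 8080:8080 openstackswift/saio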

Tacker – NFV Orchestration service
Tacker implements Network Function Virtualization (NFV) Orchestration services and libraries for end-to-end life-cycle management of Network Services and Virtual Network Functions (VNFs).

Tacker now includes support for force deleting VNF and Network Service instances.
Partial support for VNF packages has been added.

Blazar – Resource reservation service
Blazar’s goal is to provide resource reservations in OpenStack clouds for different resource types, both virtual (instances, volumes, etc) and physical (hosts, storage, etc.).

Blazar now includes support for a global request ID which can be used to track requests across multiple OpenStack services.

Cyborg – Accelerator resources for AI and ML

Cyborg (previously known as Nomad) is an OpenStack project that aims to provide a general purpose management framework for acceleration resources (i.e. various types of accelerators such as GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on).

Karbor – Data Protection Orchestration Service
Karbor implements services and libraries to provide project aware data-protection orchestration of existing vendor solutions.

Karbor now includes event notifications for plan, checkpoint, restore, scheduled and trigger operations.
Karbor now enables users to back up image-booted servers, including newly added data located on the root disk.

Kolla
Kolla provides production-ready containers and deployment tools for operating OpenStack clouds.

This release introduces images and playbooks for Masakari, which supports instance High Availability, and Qinling, which provides Functions as a Service.

Kuryr
Kuryr provides a bridge between container framework networking and storage models to OpenStack networking and storage abstractions.

Support has been added for tagging all the Neutron and Octavia resources created by Kuryr.

Senlin – Clustering service
Senlin implements clustering services and libraries for the management of groups of homogeneous objects exposed by other OpenStack services.

Senlin now has support for webhook v2: previously, the webhook API introduced microversion 1.10 to allow callers to pass arbitrary data in the body along with the webhook call.

Trove – Database service
Trove provides scalable and reliable Cloud Database as a Service functionality for both relational and non-relational database engines, and continues to improve its fully-featured and extensible open source framework.

The cloud administrator can now define the management resources for the Trove instance, such as keypair, security group, network, etc. Creating a Trove guest image is also now much easier for the cloud administrator or developer using the trovestack script, and users can expose the Trove instance to the public, with the ability to limit which source IP addresses can access the database.

Vitrage – RCA (Root Cause Analysis) service
Vitrage’s purpose is to organize, analyze and visualize OpenStack alarms and events, yield insights regarding the root cause of problems, and deduce their existence before they are directly detected.

This release adds new datasources for Kapacitor and Monasca, a new API for Vitrage status and template versions, and support for database upgrades with the alembic tool.

Watcher – Infrastructure Optimization service
Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

This release adds a ‘force’ field to Audit. The user can set --force to enable the new option when launching an audit. In addition, Grafana has been added as a datasource that can be used for collecting metrics, and Watcher can now get data from Placement to improve its compute data model.

Zun – Containers service
Zun provides an OpenStack containers service that integrates with various container technologies for managing application containers on OpenStack.

The Zun compute agent now reports local resources to the Placement API, and the Zun scheduler gets allocation candidates from the Placement API and claims container allocations.

The post 53 Things to look for in OpenStack Train appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenShift 4.2: The New Cluster Overview Dashboard

Red Hat OpenShift 4.2 is a significant release that brings a number of great enhancements to the Web Console UI, but you’ll notice one of the biggest changes as soon as you log in.
The Cluster Overview Dashboard is the new default landing page of the OpenShift Console and provides a bird’s-eye view of your cluster’s current health, inventory, capacity, utilization, and activity to make identifying problems and resolving issues easier and faster.
This post will briefly cover what this dashboard is made of, but we know from using it ourselves these past few months that static screenshots won’t quite do it justice. We’re really excited for you to try this new dashboard out in your own clusters, and our User Experience Design team would love to hear any feedback and suggestions you have for future improvements.
A window into your cluster

The Overview Dashboard is designed to answer the following at a quick glance:

What am I looking at?
What’s in this cluster?
Is everything okay?
Is there enough capacity?
What is the cluster up to?

The Details card in the top-left corner answers the first question with the cluster’s ID, provider, and current version. The Inventory card directly below it includes the quantity and current statuses of the nodes, pods, and persistent volume claims within the cluster, with clickable status counters that act as quick shortcuts to the full list pages of those objects.
The Health card in the center of the dashboard is (hopefully) pretty mundane most of the time, with a green check mark indicating that no degraded systems or actively firing alerts from AlertManager require your attention.
The Capacity and Utilization cards visualize how much resource headroom is available at the moment and how the cluster’s utilization of CPU, Memory, and Disk space has changed over the last hour.
The Events card in the top-right streams in the same cluster-wide events that can be found in the dedicated Events page, with any warning events highlighted.
Finally, the Top Consumers card in the bottom-right corner helps identify the highest consumers of CPU time, Memory, Storage I/O time, and Network bandwidth so that any outliers greatly affecting the cluster’s capacity can be addressed.
Surfacing alerts, warnings, and errors
A perfectly happy and healthy dashboard is always great to see, but the value of including all of this information in one place becomes super clear when things in the cluster go wrong.

Any firing alerts from AlertManager are collected within an Alerts section of the Health card. Resolving those alerts is generally the best first step toward restoring the health of the cluster, but the additional context provided by the object statuses in the Inventory card, errors in the Events card, and resource measurements in the Capacity and Utilization cards can often help to better understand the full scope of each problem.
Funnily enough, we know this because the dashboard started to help our internal OpenShift developers identify and troubleshoot issues almost as soon as the first cards appeared! The errors that bubbled up to the dashboard frequently kicked off discussions that tangibly improved the entire OpenShift product family, and seeing it do so was an absolute thrill (that we probably should have expected).
We can’t wait to hear how it helps you and your team as well!
What’s next
The introduction of the Cluster Overview Dashboard in OpenShift 4.2’s Console UI is just one of the many ways we’re improving the observability and management experience of OpenShift, and there’s so much more ahead to look forward to. If you’d like a sneak peek, follow the OpenShift Console and OpenShift Design GitHub repositories to see some of the ideas we’re thinking about for future iterations.
Our User Experience Design team would love to hear your ideas and feedback on this new dashboard and any other areas of the Console UI. Please fill out this feedback survey or sign up to participate in future research opportunities.
 
The post OpenShift 4.2: The New Cluster Overview Dashboard appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Accelerating the application containerization journey

Container strategy has become an essential part of any enterprise’s cloud transformation journey. The flexibility and cost optimizations that containers bring to the table have led many enterprises to adopt a “container first” strategy.
While this is a good guiding principle, many challenges come into play while executing this strategy. The primary challenge is to decide which of the existing applications in an enterprise’s IT landscape can be containerized. Most enterprises have application portfolios built on a range of technologies and platforms. While applications built on modern technologies such as Java are ideal for containerization, established applications on platforms like the mainframe may not be a good fit.
Questions to help guide the application containerization journey
It’s important to evaluate the organization’s existing applications and how these assets may transfer to containers. Questions that should be considered before enterprises begin the containerization journey include:

Which applications in the business portfolio can be containerized?
Is there a real benefit in containerizing an application?
Should a large monolith application be decomposed before it is containerized?
How can application containerization be done faster?
What is the best container platform for the enterprise?
What are the other extensions needed in the container platform to accommodate the applications?
Is there real value in containerizing all the applications?
Will the containerized applications be able to meet existing Service Level Agreements (SLAs)?
What will the upskill path be for developers?

 
The four phases of application containerization
It is important to have a well-defined approach that is supported by accelerators and skills to address the key concerns of enterprises, while realizing the containerization journey.

A “containerize first, transform next” approach is ideal in most cases when containerizing the existing application portfolio. The journey typically consists of four phases:
1. Application assessment for containerization
In this phase, the application portfolio is assessed to determine the container affinity of each application. The outcome of this assessment shows the “ease of containerizing” each application in the portfolio. This assessment can be accelerated with purpose-built assessment tooling that captures various attributes of the applications and associated infrastructure. When applied to the collected data, the built-in rules determine the cloud affinity value for the application.
Once the container affinity assessment is complete, an initial list of applications for containerization will be available. The target images and patterns needed can be determined from this list.
It is important to confirm the business case of containerizing the applications. The “as is” cost of running the applications, the transformation cost and the target state run cost will be captured to calculate the ROI. This is a key input for the decision to move forward or not. Once a containerization journey begins, key execution accelerators, like containerization playbooks, are created.
IBM and Red Hat have other assets, like IBM Cloud Transformation Advisor and Red Hat Application Migration Toolkit, that provide detailed recommendations to containerize JEE applications. These tools provide the steps to be followed to containerize JEE applications for WebSphere and Red Hat targets, respectively.
2. Execution planning
The dependencies of applications are an important aspect to consider prior to defining the containerization execution plan. Application groups are formed with dependent applications. IBM has multiple assets that collect data from the application runtimes and derive the application dependency model. The containerization waves can be defined based on the analysis of these dependency models. Each wave can include one or more application groups.
3. Containerize, lift and shift
The “containerize first, transform next” approach can incorporate the benefits of containers upfront in most cases. This is the general recommended approach for transforming existing applications. The approach typically will not perform any code changes to the application.
The steps that will be performed are related to finalizing the container image and integrating the application with the DevOps toolchain. This will typically have a well-defined set of steps and can be executed in a Migration Factory Model.
There could be a scenario where a monolith application is too large. In this case, it is best to decompose the application prior to containerization. There may also be a scenario where an existing application cannot be containerized; in such a case, rearchitecting the application should be considered.
End-to-end DevOps is a critical element for the accelerated execution of the containerization journey. A DevOps assessment of the applications in scope should be conducted, while the toolchain should be optimized to meet the needs of the containerized applications.
4. Transform applications
This is the last step in the approach and may not be applicable for all containerized applications. There may be scenarios where it is ideal to modernize an already containerized application to newer architectures like the microservices architecture. This is determined based on detailed analysis of the applications in scope. Transformations usually include rearchitecting and reengineering the applications. Development Squads following the IBM Garage practices are ideal for delivering application transformations.
Streamline application containerization with the right tools
IBM has a variety of methods, tools, assets and service offerings that can streamline an enterprise’s hybrid cloud journey using containers. For example, Red Hat OpenShift Container Platform (RHOCP) is the market-leading Kubernetes-based container platform. RHOCP abstracts most of the complex tasks associated with container platforms, making deploying and managing containers easier. IBM Cloud Paks provide enterprise-ready, containerized software solutions that give clients an open, faster and more secure way to move core business applications to any cloud. IBM Cloud Paks run on RHOCP, and this combination provides a foundation for accelerated application containerization.
Visit IBM Services for Cloud to learn more on how IBM can help in your hybrid cloud journey.
 
The post Accelerating the application containerization journey appeared first on Cloud computing news.
Source: Thoughts on Cloud

Cycle Trailing Projects and RDO’s Latest Release Train

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Train for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Train is the 20th release from the OpenStack project, which is the work of more than 1,115 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-train/.
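On CentOS 7, a typical quickstart is to enable the repository via the release RPM (the package name follows the usual RDO pattern) and then update:

$ sudo yum install -y centos-release-openstack-train
$ sudo yum update -y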
BUT!
This is not the official announcement you’re looking for.
We’re doing something a little different this cycle – we’re waiting for some of the “cycle-trailing” projects that we’re particularly keen about, like TripleO and Kolla, to finish their push BEFORE we make the official announcement.
Deployment and lifecycle-management tools generally want to follow the release cycle, but because they rely on the other projects being completed, they may not always publish their final release at the same time as those projects. To that effect, they may choose the cycle-trailing release model.
Cycle-trailing projects are given an extra three months after the final release date to request publication of their release. They may otherwise use intermediary releases or development milestones.
While we’re super hopeful that these cycle-trailing projects will be uploaded to the CentOS mirror before Open Infrastructure Summit Shanghai, we’re going to do the official announcement just before the Summit with or without the packages.
We’ve got a lot of people to thank!
Do you like that we’re waiting a bit for our cycle-trailing projects, or would you prefer the official announcement as soon as the main projects are available? Let us know in the comments and we may adjust the process for future releases!
In the meantime, keep an eye here or on the mailing lists for the official announcement COMING SOON!
Source: RDO

5 ways hyperlocal climate forecasting can help businesses worldwide

How could your business or organization benefit from access to accurate, hyperlocal weather data? Think of data relevant to a very specific area, like a single farm, construction site or city block. At Emnotion, we excel in climate forecasting at a hyperlocal level.
A few years ago, we launched our climate forecasting services with the intent of giving farmers critical weather insights to help them plan ahead with foresight. Since then, we’ve discovered that once businesses have good, stable weather data and know how to work with it, they can make a lot of other good things happen.
We based our company in our home country of Israel, where farmers produce enough food to feed nine million residents. They also export produce and flowers globally. Given our arid, drought-prone land, the Israeli government promotes water efficiency by establishing water prices and quotas. In addition, it requires farmers to sign multiyear contracts to receive irrigated water.
Needless to say, farmers must carefully plan ahead to help ensure profitable yields, balancing weather forecasts with water costs, planting cycles, consumer demand and multiple other variables. Unfortunately, farmers cannot rely on local and regional weather service providers to consistently deliver accurate forecasts. As a result, they often contract to receive too little water, in part because they simply don’t anticipate a drought coming.
A cloud-native business goes global
We had our sights on helping Israeli farmers, but we also wanted to help other weather-prone industries, such as construction. To develop our climate forecasting services, we needed affordable, accurate weather data for any place in the world, delivered on an open cloud infrastructure designed for high availability. We found this infrastructure with IBM.
We quickly combined our proprietary algorithms and methodology with raw climate data from The Weather Company, an IBM Business, supported by IBM Cloud. In a short period, our cloud-native company was open for business.
Customers can access our virtual weather assistant through our private cloud. Here, they can receive short-, mid- and long-range forecasts, customized for their particular climate-related risks.
Backed by IBM global weather and data center networks, we can reliably operate anywhere. Already, we have branched out beyond Israel to help customers in Eastern Europe, Australia, South Asia and North America.
5 ways planning for unpredictable weather pays off
Here are just five ways Emnotion can help customers worldwide using The Weather Company data:

Farmers can contract to receive just enough water to foster crop health and better anticipate consumer demand for seasonal, weather-prone crops.
Tower crane operators can help prevent accidents and damage by securing cranes and cargoes before wind gusts hit their worksites.
Insurance providers can offer construction companies that rely on IBM weather data more affordable coverage rates.
Commercial and military drone operators can ground drones before airways grow turbulent.
City officials can advise citizens with heart and lung conditions to stay inside if high pressure systems are expected to intensify air pollution.

Given increasingly erratic global weather patterns, the list will only grow.
Teaming with IBM, we launched our services in just six months. We also saved on capital investments and streamlined our ongoing operational costs. In addition, all of our employees can now focus on growing the business rather than managing infrastructure and Internet of Things (IoT) devices.
Finally, the IBM name gives our small company an extra boost of credibility. Many people ask us where our weather data comes from. Once we tell them it comes from IBM, they say “Okay.”
Read the case study for more details.
The post 5 ways hyperlocal climate forecasting can help businesses worldwide appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift 4.2: New YAML Editor

Through our built-in YAML editor, users can create and edit resources right in the Red Hat OpenShift Web Console UI. In the latest release, we’ve upgraded our editor to include language server support.
What is language server support?
The language server support feature uses the OpenAPI schema from Kubernetes to provide content assist inside the YAML editor based on the type of resource you are editing. More specifically, the language server support offers the following capabilities:

Improved YAML validation: The new editor provides feedback in context, directing you to the exact line and position that requires attention.
Document outlining: Document outlines offer a quick way to navigate your code.
Auto completion: While in the editor, language server support will provide you with valid configuration information as you type, allowing you to edit faster.
Hover support: Hovering over a property will show a description of the associated schema.
Advanced formatting: Format your YAML.

 
Why did we add language server support?
Because it’s awesome! With language server support, we improve users’ workflow by making it faster and easier to create and edit resources in the YAML editor. Specifically for novice users, these features can accelerate the learnability of Kubernetes concepts. This experience may look familiar; this utilizes the same editor and language server framework that is used in other tools like Eclipse Che or Visual Studio (VS) Code editor.
How can you use language server support?
Language server support is available in all YAML editors in the console, whether you are creating or editing resources.
Hover over a key to see more information. For example, while creating a Pod you can hover over a key such as containers. This will give you more information about the containers in the Pod and how you can define them using YAML.
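For reference, here is a minimal Pod manifest showing where those keys live (the image reference is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:   # hovering over this key shows its schema description
  - name: app
    image: quay.io/example/app:latest
    ports:
    - containerPort: 8080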

When available, the property help will link out to more documentation around that property.

Use Ctrl + Space to activate auto complete.

If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.2 features, please take this brief 3-minute survey.

The post OpenShift 4.2: New YAML Editor appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift 4.2: Declarative Dynamic UI for your Operator

When building a Kubernetes-native application, CustomResourceDefinitions (CRDs) are the primary way to extend the Kubernetes API with custom resources. This post will cover generating a creation form for your custom resource based on an OpenAPI-based schema. Afterward, we are going to talk about the value you can get through Operator Descriptors to fulfill more complex interactions and improve the overall usability of your application.
Generate creation form based on OpenAPI schema
Many of our partners (ISVs) have certain requirements when building a UI form to guide users creating an instance of their application or custom resource managed by their Operators. Starting from Kubernetes 1.8, CustomResourceDefinitions (CRDs) gained the ability to define an optional OpenAPI v3 based validation schema. In Kubernetes 1.15 and beyond, any new feature for CRDs will be required to have a structural schema. This is important not only for data consistency and security, but it also enables the potential to design and build a richer UI to improve the user experience when creating or mutating custom resources.
 
For example, here is a CRD manifest from one of our partners:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: couchbaseclusters.couchbase.com
spec:
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            cluster:
              properties:
                …
                autoFailoverServerGroup:
                  type: boolean
                autoFailoverTimeout:
                  maximum: 3600
                  minimum: 5
                  type: integer
                …
              required:
              …
              - autoFailoverTimeout
              …

 
With the validation info, we can start associating these fields with corresponding UI input fields. Since the autoFailoverServerGroup field expects a boolean data type, we can render it as a checkbox, a radio button, or a toggle switch. As for the autoFailoverTimeout field, we can simply limit the input to an integer between 5 and 3600. We can also denote that autoFailoverTimeout is a required field, while autoFailoverServerGroup is optional. So far, everything looks good. However, things start to get complicated for other data types or complex nested fields.
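For instance, a custom resource fragment that would pass this validation could look like the following (the values are hypothetical, and the apiVersion shown is an assumption):

apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: example-cluster
spec:
  cluster:
    autoFailoverServerGroup: false   # boolean, rendered as a checkbox or toggle
    autoFailoverTimeout: 120         # integer, constrained to the 5-3600 range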
 
From our partners who build Operators, one common use case we see is that the custom resource needs a Secret object as a prerequisite for creating an instance. In the CRD manifest, this would be specified similar to the code snippet below:

  …
  properties:
    credentials:
      type: string
      …

 
As we can see, the only viable validation from OpenAPISchema checks only that the data type is “string” and is fairly limited in terms of usability. Wouldn’t it be great if the UI could provide a searchable dropdown list of all the existing Secrets in your cluster? It could not only speed up the filling process but also reduce possible human errors compared with manual entry. This is where Operator Lifecycle Manager (OLM) descriptors come in.
Operator Descriptors enhancements
Prerequisites

1. Install Operator Lifecycle Manager

The Operator Lifecycle Manager (OLM) can be installed with one command on any Kubernetes cluster and interacts with the OKD console. If you’re using Red Hat OpenShift 4, OLM is pre-installed to manage and update the Operators on your cluster.

2. Generate an Operator manifest, the ClusterServiceVersion

Generate the ClusterServiceVersion (CSV) that represents the CRDs your Operator manages, the permissions it requires to function, and other installation information with the OLM. See Generating a ClusterServiceVersion (CSV) for more information on generating one with the Operator SDK, or on manually defining a manifest file. You’ll only have to do this once, then carry these changes forward for successive releases of the Operator.
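For example, the Operator SDK of this era can scaffold a CSV with a command along these lines (the version number is illustrative):

$ operator-sdk olm-catalog gen-csv --csv-version 1.1.0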
specDescriptors
OLM introduces the notion of “descriptors” of both spec and status fields in Kubernetes API responses. Descriptors are intended to indicate various properties of a field in order to make decisions about their content. The schema for a descriptor is the same, regardless of type:

type Descriptor = {
  path: string; // Dot-delimited path of the field on the object
  displayName: string;
  description: string;

  /* Used to determine which "capabilities" this descriptor has, and which
     React component to use */
  'x-descriptors': SpecCapability[] | StatusCapability[];
  value?: any; // Optional value
}

 
The x-descriptors field can be thought of as “capabilities” (and is referenced in the code using this term). Capabilities are defined in types.ts and provide a mapping between descriptors and different UI components (implemented as React components) using a URN format.
 

k8sResourcePrefix descriptor 

 
Recall the use case previously mentioned for specifying a Kubernetes resource in the CRD manifest. k8sResourcePrefix is the OLM descriptor for this purpose:

k8sResourcePrefix = 'urn:alm:descriptor:io.kubernetes:',

 
Let’s take CouchbaseCluster as an example to see how this descriptor can be adopted in the Couchbase Operator’s CSV file. First, inside the CRD manifest (couchbasecluster.crd.yaml):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  name: couchbaseclusters.couchbase.com
spec:
  …
  names:
    kind: CouchbaseCluster
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            authSecret:
              minLength: 1
              type: string
            …
            tls:
              properties:
                static:
                  properties:
                    member:
                      properties:
                        serverSecret:
                          type: string
                      type: object
                    operatorSecret:
                      type: string
                  type: object
              type: object
          required:
          …
          - authSecret
          …

 
The validation block specifies that a Secret object (authSecret) storing the admin credential is required for creating a CouchbaseCluster custom resource. And for TLS (tls), it requires two additional Secret objects: one as the Server Secret (tls.static.member.serverSecret), and the other as the Operator Secret (tls.static.operatorSecret).
 
To utilize the OLM Descriptors, inside Couchbase Operator’s CSV file, we first specify the k8sResourcePrefix descriptor as a “Secret” object (urn:alm:descriptor:io.kubernetes:Secret) and then point it to the fields on the CouchbaseCluster CRD object in the “path” field.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: couchbase-operator.v1.1.0
  …
spec:
  customresourcedefinitions:
    owned:
    - description: Manages Couchbase clusters
      displayName: Couchbase Cluster
      kind: CouchbaseCluster
      name: couchbaseclusters.couchbase.com
      …
      specDescriptors:
      - description: The name of the secret object that stores the admin
          credentials.
        displayName: Auth Secret
        path: authSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret
        …
      - description: The name of the secret object that stores the server's
          TLS certificate.
        displayName: Server TLS Secret
        path: tls.static.member.serverSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret
        …
      - description: The name of the secret object that stores the
          Operator's TLS certificate.
        displayName: Operator TLS Secret
        path: tls.static.operatorSecret
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes:Secret

 
Let’s take a closer look. In the CSV file:

Under the spec.customresourcedefinitions.owned section (i.e. the CRDs owned by this Operator, of which there can be several), specify the metadata of your custom resource.
Since this is for “creating or mutating” the custom resource, assign the k8sResourcePrefix descriptor under the specDescriptors section as input.
description and displayName are pretty straightforward: they are displayed on the UI as help text and field title.
path points to the field on the object, as a dot-delimited path into the CRD (i.e. couchbasecluster.crd.yaml).
x-descriptors is assigned the `k8sResourcePrefix` descriptor, with the resource type specified as “Secret” in “urn:alm:descriptor:io.kubernetes:Secret”.

Now, let’s take a look in the OpenShift console. We will have to install the Couchbase Operator from “OperatorHub” first so the Operator is ready to be used on the cluster.
We can create a CouchbaseCluster instance via the “Installed Operators” view:

Next, switch to the form view and see that “searchable dropdown component” shows up on the UI. This component allows us to look for existing Secrets on the cluster and is pointed to the corresponding fields on the CouchbaseCluster object. It’s that simple.

resourceRequirements descriptor 

Specifying how much CPU and memory (RAM) each container needs for a pod is another use case worth mentioning. Again, in the couchbasecluster.crd.yaml manifest, we can see the fields for specifying the resource limits and requests of a running pod in bold:


spec:
  …
  names:
    kind: CouchbaseCluster
  …
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            …
            servers:
              items:
                properties:
                  name:
                    minLength: 1
                    pattern: ^[-_a-zA-Z0-9]+$
                    type: string
                  pod:
                    properties:
                      …
                      resources:
                        properties:
                          limits:
                            properties:
                              cpu:
                                type: string
                              memory:
                                type: string
                              storage:
                                type: string
                            type: object
                          requests:
                            properties:
                              cpu:
                                type: string
                              memory:
                                type: string
                              storage:
                                type: string
                            type: object
                      …

 
As we can see, these fields are nested and could be tricky to convert and organize into form fields. Alternatively, we can take advantage of the resourceRequirements descriptor by including it in the Couchbase Operator’s CSV file and pointing it to the resources field of the CouchbaseCluster object.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: couchbase-operator.v1.1.0
  …
spec:
  customresourcedefinitions:
    owned:
    - description: Manages Couchbase clusters
      displayName: Couchbase Cluster
      kind: CouchbaseCluster
      name: couchbaseclusters.couchbase.com
      …
      specDescriptors:
        …
      - description: Limits describes the minimum/maximum amount of compute
          Resources required/allowed.
        displayName: Resource Requirements
        path: servers[0].pod.resources
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:resourceRequirements
        …

 
The Resource Requirements React component will then show up in the UIs for creating or mutating your custom resource in the OpenShift console. For example, in the Create Couchbase Cluster view, the UI shows both Limits and Requests fields:

On the other hand, in the CouchbaseCluster Details view, you can access the widget to configure the Resource Limits and Requests, as shown in the screenshots below:

nodeAffinity, podAffinity, and podAntiAffinity descriptors

Assigning your running pods to nodes can be achieved with the affinity feature, which consists of two types of affinity: Node Affinity and Pod Affinity/Pod Anti-affinity. In the CRD manifest, these affinity-related fields can be very nested and fairly complicated (see the nodeAffinity, podAffinity, and podAntiAffinity fields in alertmanager.crd.yaml).
 
Similarly, we can leverage the nodeAffinity, podAffinity, and podAntiAffinity descriptors and point them to the affinity field of the Alertmanager object.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: prometheusoperator.0.27.0
  …
spec:
  …
  displayName: Prometheus Operator
  …
  customresourcedefinitions:
    owned:
    …
    - name: alertmanagers.monitoring.coreos.com
      version: v1
      kind: Alertmanager
      displayName: Alertmanager
      description: Configures an Alertmanager for the namespace
      …
      specDescriptors:
        …
        - description: Node affinity is a group of node affinity scheduling rules
          displayName: Node Affinity
          path: affinity.nodeAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:nodeAffinity
        - description: Pod affinity is a group of inter pod affinity scheduling rules
          displayName: Pod Affinity
          path: affinity.podAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:podAffinity
        - description: Pod anti affinity is a group of inter pod anti affinity scheduling rules
          displayName: Pod Anti-affinity
          path: affinity.podAntiAffinity
          x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:podAntiAffinity
        …

 
Later, when we create an Alertmanager instance in the console, we will see those UI widgets with clear visual grouping, along with input instructions guiding how to specify affinity using key/value pairs with a logical operator. The “operator” field is a dropdown that provides the viable options, and the “value” field is enabled or disabled dynamically based on the operator specified. For “preferred” rules, a weight is also required.

Through talking with customers, we’ve learned the majority of our users treat the UI as the medium to learn and explore the technical or API details. The Affinity descriptor is one good example of the desired UX we strive to provide.

statusDescriptors

So far we have covered the OLM descriptors for the spec fields in Kubernetes API responses. In addition, OLM also provides a set of statusDescriptors for referencing fields in the status block of a custom resource. Some of them come with an associated React component, too, for richer interactions with the API. One example is the podStatuses descriptor.

podStatuses descriptor 

The podStatuses statusDescriptor is usually paired with the podCount specDescriptor. Users can specify the desired size of the custom resource being deployed with the podCount specDescriptor, while the podStatuses statusDescriptor provides a dynamic graphical widget that better represents the latest member status of the custom resource being created or mutated.

Following the same pattern, in the code snippet below, we can see how the etcd Operator applies the podCount and podStatuses descriptors in its CSV file for users to create, mutate, and display the EtcdCluster custom resource in the console.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: etcdoperator.v0.9.4
  …
spec:
  …
  displayName: etcd
  …
  customresourcedefinitions:
    owned:
    …
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.
      …
      specDescriptors:
        - description: The desired number of member Pods for the etcd cluster.
          displayName: Size
          path: size
          x-descriptors:
          - 'urn:alm:descriptor:com.tectonic.ui:podCount'
        …
      statusDescriptors:
        - description: The status of each of the member Pods for the etcd cluster.
          displayName: Member Status
          path: members
          x-descriptors:
          - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        …

What’s next?
We hope the content and examples covered in this post will trigger community-wide discussions on how to improve the overall UX of Operator-managed applications for end users. If you would like to learn more about OLM Descriptors, check out the GitHub page, where you can see the full list of specDescriptors and statusDescriptors that are currently available. Share your experience or feedback on Operator Lifecycle Manager (OLM) via GitHub issues. If you want to explore more and contribute to Operator Descriptors, check out the contributing guide.
 
The post OpenShift 4.2: Declarative Dynamic UI for your Operator appeared first on Red Hat OpenShift Blog.
Source: OpenShift

3 critical capabilities of a multicloud solution

For today’s modern enterprise, the benefits of a hybrid multicloud IT strategy are tried and true. According to an IBM Institute for Business Value Survey, 98 percent of companies plan to operate in a multicloud environment within the next three years, but fewer than half have multicloud management strategies in place.
Managing a multicloud environment does introduce complexity and can be difficult without the right tools and strategy. In order to optimize performance, control costs and support a complicated mix of applications, you need an effective means to handle working across multiple clouds from multiple vendors.
Cloud functions and technologies from different vendors all come with their own management and operation tools. Until you integrate your multiple clouds, the true benefits of a multicloud environment—including overall reduction in IT costs, improved operational speed and agility, and better IT and business alignment—will remain untapped. A truly robust and unified solution will have a few key capabilities for managing the complexity of a hybrid multicloud environment.
3 things to look for in a multicloud management solution
An effective IT management solution provides a clear view of all applications and tools to ensure accordance with compliance and security standards. The most effective multicloud management solution will also help enable automation and scale easily across the enterprise.
1. Visibility
It’s critical to know where your business application components are running. You have to monitor the health of resources (such as deployments, pods and Helm releases) across Kubernetes environments, whether they’re in public or private clouds, and in the appropriate business context.
2. Governance
As cloud-native environments proliferate across the enterprise, DevOps teams are tasked with ensuring that these environments are managed according to enterprise governance and security policies. A single dashboard provides you with a consistent set of configuration and security policies for managing an increasing number of cloud-native components.
3. Automation
Whether an application is cloud native or traditional, it’s crucial to efficiently manage and deliver services through end-to-end automation while enabling developers to build applications aligned with enterprise policies. Just as important is a consistent and flexible way to deploy applications across environments, including backup and disaster recovery options and the ability to move workloads. You’ll also need the ability to provision, configure and deliver individual Kubernetes clusters as a service in any cloud.
The right cloud management solution unlocks multicloud benefits
While your business might be on the right track by having adopted a hybrid multicloud infrastructure, only a sound and well-suited management solution can unlock the true value of cloud. Without a new approach specifically geared for effective multicloud management, cloud environments remain disconnected, and may prove to be more of a liability than an advantage.
However, a comprehensive IT management solution will integrate scattered workloads, bridge security gaps and provide the necessary visibility to optimize multiple cloud environments and streamline business operations.
Read the smart paper “Manage IT: Orchestrate and simplify multicloud” to learn more about IBM multicloud management solutions and our comprehensive approach to optimizing the cloud for improved multicloud deployment.
 
The post 3 critical capabilities of a multicloud solution appeared first on Cloud computing news.
Source: Thoughts on Cloud

Community Blog Round Up 21 October 2019

Just in time for Halloween, Andrew Beekhof has a ghost story about the texture of hounds.
But first!
Where have all the blog round ups gone?!?
Well, there’s the rub, right?
We don’t usually post when there’s only one post (or none) from our community to round up, but this has been the only post for WEEKS now, so here it is.
Thanks, Andrew!
But that brings us to another point.
We want to hear from YOU!
RDO has a database of bloggers who write about OpenStack / RDO / TripleO / Packstack things and while we’re encouraging those people to write, we’re also wondering if we’re missing some people. Do you know of a writer who is not included in our database? Let us know in the comments below.
Savaged by Softdog, a Cautionary Tale by Andrew Beekhof
Hardware is imperfect, and software contains bugs. When node level failures occur, the work required from the cluster does not decrease – affected workloads need to be restarted, putting additional stress on surviving peers and making it important to recover the lost capacity.
Read more at http://blog.clusterlabs.org/blog/2019/savaged-by-softdog
Source: RDO