Improved Offline Publishing

The best technology is invisible and reliable. You almost forget it’s there, because things just work. Bad technology never disappears into the background — it’s always visible, and worse, it gets in your way. We rarely stop to think “My, what good Wi-Fi!” But we sure notice when the Wi-Fi is iffy.

Good technology in an app requires solid offline support. A WordPress app should give you a seamless, reliable posting experience, and you shouldn’t have to worry whether you’re online or offline while using WordPress Mobile. And if we’ve done our jobs right, you won’t have to! 

We all need fewer worries in life, so if you haven’t already, head to https://apps.wordpress.com/get/ to download the apps.

Offline Publishing

On the go and without a connection? No worries! The apps will now remember your choices, and once you’re back online, your content will be saved and published as requested. And if you change your mind about publishing a post while you’re still offline, you can still safely cancel it.

The new Offline Publishing flow.

This improved publishing flow comes together with a revamped UI for your post status. You’ll be able to clearly see which posts are pending, saving, or publishing.

Smoother Messaging

We removed several alerts that were presented while you were offline. These blocking alerts required you to take action but often provided no insight into what the problem was or how to resolve it.

They have been replaced with contextual, non-blocking messages both within the UI and in notices appearing right above the toolbar.

As a result, you’ll see fewer disruptive, uninformative alerts and more inline, informative messages, such as the one shown above.

Safeguards

We also added some safeguards to ensure there are no surprises!

You can cancel offline publishing.

Modifying posts that are scheduled for publishing will cancel the publishing action. Don’t worry, though – you can always reschedule the post for publishing.

All queued save and publishing operations will be canceled if your device stays offline for more than 48 hours.  We want you to be in complete control of what gets published and when.
Source: RedHat Stack

OpenShift 4.3: The Project Launcher

In Red Hat OpenShift 4.2, we introduced a number of new console customization features, including ConsoleNotifications, ConsoleExternalLogLinks, ConsoleLinks, and ConsoleCLIDownloads. New in 4.3, the ConsoleLink feature has been extended to cover even more use cases. In addition to the User Menu, Help Menu, and Application Menu, users can now add links to specific project dashboards.

You can add a launcher card to a project dashboard by using the ConsoleLink CRD. Each of the links in the above screenshot is a separate ConsoleLink instance.

Users can define namespaceDashboard, namespaceSelector, and a namespaces array to further customize their launcher.

namespaceDashboard holds information about the namespaces in which the dashboard link should appear, and it applies only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces.

namespaceSelector selects, by label, the namespaces that should contain the dashboard link. If a namespace’s labels match, the dashboard link is shown in that namespace.

namespaces is an array of namespace names in which the dashboard link should appear.
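For instance, a link scoped by label rather than by name might look like the following sketch (the team: frontend label is a hypothetical example, not taken from the product documentation):

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-selector-dashboard
spec:
  href: 'https://www.example.com'
  location: NamespaceDashboard
  text: Team Dashboard Link
  namespaceDashboard:
    # Hypothetical label selector: the link appears in every
    # namespace labeled team=frontend.
    namespaceSelector:
      matchLabels:
        team: frontend
```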

Take a closer look at the sample. We used namespaceDashboard to show the link in a single namespace; in this case, we’re adding it to the namespace ‘my-project’.
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-namespace-dashboard
spec:
  href: 'https://www.example.com'
  location: NamespaceDashboard
  text: Namespace Dashboard Link
  namespaceDashboard:
    namespaces:
      - my-project

This simple YAML will create the Launcher card below. Remember that you can add multiple links to the launcher card to easily display relevant external links. Operators can also use CRDs to add their own project level links in an automated fashion.

The new ConsoleLink location makes it easier than ever for cluster admins to link to project-specific applications relevant to a particular project. For adding cluster-wide scoped links to the User Menu, Help Menu, and Application Menu, check out our console customization blog.
If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: The Project Launcher appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Accessible bookcast streaming service becomes securely available on IBM Cloud

Accessible entertainment isn’t often easily accessible. The mainstream entertainment industry doesn’t cater to people who can’t see or hear well; people with PTSD, autism or epilepsy; or those who are learning English as a second language. These population segments can become isolated, and then marginalized when they can’t enjoy the latest feature film – even if it has subtitles – or best-selling book.
Wenebojo is a new streaming service designed to help solve that challenge with bookcasts. A bookcast is an immersive experience that combines audio narration, pictures and closed captioning. Packaged for easy consumption, each short story or series of episodes is available on demand on a smart TV, tablet, phone or computer.
Wenebojo, named for a mythical Native American storyteller, is designed to be inclusive, and transition people from television to reading. Bookcasts are accessible to people of all abilities, and as such the platform is a mashup of entertainment and a social initiative.
The offering comes from All Chaos Press, a subsidiary of Solitaire Interglobal (SIL).
Piracy protection: Keeping the intellectual capital in bookcasts secure
The Wenebojo parent company, Solitaire Interglobal (SIL), is a predictive performance service provider and has been doing complex modeling since 1978. This year, SIL will do well over a quarter billion security and performance models and we have literally trillions of data points that show us where security breaches occur, what platforms they cluster around and the conditions surrounding breaches. We know the impact of security breaches, how much it costs, what gets lost, how long it takes to recover and so on.
Wenebojo is a disruptive technology and there is no acceptable risk profile for data loss or variability in delivery of service. To support the Wenebojo bookcasts, SIL chose IBM Z on Cloud, paired with the IBM Cloud Hyper Protect Services solution and built on the IBM LinuxONE platform.
Being able to keep both client data and the intellectual capital of the writers secure is a big thing. There’s so much pirating of intellectual capital that we needed something that was going to be as hacker-proof as possible. We also needed to fit a fairly complex architecture, because we need the ability to scale based on what side of our system is being stressed, whether it’s the streaming back end or the customer-interfacing travelogue front end. We ran a huge number of models to determine what the impact was and what the risk factors were. We determined that we couldn’t find the support we needed anyplace other than IBM Z on Cloud.
Scaling the experience: Handling expected platform growth
For PTSD sufferers or those with epilepsy, Wenebojo is a godsend because there are no big explosions and no triggers. For those learning English, putting something that reads to them and displays the full, correct text as it goes along helps them learn to read. This is proving to be very important for English-as-a-second-language (ESL) students, because even the best ESL programs can’t build vocabulary like intriguing stories can.
Bookcasts are better than books on tape because we created it in such a way that a commuter can click on the bookcast and their 20-minute commute is exactly one chapter. Or if you’re waiting in the doctor’s office, you can play something that’s very short. They’re also the perfect length for a bedtime story.
We have predicted that Wenebojo will grow and keep growing for at least seven years before it levels off, so we need the platform to be scalable. We can’t wait six months to put in a new machine. And with the IBM Cloud, we don’t have to. We call up IBM and say, “We need to expand. We just got this influx of people.” And they’re able to get everything up and running in less than a day.
We’re seeing interest from a variety of potential customers, including prison systems, school systems and hospitals. With IBM, we’re ready for rapid growth as more and more people begin to stream Wenebojo bookcasts.
Read the case study for more details.
The post Accessible bookcast streaming service becomes securely available on IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift 4.3: Alertmanager Configuration

Alerts are only useful if you know about them. That’s why we’re working on adding features to Red Hat OpenShift to make it easier for you to find out about potential problems and solve them before they become incidents. The new cluster overview dashboard is great for looking at the status of a cluster. But to get information when you might be away from your cluster, you’ll need to correctly configure your alerting system. One of the first things you should do when you set up a cluster is to use the tools described in this post. Without correct configuration you will not get critical alerts outside of the OpenShift console, and may miss out on features designed to reduce your mean time to resolution.
Alertmanager Configuration

OpenShift 4.3 contains a new Alertmanager section on the cluster settings page. The options it provides make it easier than ever to tell OpenShift’s monitoring tools how and where to send you notifications.

The first group is the alert routing settings. These fields determine how alerts are grouped into notifications and how long to wait before sending the notifications. Those notifications are then sent to Receivers that can be created and edited from the bottom of the page.
Receivers
Every OpenShift cluster needs a default receiver to handle any alerts not sent to other places. The default receiver that comes with a fresh install is initially very basic, so your first step should be to configure it to suit your needs. For more complex team structures, you may want to send different kinds of alerts to different places by creating more receivers. The easiest way to do this is to simply click the Create Receiver button. We currently offer forms for two types of receivers, webhook and PagerDuty, with more form types coming in the future.

Once you’ve entered the necessary details for the receiver, you can add some routing labels to decide which alerts will be sent there. For instance, you could send warning alerts to an email address and critical alerts to a specific Slack channel.

You can use these forms to create a robust alerting system. But for really complex configuration, it helps to go straight to the source. Switch to the YAML tab to view the raw version of your config and make any necessary changes. You can also use this view to create receiver types that are not currently supported by forms.
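For reference, the raw configuration in the YAML tab follows the upstream Alertmanager schema. A minimal sketch with a default receiver and a critical-severity route might look like this (the receiver names, webhook URL, and Slack URL are hypothetical placeholders):

```yaml
route:
  group_by: ['alertname']   # batch alerts with the same name into one notification
  group_wait: 30s           # wait before sending the first notification for a group
  repeat_interval: 12h      # resend interval for still-firing alerts
  receiver: Default
  routes:
    - match:
        severity: critical  # routing label: critical alerts go to Slack
      receiver: critical-slack
receivers:
  - name: Default
    webhook_configs:
      - url: 'https://example.com/alert-hook'            # hypothetical endpoint
  - name: critical-slack
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXX'  # hypothetical Slack webhook
        channel: '#alerts'
```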
Information in the right places
Using the new Alertmanager configuration tools in OpenShift 4.3, you can direct alerts to the teams that need them, and avoid bothering teams that don’t. These features are part of an effort to make problem-solving in OpenShift simpler and reduce time to resolution. Follow along with the OpenShift Console and OpenShift Design GitHub repositories to see what new work is happening. If you’d like to provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: Alertmanager Configuration appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Top 5 DevOps predictions for 2020

There are five DevOps trends that I believe will leave a mark in 2020. I’ll walk you through all five, plus some recommended next steps to take full advantage of these trends.
In 2019, Accenture’s disruptability index discovered that at least two-thirds of large organizations are facing high levels of industry disruption. Driven by pervasive technology change, organizations pursued new and more agile business models and new opportunities. Organizations that delivered applications and services faster were able to react more swiftly to those market changes and were better equipped to disrupt, rather than becoming disrupted. A study by the DevOps Research and Assessment Group (DORA) shows that the best-performing teams deliver applications 46 times more frequently than the lowest performing teams. That means delivering value to customers every hour, rather than monthly or quarterly.
2020 will be the year of delivering software at speed and with high quality, but the big change will be the focus on strong DevOps governance. The desire to take a DevOps approach is the new normal. We are entering a new chapter that calls for DevOps scalability, for better ways to manage multiple tools and platforms, and for tighter IT alignment to the business. DevOps culture and tools are critical, but without strong governance, you can’t scale. To succeed, the business needs must be the driver. The end state, after all, is one where increased IT agility enables maximum business agility. To improve trust across formerly disconnected teams, common visibility and insights into the end-to-end pipeline will be needed by all DevOps stakeholders, including the business.
 
DevOps trends in 2020
What will be the enablers and catalysts in 2020 driving DevOps agility?
Prediction 1: DevOps champions will enable business innovation at scale. From leaders to practitioners, DevOps champions will coexist and share desires, concerns and requirements. This collaboration will include the following:

A desire to speed the flow of software
Concerns about the quality of releases, release management, and how quality activities impact the delivery lifecycle and customer expectations
Continual optimization of the delivery process, including visualization and governance requirements

Prediction 2: More fragmentation of DevOps toolchains will motivate organizations to turn to value streams. 2020 will be the year of more DevOps for X, DevOps for Kubernetes, DevOps for clouds, DevOps for mobile, DevOps for databases, DevOps for SAP, etc. In the coming year, expect to see DevOps for anything involved in the production and delivery of software updates, application modernization, service delivery and integration. Developers, platform owners and site reliability engineers (SREs) will be given more control and visibility over the architectural and infrastructural components of the lifecycle. Governance will be established, and the growing set of stakeholders will get a positive return from having access and visibility to the delivery pipeline.
Figure 1: UrbanCode Velocity and its Value Stream Management screen enable full DevOps governance.
 
Prediction 3: Tekton will have a significant impact on cloud-native continuous delivery. Tekton is a set of shared open-source components for building continuous integration and continuous delivery systems. What if you were able to build, test and deploy apps to Kubernetes using an open source, vendor-neutral, Kubernetes-native framework? That’s the Tekton promise, under a framework of composable, declarative, reproducible and cloud-native principles. Tekton has a bright future now that it is strongly embraced by a large community of users along with organizations like Google, CloudBees, Red Hat and IBM.
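To give a flavor of those declarative, Kubernetes-native principles, a minimal Tekton Task can be expressed as a custom resource like the sketch below (based on the v1alpha1 API current as of this writing; the names are illustrative):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    # Each step runs as a container in the Task's pod.
    - name: echo
      image: ubuntu
      command: ['echo']
      args: ['Hello Tekton']
```

Because the Task is just a Kubernetes resource, it is reproducible and vendor-neutral: any Tekton-enabled cluster can run it.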
Prediction 4: DevOps accelerators will make DevOps kings. In the search for holistic optimization, organizations will move from providing integrations, and move to creating sets of “best practices in a box.” These will deliver what is needed for systems to talk fluidly, but also remain auditable for compliance. These assets will become easier to discover, adopt and customize. Test assets that have been traditionally developed and maintained by software and system integrators will be provided by ambitious user communities, vendors, service providers, regulatory services and domain specialists.
Prediction 5: Artificial intelligence (AI) and machine learning in DevOps will go from marketing to reality. Tech giants, such as Google and IBM, will continue researching how to bring the benefits of DevOps to quantum computing, blockchain, AI, bots, 5G and edge technologies. They will also continue to look at how technologies can be used within the activities of continuous deployment, continuous software testing prediction, performance testing, and other parts of the DevOps pipeline. DevOps solutions will be able to detect, highlight, or act independently when opportunities for improvement or risk mitigation surface, from the moment an idea becomes a story until a story becomes a solution in the hands of their users.
 
Next steps
Companies embracing DevOps will need to carefully evaluate their current internal landscape, then prioritize next steps for DevOps success.
First, identify a DevOps champion to lead the efforts, beginning with automation. Establishing an automated and controlled path to production is the starting point for many DevOps transformations and one where leaders can show ROI clearly.
Then, the focus should turn toward scaling best practices across the enterprise and introducing governance and optimization. This includes reducing waste, optimizing flow and shifting security, quality and governance to the left. It also means increasing the frequency of complex releases by simplifying, digitizing and streamlining execution.
Figure 2: Scaling best practices across the enterprise.
 
These are big steps, so accelerate your DevOps journey by aligning with a vendor that has a long-term vision and a reputation for helping organizations navigate transformational journeys successfully. OVUM and Forrester have identified organizations that can help support your modernization in the following OVUM report, OVUM webinar and Forrester report.
Do you agree with these predictions? Do you have any others? Maybe an early 2020 DevOps success story? Looking forward to reading those on Twitter at @IBMCloud.
 
The post Top 5 DevOps predictions for 2020 appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift 4.3: Managing Catalog Sources in the OpenShift Web Console

An Operator Lifecycle Manager (OLM) CatalogSource is a collection of operator metadata. OLM uses CatalogSources to build the list of available operators that can be installed from OperatorHub in the OpenShift web console. In OpenShift 4.3, the web console has added support for managing the out-of-the-box CatalogSources as well as adding your own custom CatalogSources.
You can create a custom CatalogSource using the OLM Operator Registry. The operator-framework/operator-registry repository has a Dockerfile to build a minimal registry server image using some example manifests and can be used to create your own custom catalog images. Run the following commands (replacing quay.io/my-organization/example-registry with your own repository) to build the example. You’ll need a repository at an image registry like quay.io for your image.
git clone https://github.com/operator-framework/operator-registry.git
cd operator-registry
docker build -t quay.io/my-organization/example-registry:latest -f upstream-example.Dockerfile .
docker push quay.io/my-organization/example-registry:latest

To add the catalog source in OpenShift console, log in as a cluster administrator. Go to the Cluster Settings page under the Administration navigation section and click on the Global Configuration tab. This page lists the configuration resources for various cluster components and is a great place to explore what cluster configuration is available.
Click the OperatorHub link and then click the Sources tab to see the currently installed CatalogSources.

You’ll see several already present on the cluster, which populate OperatorHub with Red Hat, Certified and Community operators. These default catalog sources can now be enabled and disabled inside OpenShift console from the action menu for each table row.
To add a new CatalogSource, click Create Catalog Source. In the form, give the CatalogSource a name and optionally a display name and publisher. Enter the image name you built above. You can choose whether the CatalogSource is available for all namespaces or a single namespace.
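Behind the form, the console creates a CatalogSource resource. A sketch of the roughly equivalent YAML, using the image built above, might look like this (display name and publisher are placeholders):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  # openshift-marketplace makes the source available cluster-wide;
  # use another namespace to scope it to a single namespace.
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/my-organization/example-registry:latest
  displayName: Example Catalog
  publisher: Example Inc.
```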

Click Create. You’ll see the new catalog source in the list. After a few minutes, OLM will load the catalog, and console will show the number of operators available.

Visit the OperatorHub page under the Operators navigation section to see the new operators! Operators from a custom CatalogSource will be categorized as Provider Type “Custom,” and you can use the Provider Type “Custom” filter on the left side of the page to see just the new operators.

If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: Managing Catalog Sources in the OpenShift Web Console appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Create a new OCP application deployment using Ceph RBD volume (Rails + PostgreSQL)

> Note: This example application deployment assumes you already have an OpenShift test cluster or have followed the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 Blog to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.

In this blog, the ocs-storagecluster-ceph-rbd storage class will be used by an OCP application-plus-database deployment to create RWO (ReadWriteOnce) persistent storage. The persistent storage will be a Ceph RBD (RADOS Block Device) volume (object) in the Ceph pool ocs-storagecluster-cephblockpool.
To do so we have created a template file, based on the OpenShift rails-pgsql-persistent template, that includes an extra parameter STORAGE_CLASS that enables the end user to specify the storage class the Persistent Volume Claim (PVC) should use. Feel free to download https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/configurable-rails-app.yaml to check on the format of this template. Search for STORAGE_CLASS in the downloaded content.
oc new-project my-database-app
curl https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/configurable-rails-app.yaml | oc new-app -p STORAGE_CLASS=ocs-storagecluster-ceph-rbd -p VOLUME_CAPACITY=5Gi -f -
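Inside the downloaded template, the parameterized PVC stanza might look roughly like this sketch (the parameter names STORAGE_CLASS and VOLUME_CAPACITY come from the template; the exact contents may differ, so check the downloaded file):

```yaml
# Sketch of the parameterized PersistentVolumeClaim object in the template.
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: postgresql
  spec:
    storageClassName: ${STORAGE_CLASS}   # the extra parameter added to the template
    accessModes:
      - ReadWriteOnce                    # RWO, backed by a Ceph RBD volume
    resources:
      requests:
        storage: ${VOLUME_CAPACITY}
```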

After the deployment starts, you can monitor its progress with these commands.
oc status
oc get pvc -n my-database-app
This step could take 5 or more minutes. Wait until there are 2 Pods in Running STATUS and 4 Pods in Completed STATUS as shown below.
watch oc get pods -n my-database-app
Example output:
NAME READY STATUS RESTARTS AGE
postgresql-1-deploy 0/1 Completed 0 5m48s
postgresql-1-lf7qt 1/1 Running 0 5m40s
rails-pgsql-persistent-1-build 0/1 Completed 0 5m49s
rails-pgsql-persistent-1-deploy 0/1 Completed 0 3m36s
rails-pgsql-persistent-1-hook-pre 0/1 Completed 0 3m28s
rails-pgsql-persistent-1-pjh6q 1/1 Running 0 3m14s

You can exit by pressing Ctrl+C.
Once the deployment is complete you can now test the application and the persistent storage on Ceph. Your HOST/PORT will be different.
oc get route -n my-database-app
Example output:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
rails-pgsql-persistent rails-pgsql-persistent-my-database-app.apps.cluster-a26e.sandbox449.opentlc.com rails-pgsql-persistent

Copy your rails-pgsql-persistent route (different from the one above) into a browser window to create articles. You will need to append /articles to the end.
Example http:///articles
Enter the username and password below to create articles and comments. The articles and comments are saved in a PostgreSQL database that stores its table spaces on the Ceph RBD volume provisioned using the ocs-storagecluster-ceph-rbd storage class during the application deployment.
username: openshift
password: secret
Let’s now take another look at the Ceph ocs-storagecluster-cephblockpool pool created by the ocs-storagecluster-ceph-rbd storage class. Log into the toolbox pod again.
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD
Run the same Ceph commands as before the application deployment and compare the results to those from the prior section. Notice that the number of objects in ocs-storagecluster-cephblockpool has increased. The third command lists RBDs, and we should now have two.
ceph df
rados df
rbd -p ocs-storagecluster-cephblockpool ls | grep vol

You can exit the toolbox by either pressing Ctrl+D or by executing exit.
Matching PVs to RBDs
A handy way to match persistent volumes to Ceph RBDs is to execute:
oc get pv -o 'custom-columns=NAME:.spec.claimRef.name,PVNAME:.metadata.name,STORAGECLASS:.spec.storageClassName,VOLUMEHANDLE:.spec.csi.volumeHandle'

Example output:
NAME PVNAME STORAGECLASS VOLUMEHANDLE
ocs-deviceset-0-0-gzxjb pvc-1cf104d2-2033-11ea-ac56-0a9ccb4b29e2 gp2
ocs-deviceset-1-0-s87xm pvc-1cf33c42-2033-11ea-ac56-0a9ccb4b29e2 gp2
ocs-deviceset-2-0-zcjk4 pvc-1cf4f825-2033-11ea-ac56-0a9ccb4b29e2 gp2
db-noobaa-core-0 pvc-3008e684-2033-11ea-a83b-065b3ec3da7c ocs-storagecluster-ceph-rbd 0001-0011-openshift-storage-0000000000000001-3c0bb177-2033-11ea-9396-0a580a800406
postgresql pvc-4ca89d3d-2060-11ea-9a42-02dfa51cba90 ocs-storagecluster-ceph-rbd 0001-0011-openshift-storage-0000000000000001-4cbba393-2060-11ea-9396-0a580a800406
rook-ceph-mon-a pvc-cac661b6-2032-11ea-ac56-0a9ccb4b29e2 gp2
rook-ceph-mon-b pvc-cde2d8b3-2032-11ea-ac56-0a9ccb4b29e2 gp2
rook-ceph-mon-c pvc-d0efbd9d-2032-11ea-ac56-0a9ccb4b29e2 gp2
lab-ossm-hub-data pvc-dc1d4bdc-2028-11ea-ad6c-0a9ccb4b29e2 gp2

The second half of the VOLUMEHANDLE column mostly matches what your RBD is named inside of Ceph. All you have to do is append csi-vol- to the front. Get the full RBD name of our PostgreSQL PV in one command:
oc get pv pvc-4ca89d3d-2060-11ea-9a42-02dfa51cba90 -o jsonpath='{.spec.csi.volumeHandle}' | cut -d '-' -f 6- | awk '{print "csi-vol-"$1}'

> Note: You will need to adjust the above command to use your PVNAME name
Example output:
csi-vol-4cbba393-2060-11ea-9396-0a580a800406
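The extraction is pure text processing, so it can be sketched and sanity-checked on a fixed sample handle without a cluster (using the volumeHandle from the table above):

```shell
# Derive the Ceph RBD image name from a CSI volumeHandle.
# Dash-separated fields 6 onward form the volume UUID; the RBD
# image name is that UUID prefixed with "csi-vol-".
HANDLE="0001-0011-openshift-storage-0000000000000001-4cbba393-2060-11ea-9396-0a580a800406"
echo "$HANDLE" | cut -d '-' -f 6- | awk '{print "csi-vol-"$1}'
# prints: csi-vol-4cbba393-2060-11ea-9396-0a580a800406
```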
Now we can check on the details of our RBD from inside of the tools pod:

TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD rbd -p ocs-storagecluster-cephblockpool info csi-vol-4cbba393-2060-11ea-9396-0a580a800406

Example output:
rbd image 'csi-vol-4cbba393-2060-11ea-9396-0a580a800406':
size 5 GiB in 1280 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 95e4f3973e8
block_name_prefix: rbd_data.95e4f3973e8
format: 2
features: layering
op_features:
flags:
create_timestamp: Tue Dec 17 00:00:57 2019
access_timestamp: Tue Dec 17 00:00:57 2019
modify_timestamp: Tue Dec 17 00:00:57 2019

> Note: You will need to adjust the above command to use your RBD name

Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Create a new OCP application deployment using Ceph RBD volume (Rails + PostgreSQL) appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift 4.3: Dashboard refinements and the new Project dashboard

The Cluster Overview dashboard we introduced in Red Hat OpenShift 4.2 was a significant and well-received addition to the Web Console, and our team has greatly enjoyed seeing how OpenShift users (and even our own developers) have been using it to identify and resolve issues they otherwise may not have noticed.
We’ve made a number of changes both big and small to the dashboard based on our user research findings and the feedback we’ve collected from readers like you. This post covers some of the key improvements and introduces a new member of the dashboard family that we think developers in particular are going to love.
If you’d like to help us make these dashboards even better in the future, consider filling out our survey to provide feedback and sign up for future research opportunities.
Let’s dive in.
Visual polish

With the help of visual designers on our User Experience Design team and a closer adherence to the open source PatternFly design system, the 4.3 dashboard provides a much cleaner first impression. The new Red Hat font, adjusted spacing, fewer separators, and more clearly-defined charts make this the cleanest window into OpenShift clusters yet.
More signal, less noise

Dashboards have a tendency to become disorganized smörgåsbords of cards and information that compete for attention as they gain more functionality. When that kind of noise starts to creep in, figuring out what’s important and what actions need to be taken at a glance can become difficult.
Our team made a conscious effort in 4.3 to reduce the amount of visual noise from the previous iteration and boost the important signals that make managing and fixing clusters easier. Along with a variety of smaller iconography and behavioral tweaks, we were able to combine three cards into one without losing any functionality and integrate links to other areas of the Console (like alert details pages) when they become contextually relevant.
These changes make the dashboard a more effective launching point than ever, requiring less guesswork and fewer clicks to find and fix issues.
Card-by-card refinements
There are a ton of little details I could dig into, but for the sake of your eyes and my fingers I’ll cover the highlights and let the rest be nice little surprises. If you prefer to spoil nice little surprises, take a peek at our detailed design documentation in the OpenShift Design GitHub repository.

The Details card now includes the cluster’s API address, a link to the OpenShift Cluster Manager, the current upgrade channel, and a handy link to the cluster’s settings page. You’ll also see a version upgrade notice here when one is available.
Resources in the Inventory card can now be clicked to view the full list page of each type, and the card now includes the cluster’s Storage Classes as well.

The Status card regains the same subsystem statuses that were available in the Cluster Status page of 4.0, with the response rates of each control plane component included in a popover. The alerts that appear in this card also now include links to view additional information about each one.

When the Red Hat Quay operator is installed, an additional Quay Image Security status appears that flags any container vulnerabilities found, with links to the Quay console for additional information. Learn more about the Console’s Quay integration in this post.

The revised Activity card condenses all events into denser, easily-scannable rows and highlights certain long-running operations (like cluster updates) in a new Ongoing section so they can’t easily be missed.

Finally, the Utilization card now incorporates the best pieces of 4.2’s Capacity and Top Consumers cards to show the current resource usage and remaining capacity in one place. Clicking any measurement reveals the top consumers of each resource by either project, node, or pod, and the card’s charts can be adjusted to show the last 1, 6, or 24 hours of resource consumption.
The new Project Dashboard

Forgive me for saving the biggest news for last, but the new Project dashboard (which is also available in the Developer perspective, of course) provides the same familiar dashboard experience but scoped to project-level resources and metrics.
The Inventory card includes the quantity and statuses of deployments, pods, PVCs, services, routes, config maps, and secrets, and the Utilization card includes the network utilization and historical pod count of the current project. Any resource quotas applied to the project are also included, and an additional Launcher card with relevant external links can be added via the new ConsoleLink CRD.
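As a rough sketch of what the new ConsoleLink CRD enables, a resource like the following could add a custom link to a project dashboard's Launcher card. The link text, URL, and namespace here are invented for illustration; consult the OpenShift documentation for the full schema.

```yaml
# Hypothetical ConsoleLink resource (illustrative names and URL).
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-launcher-link
spec:
  location: NamespaceDashboard   # surface the link on project dashboards
  namespaceDashboard:
    namespaces:
      - my-project               # only show it for this project
  text: Team Runbooks
  href: https://wiki.example.com/runbooks
```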
What’s next
New features, refinements, and surprises, of course!
Check out the OpenShift Console and OpenShift Design GitHub repositories if you’d like a sneak peek, but until then, we would really love to hear your ideas and feedback! Please fill out our brief survey to get in touch, or sign up to participate in future OpenShift research opportunities.
The post OpenShift 4.3: Dashboard refinements and the new Project dashboard appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Top Questions: Containers and VMs Together

The post Top Questions: Containers and VMs Together appeared first on Mirantis | Pure Play Open Cloud.
(original post date: 10/1/19)
We had a great turnout to our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era” and tons of questions came in via the chat — so many that we weren’t able to answer all of them in real-time or in the Q&A at the end. We’ll cover the answers to the top questions in two posts (yes, there were a lot of questions!).
First up, we’ll take a look at IT infrastructure and operations topics, including whether you should deploy containers in VMs or make the leap to containers on bare metal.
VMs or Containers?

Among the top questions was whether users should run a container platform on bare metal or on top of their existing virtual infrastructure. Not surprising, given the webinar topic.

A Key Principle: one driver for containerization is to abstract applications and their dependencies away from the underlying infrastructure. It’s our experience that developers don’t often care about the underlying infrastructure (or at least they’d prefer not to). Docker and Kubernetes are infrastructure agnostic. We have no real preference.
The goal – yours and ours: provide a platform that developers love to use, AND provide the operational and security tools required to keep your applications running in production and maintain the platform.
So VMs or containers?  It depends. If all of your operational expertise and tooling is built around virtualization, you might not want to change that right out of the gate when you deploy a container platform. On the other hand, if cost reduction or performance overhead is more important to you, maybe you’ll decide you don’t want to pay for a hypervisor anymore. The good news — when you containerize applications, you can likely reduce the number of VMs and underlying servers by at least 30 or 40 percent.

Whatever you do, avoid making a container platform decision that’s driven purely by your infrastructure of today. If developers feel like they don’t have flexibility, they quickly adopt their own tools, creating a second wave of shadow IT.
Containers and Networking
We didn’t go very deep on networking or storage in the webinar because they’re topics that can easily fill multiple webinars on their own, but there were a few common questions.
How do you connect multiple containers together and expose the applications to external users and services?

For simple applications, you can use the networking tools that are built right into both the Docker Engine and Kubernetes.
If you’re running a container and want to map an external port to an internal port you can add a simple parameter to a command to open communications:
docker run --publish <external port>:<internal port> <image name>
Both Swarm and Kubernetes allow two containers to communicate with each other while keeping that connection hidden from external traffic. You can do that very simply on a Docker Engine or Swarm cluster using Docker Compose to define your services and their networks. In Kubernetes, you of course have similar capabilities.
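As a minimal sketch of the Docker Compose approach (service names, images, and ports here are illustrative), a Compose file can publish only the web tier while keeping the database reachable solely from other services on a private network:

```yaml
# docker-compose.yml (illustrative): "web" is published to the host,
# while "db" publishes no ports and is hidden from external traffic.
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"      # external port 8080 -> container port 80
    networks:
      - backend
  db:
    image: postgres:12-alpine
    environment:
      POSTGRES_PASSWORD: example
    networks:
      - backend        # reachable only via the internal network
networks:
  backend:
```

Running `docker-compose up` would let `web` reach `db` by service name over the `backend` network, while only port 8080 is exposed to outside users.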

What are Calico and Tigera, and how do they fit in to the networking design? What about NSX or other networking solutions?

For more advanced applications or those running in production, you might want additional features and capabilities beyond what the built-in networking drivers support. When you have hundreds or thousands of containers, you’ll need a better way to handle routing, discovery, security and other network concerns at scale. The Kubernetes community supports more advanced networking plugins through a standardized Container Networking Interface (CNI). CNI plugins provide enhanced capabilities you won’t find in the default network drivers.
Project Calico, by Tigera, is one of the most common open source CNI plugins you will find in Kubernetes, and it works for both Linux and Windows containers. Tigera works closely with the Kubernetes community to define and contribute to the CNI standard. Docker Enterprise includes Calico as our “batteries included, but swappable” CNI plugin.
For enterprises that need even greater security and management capabilities, including auditing, reporting, greater scale, and integration with service meshes, you might look to a commercial product like Tigera Secure.
CNI ensures a standard networking interface that can support different ecosystem solutions. If you’re a VMware customer and already invested in NSX, you might go that route instead.
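To make the "security at scale" point concrete, one thing a NetworkPolicy-capable CNI plugin such as Calico enforces is the standard Kubernetes NetworkPolicy resource. The labels, namespace, and port below are invented for illustration:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=frontend may reach
# app=backend pods on TCP 5432. Enforcement requires a CNI plugin that
# supports NetworkPolicy (e.g., Calico).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: my-project
spec:
  podSelector:
    matchLabels:
      app: backend          # policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods are allowed in
      ports:
        - protocol: TCP
          port: 5432
```

The same declarative policy applies whether the cluster runs on bare metal or on VMs, which is part of why the CNI abstraction matters.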

Sizing and Optimizing Infrastructure for Containers
We received quite a few questions about how to size a design for Docker & Kubernetes. They boiled down to two main questions:
What’s the maximum number of containers you can run on a single Docker host? 

The answer: it depends. Remember, containers are just processes that consume RAM and CPU directly from the host. Since there’s no hypervisor layer or additional guest OS between the application and the host, a host should be able to run at least as many containerized processes as it ran before containerization. In fact, most organizations end up running around 40% more work on a host, because multiple applications can share the same base OS and because many VMs are over-provisioned.
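To make the density argument concrete, here is a back-of-envelope sketch. Every number in it (host RAM, per-app RAM, overheads) is an invented assumption for illustration; real sizing depends on your workloads, and this sketch also ignores the VM over-provisioning that pushes real-world gains toward the 40% figure above.

```python
# Back-of-envelope host-density sketch. All constants are illustrative
# assumptions, not measurements.
HOST_RAM_GB = 64
APP_RAM_GB = 3            # average RAM one application instance needs
GUEST_OS_OVERHEAD_GB = 1  # per-VM guest OS + hypervisor overhead
HOST_OS_RESERVE_GB = 4    # RAM reserved for host OS, kubelet, runtime

def vms_per_host():
    # Each VM pays the guest OS overhead on top of the app's needs.
    return HOST_RAM_GB // (APP_RAM_GB + GUEST_OS_OVERHEAD_GB)

def containers_per_host():
    # Containers share the host kernel: only the app RAM counts,
    # after a single host-level reserve.
    return (HOST_RAM_GB - HOST_OS_RESERVE_GB) // APP_RAM_GB

vm_count = vms_per_host()         # 64 // 4 = 16
ct_count = containers_per_host()  # 60 // 3 = 20
gain = (ct_count - vm_count) / vm_count
print(vm_count, ct_count, f"{gain:.0%}")  # prints: 16 20 25%
```

Even with these conservative made-up numbers, removing the per-VM guest OS overhead yields a 25% density gain on the same hardware.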

How do I size my environment for Docker/Kubernetes?

As you start thinking about running containers in production, you should look at bringing in expertise to help guide you through this exercise. Similar to the previous answer, on average we see about a 40% reduction in the number of VMs, but that average is across a broad set of applications and you’ll want to learn how to estimate as you go forward and add more applications. We’ve seen customers do it on their own, but it takes time and you end up learning a lot from your own mistakes before you get it right.  You can greatly accelerate your path with a little help.

Getting Started
We saved this one for last because this is by far the area with the most questions. Fortunately, they’re much easier questions to answer and many of the resources are free!
Where can I learn more about optimizing and managing containers?

If you want to know more about Docker Trusted Registry, Docker Kubernetes Service, and the Universal Control Plane, you can get a free hosted trial of Docker Enterprise; an introductory walkthrough is provided.
Want classes and training with an instructor? We have that, too. There are classes for operators and developers; and classes for Kubernetes and Security.

Next week, we’ll cover questions about Kubernetes and Docker together, software pipelines and trusted content.
Source: Mirantis

OpenShift 4.3: Spoofing a User

Imagine you’re a cluster administrator managing a huge number of users. A user reaches out to you with a problem: “My console is broken.” There’s seemingly an infinite number of possible explanations for why this user can’t access the console. However, you can’t see their system and they have difficulty explaining what the console is doing. The Red Hat OpenShift team recently met with a university customer whose admins frequently run into this scenario. Luckily, OpenShift 4.3’s web console UI addresses this exact problem. New to 4.3, we’ve introduced the ability to spoof users and groups.
Users of Red Hat OpenShift have long had the ability to impersonate Role Bindings through the web console UI. However, since the addition of the user management section, users and administrators can now spoof other users and groups.
To impersonate a user, simply navigate to the Users page under the User Management section of the navigation. Open the menu for a particular user and select “Impersonate User ‘[user]’”. In our case, we are impersonating the user “Ali.”

Note that a user can have multiple role bindings; thus, by spoofing the user instead of a role binding, system administrators see the console with that user’s exact authorization credentials. This allows for much quicker and easier troubleshooting, helping admins resolve issues faster.
Once you’ve elected to impersonate a user, you’ll be brought to the Projects page in their view. A console notification banner will appear to remind you that you are viewing the console from Ali’s perspective. You can stop by clicking the “Stop Impersonation” link in the banner.
Administrators can also choose to impersonate a group. To do this, simply navigate to the Groups page under the User Management section of the navigation. Just as we did for user impersonation, open the menu for a particular group and select “Impersonate Group ‘[group]’”. In this case, we are impersonating the group “system:authenticated”.

You can also access impersonation tools from the Role Bindings page. Again, open the menu for a particular role binding and subject and select “Impersonate User ‘[User]’”.
Access is also available from the Role Bindings tab within a particular role.

We know that helping others troubleshoot can be a huge pain point for our users, so we hope that this new feature helps administrators resolve issues faster and more easily. To learn more about our other user management updates, check out our User Management Improvements blog.
If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: Spoofing a User appeared first on Red Hat OpenShift Blog.
Source: OpenShift