OpenShift Commons Briefing #58: Open Source Application Segmentation with Aporeto’s Trireme

In this briefing, Dimitri Stiliadis, CEO and Co-Founder of Aporeto, gives an introduction to the Trireme open source project. Trireme takes a different approach to application segmentation by treating the problem as what it is: an authentication and authorization problem. Every application component, such as a process, a container, or a Kubernetes pod, has an identity. A segmentation function is a simple policy that defines the identities of the endpoints that are allowed to communicate with each other.
Source: OpenShift

Whose cloud is it, anyway? Find out at InterConnect 2017

Are you ready for the cloud event of the year? Get ready to meet cloud experts, problem-solvers and industry leaders you admire. Not to mention some very special guests—like TV star, improv legend and the next Aaron Burr in “Hamilton,” Wayne Brady.
IBM is gearing up for InterConnect 2017 in Las Vegas, where it will host 24,000 attendees who are using cloud and AI to leapfrog today’s experiences.
Think of it like this: imagine all of your favorite musicians sharing the stage at the concert of the year. Are you really going to miss it? Would you rather read about it after it happens? Absolutely not. InterConnect is no different. You don’t want to just read about the client success stories, the insights and the cloud trends and technologies rapidly reshaping business. Join us, talk with the experts in your field and share your own cloud stories.
In case you need reasons to attend InterConnect—or need to make the case to your boss to send you—here are my top 5:

Engage in discussions around pressing social issues, like solving online harassment, with IT leaders and anti-cyberbullying activist Wayne Brady
Hear the keynote from IBM CEO Ginni Rometty
Choose from 2,000 breakout sessions on trends and topics affecting your business
Connect, collaborate and share expertise and knowledge with thousands of professionals like you
Gain industry expertise through 200+ labs and hundreds of training and certification opportunities

Need more reasons to join the best cloud event of the year? The IBM cloud team will share blogs from executives with stories, previews and insider tips to help you get the most out of the event for your organization. Stay tuned to this blog, and all stories will be accessible here.
Browse InterConnect 2017 keynotes, sessions and entertainment here. And stay tuned to this blog to learn more about the people and technologies that are reshaping the future of business. Don’t wait, register today.
Source: Thoughts on Cloud

ERPs and the coming cloud revolution

Change awaits some of the most hallowed ground in enterprise technology: enterprise resource planning (ERP) software and systems.
ERP systems, many of which come from Oracle and SAP, are the glue that binds corporations. ERPs manage and integrate vital systems of record in areas such as planning, purchasing, sales, finance and human resources. These systems are 10-ton rigs; the dreadnoughts of corporate IT.
So important are ERPs that many chief information and chief technology officers consider them sacrosanct. After all, such systems have taken blood, sweat, tears and considerable investment to implement over the years. The prevailing attitude is, “Don’t mess with my ERP.”
However, these days, when I talk with clients about their ERPs, I sense a cultural shift resulting from the power and possibilities of cloud technology.
Cloud is a route, not a destination. It uses data as a natural resource to drive competitive advantage. To make use of data, corporate innovators use cloud tools and processes of their own choosing. They work across whatever platforms they want and retain the option to adapt and pivot. On a practical level, innovators are looking to manage more complex data and workflow integrations by bridging cloud and on-premises architectures.
These innovators also want to heal the big pain points of ERP: the high cost of ownership, the complexity that defeats quick scaling, slow application development and a shortage of specialized skills.
One of the best approaches to the ERP challenge is outsourcing with a cloud managed services (CMS) strategy. With a fully managed cloud service, a CMS provider can build and manage the infrastructure. It can also manage operating systems, patching, backup, middleware and other functions.
Are you a candidate for a managed ERP solution on cloud? In my experience, many organizations that move their ERP systems to the cloud are in one or more of the following situations:

They want to drive costs out of their infrastructure, particularly when it’s time to replace hardware.
They’re consolidating systems onto the cloud and want to take advantage of its capabilities.
They need a new or revamped ERP system and don’t want to invest the capital to do it on premises.
They don’t have the people or skills to implement and maintain the most robust ERP solution.

There are other reasons to take advantage of ERP in the cloud. It’s ideal for setting up new locations or quickly deploying to new overseas markets. With cloud ERP, you could even eliminate traditional offices. It also makes collaboration easier among internal staff, partners and clients across geographies.
Let me be clear: moving ERP to the cloud is not simply flipping a switch. It’s complex, particularly data migration. Integrating cloud with your on-premises infrastructure can be equally challenging. Carefully consider cloud’s impact not just on your ERP environment, but on employees and business processes as well.
Working with an experienced implementation partner to develop a roadmap can help navigate the move from on-premises ERP to cloud. For example, we have clients that ask us to host their entire SAP or Oracle landscapes, including production and support systems. We also work with some large companies not yet ready to put their gigantic, global production systems in the cloud, so they keep their production environments in-house but run their ancillary support systems in the cloud.
Regardless of the path enterprises choose for their ERP journeys, the digital revolution promises new innovations and value as those organizations move to the cloud.
Learn more about IBM Cloud Managed Services.
Source: Thoughts on Cloud

Announcing Red Hat OpenShift Container Platform 3.4 GA

It’s an exciting day for Red Hat as we announce the general availability of the latest release of Red Hat OpenShift Container Platform, version 3.4. This new release provides significant enhancements to OpenShift that lower the barrier to container adoption in the enterprise, with simplified storage provisioning, enhanced multi-tenant capabilities and new reference architectures for hybrid cloud environments. We’ve written the following blog posts to provide you with more details and even a few demos.
Source: OpenShift

Hybrid Management using Red Hat CloudForms (Video)

This week, we explore the Red Hat CloudForms cloud management platform (CMP) and its capability to manage multiple clouds. This demonstration video focuses on hybrid management and highlights some of its key features. These include:

infrastructure and cloud visibility,
centralized management of virtual machines, instances and containers,
workload lifecycle management and day 2 operations,
historical reports and dashboards, including showback and chargeback,
resource monitoring and optimization,
compliance and governance with security policies and alerts.

Additional information about the latest release of Red Hat CloudForms 4.2 can be found in this blog post announcement.
Source: CloudForms

Wind River and Mirantis Collaborate on OpenStack NFV Proof of Concept Project

As part of both companies’ commitment to industry standards and interoperability, Wind River and Mirantis recently completed a joint Proof of Concept interoperability project at Wind River’s Network Functions Virtualization (NFV) lab in Santa Clara, California.
The goal of the project was modest: to demonstrate that Wind River’s Titanium Server Carrier Grade software virtualization platform could be deployed in federation with the latest, most advanced version of the Mirantis Pure Play Web-Scale OpenStack distribution.
As expected, this goal was readily achieved, proving: A) the significance and importance of adhering to open, standard interfaces; and B) the value of healthy ‘coopetition’ for our respective customers and the industry as a whole.
Here are some specifics surrounding the project:

Hardware Baseline: Dual socket, Intel Xeon E5 Servers (provided by several Titanium Cloud H/W partners)
Wind River Software Baseline: Titanium Server Release 3
Mirantis Software Baseline: Mirantis OpenStack 9.1 + Ubuntu 14.04
Project Configuration:

Mirantis OpenStack installed as the primary OpenStack Region across one set of servers
Titanium Server installed as a secondary OpenStack Region for high performance & high reliability workloads across a second, separate set of servers
The Mirantis Region hosted OpenStack Keystone services (user identities and credentials) which were shared with the Titanium Server Region
Once installed and operational, the Horizon dashboards of both systems were able to see and administer resources in either Region. For example, using the Mirantis dashboard, users could view and manage the Titanium Server virtual resources and workloads together with the native Mirantis Region virtual resources and workloads. Similarly, Titanium Server dashboard users could see and manage resources in either the Titanium Server Region or the Mirantis Region.

The results of this project are extremely important and powerful for the end user. Having the ability to manage an entire cloud, containing different types of workloads, from a single user interface is fantastic. By deploying and taking advantage of the shared services built into OpenStack and enabled through OpenStack Regions, users are able to choose the software platform which best meets the needs and SLAs of their applications and services, without sacrificing ease of use and manageability.
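To make the shared-Keystone, multi-Region setup above concrete, here is a rough sketch of how a second Region could be registered and addressed with the unified OpenStack client. The region names and URL below are illustrative, not taken from the actual PoC:

openstack region list # Regions known to the shared Keystone
openstack endpoint create --region RegionTwo compute public http://titanium.example.com:8774/v2.1 # hypothetical second-Region Nova endpoint
openstack --os-region-name RegionTwo server list # address the second Region explicitly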
Technical accomplishments aside, this project has shown that together, Wind River and Mirantis have the willingness and capability to leverage their respective strengths to the benefit of their customers.  This is part of the original promise of NFV, and it is impressive to actually see put into practice!
(Originally published on the Wind River blog.)
Source: Mirantis

OpenShift Container Platform Reference Architecture Implementation Guides

We’ve got a design for your next cloud-based container deployment.

An inordinate amount of time can be spent researching and debating architectural decisions, tooling, parameters, or a required sequence of tasks when trying to deploy a project to the cloud. Start your project on the right foot and take advantage of the Red Hat OpenShift Container Platform Reference Architecture implementation guides!
Source: OpenShift

9 tips to properly configure your OpenStack Instance

In OpenStack jargon, an Instance is a Virtual Machine, the guest workload. It boots from an operating system image, and it is configured with a certain amount of CPU, RAM and disk space, amongst other parameters such as networking or security settings.
In this blog post, kindly contributed by Marko Myllynen, we’ll explore nine configuration and optimization options that will help you achieve the performance, reliability and security that you need for your workloads.
Some of the optimizations can be done inside a guest regardless of what the OpenStack Cloud Administrator has enabled in your cloud. However, more advanced options require prior enablement and, possibly, special host capabilities. This means many of the options described here will depend on how the Administrator configured the cloud, or may not be available to some tenants as they are reserved for certain groups. More information about this subject can be found on the Red Hat Documentation Portal and its comprehensive guide on the OpenStack Image Service. Similarly, the upstream OpenStack documentation has some extra guidelines available.
The following configurations should be evaluated for any VM running on any OpenStack environment. These changes have no side effects and are typically safe to enable even if unused.

1) Image Format: QCOW or RAW?
OpenStack storage configuration is an implementation choice by the Cloud Administrator, often not fully visible to the tenant. Storage configuration may also change over time without explicit notification by the Administrator, as he/she adds capacity with different specs.
When creating a new instance on OpenStack, it is based on a Glance image. The two most prevalent and recommended image formats are QCOW2 and RAW. QCOW2 images (from QEMU Copy On Write) are typically smaller in size. For instance, for a server with a 100 GB disk, an image that is 100 GB in RAW format might be only 10 GB when converted to QCOW2. Regardless of the format, it is a good idea to process images before uploading them to Glance with virt-sysprep(1) and virt-sparsify(1).
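As a concrete sketch, a typical pre-upload cleanup could look like this (the image file names are placeholders):

virt-sysprep -a rhel7-guest.qcow2 # strip host-specific data such as SSH host keys, logs and machine-id
virt-sparsify --compress rhel7-guest.qcow2 rhel7-guest-small.qcow2 # reclaim unused space and compress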
The performance of QCOW2 depends on both the hypervisor kernel and the format version, the latest being QCOW2v3 (sometimes referred to as QCOW3) which has better performance than the earlier QCOW2, almost as good as RAW format. In general we assume RAW has better overall performance despite the operational drawbacks (like the lack of snapshots) or the increase in time it takes to upload or boot (due to its bigger size). Our latest versions of Red Hat OpenStack Platform automatically use the newer QCOW2v3 format (thanks to the recent RHEL versions) and it is possible to check and also convert between RAW and older/newer QCOW2 images with qemu-img(1).
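For example, checking and converting images with qemu-img(1) might look like this (file names are placeholders; “compat: 1.1” in the output indicates QCOW2v3):

qemu-img info rhel7-guest.qcow2 # shows the format and compat level
qemu-img amend -o compat=1.1 rhel7-guest.qcow2 # upgrade an older QCOW2 image to QCOW2v3 in place
qemu-img convert -f qcow2 -O raw rhel7-guest.qcow2 rhel7-guest.raw # convert to RAW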
OpenStack instances can boot either from a local image or from a remote volume. That means:

Image-backed instances benefit significantly from the performance differences between older QCOW2, QCOW2v3 and RAW.
Volume-backed instances can be created either from QCOW2 or RAW Glance images. However, as Cinder backends are vendor-specific (Ceph, 3PAR, EMC, etc.), they may not use QCOW2 nor RAW; they may have their own mechanisms, like dedup, thin provisioning or copy-on-write.

As a general rule of thumb, rarely used images should be stored in Glance as QCOW2, but for an image which is used constantly to create new instances (locally stored), or for any volume-backed instances, RAW should provide better performance despite the sometimes longer initial boot time (except in Ceph-backed systems, thanks to their copy-on-write approach). In the end, any actual recommendation will depend on the OpenStack storage configuration chosen by the Cloud Administrator.
2) Performance Tweaks via Image Extra Properties
Since the Mitaka version, OpenStack allows Nova to automatically optimize certain libvirt and KVM properties on the Compute host to better execute a particular OS in the guest. To provide the guest OS information to Nova, just define the following Glance image properties:

os_type=linux # Generic name, like linux or windows
os_distro=rhel7.1 # Use osinfo-query os to list supported variants

Additionally, at least for the time being (see BZ#), in order to make sure the newer and more scalable virtio-scsi para-virtualized SCSI controller is used instead of the older virtio-blk, the following properties need to be set explicitly:

hw_scsi_model=virtio-scsi
hw_disk_bus=scsi

All the supported image properties are listed at the Red Hat Documentation portal as well as other CLI options. 
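For instance, all four properties above could be set on an existing Glance image with the unified OpenStack client (the image name rhel7-guest is a placeholder):

openstack image set \
  --property os_type=linux \
  --property os_distro=rhel7.1 \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  rhel7-guest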
3) Prepare for Cloud-init
“Cloud-init” is a package used for early initialization of cloud instances, to configure basics like partition / filesystem size and SSH keys.
Ensure that you have installed the cloud-init and cloud-utils-growpart packages in your Glance image, and that the related services will be executed on boot, to allow “cloud-init” configurations to be applied to the OpenStack VM.
In many cases the default configuration is acceptable but there are lots of customization options available, for details please refer to the cloud-init documentation.
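As a minimal, hedged example of how such customizations reach the instance, a cloud-config file can be passed as user-data at boot time. The image, flavor and key contents below are illustrative:

cat > user-data.yaml <<'EOF'
#cloud-config
package_upgrade: true
packages:
  - tuned
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
EOF
openstack server create --image rhel7-guest --flavor m1.small --user-data user-data.yaml my-instance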
4) Enable the QEMU Guest Agent
On Linux hosts, it is recommended to install and enable the QEMU guest agent which allows graceful guest shutdown and (in the future) automatic freezing of guest filesystems when snapshots are requested, which is a necessary operation for consistent backups (see BZ#):

yum install qemu-guest-agent
systemctl enable qemu-guest-agent

In order to provide the needed virtual devices and use the filesystem freezing functionality when needed, the following properties need to be defined for Glance images (see also BZ#):

hw_qemu_guest_agent=yes # Create the needed device to allow the guest agent to run
os_require_quiesce=yes # Accept requests to freeze/thaw filesystems

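Once the instance is up, the agent can be verified from inside the guest and, assuming access to the compute node, end-to-end from the host. The libvirt domain name below is a placeholder:

systemctl status qemu-guest-agent # inside the guest: the agent should be active
virsh qemu-agent-command instance-00000001 '{"execute":"guest-ping"}' # on the compute host: should return {"return":{}}
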
5) Just in case: how to recover from guest failure

Comprehensive instance fault recovery, high availability, and service monitoring require a layered approach which, as a whole, is out of scope for this document. In the paragraphs below we show the options that are applicable purely inside a guest (which can be thought of as the innermost layer). The most frequently used fault recovery mechanisms for an instance are:

recovery from kernel crashes
recovery from guest hangs (which do not necessarily involve kernel crash/panic)

In the rare case that the guest kernel crashes, kexec/kdump will capture a kernel vmcore for further analysis and reboot the guest. In case the vmcore is not wanted, the kernel can be instructed to reboot after a kernel crash by setting the panic kernel parameter, for example “panic=1”.
In order to reboot an instance after other unexpected behavior, for example high load over a certain threshold or a complete system lockup without a kernel panic, the watchdog service can be utilized. Actions other than “reboot” can be found here. The following property needs to be defined for Glance images or Nova flavors:

hw_watchdog_action=reset

Then install the watchdog package inside the guest, configure the watchdog device, and finally enable the service:

yum install watchdog
vi /etc/watchdog.conf
systemctl enable watchdog

By default watchdog detects kernel crashes and complete system lockups. See the watchdog.conf(5) man page for more information, e.g., how to add guest health-monitoring scripts as part of watchdog functionality checks.
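A minimal /etc/watchdog.conf sketch is shown below; the load threshold is purely illustrative, so adjust it to your workload:

cat > /etc/watchdog.conf <<'EOF'
# device provided thanks to the hw_watchdog_action property
watchdog-device = /dev/watchdog
# reboot if the 1-minute load average exceeds this value
max-load-1 = 24
EOF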
6) Tune the Kernel
The simplest way to tune a Linux node is to use the “tuned” facility. It’s a service which configures dozens of system parameters according to the selected profile, which in the OpenStack case is “virtual-guest”. For NFV workloads, Red Hat provides a set of NFV tuned profiles to simplify the tuning of network-intensive VMs.
In your Glance image, it is recommended to install the required package, enable the service on boot, and activate the preferred profile. You can do it by editing the image before uploading to Glance, or as part of your cloud-init recipe:

yum install tuned
systemctl enable tuned
tuned-adm profile virtual-guest

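If you prefer baking this into the image offline rather than doing it via cloud-init, virt-customize(1) can handle it; a hedged sketch, with the image name as a placeholder:

virt-customize -a rhel7-guest.qcow2 \
  --install tuned \
  --run-command 'systemctl enable tuned' \
  --firstboot-command 'tuned-adm profile virtual-guest' # profile is activated on first boot, once tuned can run
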
7) Improve networking via VirtIO Multiqueuing
Guest kernel virtio drivers are part of the standard RHEL/Linux kernel package and are enabled automatically as needed, without any further configuration. Windows guests should also use the official virtio drivers for their particular Windows version, greatly improving network and disk IO performance.
However, recent advances in network packet processing, both in the Linux kernel and in user-space components, have created a myriad of extra options to tune or bypass the virtio drivers. Below you’ll find an illustration of the virtio device model (from the RHEL Virtualization guide).
Network multiqueuing, or virtio-net multi-queue, is an approach that enables parallel packet processing to scale linearly with the number of available vCPUs of a guest, often providing notable improvement to transfer speeds especially with vhost-user.
Provided that the OpenStack Administrator has provisioned the virtualization hosts with the supporting components installed (at least OVS 2.5 / DPDK 2.2), this functionality can be enabled by an OpenStack tenant with the following property on those Glance images where network multiqueuing is wanted:

hw_vif_multiqueue_enabled=true

Inside a guest instantiated from such an image, the NIC channel setup can be checked and changed as needed with the commands below:

ethtool -l eth0 # to see the current number of queues
ethtool -L eth0 combined <nr-of-queues> # to set the number of queues; should match the number of vCPUs

There is an open RFE to implement multi-queue activation by default in the kernel, see BZ#.

8) Other Miscellaneous Tuning for Guests

It should go without saying that right-sized instances should contain only the minimum amount of installed packages and run only the services needed. Of particular note, it is probably a good idea to install and enable the irqbalance service: although not absolutely necessary in all scenarios, its overhead is minimal and it should be used, for example, in SR-IOV setups (this way the same image can be used regardless of such lower-level details).
Even though implicitly set on KVM, it is a good idea to explicitly add the kernel parameter no_timer_check to prevent issues with timing devices. Enabling persistent DHCP client and disabling zeroconf route in network configuration with PERSISTENT_DHCLIENT=yes and NOZEROCONF=yes, respectively, helps to avoid networking corner case issues.
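A short sketch of applying these inside a RHEL guest (the interface name eth0 is a placeholder):

grubby --update-kernel=ALL --args="no_timer_check" # persist the kernel parameter across kernel updates
echo "PERSISTENT_DHCLIENT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "NOZEROCONF=yes" >> /etc/sysconfig/network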
Guest MTU settings are usually adjusted correctly by default, but having a proper MTU in use on all levels of the stack is crucial to achieve maximum network performance. In environments with 10G (and faster) NICs this typically means the use of Jumbo Frames with MTU up to 9000, taking possible VXLAN encapsulation into account. For further MTU discussion, see the upstream guidelines for MTU or the Red Hat OpenStack Networking Guide.
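For example, to set a 9000-byte MTU on a guest interface both immediately and persistently (the interface name is again a placeholder, and the value must match what the underlying network actually supports):

ip link set dev eth0 mtu 9000 # apply right away
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth0 # persist across reboots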
9) Improving the way you access your instances
Although some purists may consider running SSH inside truly cloud-native instances incompatible with the model, especially in auto-scaling production workloads, most of us will still rely on good old SSH to perform configuration tasks (via Ansible, for instance) as well as maintenance and troubleshooting (e.g., to fetch logs after a software failure).
The SSH daemon should avoid DNS lookups to speed up establishing SSH connections. For this, consider using UseDNS no in /etc/ssh/sshd_config and adding OPTIONS=-u0 to /etc/sysconfig/sshd (see sshd_config(5) for details on these). Setting GSSAPIAuthentication no could be considered if Kerberos is not in use. In case instances frequently connect to each other, the ControlPersist / ControlMaster options might be considered as well.
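Put together, a minimal sketch of those sshd tweaks on a RHEL guest (back up the files first; the sed edit only applies if Kerberos is not in use):

echo "UseDNS no" >> /etc/ssh/sshd_config
sed -i 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/' /etc/ssh/sshd_config
echo "OPTIONS=-u0" >> /etc/sysconfig/sshd
systemctl restart sshd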
Typically, remote SSH access and console access via Horizon are enough for most use cases. During the development phase, direct console access from the Nova compute host may also be helpful. For this to work, enable the serial-getty@ttyS1.service, allow root access via ttyS1 if needed by adding ttyS1 to /etc/securetty, and then access the guest console from the Nova compute host with virsh console <instance-id> --devname serial1.

We hope with this blog post you’ve discovered new ways to improve the performance of your OpenStack instances. If you need more information, remember we have tons of documents in our OpenStack Documentation Portal and that we offer the best OpenStack courses in the industry, starting with the free-of-charge CL010 Introduction to OpenStack course.
Source: RedHat Stack

Cloud Native App Developers Delight! Container Storage Just Got a Whole Lot Easier

The new Red Hat OpenShift Container Platform offers a rich user experience with dynamic provisioning of storage volumes, automation, and much more. (Republished from the original blog post by Michael Adam and Sayan Saha at redhatstorage.redhat.com) Earlier today, Red Hat announced general availability of Red Hat OpenShift Container Platform 3.4, which includes key features such […]
Source: OpenShift