The silver lining: Why businesses should invest in cloud transformation

Cloud computing is approaching the end of its second decade. Since 2000, the industry and technology landscape has matured greatly, with organizations evolving from experimenting with cloud to using it as a platform for innovation and for running entire businesses.
However, despite this growing acceptance, enterprise cloud adoption remains limited: the rate at which enterprises actively run workloads on cloud is still low. A 2017 survey by 451 Research indicated that only 45 percent of workloads are deployed to some type of cloud. Even this may be optimistic; a more recent McKinsey survey estimated that the median cloud adoption rate for enterprises may be around 19 percent.
Gathering cloud adoption business inputs
One of the most important considerations enterprises face is the sustained business benefit, or measurable value, that cloud provides relative to the investment required. This question comes not only from those just starting their cloud journey, but also from those transforming their entire enterprise with cloud, moving from “cloud 1.0” to “cloud 2.0”.
A common tactic to simplify decision making around cloud adoption and quickly increase usage is focusing on activities such as workload migration or application modernization to help transition applications from an established platform to one that is cloud-based. These are technical initiatives that focus on the nature of the application itself, the type of cloud environment it runs in, and the overall plan to modernize applications or migrate whole workloads to the cloud. While important and necessary, this perspective is only part of the equation. Financial and cultural inputs from across the organization must also guide decisions if long-term, business-value-based cloud transformation objectives are to be achieved.
For instance, there is a perception that all applications or workloads will have long-term transformational impact and cost benefit when moved to cloud. However, this may not always be the case. Fortunately, there is a silver lining in the cloud discussion: these insights can be derived and definitively validated using The Cloud Adoption and Transformation Framework.
Understanding the business value of cloud adoption and transformation
Initiatives such as workload migration or application modernization help facilitate the move to cloud, but these initiatives alone do not create the holistic perspective required to achieve value from cloud transformation. What’s missing is the more complete context summarized in the picture below.

As part of moving to cloud, enterprises seek to transition from their current state to a future state that includes the actualization of new cloud-based capabilities and practices. Considerations for the current state include talent and skills, the nature of services currently consumed or delivered, hardware, software, and communications services.
To bridge the gap between the current state and the desired future state, enterprises typically employ application transformation techniques. A more holistic view extends this application-focused view with additional inputs as depicted above.
Future-state capabilities can be organized by dimension (architecture and technology, culture and organization, security and compliance, methodology, and so on). Cloud adoption accelerators may include reskilling, automation, and reimagining parts of the organization design, such as introducing a center of competency to nurture, deploy and scale new ways of working.
These cloud adoption accelerators become the levers for driving the rate and pace of cloud transformation; they are the key elements in which enterprises should invest to achieve the desired business outcomes in the planned timeframe. An enterprise may choose to alter the pace at which change takes place. For example, a lower adoption rate of 40 percent reflects a more cautious approach that extends the time required to reach key milestones, in exchange for lower risk. A rate of 80 percent may shorten the time needed to reach key milestones, but may require greater investment to support the required changes.
Defining KPIs and measuring success
Key performance indicators (KPIs) measure effectiveness and can help an enterprise continuously calibrate its cloud decisions to ensure alignment with interim milestones and, importantly, with strategic intent. This may include a minimum set of operational metrics that support business and technical objectives and align with the expectations for cloud and for sustained transformation goals. These metrics can guide the implementation of a business-value-driven case, since they serve as milestones in the transformation journey.
Example categories and KPIs include (a brief calculation sketch follows the list):

Platform and service performance, including service availability percentage, responsiveness rate and service capacity rate.
Customer fulfillment and provisioning, including lead time to fulfill and provision, plus demand backlog size.
Service quality, including deploy success percentage, failure rate percentage and incident rate.
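
As a rough illustration (a sketch, not part of any framework; the inputs are assumed to come from your own monitoring and deployment records), two of these KPIs reduce to simple ratios. In Python:

def availability_pct(uptime_hours, downtime_hours):
    # Service availability as a percentage of total elapsed time.
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

def deploy_success_pct(successful, attempted):
    # Share of deployments that completed successfully.
    return 100.0 * successful / attempted

print(round(availability_pct(719.2, 0.8), 2))  # 99.89, for a 720-hour month
print(round(deploy_success_pct(47, 50), 1))    # 94.0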

These metrics help keep the focus on what is important to the business that information technology supports. Useful references are available on the total cost and value of cloud ownership and on what-if analysis across the various considerations.
The move to cloud may be challenging, and given the complexity of these systems there are no silver bullets. However, there is a systematic method to help you navigate these decisions. This is the silver lining: the integrated set of decisions that, made together, assures long-term success even in the face of complexity.
To start or expand this essential conversation, you can schedule a complimentary cloud adoption briefing to discuss how your organization can use cloud adoption and transformation to get on track to think, transform and thrive, ultimately realizing significant business outcomes with cloud.
Source: Thoughts on Cloud

OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 2

The growth and innovation in the Kubernetes project, since it first launched just over four years ago, has been tremendous to see. In part 1 of my blog, I talked about how Red Hat has been a key contributor to Kubernetes since the launch of the project, detailed where we invested our resources and what […]
Source: OpenShift

Mirantis Announces Latest Mirantis Cloud Platform at KubeCon Bringing Kubernetes on Premises

The release includes a large number of enhancements and new features for running K8S with enterprise on-premise infrastructure

SUNNYVALE, CA – December 10, 2018 – Mirantis announced today from KubeCon North America the release of Mirantis Cloud Platform (MCP). The new release offers a number of enhancements and new features including the ability to deploy Kubernetes on premises, as well as much improved and more rigorous quality assurance in all areas of the product.

This latest version of MCP follows on the heels of the recently announced MCP Edge, which offers operators a complete software solution designed for edge cloud use cases. Highlights of the latest MCP release include OpenStack Queens, Kubernetes 1.11 and OpenContrail 4.0. It also includes updates to the DriveTrain component upgrade pipeline, a granular OpenStack Ocata-to-Pike upgrade pipeline, and a Kubernetes upgrade pipeline, as well as security improvements in all areas and an improved documentation experience.

“With Kubernetes becoming the de-facto infrastructure API and standard for building new applications, the need for virtual machines is devolving into a security layer for containers,” said Adrian Ionel, Mirantis co-founder and CEO. “Longer term, we believe customers will run Kubernetes on bare metal and we are looking to enable this with subsequent MCP releases.”

Over the course of the last three years, the world’s top brands have been partnering with Mirantis to build and support their Kubernetes infrastructure, including Volkswagen, Reliance Jio and AT&T.

If you are interested in receiving a live demo of the latest MCP or MCP Edge, stop by the Mirantis booth P5 at KubeCon.

About Mirantis

Mirantis is the flexible infrastructure company harnessing open source to free application owners from operations concerns. The company employs a unique build-operate-transfer approach to deliver two distinct products:

Mirantis Cloud Platform, which is based on Kubernetes and OpenStack and helps service providers and enterprises run highly tunable private clouds powered by infrastructure-as-code and based on open standards.
Mirantis Application Platform, which is based on Spinnaker and helps enterprises adopt cloud native continuous delivery to realize cloud ROI at scale.

To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, AT&T, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

Source: Mirantis

OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 1

As we approach the end of another year for Red Hat OpenShift and Kubernetes, and another Kubecon, which I believe will be even bigger than the last, it’s a great time to reflect on both where we’ve been and where we’re going. In this blog I will look back over the past 4+ years since […]
Source: OpenShift

Technology company improves agility and growth with IBM Cloud

As artificial intelligence (AI), augmented reality and the Internet of Things (IoT) increasingly influence our everyday lives, the demand for sophisticated chips to support these innovations is on the rise. Nanometer-scale chip fabrication is extremely complex and requires the highest purity to eliminate even minuscule levels of contamination.
Entegris is a leader in specialty chemicals and advanced materials solutions for the microelectronics industry and other sectors that are driving these megatrends. The growing demand for our company’s services means that Entegris is expanding rapidly. We were looking to scale business systems securely, while cutting costs and boosting margins.
Entegris relies on SAP applications for core business processes. As transaction volumes increased, our reporting and analytics workload was growing – yet batch processes to prepare month-end reports required transaction systems to be halted, and backups required offline periods, causing unwelcome interruptions.
The shift to SAP HANA
To help the company continue to grow, Entegris evaluated switching to SAP HANA because of its high-performance, in-memory database technology. But the move would require a large investment in new infrastructure, and absorb internal resources for planning, deployment and management, a cost we wanted to defer or reduce.
We upgraded to SAP Business Suite powered by SAP HANA and planned to migrate the existing business processes without changing them significantly. Before making the migration, though, we decided to review our infrastructure approach. We investigated the advantages of switching to fully cloud-enabled operations and found that selecting a cloud approach could provide the scalability we needed at much lower operational costs, while still providing a secure, robust application environment.
Transitioning with help from IBM
Entegris selected IBM Cloud for SAP Applications combined with IBM Cloud Managed Services. The fully managed IBM services include operating system support, monitoring, and network management. They also provide the expert advice and support needed for enterprise-level work, while also offering the advantages of scalability, reliability and cost efficiency.
To help the transition to IBM Cloud, IBM created the virtual machines and configured the applications and databases to aid our deployment of the SAP HANA solutions. The rollout was completed in phases, starting with smaller solutions and moving on to the larger ones.
The transformation
Moving to IBM Cloud immediately improved Entegris’ business agility in three key areas:

Improved flexibility. We can now create new SAP environments for testing simply by adding cloud capacity instead of having to wait for hardware arrival and setup. We’re also better able to manage system refreshes. Before the migration, we couldn’t refresh more than one SAP instance at a time due to the limited capacity of our fixed server. By using IBM Cloud, we can quickly add capacity to complete multiple system refreshes, and then scale it back once that capacity is no longer needed.
Increased app availability. Since we can easily and quickly refresh SAP environments, Entegris can now keep applications available during month-end reporting, backups and system refreshes. As a result, we’ve seen a boost in productivity for Entegris’ around-the-clock work schedule.
Faster response times. We can also run reports using near-real-time information, because IBM Cloud enables faster SAP data load times. For example, SAP application dialog responses are now 50 percent faster. As our workload grows, we can scale up compute capacity to meet rising demand without any delay.

We chose IBM Cloud Managed Services because IBM offered a higher-performing product at a lower cost. IBM also offered executive commitment to the migration process, so we could take on less risk because IBM was aligned with our business objectives.
The IBM Managed Services approach gives us the scalability, flexibility and capacity we need to make the most of the SAP Business Suite. We can now better tune our manufacturing to help us optimize the business. We’re reducing capital requirements, cutting waste and improving fulfillment, and IBM and SAP are central to our success.
Read the case study for more details.
Source: Thoughts on Cloud

[Podcast] PodCTL – Kube Security, Kube 1.13 and KubeCon

Heading into the week of KubeCon, we wanted to make sure that listeners had some basics to prepare them for a week of learning and announcements. We discussed the severe Kubernetes bug (Kubernetes Privilege Escalation Flaw) and available patches, all of the new features in Kubernetes 1.13, as well as some previews of things to expect from […]
Source: OpenShift

UK software firm chooses IBM Cloud for cloud-native banking platform

Thought Machine, a software developer based in the United Kingdom, has teamed with IBM to accelerate its new cloud-native banking platform.
The platform, called Vault, “allows banks to fully realize the benefits of IBM Cloud and has been developed to be highly flexible, giving banks the ability to quickly add new products, accommodate shifts in a bank’s strategy or react to external changes in the market”, reports CloudPro.
IBM and Thought Machine are opening a global practice headquartered in London that will be staffed by existing and new consultants with banking transformation and implementation expertise.
“Vault offers the first core banking platform built in the cloud from the ground up, massively scalable and with the flexibility to create and launch new products and services in days versus months, at unprecedented levels of cost and speed,” Jesus Mantas, managing partner of IBM Global Business Services, said.
For more details, read the full story at CloudPro.
Source: Thoughts on Cloud

Calling an Embedded Ansible Playbook from the VM Provision State Machine

CloudForms 4.6 provided the ability to run embedded Ansible playbooks as methods, and it can be useful to include such a playbook in an existing workflow such as the VM Provision state machine.

In this example an Ansible playbook method is used at the AcquireIPAddress state to insert an IP address, netmask and gateway into the VM provisioning workflow. A cloud-init script is then used at first boot to set the values in the new VM using nmcli.
 
Creating the Instance and Method
 
A new acquire_ip_address instance and method are defined in the usual manner. The method is of Type: playbook and is defined to run on Hosts: localhost.
 
 

 
The input parameters for the playbook method are dynamic. Two parameters, miq_provision_request_id (the request ID) and miq_provision_id (the task ID), are defined as follows:

 
The new instance is added to the AcquireIPAddress state of the VM Provision state machine:

 
Inserting the IP Details into the VM Provision Workflow
 
The playbook can write the acquired IP details back into the provision task’s options hash in either of two ways: using the RESTful API, or using an Ansible role.
 
Calling the CloudForms RESTful API
 
The first example playbook uses the CloudForms RESTful API to write the retrieved IP details back into the provision task’s options hash. To simplify the example, the IP address, netmask and gateway are defined as static vars; in reality these would be retrieved from a corporate IPAM solution such as Infoblox.
 

- name: Acquire and Set an IP Address
  hosts: all
  gather_facts: no
  vars:
  - ip_addr: 192.168.1.66
  - netmask: 24
  - gateway: 192.168.1.254

  tasks:
  - debug: var=miq_provision_id
  - debug: var=miq_provision_request_id

  - name: Update Task with New IP and Hostname Information
    uri:
      url: "{{ manageiq.api_url }}/api/provision_requests/{{ miq_provision_request_id }}/request_tasks/{{ miq_provision_id }}"
      method: POST
      body_format: json
      body:
        action: edit
        resource:
          options:
            addr_mode: ["static", "Static"]
            ip_addr: "{{ ip_addr }}"
            subnet_mask: "{{ netmask }}"
            gateway: "{{ gateway }}"
      validate_certs: no
      headers:
        X-Auth-Token: "{{ manageiq.api_token }}"
      status_code: 200

 
Using the manageiq-vmdb Ansible Role
 
The second example playbook uses the manageiq-vmdb Ansible role (github.com/syncrou/manageiq-vmdb) to write the retrieved IP details back into the provision task’s options hash. Once again the IP address, netmask and gateway are defined as static vars for simplicity of illustration.
 

- name: Acquire and Set an IP Address
  hosts: all
  gather_facts: no
  vars:
  - ip_addr: 192.168.1.66
  - netmask: 24
  - gateway: 192.168.1.254
  - auto_commit: true
  - manageiq_validate_certs: false

  roles:
  - syncrou.manageiq-vmdb

  tasks:
  - debug: var=miq_provision_id
  - debug: var=miq_provision_request_id

  - name: Get the task vmdb object
    manageiq_vmdb:
      href: "provision_requests/{{ miq_provision_request_id }}/request_tasks/{{ miq_provision_id }}"
    register: task_object

  - name: Update Task with new IP and Hostname Information
    manageiq_vmdb:
      vmdb: "{{ task_object }}"
      action: edit
      data:
        options:
          addr_mode: ["static", "Static"]
          ip_addr: "{{ ip_addr }}"
          subnet_mask: "{{ netmask }}"
          gateway: "{{ gateway }}"

 
In these example playbooks the netmask variable is defined in CIDR format rather than as octets, to be compatible with nmcli.
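If an IPAM solution returns the netmask as dotted octets instead, it can be converted to a prefix length before being written into the options hash. A minimal sketch using Python's standard ipaddress module (a hypothetical helper, not part of the example playbooks):

import ipaddress

def netmask_to_prefix(netmask):
    # Convert a dotted-quad netmask (for example 255.255.255.0) to a CIDR prefix length.
    return ipaddress.ip_network("0.0.0.0/" + netmask).prefixlen

print(netmask_to_prefix("255.255.255.0"))  # 24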
 
Configuring the IP Address at First Boot
 
Configuring a NIC with IP address details is a guest operating system operation, and so must be performed when the VM or instance first boots. For this example a template cloud-init script is defined in Compute -> Infrastructure -> PXE -> Customization Templates in the WebUI, as follows:
 
<%
   root_password = MiqPassword.decrypt(evm[:root_password])
   hostname = evm[:hostname]
   ip_addr = evm[:ip_addr]
   subnet_mask = evm[:subnet_mask]
   gateway = evm[:gateway]
   dns_servers = evm[:dns_servers]
   dns_suffixes = evm[:dns_suffixes]
%>
#cloud-config
ssh_pwauth: true
disable_root: false
users:
  - default
  - name: ansible-remote
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E…

chpasswd:
  list: |
    root:<%= root_password %>
  expire: false
runcmd:
  ## Setup motd
  - echo Welcome to VM <%= hostname %>, provisioned by Red Hat CloudForms on $(date) > /etc/motd
  - rm -f /root/*
  - nmcli --fields UUID con show | awk '!/UUID/ {print}' | while read line; do nmcli con delete uuid $line; done
  - nmcli con add con-name eth0 ifname eth0 type ethernet
    ip4 "<%= ip_addr %>/<%= subnet_mask %>"
    gw4 "<%= gateway %>"
  - nmcli con mod eth0
    ipv4.dns "<%= dns_servers %>"
    ipv4.dns-search "<%= dns_suffixes %>"
    connection.autoconnect yes
  - nmcli con up eth0
  - hostnamectl set-hostname <%= hostname %>
  - systemctl mask cloud-init-local cloud-init cloud-config cloud-final
 
If the cloud-init script is selected from the Customize tab of the provisioning dialog, CloudForms will make the variable substitutions at run-time and inject the resultant script into the VM or instance to be run at first boot.
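For reference, with the static values used in the example playbooks (ip_addr 192.168.1.66, subnet_mask 24, gateway 192.168.1.254), the nmcli commands in the rendered script would come out roughly as follows; the remaining substitutions depend on the provisioning dialog:

nmcli con add con-name eth0 ifname eth0 type ethernet ip4 "192.168.1.66/24" gw4 "192.168.1.254"
nmcli con up eth0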
Source: CloudForms

Playing with REST API

Overview
In this article, I will describe how the REST API works natively in Red Hat CloudForms.
REST stands for Representational State Transfer. REST is a web-standards-based architecture that uses the HTTP protocol for data communication. It revolves around resources: every component is a resource, and each resource is accessed through a common interface using standard HTTP methods.
Red Hat CloudForms provides APIs to integrate external systems and to initiate provisioning via CloudForms. In CloudForms, the REST API can be accessed by adding the "/api" prefix to the appliance URL:

https://<IP or hostname of appliance>/api/

How to play
In order to work with the REST API, there are various REST API client tools:

Internet browser: put a REST API call into the browser address bar.

cURL: a command-line HTTP client, for example:

curl -k -u username:password -X GET -H "Accept: application/json" https://<IP or hostname of appliance>/api/

Insomnia: a powerful REST API client with cookie management, code generation and authentication support, available for Linux, Mac and Windows.
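
Any general-purpose HTTP library can also act as a client. For example, a minimal Python sketch using the requests library (the address and credentials are placeholders; verify=False mirrors curl's -k flag for self-signed appliance certificates):

import requests

url = "https://<IP or hostname of appliance>/api"

response = requests.get(
    url,
    auth=("username", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.json())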

 

HTTP Methods
The Red Hat CloudForms API uses JSON (JavaScript Object Notation) as its data exchange format. JSON is a commonly used format for exchanging and storing data. The primary HTTP verbs are POST, GET, PUT, PATCH, OPTIONS, HEAD and DELETE; GET, POST, PUT/PATCH and DELETE correspond to the read, create, update and delete (CRUD) operations respectively.
 

GET (return a specific resource):
curl -k -u user:password -X GET -H "Accept: application/json" https://<IP>/api/providers/

POST (perform an action on the resource):
curl -k --user user:password -i -X POST -H "Accept: application/json" -d '{ "type" : "ManageIQ::Providers::Redhat::InfraManager", "name" : "RHEVM Provider", "hostname" : "hostname of provider", "ipaddress" : "IP", "credentials" : { "userid" : "username", "password" : "*****" } }' https://<IP>/api/providers

PUT (update or replace a resource):
curl -k --user username:password -i -X PUT -H "Accept: application/json" -d '{ "name" : "updated service name" }' https://<IP>/api/services/<service_id>

DELETE (delete a resource):
curl -k --user user:password -i -X DELETE -H "Accept: application/json" https://<IP>/api/providers/<provider_id>

OPTIONS (get the metadata):
curl -k --user username:password -X OPTIONS -H "Accept: application/json" https://<IP>/api/providers/

HEAD (same as GET, but transfers the status line and header section only):
HEAD is identical to GET except that the server must not return a message body in the response. This method is often used to test hypertext links for validity, accessibility, and recent modification.

PATCH (update or modify a resource):
curl -k --user username:password -i -X PATCH -H "Accept: application/json" -d '[{ "action": "edit", "path": "name", "value": "A new Service name" }, { "action": "add", "path": "description", "value": "A Description for the new Service" }, { "action": "remove", "path": "display" }]' https://<IP>/api/services/<service_id>

 
Updating resources
As shown in the table above, there is more than one way to update attributes of a resource: you can update a resource with either the PUT or the PATCH method. The question, then, is when to use PUT and when to use PATCH.
 
For example: “When a client needs to replace an existing Resource entirely, they can use PUT. When they’re doing a partial update, they can use HTTP PATCH.”
For instance, when updating a single field of the resource, sending the complete resource representation is cumbersome and uses a lot of unnecessary bandwidth. In such cases, the semantics of PATCH make a lot more sense.
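
To make the difference concrete, here is a sketch in Python of both calls against the service resource used in the table above (the URL and credentials are placeholders):

import requests

service_url = "https://<IP>/api/services/<service_id>"
auth = ("username", "password")

# PUT: send a representation of the attributes to replace.
requests.put(service_url, auth=auth, verify=False,
             json={"name": "updated service name"})

# PATCH: send only targeted edits; CloudForms expects a list of edit actions.
requests.patch(service_url, auth=auth, verify=False,
               json=[{"action": "edit", "path": "name", "value": "A new Service name"}])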
 
How to authenticate to the REST API
REST API authentication can be done in two ways:

Basic authentication: the simplest way to handle authentication is HTTP basic authentication, in which the username and password credentials are passed with each HTTP request.

Token-based authentication: recommended when making multiple API calls to the appliance. The client first requests a token using the username and password; that token is then used in place of the credentials for each subsequent API call.

 
Acquiring a token:
Request:
curl -k -u user:password -X GET -H "Accept: application/json" https://<IP>/api/auth
Response:
{"auth_token":"4cb1fb32508350796caf32c12808fee2","token_ttl":600,"expires_on":"2017-12-01T11:25:06Z"}

Querying with a token:
curl -k -i -X GET -H "Accept: application/json" -H "X-Auth-Token: <token>" https://<IP>/api/hosts

Deleting a token:
curl -k -i -X DELETE -H "Accept: application/json" -H "X-Auth-Token: 21fe54dd14dc89c219d62f651497a54" https://<IP>/api/auth

Moreover, a token is valid for 600 seconds (10 minutes) by default. The duration can be changed from the CloudForms operational portal by navigating to Configuration -> Server -> Advanced -> api: -> token_ttl.
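
Putting the token lifecycle together, a short Python sketch of acquiring, using and deleting a token (the appliance address and credentials are placeholders; certificate verification is disabled, as with curl -k):

import requests

base = "https://<IP>/api"
session = requests.Session()
session.verify = False
session.headers["Accept"] = "application/json"

# 1. Trade the username and password for a token.
token = session.get(base + "/auth", auth=("user", "password")).json()["auth_token"]

# 2. Use the token instead of the credentials on subsequent calls.
session.headers["X-Auth-Token"] = token
hosts = session.get(base + "/hosts").json()

# 3. Delete the token when finished.
session.delete(base + "/auth")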

Query Specification
 
The query specification identifies the controls available when querying collections. When querying, control attributes can be specified in the GET URL as name=value pairs. Three main techniques come under the query specification; let’s take a look at them.

Paging: two control attributes are available, offset and limit. offset is the first item to return, and limit is the number of items to return.
Sorting: results can be sorted by attribute, order and options, for example by specifying “sort_by=atr1,atr2” and “sort_order=asc or desc”.
Filtering: helps the user filter the data according to the use case, as in the example below this list. The syntax for a filter is:
filter[]=attribute op value
where op is a comparison operator (for example, = or !=).
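
For example, a single request can combine all three controls (the attribute name and value here are illustrative):

GET https://<IP>/api/vms?offset=0&limit=20&sort_by=name&sort_order=asc&filter[]=power_state='on'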
 
Return Codes
 
Success: 200 OK, 201 Created, 202 Accepted, 204 No Content

Client errors: 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 415 Unsupported Media Type

Server errors: 500 Internal Server Error
 
Troubleshooting
A good place to start troubleshooting is the standard log files under /var/www/miq/vmdb/log on the CloudForms appliance. All API-related activity is recorded in /var/www/miq/vmdb/log/api.log. To dig deeper, raising the log level is recommended; you can change it by navigating to Configuration → Server → Advanced → :level_api: debug.

Conclusion
I hope that after reading this article you have a basic understanding of how CloudForms can be managed via its REST API. You can find the full REST API documentation here.
Source: CloudForms