Multi-container pods and container communication in Kubernetes

Containers are often intended to solve a single, narrowly defined problem, such as a microservice, but in the real world, problems require multiple containers for a complete solution. In this article, we’re going to talk about combining multiple containers into a single Kubernetes Pod, and what it means for inter-container communication.
What is a Kubernetes Pod?
Let’s start by explaining what a Pod is in the first place. A Pod is the smallest unit that can be deployed and managed by Kubernetes. In other words, if you need to run a single container in Kubernetes, then you need to create a Pod for that container. At the same time, a Pod can contain more than one container, usually because these containers are relatively tightly coupled. How tightly coupled? Well, think of it this way: the containers in a pod represent processes that would have run on the same server in a pre-container world.
And that makes sense, because in many respects, a Pod acts like a single server. For example, each container can access the other containers in the pod as different ports on localhost.
Why does Kubernetes use a Pod as the smallest deployable unit, and not a single container?
While it would seem simpler to just deploy a single container directly, there are good reasons to add a layer of abstraction represented by the Pod. A container is an existing entity, which refers to a specific thing. That specific thing might be a Docker container, but it might also be a rkt container, or a VM managed by Virtlet. Each of these has different requirements.
What’s more, to manage a container, Kubernetes needs additional information, such as a restart policy, which defines what to do with a container when it terminates, or a liveness probe, which defines an action to detect if a process in a container is still alive from the application’s perspective, such as a web server responding to HTTP requests.
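For example, both of these settings sit in the Pod spec alongside the container definition. Here is a minimal sketch; the image, probe endpoint, and timings are illustrative, not from this article:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  restartPolicy: Always        # what to do when a container terminates
  containers:
  - name: web
    image: nginx
    livenessProbe:             # is the process still alive from the app's perspective?
      httpGet:                 # here: does the web server answer HTTP requests?
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10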
Instead of overloading the existing “thing” with additional properties, Kubernetes architects have decided to use a new entity, the Pod, that logically contains (wraps) one or more containers that should be managed as a single entity.
Why does Kubernetes allow more than one container in a Pod?
Containers in a Pod run on a “logical host”; they use the same network namespace (in other words, the same IP address and port space), and the same IPC namespace. They can also use shared volumes. These properties make it possible for these containers to efficiently communicate, ensuring data locality. Also, Pods enable you to manage several tightly coupled application containers as a single unit.
So if an application needs several containers running on the same host, why not just make a single container with everything you need? Well first, you’re likely to violate the “one process per container” principle. This is important because with multiple processes in the same container, it is harder to troubleshoot the container (logs from different processes are mixed together) and harder to manage the process lifecycle, for example taking care of “zombie” processes when their parent process dies. Second, using several containers for an application is simpler, more transparent, and enables decoupling of software dependencies. Also, more granular containers can be reused between teams.
Use Cases for Multi-Container Pods
The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application. There are some general patterns for using helper processes in Pods:

Sidecar containers “help” the main container. Some examples include log or data change watchers, monitoring adapters, and so on. A log watcher, for example, can be built once by a different team and reused across different applications. Another example of a sidecar container is a file or data loader that generates data for the main container.
Proxies, bridges, and adapters connect the main container with the external world. For example, Apache HTTP server or nginx can serve static files. It can also act as a reverse proxy to a web application in the main container to log and limit HTTP requests. Another example is a helper container that re-routes requests from the main container to the external world. This makes it possible for the main container to connect to localhost to access, for example, an external database, but without any service discovery.

While you can host a multi-tier application (such as WordPress) in a single Pod, the recommended way is to use separate Pods for each tier, for the simple reason that you can scale tiers up independently and distribute them across cluster nodes.
Communication between containers in a Pod
Having multiple containers in a single Pod makes it relatively straightforward for them to communicate with each other. They can do this using several different methods.
Shared volumes in a Kubernetes Pod
In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.
Kubernetes Volumes enables data to survive container restarts, but these volumes have the same lifetime as the Pod. That means that the volume (and the data it holds) exists exactly as long as that Pod exists. If that Pod is deleted for any reason, even if an identical replacement is created, the shared Volume is also destroyed and created anew.
A standard use case for a multi-container Pod with a shared Volume is when one container writes logs or other files to the shared directory, and the other container reads from the shared directory. For example, we can create a Pod like so:
apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done
In this example, we define a volume named html. Its type is emptyDir, which means that the volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. The 1st container runs an nginx server and has the shared volume mounted to the directory /usr/share/nginx/html. The 2nd container uses the Debian image and has the shared volume mounted to the directory /html. Every second, the 2nd container adds the current date and time into the index.html file, which is located in the shared volume. When the user makes an HTTP request to the Pod, the nginx server reads this file and transfers it back to the user in response to the request.

You can check that the pod is working either by exposing the nginx port and accessing it using your browser, or by checking the shared directory directly in the containers:
$ kubectl exec mc1 -c 1st -- /bin/cat /usr/share/nginx/html/index.html

Fri Aug 25 18:36:06 UTC 2017

$ kubectl exec mc1 -c 2nd -- /bin/cat /html/index.html

Fri Aug 25 18:36:06 UTC 2017
Fri Aug 25 18:36:07 UTC 2017
Inter-process communications (IPC)
Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communications such as SystemV semaphores or POSIX shared memory.
In the following example, we define a Pod with two containers. We use the same Docker image for both. The first container, producer, creates a standard Linux message queue, writes a number of random messages, and then writes a special exit message. The second container, consumer, opens that same message queue for reading and reads messages until it receives the exit message. We also set the restart policy to ‘Never’, so the Pod stops after both containers terminate.
apiVersion: v1
kind: Pod
metadata:
  name: mc2
spec:
  containers:
  - name: producer
    image: allingeek/ch6_ipc
    command: ["./ipc", "-producer"]
  - name: consumer
    image: allingeek/ch6_ipc
    command: ["./ipc", "-consumer"]
  restartPolicy: Never
To check this out, create the pod using kubectl create and watch the Pod status:
$ kubectl get pods --show-all -w
NAME READY STATUS RESTARTS AGE
mc2 0/2 Pending 0 0s
mc2 0/2 ContainerCreating 0 0s
mc2 0/2 Completed 0 29s
Now you can check the logs for each container and verify that the consumer container received all messages from the producer container, including the exit message:
$ kubectl logs mc2 -c producer

Produced: f4
Produced: 1d
Produced: 9e
Produced: 27
$ kubectl logs mc2 -c consumer

Consumed: f4
Consumed: 1d
Consumed: 9e
Consumed: 27
Consumed: done

There is one major problem with this Pod, however, and it has to do with how containers start up.
Container dependencies and startup order
Currently, all containers in a Pod are started in parallel, and there is no way to specify that one container must start after another. For example, in the IPC example, there is a chance that the second container might finish starting before the first one has created the message queue. In this case, the second container will fail, because it expects the message queue to already exist.
Some efforts to provide some measure of control over how containers start, such as Kubernetes Init Containers, which start first (and sequentially), are under development, but in a cloud native environment, it’s always better to plan for failures outside of your immediate control. For example, one way to fix this issue would be to change the application to wait for the message queue to be created; an init container sketch is shown below.
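For illustration, here is a minimal sketch of the init container pattern using the initContainers field of the Pod spec; the busybox image and echo commands are placeholders, not part of the IPC example above:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:        # run to completion, one at a time, before the app containers
  - name: setup
    image: busybox
    command: ["sh", "-c", "echo preparing shared state; sleep 2"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app starts only after setup finishes; sleep 3600"]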
Inter-container network communication
Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is the Pod’s name. Because containers share the same IP address and port space, they must use different ports for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1. Create a ConfigMap with the nginx configuration file. Incoming HTTP requests to port 80 will be forwarded to port 5000 on localhost:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;

    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    events {
      worker_connections 1024;
    }

    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      sendfile on;
      keepalive_timeout 65;

      upstream webapp {
        server 127.0.0.1:5000;
      }

      server {
        listen 80;

        location / {
          proxy_pass http://webapp;
          proxy_redirect off;
        }
      }
    }
Step 2. Create a multi-container Pod with the simple web app and nginx in separate containers. Note that for the Pod, we define only nginx port 80. Port 5000 will not be accessible outside of the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-proxy-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-proxy-config
    configMap:
      name: mc3-nginx-conf
Step 3. Expose the Pod using a NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Step 4. Identify the port on the node that is forwarded to the Pod:
$ kubectl describe service mc3

NodePort: <unset> 31418/TCP

Now you can use your browser (or curl) to navigate to your node’s port to access the web application through the reverse proxy, as in:

http://myhost:31418

This request will then be forwarded to port 5000 of the webapp container.

Exposing multiple containers in a Pod
While this example shows how to use a single container to access other containers in the pod, it’s quite common for several containers in a Pod to listen on different ports — all of which need to be exposed. To make this happen, you can either create a single service with multiple exposed ports, or you can create a separate service for each port you’re trying to expose; a sketch of the first approach follows.
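For example, a single Service exposing two ports might look like the following sketch. The second container and its port are hypothetical, and note that Kubernetes requires each port to be named when a Service exposes more than one:

apiVersion: v1
kind: Service
metadata:
  name: mc3-multi
spec:
  type: NodePort
  selector:
    app: mc3
  ports:
  - name: http          # the nginx reverse proxy from the example above
    port: 80
    targetPort: 80
  - name: metrics       # hypothetical second container listening on 9100
    port: 9100
    targetPort: 9100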
Where to go from here
By creating pods, Kubernetes provides a great deal of flexibility for orchestrating how containers behave, and how they communicate with each other. They can share file volumes, they can communicate over the network, and they can even communicate using IPC.
That’s just the beginning, of course. Interested in seeing what else you can do with Kubernetes? Check out our new Kubernetes and Docker Bootcamp II (KD200). If you register early enough, you can even get 50% off while the class is in beta. Hope to see you there!
Source: Mirantis

53 things to look for in OpenStack Pike

This week we’re expecting the 16th release of OpenStack, code-named Pike, so we thought we’d give you our traditional 53 things to look for in advance of our What’s New in OpenStack Pike webinar, which is scheduled for September 7.
OpenStack Compute Service (Nova)

Cells v2 multi-cell deployment: The default deployment is a single cell, but you can now create multi-cell deployments using the Cells v2 API — though multi-cell deployments still have limitations. Cells v1 is now deprecated.
Reworking of the Nova quota system to count resources at the point of creation: If the requested resources aren’t available, you’ll get an error; you don’t need to do anything to take advantage of this change.
More efficiently use resources with the PCIWeigher weigher: PCI devices are specialized hardware, so you want to make sure that only workloads that need them occupy those hosts. Use the [filter_scheduler] pci_weight_multiplier configuration option to prevent non-PCI workloads from being scheduled to those hosts (see the configuration sketch after this list).
Nodes can remove themselves from service if they’re not functioning properly, using the [compute]/consecutive_build_service_disable_threshold configuration option.
Keep your instances from using all of the physical CPUs on your host by using the reserved_host_cpus option to reserve some for the hypervisor.
The Placement API can now look at qualitative “traits” of various resources to better serve requests.
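As a sketch of where these options live, here is an illustrative nova.conf fragment; the values are examples only, and the [DEFAULT] placement of reserved_host_cpus is an assumption:

[DEFAULT]
# keep some physical CPUs for the hypervisor itself
reserved_host_cpus = 2

[filter_scheduler]
# weigh hosts with PCI devices so that non-PCI workloads avoid them
pci_weight_multiplier = 2.0

[compute]
# take a node out of service after this many consecutive failed builds
consecutive_build_service_disable_threshold = 10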

OpenStack Networking Service (Neutron)
Neutron PTL Kevin Benton tells us we should look for:

“Support for zero-downtime upgrades from Ocata (a.k.a. rolling upgrades)
haproxy is now used instead of the neutron namespace proxy agent for reduced memory usage on the server running the metadata proxy
Improvements to stability/performance

Improved stability of the OVS openflow-based firewall
Initial support for Python3
Improved communications pattern between server and L2 agents to reduce the Neutron server load
Conditional compare-and-swap updates in the Neutron HTTP API to give clients race-safe ways to update resources
DHCP agent support for subnets on other segments of a routed network

QoS Improvements

Support for bandwidth limit rules in the QoS extension to set bandwidth rate limits
Bidirectional bandwidth limit QoS rules in the OVS and Linux Bridge drivers
Egress bandwidth limit QoS rules for SR-IOV
A new API to retrieve supported QoS rule types by the loaded drivers

DVR Improvements

Support for partially distributed routing for limited availability external networks
Fix for DVR to work with floating IPs associated with unbound ports used in VRRP scenarios
DVR fast exit routing via the compute node for packets that don’t need network address translation

Support for quota usage amounts in quota API
Support for individual DNS domains set per Neutron port
Support for per-network MTU overrides
Support for user-defined tags on all standard Neutron resources”

OpenStack Block Storage Service (Cinder)
Cinder PTL Sean McGinnis tells us:

“We added a “revert to snapshot” feature that allows users to switch a volume’s data back to the point in time of the last snapshot.
Under certain conditions, we now support extending a volume that is in-use. This was previously only allowed if a volume was not attached to an instance. But Pike Cinder with Pike Nova using the libvirt driver can now extend a volume in use and reflect that change to the running instance.
We’ve added a backend_default config section. Prior to this, if you had a setting you would like to apply to all storage backends you needed to set that config option in each backend’s config section. This allows setting “default” for backends that can be overridden in the backend specific config, but otherwise will take the configured default.
Added volume group replication support. Prior to this, an admin could configure an entire backend to be replicated. With this option, users are able to define a group of volumes based on their own needs (all volumes that are part of an application, only DB volumes, etc) and have that group of volumes replicated to a secondary backend. Only a handful of drivers support this so far, but now that it is available we expect more backends to support it in coming releases.”
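As an illustration of the defaults idea McGinnis describes, a cinder.conf fragment might look like the sketch below; the section is spelled backend_defaults in the shipped configuration files, and the option values are examples only:

[DEFAULT]
enabled_backends = lvm-1

[backend_defaults]
# applies to every storage backend unless its own section overrides it
max_over_subscription_ratio = 20.0

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-1
# backend-specific override of the default above
max_over_subscription_ratio = 10.0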

OpenStack Image Service (Glance)

Avoid exposing the Tasks API to end users by using the new tasks_api_access policy to enable Glance to use ordinary user credentials to manage the tasks that accomplish the interoperable image import process.

OpenStack Orchestration Service (Heat)

Heat PTL Rico Lin tells us that the project has added new resources, including:

Neutron Trunk resource support (OS::Neutron::Trunk)
Support new Magnum Cluster and Cluster Template resources (OS::Magnum::Cluster and OS::Magnum::ClusterTemplate)
Custom resource type managed by Mistral workflows (OS::Mistral::ExternalResource)
Add Zun Container resources (OS::Zun::Container)

He also talks about the ability to use the get_reality function when updating: “You can use a `converge` flag in the update API request, and that update action will actually pull resources from services (like a Nova server or a Cinder volume) and update against reality. For example, if I create an instance with flavor m1.small, and someone updates it through the Nova API and resizes that instance to use m1.large, with the `converge` flag it will detect that the instance flavor has been changed and will trigger an update against the flavor and change it back to m1.small.”

OpenStack Dashboard Service (Horizon)

Just as we’ve had the ability to configure OpenStack clients by downloading openrc files from Horizon, Pike now gives us the ability to download a clouds.yaml file for os-client-config.
Create and delete ports in your networks using the project network details table. (As an operator, you can turn this on and off using policies.)
You can now specify “any” IP protocol and “any” port number when adding a security group rule.
You can now see which security groups apply to which Neutron ports.

OpenStack Identity Service (Keystone)
Keystone PTL Lance Bragstad tells us that “the following are some highlights of what we accomplished:

Registering default policies in code – this makes maintenance of policy files easier for operators, especially if they use mostly defaults
Enhanced security for passwords stored in SQL – the SQL identity backend has been updated to support more secure password hashing mechanisms that are more inline with industry standards”

OpenStack Object Storage Service (Swift)
Swift PTL John Dickinson let us know that these are “some of the major new features in Pike for Swift:

Support for globally-distributed erasure codes. This is made up of

Replicated erasure code fragments
Composite rings for more explicit data placement
Per-policy config options

Global erasure codes are implemented by replicating the erasure-coded fragments of an object. This “EC replication” allows each independent region to function even if the cross-region network is down, and it allows for failures in one region to use the remote region to recover.
In order to implement global erasure codes, we first had to support “composite rings”. A composite ring is a data placement ring that is made up of two or more “normal” rings. The component rings are built independently, using distinct devices in distinct regions. Building the composite rings in this way allows dispersion of replicas or fragments in a more explicit way (e.g. you can specify 4x replication with 2x in each region or you can specify 10+4 EC replicated across 2 regions).
We also added the ability to override proxy config options on a per-policy basis. This allows, for example, the ability to set read affinity for only some storage policies.”

OpenStack Telemetry Service (Ceilometer)
Telemetry PTL Julien Danjou tells us to look for the following additions to Ceilometer:

“Add support for Manila
Add support for SDN controllers”

OpenStack DNS as a Service (Designate)

Designate now enables you to schedule across pools.

OpenStack Bare Metal Provisioning Program (Ironic)
Ironic PTL Dmitry Tantsur tells us to look for:

“Booting from Cinder volumes
Physical network awareness
Rolling upgrades”

OpenStack File Service (Manila)

You can now set quotas per share type, as well as for the number of share groups and share group snapshots.
Shares backed by CephFS can now use the NFS protocol.
Manila has also added additional specs and support for IPv4 and IPv6, including validation of IPv6-based addresses and the ability to know whether IPv4 or IPv6 is supported in a driver.

OpenStack Containers Project (Magnum)

By default, Kubernetes clusters now include the Kubernetes dashboard.
Magnum now includes a monitoring stack based on cAdvisor, node-exporter, Prometheus and Grafana, but it must be enabled.
You can now restrict the access of Magnum’s trustID so that it doesn’t have unrestricted access to every service in your OpenStack project.

OpenStack Application Catalog Project (Murano)
Murano PTL Felipe Monteiro says that “some important things to look out for are:

Policy in code to fulfill: https://review.openstack.org/#/c/469954/
Murano environments can now select which volume/volume snapshots they want as an attachment”

OpenStack Big Data as a Service (Sahara)
Sahara PTL Telles Nobrega says that

“The major feature that we brought this cycle was the introduction of a new image generation and validation system. We still rely on disk image builder for most images, but we started with CDH on Pike. This system allows the user to create images using libguestfs and not rely on DIB anymore.”

OpenStack Policy as a Service (Congress)
Congress PTL Eric K tells us that “A focus for Congress Pike has been usability, especially for someone getting started. Here are some of the things to look forward to in the Pike release.

Policy library
An integrated library of useful policies for an administrator to customize and activate, allowing an administrator to quickly get value out of Congress even before learning how to author policy.
Monitoring panel
A monitoring panel that summarizes at a glance the number and seriousness of policy violations in a stack and offers drill-down into more details.”

OpenStack on OpenStack (TripleO)
TripleO PTL Emilien Macchi tells us to look for the following:

“The major work done in Pike cycle is the containerization of services deployed by TripleO.
We’re also supporting the upgrade from an Ocata baremetal deployment to a containerized Pike deployment, driven by Ansible tasks.
After composable roles in the previous releases, TripleO now supports Composable networks, so operators have full control on network configuration for their custom roles.”

OpenStack Workflow Service (Mistral)
Mistral PTL Renat Akhmerov says to look for:

“Finished the first version of Actions API (mistral-lib repo)
More advanced publishing of workflow variables (different scopes, more flexible etc.)
Mistral OpenStack actions can now run in different regions
Mistral actions can now run in the engine (no need for external executors)”

So that’s 53, but it’s only a tiny portion of the changes in OpenStack Pike.  If you want to hear more, join us next week for the What’s New in OpenStack Pike webinar!
Source: Mirantis

How to make AI a reality with App Connect

Artificial Intelligence (AI) is a big deal. IDC predicts widespread adoption of AI across multiple industries, with worldwide revenues increasing from “nearly $8.0 billion in 2016 to more than $47 billion in 2020.” And AI has arrived for integration solutions. Here’s why.
AI and the digital revolution
The growing interest in AI comes at a time when many businesses are transforming their organizational processes and models to leverage the latest technological capabilities – a process known as digital transformation. This shift aims to create more agile, innovative, efficient, data-driven and people-centric companies that deliver the best possible customer experiences. By integrating AI into this approach, businesses seek to not only automate digital processes but also to gain valuable insights from their data. This data can be used to identify new market opportunities and predict customer requirements.
So AI is a powerful tool for shaping and delivering an excellent customer experience. But its impact doesn’t need to be limited to the largest multinational corporations with huge budgets and specialist teams. Companies of all sizes can benefit from AI.
AI through cognitive integration
With App Connect, IBM’s cloud-based application integration solution, your business can add cognitive capabilities to your data flows in just a few clicks. App Connect features a no-code approach and intuitive Designer UI so you can integrate applications and build powerful data flows which can be exposed as APIs. By adding AI to this process, your business can quickly and effectively harness the extensive capabilities within IBM Watson to analyze data and automatically feed critical business applications.
While other software vendors will likely develop the ability to deliver cognitive capability within their applications, few are presently able to do so. By contrast, App Connect users can currently take advantage of the following cognitive connectors to Watson:

Watson Tone Analyzer: uses linguistic analysis to detect emotional, social, and language tones in written text
Watson Retrieve and Rank: can surface the most relevant information from a collection of documents
Watson Language Translator: uses Watson to translate text from one language to another
Watson Natural Language Classifier: helps your application understand the language of short texts, and make predictions about how to handle them
IBM Watson Campaign Automation: helps manage email marketing and lead-generation activities

Applying AI through integration
In practice, these connectors result in a variety of compelling use cases. I’ll use Watson Tone Analyzer as an example. By analyzing the tone a customer takes when sending a message through Salesforce Service Cloud, App Connect can apply conditional logic to determine what action to take next. If the customer uses particularly positive language, App Connect could prompt Survey Monkey to send the customer a feedback survey.
But if Watson determines the tone is particularly negative, App Connect could instruct Salesforce to find the customer’s account manager and alert them through MessageHub, suggesting they contact the customer to resolve the issue. To give the account manager adequate information to resolve the customer’s problem, App Connect could even supplement existing company data using Lucy, the cognitive enrichment service built on IBM Watson.
In this case, by using Watson Tone Analyzer connector as part of an App Connect integration flow, the user saves the time it would have taken to manually process customer requests. More critically, the customer receives a positive experience because they got hands-on resolution from an informed account manager.
Want to see this capability in action? Watch this video to see how it can be used to build a powerful integration flow that solves a similar use case:

For more information about how IBM App Connect and Watson enable users to drive greater efficiency and an excellent customer experience through a deeper understanding of their data, click here.
Source: Thoughts on Cloud

How your business can benefit from robotic process automation

Last month we announced the IBM and Automation Anywhere partnership, poised to deliver an advanced robotic process automation (RPA) platform. And this week we announced a new offering, IBM Robotic Process Automation with Automation Anywhere, available September 22. So what does this mean for your business? Here are some key takeaways for potential clients.
First, some details on value that the Automation Anywhere and IBM joint offering can bring to companies. Though there are other companies offering a combination of business process management and RPA, IBM and Automation Anywhere bring a depth of knowledge and breadth of capabilities that are unmatched in this area. Forrester ranks IBM as a leader in Business Process Management software and Automation Anywhere as a leader in Robotic Process Automation in its most recent Forrester Wave reports. This partnership will enable companies to leverage superior solutions for managing and improving routine processes, freeing time and resources to create greater business value.
Customers are already using the two offerings. IBM software manages process flows and decisions, and Automation Anywhere software automates human tasks and legacy integrations. One of the main goals of this partnership is to better integrate the offerings and make it even more beneficial for customers to use them together.
You might be wondering how RPA will be integrated with IBM Business Process Manager (BPM). If you currently have BPM, you can add RPA to it by referring specific work tasks to automated RPA bots—for example, loan origination with manual data entry.  If you currently have RPA, you could add BPM to orchestrate multiple RPA activities and to handle RPA exceptions—for example, order process tasks with exceptions. In fact, BPM Express is bundled with Automation Anywhere Enterprise in the new RPA offering to provide this joint value.
For IBM Operational Decision Manager (ODM) users who are looking to adopt RPA, ODM can be used to manage and automate decisions required in an RPA task—for example, shipping charge calculations based upon complex business rules followed by manual data entry.
These integrations could drive faster turnaround times for approvals and reduce errors associated with managing business processes manually.  Digital process automation offerings will provide these integrations with the availability of the RPA offering, followed by additional updates and enhanced integrations later in 2017 and in 2018.
How will this partnership affect other service providers and customers that are currently working with different RPA platforms? IBM digital process automation customers who are already using alternate RPA platforms will not be adversely impacted. IBM digital process automation offerings can work with RPA offerings sold by other providers. But the new joint solution will make it easier to use Automation Anywhere’s RPA capabilities within IBM digital process automation software.
So when can you get started with this offering? You can order it now, with availability in September, so contact your account representative for more information.
To learn more about optimizing RPA to suit your roadmap watch the “IBM and Automation Anywhere: Better together” webcast replay here.
Source: Thoughts on Cloud

Accessing Guest RDP and SSH via Custom Buttons

First let’s talk about Remote Session versus Remote Console; the two are often confused.
 

Remote Session – Provides the user a server session on the remote host. Multiple sessions can be established with same or different credentials.
Remote Console (Also known as Remote Control) – Provides the actual console screen to the user, still a session but the systems local session. Only one console session can exist. Any credentials with rights to log on locally can obtain the system session. (Default in Windows is Deny)

 

In CloudForms you have full visibility of all Virtual Machines and Instances in the configured providers. The inventory for any of these objects includes the configured hostnames. Given that the hostname is available, you can make a connection with any Remote Desktop Protocol (RDP) or Secure Shell (SSH) client.
 
Question: Can we make CloudForms call our client?
Answer: Yes, assuming your RDP or SSH clients have an association scheme in the browser you are using to access CloudForms, which is normally the case.
 
The association scheme is the prefix to the address, normally known as the URI protocol. For example:
 

HTTP://… – In a browser, takes you to a web site.
FTP://… – In a browser, opens a file view of an FTP source.

 
And in our case we want:
 

RDP://… – In a browser, asks you to launch an RDP client installed locally.
SSH://… – Similarly, this prefix should launch Terminal or whatever SSH client is associated with the SSH:// scheme.

 
Therefore this capability is available CLIENT SIDE. It is advised that you first test a simple connection in your browser, such as:
 
ssh://
 
This should prompt you to launch a program external to the browser, and you should get a terminal opening with a login prompt to the server. If this works correctly, we can configure CloudForms to do the same.
 
Configuring CloudForms
Creating the Button
We will create two buttons in a button group.
First we need a button group.
I called my button group “VM Ops”. We will add two buttons to this group after we have created them; the graphic shows these buttons present in my VM Ops group.

Create a Button for RDP
Set the Request to “launch_url_rdp”, and select the “Open URL” check box.

Create a button for SSH
Again, set the request to “launch_url_ssh”, and similarly check the “Open URL” check box.
Recap
We have two buttons in a button group, both of which launch a request called “launch_url_<type>”, where the type is ssh or rdp.

Automate Methods
The buttons call into Automate to an instance named “launch_url_<type>”. We need to create instances and methods for the buttons to follow.

Copy Request Class
First copy the ManageIQ/System/Request Class to an enabled domain of your choice.
If you already have a domain with the Request class present, you can use that class and bypass this step. Continue with Creating Instances.
Here is my example copied to a domain called Sample. Yours won’t have any instances or methods yet.

Create Instance for RDP
Create a new instance as follows:
Set the first “meth1” connection to “launch_url_rdp”

Create Instance for SSH
Create another new instance in the same Request Class in your domain, as follows:
Set the first “meth1” connection to “launch_url_ssh”

Create Methods
The instances are now created and configured to methods that do not exist, so let’s create those.

Create Method for RDP
Create a new method and enter the code described below.
The Ruby method code is very simple:

Line 1 – First we load the current VM in view as an object called “vm”.
Line 2 – Log the hostname that we are going to use to the automate log file.
Line 4 – Set the “remote_console_url” attribute in the loaded VM object to the VM hostname, with the scheme set to RDP.
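The original post showed this code as a screenshot; the following is a reconstruction from the description above, and the attribute assignment call is an assumption rather than verbatim code from the post:

# launch_url_rdp (reconstructed sketch)
vm = $evm.root['vm']                                     # Line 1: load the VM in view
$evm.log(:info, "RDP url for hostname #{vm.hostname}")   # Line 2: log the hostname we will use

vm.remote_console_url = "rdp://#{vm.hostname}"           # Line 4: set the URL with the RDP scheme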

Create Method for SSH
Create a new method and enter the code described below.
The method for “launch_url_ssh” is virtually identical to the one created for “launch_url_rdp”, with only the name of the method and the URL scheme changing.
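A matching sketch for the SSH method, under the same assumptions:

# launch_url_ssh (reconstructed sketch)
vm = $evm.root['vm']
$evm.log(:info, "SSH url for hostname #{vm.hostname}")

vm.remote_console_url = "ssh://#{vm.hostname}"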

Try it out!
Windows/RDP Test
Navigate to a Windows VM.
You can see your new “VM Ops” button group.
Select the RDP console button; after 2-5 seconds the browser should launch your local RDP client.
You may be prompted by your browser as follows:
CoRD is the RDP client I have installed; clicking Open CoRD launches the application as follows:
Linux/SSH Test
Navigate to a Linux VM.
You can see your new “VM Ops” button group.
Select the SSH console button; after 2-5 seconds the browser should launch your local SSH client, usually Terminal.
You may be prompted by your browser as follows:
Terminal is installed by default on Mac OS X, CentOS, and Red Hat Enterprise Linux, but you will need to install a terminal application, such as PuTTY, on Windows machines.
Summary
This concludes how to add custom buttons to CloudForms for SSH and RDP virtual machine access.
 
Further information to the actual implementation of “Open URL” in custom buttons can be found in this ManageIQ Pull Request.
 
We will improve this blog over time by showing you how to use wildcard instances and assertions in Automate so that you have a single “launch_url_console” entry point, with instances and methods automatically selecting the right scheme based on the operating system type, plus some more robustness around using public or private IP addresses and hostnames.
Source: CloudForms

Retailers and producers turn to IBM Blockchain to improve food safety

According to the World Health Organization, around 420,000 people die each year because of food contaminated by bacteria, chemicals, viruses, parasites and toxins. With more information and an increased capacity to monitor food safety, those deaths could be prevented.
That’s the idea behind a collaboration between a group of 10 food producers and retailers — Dole, Driscoll’s, Golden State Foods, Kroger, McCormick and Company, McLane Company, Nestlé, Tyson Foods, Unilever and Walmart — and IBM Blockchain. They’re working together to highlight the “most urgent areas” in the global food supply chain.
Forbes explains further:
By using blockchain, when a problem arises, the potential is to quickly identify what the source of contamination is since one can see across the whole ecosystem and where all the potential points of contamination could be using the data to pinpoint the source. As such it is “ideally suited” according to IBM to address these challenges because it establishes a trusted environment for all transactions.

IBM has already run a number of pilot projects to demonstrate how its blockchain platform, which is available via IBM Cloud, can improve food safety and traceability. One such project announced in October 2016, a collaboration with Walmart and Tsinghua University, looked to improve tracking and transit of food in China.
Read Forbes’ full article to find out more about how food companies are collaborating with IBM Blockchain.
Source: Thoughts on Cloud

NFV & Carrier SDN

Source: Mirantis