Introduction to YAML, Part 2: Kubernetes Services, Ingress, and repeated nodes

In part 1 of this series, we looked at the basics behind YAML and showed you how to create basic Kubernetes objects such as Pods and Deployments using the two basic structures of YAML, Maps and Lists. Now we’re going to look at enhancing your YAML documents with repeated nodes in the context of Kubernetes Services, Endpoints, and Ingress.
Let’s start with a basic scalar value.
A simple repeated scalar value in YAML: building a Kubernetes Service
To see how we can create a simple repeated value, we’re going to look at Kubernetes Services. A complete look at Services is beyond the scope of this article, but there are three basic things you need to understand:

Services are how pods communicate, either with each other or with the outside world. They do this by specifying a port for the caller to use, and a targetPort, which is the port on which the Pod itself receives the message.
Services know which pods to target based on labels specified in the selector.
Services come in four different types:

ClusterIP: The default ServiceType, a ClusterIP service makes the service reachable from within the cluster via a cluster-internal IP.
NodePort: A NodePort service makes it possible to access a Service by directing requests to a specific port on every Node, accessed via the NodeIP. (Kubernetes automatically creates a ClusterIP service to route the request.) So from outside the cluster, you’d send the request to <NodeIP>:<NodePort>.
LoadBalancer: In order to use a LoadBalancer service, you have to be using a cloud provider that supports it; it’s the cloud provider that actually makes this functionality available. This service sits on top of NodePort and ClusterIP services, which Kubernetes creates automatically.
ExternalName: An ExternalName service maps the Service to an external DNS name (via a CNAME record), such as a fully qualified domain name, rather than routing to Pods.

OK, with the basics under our belt, let’s take a look at actually creating one.
Repeated values with anchors and aliases
In part 1, we covered the basics of creating Kubernetes objects using YAML, and creating a Service is no different.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 80
As you can see, we’re creating an object just as we did in Part 1, with metadata and a spec. Metadata is the same as it was when we were dealing with Deployments, in that we are specifying information about the object and adding labels to any instances created.
As for the spec, a Service needs two basic pieces of information: a selector, which identifies Pods that it should work with (in this case, any pods with the label app=nginx) and the ports the service manages. In this case, we have two external ports, both of which get forwarded to port 80 of the actual pod.
So let’s make this more convenient.  We can create an anchor that specifies a value, then use an alias to reference that anchor.  For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: &target 80
    name: http
    targetPort: *target
  - port: 443
    name: https
    targetPort: *target
We create the anchor with the ampersand (&), as in &target, then reference it with the alias created with the asterisk (*), as in *target.  If we were to put this into a file and create it using kubectl, we would get a new Service, as we can see:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          32d
nginx        ClusterIP   10.107.206.48   <none>        80/TCP,443/TCP   13m
If we then went on to describe the service, we could see that the values carried through:
$ kubectl describe svc nginx
Name:           nginx
Namespace:      default
Labels:         app=nginx
Annotations:    kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p…
Selector:       app=nginx
Type:           ClusterIP
IP:             10.107.206.48
Port:           http  80/TCP
TargetPort:     80/TCP
Endpoints:      <none>
Port:           https  443/TCP
TargetPort:     80/TCP
Endpoints:      <none>
Session Affinity:  None
Events:         <none>
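It's worth noting that anchors and aliases are resolved by the YAML parser itself, before kubectl ever sends the document to the API server. The ports section above is exactly equivalent to writing the values out by hand:
ports:
- port: 80
  name: http
  targetPort: 80
- port: 443
  name: https
  targetPort: 80
Kubernetes never sees the anchor; it only receives the expanded document, which is why the describe output shows plain values.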

Now if we wanted to change that port, we could do it simply by changing the anchor:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: &target 88
    name: http
    targetPort: *target
  - port: 443
    name: https
    targetPort: *target
We can then apply the changes…
$ kubectl apply -f test.yaml
service/nginx configured
… and look at the newly configured service:
$ kubectl describe svc nginx
Name:           nginx
Namespace:      default
Labels:         app=nginx
Annotations:    kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p…
Selector:       app=nginx
Type:           ClusterIP
IP:             10.107.206.48
Port:           http  88/TCP
TargetPort:     88/TCP
Endpoints:      <none>
Port:           https  443/TCP
TargetPort:     88/TCP
Endpoints:      <none>
Session Affinity:  None
Events:         <none>

As you can see, all three values were changed simply by changing the anchor. Handy! Fortunately, anchors aren't limited to scalar values; we can also create them for more complicated structures.
Anchors for non-scalars: Creating Endpoints
Endpoints are, as in other applications, the target to which you’ll send your requests in order to access an application. Kubernetes creates them automatically, but you can also create them manually and link them to a specific service.  For example:
apiVersion: v1
kind: Endpoints
metadata:
  name: mytest-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports:
      - name: myport
        port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.10.101
    ports:
      - name: myport
        port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.10.102
    ports:
      - name: myport
        port: 1
        protocol: TCP
As you can see, what you have here is the basic structure, only instead of a spec, we have subsets, each of which consists of one or more IP addresses and the ports to access them.
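A quick aside on the linkage mentioned above (this companion Service is a sketch, not part of the original example): Kubernetes matches manually created Endpoints to a Service by name. So a Service without a selector, named mytest-cluster, would be backed by the addresses defined here:
apiVersion: v1
kind: Service
metadata:
  name: mytest-cluster
spec:
  ports:
    - name: myport
      port: 1
      targetPort: 1
Because the Service has no selector, Kubernetes doesn't manage its Endpoints automatically; it uses the ones we created instead.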
So now let’s look at creating an anchor out of one of those port definitions:
apiVersion: v1
kind: Endpoints
metadata:
  name: mytest-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports: &stdport
      - name: myport
        port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.10.101
    ports: *stdport
  - addresses:
      - ip: 192.168.10.102
    ports: *stdport
If we describe the endpoints we can see that they’ve been created as we expect:
$ kubectl describe endpoints mytest-cluster
Name:         mytest-cluster
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"mytest-cluster","namespace":"default"},"subsets":[{"addresses":…
Subsets:
  Addresses:          192.168.10.100,192.168.10.101,192.168.10.102
  NotReadyAddresses:  <none>
  Ports:
    Name    Port  Protocol
    ----    ----  --------
    myport  1     TCP

Events:  <none>
But when you’re using an alias for a structure such as this, you’ll often want to change a specific value and leave the rest intact.  We’ll do that next.
Changing a specific value: Kubernetes Ingress
In this final section, we’ll look at creating a Kubernetes Ingress, which makes it simpler to create access to your applications. We’ll also look at another aspect of using aliases.
In the previous section we looked at replacing entire objects with an alias, but sometimes you want to do that with slight changes.  For example, we might have an Ingress that looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
        - path: /testpath
          backend: &stdbe
            serviceName: test
            servicePort: 80
        - path: /realpath
          backend: *stdbe
        - path: /hiddenpath
          backend: *stdbe
In this case, we have three paths that all point to the same service on the same port.  But what if we want to have one path that points to another port? To do that we want to override one of the existing values, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
        - path: /testpath
          backend: &stdbe
            serviceName: test
            servicePort: 80
        - path: /realpath
          backend: *stdbe
        - path: /hiddenpath
          backend:
            <<: *stdbe
            servicePort: 443
Now, a couple of things to note here. An alias normally replaces an entire value, so if we simply wrote backend: *stdbe we'd get an exact copy of the anchored map, with no way to change servicePort. Instead, we use the merge key (<<), which merges the key/value pairs of the anchored map into the current map. Any key we then define locally at the same level, such as servicePort, overrides the merged value.
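Stripped of the Kubernetes context, the merge key behaves like this (a minimal sketch; << is a YAML 1.1 convention that most, but not all, parsers support):
defaults: &stdbe
  serviceName: test
  servicePort: 80

merged:
  <<: *stdbe        # copies serviceName and servicePort from the anchor
  servicePort: 443  # a local key overrides the merged value
After parsing, merged contains serviceName: test and servicePort: 443.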
Now if we go ahead and apply this YAML, we can see the results:
$ kubectl apply -f test.yaml
ingress.extensions/test-ingress configured

$ kubectl describe ingress test-ingress
Name:             test-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path         Backends
  ----  ----         --------
  *
        /testpath    test:80 (<none>)
        /realpath    test:80 (<none>)
        /hiddenpath  test:443 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"test-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/testpath"},{"backend":{"serviceName":"test","servicePort":80},"path":"/realpath"},{"backend":{"serviceName":"test","servicePort":443},"path":"/hiddenpath"}]}}]}}
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:  <none>
So that’s anchors and aliases. If you want more information on YAML, including using specific data types, feel free to check out this webinar on YAML and Kubernetes objects.
 
The post Introduction to YAML, Part 2: Kubernetes Services, Ingress, and repeated nodes appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

3 open source solutions that could help mitigate natural disasters

In the last 20 years, more than 2.5 billion people have been affected by natural disasters. In only the past few years, Puerto Rico suffered a massive hurricane, wildfires in California destroyed thousands of homes and an earthquake devastated parts of Mexico City.
To help pilot technology that addresses these issues, IBM sponsored Call for Code, a massive, open source challenge that brings together the developer community to solve some of the world’s toughest problems. More than 100,000 developers have contributed, and the results have been astounding. A key theme that emerged in 2018 submissions is connecting people to much-needed aid immediately following natural disasters.
Here are three such disaster relief projects from Call for Code:
1. Project Lantern
After the 2017 earthquake in Mexico City, Suba Udayasankar was determined to help protect her community.
She teamed up with a group of engineers from around the world to create Project Lantern, a combined hardware and software solution that helps people stay connected when normal connections are down.
The solution works by distributing a series of 3D-printed devices called lanterns across the city. The lanterns sync to the cloud when an internet connection is available and store data locally when it is not. All of the lanterns then connect with each other to create a local, offline mesh network. This enables connectivity and communication during disaster scenarios.
Watch the video.
2. Drone Aid
Hurricane Maria left many communities in Puerto Rico struggling to receive aid. Some of the people who needed the most help lived in rural areas where communications were especially challenging.
After seeing the tragic impact on his home island, Pedro Cruz knew he had to do something to help, so he created Drone Aid.
Drone Aid works using a visual vocabulary that drones are programmed to understand. Members of the community are then given signage that communicates in this vocabulary.
When disaster strikes, aid workers can use drones to quickly communicate with victims and shorten response times, even when roads are damaged or unavailable.
Watch the video.
3. WOTA
In the aftermath of the 2011 Tsunami in Japan, some homes were left without running water for as long as three months. Shelters were able to provide drinking water, but without water for showers and washing, victims faced serious health risks.
The solution, developed by Richard Yuwono, Shohei Okudera and Ryo Yamada, is WOTA, a compact, inexpensive water sensor module that uses Internet of Things (IoT) technology to measure properties such as water quality, flow and pressure.
WOTA works by placing modules at multiple locations in the treatment process to monitor the water flowing into and out of filters. The modules send the data to the lab, where the team can analyze quality, predict when filters need to be replaced and detect anomalies. The system can run offline, but when a connection is available, data is sent to the cloud and stored along with user and environmental data.
WOTA makes purification extremely efficient. The system can recover and recycle more than 95 percent wastewater from a shower. WOTA showers are also portable and can be set up at shelters with a single tank of water.
All of this makes it easier for communities to provide disaster victims with access to clean water and help reduce health risks during difficult times.
Watch the video.
Code and Response
IBM leaders were so impressed by Call for Code solutions that there is now an effort in place to turn these ideas into realities. The $25 million, four-year initiative will build, fortify, test and launch open technology solutions to help communities.
Cloud technology has the power to connect devices and bring people together during some of the toughest possible situations.
To learn more about how IBM is working to make these ideas a reality and join the 2019 challenge, visit the official Code and Response webpage.
The post 3 open source solutions that could help mitigate natural disasters appeared first on Cloud computing news.
Source: Thoughts on Cloud

How to automatically generate a new metric and a new log stream in Service Mesh

One of the advantages of deploying a microservice-based application in an Istio service mesh is that it allows you to externally control service monitoring, tracing, request (version) routing, resiliency testing, security and policy enforcement, and more, in a consistent way across those services, for the application as a whole. In this blog we will focus on the […]
The post How to automatically generate a new metric and a new log stream in Service Mesh appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Upcoming Silicon Valley OpenShift Commons Gathering, March 11 on Operating at Scale with Speakers Google, Facebook, Uber, Red Hat and Rook

Check out the packed Agenda for the OpenShift Commons Gathering in Santa Clara on March 11th! The OpenShift Commons Gathering on “Operating at Scale” will feature speakers from Uber, Google, Facebook, Rook.io and Red Hat. Be Sure to Add the Gathering to Your Kubecon/NA Registration Today! The OpenShift Commons Gathering brings together experts from all over […]
The post Upcoming Silicon Valley OpenShift Commons Gathering, March 11 on Operating at Scale with Speakers Google, Facebook, Uber, Red Hat and Rook appeared first on Red Hat OpenShift Blog.
Source: OpenShift

The top 6 cloud announcements from Think 2019

A central theme of IBM Think 2019 was cloud transformation.
Thousands of business leaders and technical experts from around the world gathered in San Francisco to share ideas, build partnerships and learn about the future of IT, including what’s next with the cloud. For those who weren’t able to attend, here are some of the biggest cloud highlights from the event.
1. The IBM Chairman’s Address.
IBM chairman, president and CEO Ginni Rometty took the stage to share her vision for how enterprises can move beyond early adoption and tackle the second chapter of cloud transformation. Watch the replay.
2. Watson Anywhere.
With Watson Anywhere, the full suite of IBM Watson products will now run anywhere: on any cloud, from any vendor. For the first time ever, artificial intelligence (AI) is truly multicloud. Read the announcement.
3. New IBM cloud services.
Many clients are looking for ways to tackle some of the complexities and challenges of multicloud.
With new IBM Services for Cloud Strategy and Design, IBM is providing a comprehensive set of new consulting services to advise organizations on their cloud journeys. Read the announcement blog post.
4. Breakthroughs in cloud security.
According to the IBM Institute for Business Value, 57 percent of the IT managers surveyed worry about security and compliance. At Think 2019, IBM Cloud announced a host of new security capabilities that will help create end-to-end protection in the multicloud world. Read the announcement.
5. New IBM VMware capabilities.
IBM and VMware showed off new security and integration capabilities for modernizing and migrating workloads. Read the announcement.
6. Open source panel.
Ginni Rometty sat down with the world’s leading experts on open source to discuss the present and future of open innovation. Watch the replay.
These are only a few of the many big announcements and discussions that took place at Think 2019. To learn more, visit the website and browse recordings of some of the major sessions.
The post The top 6 cloud announcements from Think 2019 appeared first on Cloud computing news.
Source: Thoughts on Cloud

Credits uses cloud to encourage mass adoption of public blockchain

Blockchain is attracting a lot of interest, especially since cryptocurrencies have exploded into the mainstream.
To access a market that’s estimated to be worth trillions of dollars, our company, Credits, created a fast, cost-effective public blockchain platform hosted on scalable IBM Cloud infrastructure.
Removing barriers to blockchain
Innovators across all industries are looking to the capabilities of blockchain to revolutionize their businesses. Yet many are still hesitant to take the leap, deterred by high costs and low transaction speed.
Credits’ founders wanted to change this. We set out to create a network capable of processing more than a million transactions per second for low fees. We developed one of the first completely autonomous blockchain platforms, which enables the creation of services using smart contracts and process scheduling.
To help Credits launch the platform commercially, we needed a cloud platform with high processing power, low latency and a cost-effective pricing model. We began looking for a reliable cloud service provider to get our unique solution to market quickly.
Building solid foundations
We chose IBM Cloud solutions because they beat the competition on quality, price and support. Crucially, IBM doesn’t charge for internal traffic, which offers us huge savings compared to competing offerings. IBM is also a leading expert in private blockchain technology, helping us to take advantage of its extensive knowledge to accelerate development of our platform.
IBM ran a proof of concept to find us the optimal infrastructure, designing an environment based on IBM Cloud bare metal servers. The platform is distributed across IBM Cloud data centers around the globe: in the US, Brazil, India, Singapore, the Netherlands, the UK and Germany. With a dedicated IBM consultant to help, migrating our solution to the IBM Cloud environment was a seamless experience.
The beta version of our blockchain platform is now available, providing users with a fast, scalable platform for development of decentralized applications (dApps). It includes autonomous smart contracts and an internal cryptocurrency.
Aiming high
Thanks to the exceptional power and pricing of IBM Cloud, we can offer a speed and cost of transaction far superior to our closest competitors.
The Credits platform supports volumes of more than a million transactions per second, compared to the seven per second and 300 per second currently offered by the two most established public blockchain providers. It offers a transaction execution speed of as little as 0.01 seconds, while our competitors take minutes. Our transaction fee can be as low as a tenth of a US penny, rather than the $10 currently charged by a major provider.
Our work with IBM has been so successful that we signed a six-year contract for IBM solutions. We are also working with IBM to apply our unique blockchain technology to Internet of Things (IoT) use cases to increase trust and security.
Many companies don’t know where to get started with blockchain. Credits is providing an answer with help from IBM Cloud. With our platform’s guaranteed performance and scaling, innovators can focus on developing dApps that create real competitive advantage for their enterprises.
Read the case study for more details.
The post Credits uses cloud to encourage mass adoption of public blockchain appeared first on Cloud computing news.
Source: Thoughts on Cloud

What makes an intelligent business process management suite (iBPMS) stand above the rest?

Digital transformation is about completing work differently, and not simply for the sake of change.
It should address the dynamic business needs of the customer while optimizing the operational processes that impact the cost of service. As organizations pursue transformation, they often quickly realize that better customer and employee experiences need better and smarter automation solutions that provide workflow capabilities that address needs today and dynamically realign workflow solutions based on the demands for tomorrow.
Current digital transformation programs struggle because they address the process problems for today but do not provide flexibility for change.
Intelligent business process management suites (iBPMS) combine business process management (BPM) software with additional capabilities such as artificial intelligence (AI) to help companies dynamically automate more types of start-to-finish experiences. These suites are often cloud-enabled and provide low-code tools that help citizen developers create workflow solutions very quickly.
An iBPMS is an integrated set of technologies that coordinates people, machines and things (as in the Internet of Things) and supports traditional business process management requirements. These technologies also offer:

Intelligence and support for industry and organizational specific processes
A greater level of collaboration throughout the process, which encourages a wider adoption across organizational departments and can lead to a higher level of change and improvement within current processes
Support for integration to various middleware and back-end technologies, providing organizations with the capability to go to market quickly with new offerings

Choosing the right iBPMS platform provider
Picking the right technology provider can be daunting, even when you have a clear goal and strategy, because there are so many providers saying many of the same things. The iBPMS space is no different.
The “right” iBPMS platform provider depends a lot on the current state of your business process maturity, the activities that support your processes, availability of skilled resources (systems, humans, robots) and access to data. Offering capabilities such as AI can play a major role as well.
When it comes to iBPMS selection, the evaluation criteria have changed over the years as new technology providers enter the field and others move to the sidelines. Gartner recently released its Magic Quadrant for Intelligent Business Process Management Suites, in which it evaluated vendors across four use cases: digital business optimization, digital business transformation, self-service intelligent business process automation and adaptive case management.
Gartner evaluated 21 iBPMS vendors on their ability to execute and completeness of vision, while outlining key vendor strengths and cautions. Vendors in the Leader quadrant have strong product roadmaps that support intelligent operations for client business. Leaders also provide a business-oriented methodology and the ability to grow with changing needs.

IBM named a Leader
According to the Gartner report, “In this latest reinvention of its suite, IBM has taken different products that had been loosely integrated and reimagined the collection as a unified platform — operating from a common data model with a consistent, web-based design and end-user experience. The end result enables multiple roles to collaborate on building intelligent applications.”
The IBM comprehensive Automation Platform for Digital Business, a powerful combination of process, task and decision automation paired with content services, is the driving force behind Gartner naming IBM a Leader in the Magic Quadrant for Intelligent Business Process Management for the sixth time in a row.
Get the full Gartner report comparing iBPMS vendors.
The IBM Automation Platform for Digital Business enables clients to automate workflows and decisions while deriving insight from the content within those business processes with speed and at scale. IBM clients have created and are running more than 50,000 applications on this platform.
Learn more about IBM workflow automation.
Disclaimer:
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. This Gartner document is available upon request from IBM.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The post What makes an intelligent business process management suite (iBPMS) stand above the rest? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Re-Imagining Virtualization with Kubernetes and KubeVirt – Part II

KubeVirt – Traditional Virtualization with the New Kubernetes Previously, in Part I of this blog series, we introduced the idea that KubeVirt is about balance: Mature virtualization features and concepts, yet Kubernetes philosophy and semantics. Pods and containers disappear when stopped. Virtual machines have a life cycle and their configurations persist so they can be […]
The post Re-Imagining Virtualization with Kubernetes and KubeVirt – Part II appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift 4: A NoOps Platform

In the previous post I described the goals that helped shape the OpenShift 4 vision.  We want to make the day to day of software operations effortless – for operations teams and for developers.  How do we make that goal – a NoOps platform for operations – a reality? What does “NoOps” mean in this […]
The post OpenShift 4: A NoOps Platform appeared first on Red Hat OpenShift Blog.
Source: OpenShift

IBM Cloud Owners’ Declaration of Rights: Do you know yours?

Cloud computing companies including IBM are being judged not just by what we can achieve by using data, but by whether we can be trusted with your data.
In 2019, the policies and protections that vendors provide for cloud and cloud-enabled technologies, such as artificial intelligence (AI), are such increasingly fundamental issues that it’s helpful to think of them as “rights,” because these are the technologies that manage sensitive information about each of us.
American citizens depend on cloud services — whether we know it or not — for everything from filing taxes to food safety and national security. We think that when government IT leaders evaluate cloud vendors, they should keep several things in mind to ensure those vendors are responsible and transparent stewards of data.

Make sure your cloud provider has clear policies on how it uses data, and that those policies align with your agency’s values and regulations. IBM was one of the first cloud vendors to publicly commit to a strong policy about data privacy and transparency.
Make sure that your data remains your data when working with cloud vendors and their services. The insights you generate from analytics or AI shouldn’t be used by your vendor to gain profit with other clients.

There’s a lot more you should not only expect, but demand from your cloud provider. We’ve outlined these expectations and are calling it our Cloud Owners’ Declaration of Rights, explained here in an infographic for you to download and share. Ask your cloud vendor how their cloud data policies satisfy these rights.

The post IBM Cloud Owners’ Declaration of Rights: Do you know yours? appeared first on Cloud computing news.
Source: Thoughts on Cloud