IBM and VMware extend their partnership to IBM Business Partners

At IBM InterConnect 2016, IBM and VMware announced a strategic partnership to accelerate enterprise hybrid cloud adoption. Since then, the two companies have jointly provided additional services and solutions for clients to optimize, integrate and extend their VMware environments to the cloud with ease and speed.
This initiative has grown to include more than 1,000 clients and 4,000 IBM service professionals.
And now, in an industry first, IBM and VMware are pleased to announce, in conjunction with the PartnerWorld Leadership Conference, that they've expanded their partnership to IBM Business Partners, providing even more reach to VMware and IBM Cloud users around the world with a portfolio of services that includes planning, architecture, migration, and end-to-end management. This exclusive agreement enables IBM Business Partners to resell VMware Cloud Foundation and VMware software licenses on IBM Cloud.
VMware Cloud Foundation
This standardized software-defined data center solution brings together IBM Bluemix infrastructure with VMware vSphere, Virtual SAN, NSX and SDDC Manager for a seamless hybrid cloud experience.
VMware software licenses
Organizations can deploy a custom VMware environment by combining IBM Bluemix infrastructure and VMware software licenses to fit their unique business requirements.
Another benefit is that IBM Business Partners and their clients can bring their own VMware licenses to the IBM Bluemix infrastructure.
VMware will be joining IBM at InterConnect 2017, 19–23 March. Check out the VMware on IBM Cloud sessions.

Source: Thoughts on Cloud

Red Hat Announces Schedule and Speaker Line-Up for OpenShift Commons Gathering March 28th in Berlin

The OpenShift Commons Gathering will bring together the brightest technical minds to discuss the future of OpenShift and its related upstream open source projects. With OpenShift Container Platform quickly gaining adoption around the world, the OpenShift Commons Gathering will feature talks from upstream project leads and case studies from users like Red Hat, Google, Microsoft Azure, Amadeus, T-Systems, Volvo, Weave, CNCF and more. This event will also include face-to-face meetings for all the OpenShift Commons Special Interest Groups and allow ample time for peer-to-peer networking.
Source: OpenShift

Bluemix powers productivity for IBM Business Partners

What do data discovery, call center productivity and genomic data analytics have in common?
Though they’re functions of wildly different industries and businesses, they’re all areas in which IBM Business Partners have put Bluemix to use, driving digital transformation.
IBM launched Bluemix for its business partners channel in September 2015, and by the last quarter of 2016, almost a third of Bluemix signings were from partners. Those partners make use of the 54 IBM Cloud data centers worldwide to create custom solutions that make their businesses run more smoothly.
At the PartnerWorld Leadership Conference in Las Vegas this week, three such IBM Business Partners shared their successes with Bluemix:

Mark III Systems offers clients the IBM Cognitive Call Center on Bluemix to filter, analyze and take actions based on calls to and from client call centers. Working with call center software provider Cistera, Mark III’s development unit, BlueChasm, brought about an 80 percent increase in call center productivity.
Global information technology, consulting and business process services company Wipro offers clients the Bluemix-built Data Discovery Platform. The platform uses analytics to help clients anticipate problems and reduce costs. One client, Western Power in Australia, saved $5 million in one year on overhead power line replacement and maintenance.
Bluebee uses IBM Cloud bare metal servers, Aspera and Cloud Object Storage to run a cloud-based genomics platform that enables speedy processing of large volumes of data for cancer diagnostics. It has reduced time to diagnosis from five days to two.

Also at the PartnerWorld event, IBM announced the first-ever Watson Build, a program which encourages partners to brainstorm and pitch new cognitive business solutions, similar to a program for IBM employees. IBM will offer technical and business support while partner teams develop their solutions.
Along with those announcements, IBM unveiled a new PartnerWorld program with 40 competencies, including information, risk and protection; cloud video; high-speed transfer; continuous engineering; and global financing. Partners will also have access to PartnerWorld Advisor, a new voice- and text-activated, Watson-based support tool.
Find out more about IBM PartnerWorld.
Source: Thoughts on Cloud

The “open” part of open source doesn’t mean “free”

It’s a common misconception that opening the source code for a product is a fast, easy path to success. After all, how much easier can it get than to have other developers fix your bugs, add new features and answer support questions?
That panacea is what draws so many misguided tech companies to release open source software (OSS) versions of their products. They aim to draw in crowds of free-version users who will, supposedly, shift over to becoming paid subscribers of a different offering.
Let’s take a good look at that “free” offering and see what it really means to the programmer providing it to the developer community.
Free marketing?
There are a lot of OSS repositories out there. If you are using your open source project to drive downloads and build a lead funnel, you will have to spend as much on marketing as you would for a traditional project, or possibly more. If you are using it as a recruitment tool for new developers, or to build community around your software, you will still have to allocate some investment to getting the word out that you're in the OSS space.
Free community management?
Let’s say promoting your product goes well and the community grows the way you hoped it would. Now you need to manage that community. And it’s potentially two communities – users and contributors – each making their own demands on your time.
Users want documentation, support and the ability to request new features. You will need to make sure you have someone listening and responding to all of that feedback. Successful open source projects often have an engaged set of advocates who can help you. But you need to constantly nurture that group to make sure they stay engaged.
Contributors are looking for a way to make a difference. They could be your most valuable commodity. But you will have to work with them to make sure they understand the vision of the product. And you will need to engage in a dialog about the contributions you need against the contributions they want to make.
Free repository management?
Oh, and let's not forget that you also have a bunch of code and documentation to maintain. Performing code reviews, merging pull requests and keeping the documentation up to date requires a level of staffing that people often forget to allocate. Like any good software, an OSS project is only as good as its underlying code hygiene. With a lot of contributors, this becomes an important part of your mission as the project owner.
OSS drives some of the most rewarding collaboration and innovation in our industry. But it’s important to see it as an investment in time, money and staffing if you want to be successful.
To learn more about Open Source, join me and several other experts at the Open Technology Summit at IBM InterConnect in Las Vegas on March 19, 2017.
Source: Thoughts on Cloud

Taking the Canadian insurance industry digital

The world is growing more digital by the hour. Studies show that the demographics of people who use the Internet are shifting dramatically.
Traditionally, consumers bought home and auto insurance from brokers who worked behind desks and manned phones in brick-and-mortar offices. Twenty years ago, four of five people who bought life insurance used a broker.
That’s changing, and as a result more and more people — especially millennials — have a growing preference to use online technology, whether it’s web-based or app-based, to satisfy their insurance needs.
Launching an online insurance option
At Economical Insurance, a large Canadian insurer that has been operating for 145 years, we conducted extensive research to better understand the market. To remain competitive, Economical had to become a multi-channel business, embracing a different strategy by offering an alternative purchase path for customers.
So in 2016 we launched Sonnet Insurance Company, a digital brand that provides online products and services with a customer-first attitude. With Sonnet, Economical disrupted the industry in Canada, going from zero ability to sell insurance online to providing customers with Canada’s first and, to date, only direct-digital business that enables customers to get a quote and buy home and auto insurance completely online.
Sonnet hosts its entire portfolio of workloads, applications and solutions on the IBM Cloud infrastructure. And, amazingly, it took just six months from inception for Sonnet to get its digital business up and running. Sonnet has become the smart, new home and auto insurance experience, done completely online.
A tailored online experience
Consumers can visit the customer-friendly Sonnet site, kick the tires and get a quote. The cloud-based solution uses technology and multiple data inputs to offer each policyholder recommendations and meet each visitor's unique needs.
Typically when consumers buy insurance in Canada, they aren’t actually insured right away. An agent must first contact the consumer and send information back. But not with Sonnet. It has the first application that allows autobinding, so when a consumer buys a policy online, it becomes active immediately, without the need to ever speak to or correspond with a broker.
From beginning to end, Sonnet provides a trusted, transparent and simple process. We use data-driven analytics and inputs from multiple sources to tailor service to each individual customer.
We anticipate steady growth among a distinct behavioral segment of the market, particularly millennials, that prefers to buy home and auto insurance online.
Learn more about Sonnet in this news release and read the white paper on IBM for insurance.
Source: Thoughts on Cloud

Introduction to Salt and SaltStack

The amazing world of configuration management software is really well populated these days. You may already have looked at Puppet, Chef or Ansible, but today we focus on SaltStack. Simplicity is at its core, without any compromise on speed or scalability. In fact, some users have up to 10,000 minions or more. In this article, we're going to give you a look at what Salt is and how it works.
Salt architecture
Salt remote execution is built on top of an event bus, which makes it unique. It uses a server-agent communication model where the server is called the salt master and the agents the salt minions.
Salt minions receive commands simultaneously from the master and contain everything required to execute commands locally and report back to the salt master. Communication between master and minions happens over a high-performance data pipe that uses ZeroMQ or raw TCP, and messages are serialized using MessagePack to enable fast and light network traffic. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication.
States are described in YAML and remote execution is available from the CLI, so programming or extending Salt isn't a must.
Salt is heavily pluggable; each function can be replaced by a plugin implemented as a Python module. For example, you can replace the data store, the file server, authentication mechanism, even the state representation. So when I said state representation is done using YAML, I’m talking about the Salt default, which can be replaced by JSON, Jinja, Wempy, Mako, or Py Objects. But don’t freak out. Salt comes with default options for all these things, which enables you to jumpstart the system and customize it when the need arises.
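To make this concrete, here is what a state file looks like with the default YAML + Jinja rendering pipeline. The formula name, path and package are illustrative examples, not taken from this article:

```yaml
# /srv/salt/webserver/init.sls -- a minimal, hypothetical state file.
# Jinja is evaluated first, then the result is parsed as YAML.
{% set pkg = 'nginx' %}

{{ pkg }}:
  pkg.installed: []      # ensure the package is present
  service.running:       # ...and that the service is up
    - require:
      - pkg: {{ pkg }}
```

A minion would apply this formula with something like `salt 'saltstack-m01' state.apply webserver`.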
Terminology
It's easy to be overwhelmed by the obscure vocabulary that Salt introduces, so here are the main Salt concepts that make it unique.

salt master – sends commands to minions
salt minions – receive commands from the master
execution modules – ad hoc commands
grains – static information about minions
pillar – secure user-defined variables stored on the master and assigned to minions (equivalent to data bags in Chef or Hiera in Puppet)
formulas (states) – representation of a system configuration; a grouping of one or more state files, possibly with pillar data and configuration files or anything else which defines a neat package for a particular application
mine – area on the master where results from minion-executed commands can be stored, such as the IP address of a backend webserver, which can then be used to configure a load balancer
top file – matches formulas and pillar data to minions
runners – modules executed on the master
returners – components that inject minion data into another system
renderers – components that run the template to produce the valid state or configuration files. The default renderer uses Jinja2 syntax and outputs YAML files
reactor – component that triggers reactions on events
thorium – a new kind of reactor, which is still experimental
beacons – small pieces of code on the minion that listen for events such as server failure or file changes. When a beacon registers one of these events, it informs the master. Reactors are often used to do self-healing
proxy minions – components that translate the Salt language into device-specific instructions in order to bring the device to the desired state using its API, or over SSH
salt cloud – command to bootstrap cloud nodes
salt ssh – command to run commands on systems without minions

You’ll find a great overview of all of this on the official docs.
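As a rough mental model, the top file maps minion-matching glob patterns to lists of formulas. The matching is conceptually similar to shell-style globbing, which can be sketched in a few lines of Python (the patterns and formula names below are made up for illustration):

```python
from fnmatch import fnmatch

# Hypothetical top-file data: glob pattern -> formulas to apply.
top = {
    '*': ['common'],               # every minion gets the base formula
    'web*': ['nginx'],             # web servers get nginx
    'saltstack-m0?': ['monitoring'],
}

def states_for(minion_id):
    """Collect the formulas whose pattern matches this minion id."""
    matched = []
    for pattern, states in top.items():
        if fnmatch(minion_id, pattern):
            matched.extend(states)
    return sorted(matched)

print(states_for('saltstack-m01'))  # ['common', 'monitoring']
print(states_for('web01'))          # ['common', 'nginx']
```

Real top files support richer matchers (grains, pillar, compound expressions), but glob-on-minion-id is the default.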
Installation
Salt is built on top of lots of Python modules. Msgpack, YAML, Jinja2, MarkupSafe, ZeroMQ, Tornado, PyCrypto and M2Crypto are all required. To keep your system clean, easily upgradable and to avoid conflicts, the easiest installation workflow is to use system packages.
Salt is operating system specific; in the examples in this article, I'll be using Ubuntu 16.04 [Xenial Xerus]; for other operating systems, consult the salt repo page. For simplicity's sake, you can install the master and the minion on a single machine, and that's what we'll be doing here. Later, we'll talk about how you can add additional minions.

To install the master and the minion, execute the following commands:
$ sudo su
# apt-get update
# apt-get upgrade
# apt-get install curl wget
# echo "deb [arch=amd64] http://apt.tcpcloud.eu/nightly xenial tcp-salt" > /etc/apt/sources.list
# wget -O - http://apt.tcpcloud.eu/public.gpg | apt-key add -
# apt-get clean
# apt-get update
# apt-get install -y salt-master salt-minion reclass

Finally, create the directory where you'll store your state files.
# mkdir -p /srv/salt

You should now have Salt installed on your system, so check to see if everything looks good:
# salt --version
You should see a result something like this:
salt 2016.3.4 (Boron)

Alternative installations
If you can't find packages for your distribution, you can rely on Salt Bootstrap, an alternative installation method; see the Salt documentation for further details.
Configuration
To finish your configuration, you'll need to execute a few more steps:

If you have firewalls in the way, make sure you open up both port 4505 (the publish port) and 4506 (the return port) to the Salt master to let the minions talk to it.
Now you need to configure your minion to connect to your master. Edit the file /etc/salt/minion.d/minion.conf and change the following lines as indicated below:

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: localhost

# If multiple masters are specified in the ‘master’ setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing

# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
id: saltstack-m01

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
# append_domain:

As you can see, we're telling the minion where to find the master so it can connect – in this case, it's just localhost, but if that's not the case for you, you'll want to change it. We've also given this particular minion an id of saltstack-m01; that's a completely arbitrary name, so you can use whatever you want. Just make sure to substitute in the examples!
Before you can play around, you'll need to restart the Salt services to pick up the changes:
# service salt-minion restart
# service salt-master restart

Make sure services are also started at boot time:
# systemctl enable salt-master.service
# systemctl enable salt-minion.service

Before the master can do anything on the minion, it needs to trust it, so accept the corresponding key of each of your minions as follows:
# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
saltstack-m01
Rejected Keys:

Before accepting it, you can validate that it looks good. First, inspect it:
# salt-key -f saltstack-m01
Unaccepted Keys:
saltstack-m01:  98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

Then compare it with the minion key:
# salt-call --local key.finger
local:
98:f2:e1:9f:b2:b6:0e:fe:cb:70:cd:96:b0:37:51:d0

It looks the same, so go ahead and accept it:
# salt-key -a saltstack-m01

Repeat this process of installing salt-minion and accepting the keys to add new minions to your environment. Consult the documentation for more details on configuring minions, or more generally this documentation for all Salt configuration options.
Remote execution
Now that everything's installed and configured, let's make sure it's actually working. The first, most obvious thing we could do with our master/minion infrastructure is to run a command remotely. For example, we can test whether the minion is alive by using the test.ping command:
# salt 'saltstack-m01' test.ping
saltstack-m01:
   True
As you can see here, we're calling salt, and we're feeding it a specific minion, and a command to run on that minion. We could, if we wanted to, send this command to more than one minion. For example, we could send it to all minions:
# salt '*' test.ping
saltstack-m01:
   True
In this case, we have only one, but if there were more, salt would cycle through all of them giving you the appropriate response.
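The same pattern extends beyond test.ping: any execution module can be called this way, and minions can be targeted by grains rather than by id. These illustrative commands assume the single saltstack-m01 minion set up above:

```shell
# Run an arbitrary shell command on every minion
salt '*' cmd.run 'uptime'

# Target by grain rather than by minion id, e.g. all Ubuntu minions
salt -G 'os:Ubuntu' test.ping

# Show a single grain for one minion
salt 'saltstack-m01' grains.item os
```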
So that should get you started. Next time, we'll look at some of the more complicated things you can do with Salt.
The post Introduction to Salt and SaltStack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

How to turn APM metrics into better apps

Digital applications are the lifeblood of business today. They are the primary means of interaction with customers. It's imperative that applications are always available, provide optimal performance and deliver exceptional customer experiences. If not, you can probably expect your customers to say goodbye.
Applications run on top of a complex web of software components. Some provide the platform and connectivity needed to deliver services, and others move data between devices. IBM provides a robust set of components, including WebSphere Application Server, MQ, IBM Integration Bus, DataPower and more. Each helps deliver numerous applications.
To ensure your applications perform optimally, you should manage the health and welfare of software components. IBM Application Performance Management (APM) monitors the performance and availability of your critical IBM applications to identify problems before impacting users, visualize performance bottlenecks and more.
Fine-tune your software with APM metrics
Many people think of monitoring as being alerted about a problem and guided to the issue source to fix it. But another motivation for monitoring is to proactively avoid those problems in the first place. Adopting DevOps methodology, you can take information from monitoring to your developers for fine-tuning to improve application performance. You can gather metrics at short intervals, making it more likely that you’ll spot trends or anomalies that indicate bottlenecks.
APM allows you to monitor the key metrics of your IBM software environment for optimal behavior. But it’s also important to measure the CPU, memory and network utilization to ensure bottlenecks aren’t at the platform level.
To illustrate, here are some of the metrics you can use from APM to tune WebSphere Application Server:

Heap utilization and garbage collection statistics to determine whether memory leaks are occurring
Database connection pools to identify whether they are too small to handle the load placed on them
Thread pools to determine whether they are too small to handle the load
Web services – identify the most-used web services and performance problems, including whether the cause is code or an underlying resource

Speed resolution with APM metrics
You can also use APM to identify problems and speed up resolution time. It works similarly to fine-tuning, but with more frequent metric gathering and alerting rather than reporting. This is also where IBM APM outshines other solutions, offering quicker troubleshooting and resolution.
Transaction tracking can dramatically improve problem diagnosis by isolating the source. This ensures the issue is routed to the right SME and does not involve others responsible for different areas of the application environment. IBM Operations Analytics – Predictive Insights automatically determines baselines for metrics and will alert you about deviations from that baseline. It can also identify related metrics helpful for quicker identification of problems’ cause.
If you’re using IBM components to build applications, you should consider coupling them with IBM APM’s monitoring designed to help tune those components for optimal performance and quick problem resolution. Result? An optimal customer experience.
Want hands-on experience with IBM APM? Attend IBM InterConnect for countless sessions, labs, and educational opportunities.
Source: Thoughts on Cloud

RDO blogs, week of Feb 13

Here’s what RDO enthusiasts have been blogging about in the last few weeks. If you blog about RDO, please let me know (rbowen@redhat.com) so I can add you to my list.

TripleO: Debugging Overcloud Deployment Failure by bregman

You run 'openstack overcloud deploy' and after a couple of minutes you find out it failed, and if that's not enough, then you open the deployment log just to find a very (very!) long output that doesn't give you a clue as to why the deployment failed. In the following sections we'll see how can […]

Read more at http://tm3.org/dv

RDO @ DevConf by Rich Bowen

It’s been a very busy few weeks in the RDO travel schedule, and we wanted to share some photos with you from RDO’s booth at DevConf.cz.

Read more at http://tm3.org/dw

The surprisingly complicated world of disk image sizes by Daniel Berrange

When managing virtual machines one of the key tasks is to understand the utilization of resources being consumed, whether RAM, CPU, network or storage. This post will examine different aspects of managing storage when using file based disk images, as opposed to block storage. When provisioning a virtual machine the tenant user will have an idea of the amount of storage they wish the guest operating system to see for their virtual disks. This is the easy part. It is simply a matter of telling ‘qemu-img’ (or a similar tool) ’40GB’ and it will create a virtual disk image that is visible to the guest OS as a 40GB volume. The virtualization host administrator, however, doesn’t particularly care about what size the guest OS sees. They are instead interested in how much space is (or will be) consumed in the host filesystem storing the image. With this in mind, there are four key figures to consider when managing storage:

Read more at http://tm3.org/dx

Project Leader by rbowen

I was recently asked to write something about the project that I work on – RDO – and one of the questions that was asked was:

Read more at http://tm3.org/dy

os_type property for Windows images on KVM by Tim Bell

The OpenStack images have a long list of properties which can set to describe the image meta data. The full list is described in the documentation. This blog reviews some of these settings for Windows guests running on KVM, in particular for Windows 7 and Windows 2008R2.

Read more at http://tm3.org/dz

Commenting out XML snippets in libvirt guest config by stashing it as metadata by Daniel Berrange

Libvirt uses XML as the format for configuring objects it manages, including virtual machines. Sometimes when debugging / developing it is desirable to comment out sections of the virtual machine configuration to test some idea. For example, one might want to temporarily remove a secondary disk. It is not always desirable to just delete the configuration entirely, as it may need to be re-added immediately after. XML has support for comments which one might try to use to achieve this. Using comments in XML fed into libvirt, however, will result in an unwelcome surprise – the commented out text is thrown into /dev/null by libvirt.

Read more at http://tm3.org/d-

Videos from the CentOS Dojo, Brussels, 2017 by Rich Bowen

Last Friday in Brussels, CentOS enthusiasts gathered for the annual CentOS Dojo, right before FOSDEM.

Read more at http://tm3.org/dp

FOSDEM Day 0 – CentOS Dojo by Rich Bowen

FOSDEM starts tomorrow in Brussels, but there’s always a number of events the day before.

Read more at http://tm3.org/dq

Gnocchi 3.1 unleashed by Julien Danjou

It’s always difficult to know when to release, and we really wanted to do it earlier. But it seems that each week more awesome work was being done in Gnocchi, so we kept delaying it while having no pressure to push it out.

Read more at http://tm3.org/dr

Testing RDO with Tempest: new features in Ocata by ltoscano

The release of Ocata, with its shorter release cycle, is close and it is time to start a broader testing (even if one could argue that it is always time for testing!).

Read more at http://tm3.org/ds

Barely Functional Keystone Deployment with Docker by Adam Young

My eventual goal is to deploy Keystone using Kubernetes. However, I want to understand things from the lowest level on up. Since Kubernetes will be driving Docker for my deployment, I wanted to get things working for a single node Docker deployment before I move on to Kubernetes. As such, you’ll notice I took a few short cuts. Mostly, these involve configuration changes. Since I will need to use Kubernetes for deployment and configuration, I’ll postpone doing it right until I get to that layer. With that caveat, let’s begin.

Read more at http://tm3.org/dt
Source: RDO

The next generation of IT issue resolution

For years, search engines have been the first place to go to look for information, from who batted cleanup for the 1979 Pittsburgh Pirates (Willie Stargell) to how to optimize WebSphere Application Servers.
While search engines have always been a great source of general information, they were not personalized for you and your situation. Until now.
The cognitive era is here, which means you can apply natural language interaction, moving beyond generic trends and standard benchmarks to get real-time, specifically targeted data and insights. It was this concept that drove our third Connect to Cloud Cognitive Build team to create the new Cognibot application.
IT operations managers around the world deal daily with the realities of trying to keep their environment not only up and running, but also optimized to the best of their ability. Industry experts have studied the four phases of IT service management when it comes to repairing problems:

Mean time to identify
Mean time to know
Mean time to fix
Mean time to verify

The team noticed that while there was a great deal of information available for identifying a problem, there was significantly less available to recommend what to do to fix it.
That brings us back to search engines. In many cases, IT managers look at system reports and attempt to search online for possible causes and resolutions. Not only is that approach time consuming, but what they find is not tailored for their specific issue.
What if someone applied cognitive services to the problem?
The Cognibot project does just that. The team has created an interactive service that combines the knowledge of what one has already done to fix a problem with real data from the specific IT environment. It then adds Watson capabilities so users can ask natural-language questions and get responses customized for their specific issue.
Here's an example: suddenly your IT department gets flagged with an issue in your WebSphere Application Server deployment. Normally, an IT subject matter expert gets called in to identify the issue, search for the solution, then execute the recovery plan. What if we could streamline that process? Your IT department still gets the notification, but instead of searching for the answer, your Cognibot interface has already analyzed your real data, researched fixes that have worked in the past, and recommended a solution that will work in this case. All you have to do is click "accept" to enact the fix, helping drastically reduce your mean time to repair. Now that's a cognitive solution.
Interested in learning more? Join us over the next few weeks as we track the team’s progress toward creating the first IT Operations consultant.
See how we are helping clients take advantage of the digital economy.
Source: Thoughts on Cloud

RDO @ DevConf

It’s been a very busy few weeks in the RDO travel schedule, and we wanted to share some photos with you from RDO’s booth at DevConf.cz.

Led by Eliska Malikova, and supported by our team of RDO engineers, we provided information about RDO and OpenStack, as well as a few impromptu musical performances.

RDO engineers spun up a small RDO cloud, and later in the day, the people from the ManageIQ booth next door set up an instance of their software to manage that cloud, showing that RDO and ManageIQ are better together.

You can see the full album of photos on Flickr.

If you have photos or stories from DevConf, please share them with us on rdo-list. Thanks!
Source: RDO