Technical Instructor

The post Technical Instructor appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis is the industry leader in OpenStack, with more customers in production than any other OpenStack company. Open source projects are in our DNA: Mirantis is the number one contributor to OpenStack and a top three contributor to Ceph. Additionally, Mirantis places significant value on building communities around open source projects, evidenced by our Platinum sponsorship of OpenStack, involvement with the OpenStack Foundation Board, and our own online and in-person events used to educate end users about all things OpenStack.

If you're ambitious and technical, and thrive on solving tough, real-world problems with a smart, motivated team, you want to work here.

Are you:

A good listener who enjoys standing in front of a group to inspire them with what you know?
Someone who gets as much satisfaction from making other people successful as from solving those problems yourself?
Someone with a knack for explaining complex technical subjects?
Inspired by the success of open source technologies, and eager to be in the front seat of a disruptive transformation?

We'll teach you what you need to know, building your own knowledge and experience.
Then you'll meet first-hand the developers and SysAdmins who are taking their infrastructure to the next level with the world's leading open source cloud orchestration environment.

Primary Responsibilities:

Deliver instructor-led classroom training to end-user customers and partners on a regular schedule in the US and abroad.
Collaborate actively in developing and maintaining technical training content, lab exercises, presentations, and accompanying materials.
Maintain training content through fast-paced upgrade cycles (currently every 6 months).
Research competitive solutions and related topics.
Feed knowledge back into the training curriculum.
Write technical blogs for the Mirantis website.

Qualifications:

Bachelor's degree or higher in Computer Science, and/or equivalent experience.
Solid hands-on experience with several of the following: Linux, OpenStack, Rackspace, AWS, MySQL, RabbitMQ, networking, SDN, OpenFlow.
Ability to quickly learn and develop expertise with new technology stacks.
Experience with virtualization technologies: KVM, Xen, LXC, VMware.
Strong background in storage technologies: iSCSI, FCoE, Nexenta, NetApp, NFS, NAS.
Proven oral presentation, interpersonal communication, and writing skills.
Ability to travel globally, 30% to 50%.

Good to Have:

Experience with committing to and working from open source repositories.
Recent experience with architecting, deploying, and operating Internet-scale applications.
Coding experience in one or more of the following languages: Ruby, Perl, PHP, Python, Java, or .NET.
Experience designing and developing instructor-led content with technical subject matter.
Experience conducting classroom training for related technology products and services.
A strong "stage presence" and the ability to manage a classroom of adult learners.
Source: Mirantis

Introducing Decapod, an easier way to manage Ceph

The post Introducing Decapod, an easier way to manage Ceph appeared first on Mirantis | The Pure Play OpenStack Company.
Ceph is the de facto standard for building robust distributed storage systems. It enables users to get a reliable, highly available, and easily scalable storage cluster using commodity hardware. Ceph is also becoming a standard storage basis for production OpenStack clusters.
There are several ways of managing Ceph clusters, including:

Using the ceph-deploy tool
Using custom in-house or open source manifests for configuration management software such as Puppet or Ansible
Using standalone solutions such as 01.org VSM or Fuel

Another solution in that third bucket is Decapod, a standalone solution that simplifies deployment of clusters and management of their lifecycles.
In this article, we'll compare the different means for deploying Ceph.
Deployment using ceph-deploy
The ceph-deploy tool is available with Ceph itself. According to the official documentation:
The ceph-deploy tool is a way to deploy Ceph relying only upon SSH access to the servers, sudo, and some Python. It runs on your workstation, and does not require servers, databases, or any other tools. If you set up and tear down Ceph clusters a lot, and want minimal extra bureaucracy, ceph-deploy is an ideal tool. The ceph-deploy tool is not a generic deployment system. It was designed exclusively for Ceph users who want to get Ceph up and running quickly with sensible initial configuration settings without the overhead of installing Chef, Puppet or Juju. Users who want fine-control over security settings, partitions or directory locations should use a tool such as Juju, Puppet, Chef or Crowbar.
As described, ceph-deploy is mostly limited to quick cluster deployment. This is perfectly suitable for deploying a test environment, but production deployment still requires a lot of thorough configuration using external tools.
Deployment using manifests for configuration management tools
Configuration management tools enable you to deploy Ceph clusters while retaining fine-grained control over cluster tuning. It is also possible to scale or shrink these clusters using the same code base.
The only problem here is the high learning curve of such solutions: you need to know, in detail, every configuration option, and you need to read the source code of the manifests/playbooks/formulas to understand in detail how they work.
Also, in most cases these manifests focus on a single use case: cluster deployment. They do not provide enough possibilities to manage the cluster after it is up and running. When you operate the cluster, if you need to extend it with new machines, disable existing machines to do maintenance, reconfigure hosts to add new storage pools or hardware, and so on, you will need to create and debug new manifests by yourself.
Standalone solutions
Decapod and 01.org VSM are examples of standalone configuration tools. They provide you with a unified view of the whole storage system, eliminating the need to understand low level details of cluster management. They integrate with a monitoring system, and they simplify operations on the cluster. They both have a low learning curve, providing best management practices with a simple interface.
Unfortunately, VSM has some flaws, including the following:

It has tightly coupled business and automation logic, which makes it hard to extend the tool, or even customize some deployment steps
By design, it is limited in scale. It works great for small clusters, but at a bigger scale the software itself becomes a bottleneck
It lacks community support
It has an overcomplicated design

Decapod takes a slightly different approach: it separates provisioning and management logic from the start, using an official community project, ceph-ansible. Decapod uses Ansible to do all remote management work, and uses its proven ability to create scalable deployments.
The Decapod architecture
Since Decapod uses Ansible to manage remote nodes, it does not need a complex architecture. Moreover, we’ve been trying to keep it as simple as possible. The architecture looks like this:

As you can see, Decapod has two main services: API and controller.
The API service is responsible for managing entities and handling HTTP requests. If you request execution of an action on a Ceph node, the API service creates a task in the database for the controller. Each subsequent request for that task returns its current status.
The controller listens for new tasks in the database, prepares Ansible for execution (generating the Ansible inventory, injecting variables into playbooks), and tracks the progress of execution. Every step of the execution is trackable in the UI, and you can also download the whole log afterwards.
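The handoff between the two services can be sketched in a few lines of Python. This is a minimal illustration of the pattern only, with an in-memory "database" and queue; the real system persists tasks in a database and the controller drives Ansible, but the API-creates-task / controller-executes-task split is the same.

```python
# Sketch of the Decapod API/controller split (illustrative, not the real code).
import queue
import uuid

tasks = {}                # task_id -> task record (stands in for the database)
pending = queue.Queue()   # the controller's work feed

def api_create_task(action, cluster):
    """API service: record the request and hand it off to the controller."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"action": action, "cluster": cluster, "status": "created"}
    pending.put(task_id)
    return task_id

def api_get_status(task_id):
    """API service: a status request just reads the stored record."""
    return tasks[task_id]["status"]

def controller_run_once():
    """Controller: pick up one task, execute it, and track its progress."""
    task_id = pending.get()
    tasks[task_id]["status"] = "running"
    # ...here the real controller would generate an Ansible inventory,
    # inject playbook variables, and stream execution logs...
    tasks[task_id]["status"] = "completed"

tid = api_create_task("deploy_cluster", "ceph-prod")
print(api_get_status(tid))   # -> created
controller_run_once()
print(api_get_status(tid))   # -> completed
```

Because the API only ever writes task records and the controller only ever consumes them, either side can be restarted or scaled without the other noticing, which is what keeps the architecture simple.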
Decapod performs every management action using a plugin, including cluster deployment and purging object storage daemons from hosts. Basically, a plugin is a playbook to execute, plus a Python class used to generate the correct variables and dynamic inventory for Ansible based on the incoming request. The installation is dynamically extendable, so there is no need to redeploy Decapod to add another set of plugins. Each plugin also provides a set of sensible settings for your current setup, but if you want, you can modify every aspect and setting.
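The plugin idea can be sketched as follows. All names here are hypothetical, not Decapod's actual plugin API; the point is the pairing of a playbook with a class that derives default settings and a dynamic inventory from the servers in the request.

```python
# Hypothetical sketch of a Decapod-style plugin (illustrative names only).
class DeployClusterPlugin:
    playbook = "deploy-cluster.yml"   # the playbook this plugin executes

    def default_settings(self, servers):
        # Sensible defaults that the user may later tune in the UI/CLI.
        return {
            "cluster_network": "10.0.0.0/24",
            "osd_journal_size": 2048,
            "mon_count": min(3, len(servers)),
        }

    def build_inventory(self, servers, settings):
        # Dynamic inventory: the first hosts become monitors, the rest OSDs
        # (falling back to all servers for small setups).
        mons = servers[: settings["mon_count"]]
        osds = servers[settings["mon_count"]:] or servers
        return {"mons": mons, "osds": osds}

plugin = DeployClusterPlugin()
servers = ["node1", "node2", "node3", "node4"]
settings = plugin.default_settings(servers)
settings["osd_journal_size"] = 4096   # tuning a default before execution
print(plugin.build_inventory(servers, settings))
# -> {'mons': ['node1', 'node2', 'node3'], 'osds': ['node4']}
```

Because the defaults are generated rather than hard-coded, the user only has to override the handful of settings that differ from a sensible baseline.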
Decapod usage
Decapod has rich CLI and UI interfaces for managing clusters. We paid a lot of attention to the UI because we believe that a good interface helps users accomplish their goals without worrying about low-level details. If you want to do some operational work on a cluster, Decapod will try to help you with the most sensible settings possible.
Another important feature of Decapod is its ability to audit changes. Every action or operation on the cluster is tracked, and it is always possible to check the history of modifications for every entity, from its execution history on a given cluster to changes in a user's name.
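One common way to get this kind of per-entity history is to store every version of an entity rather than updating records in place. The sketch below shows that idea in miniature; it is purely illustrative and does not reflect Decapod's actual data model.

```python
# Append-only versioning sketch: every save keeps the old versions around,
# so the full modification history of any entity can be replayed.
import copy
import itertools

_history = []                  # the append-only audit log
_version = itertools.count(1)  # monotonically increasing version numbers

def save(entity):
    """Record a new version of an entity instead of overwriting it."""
    record = copy.deepcopy(entity)
    record["version"] = next(_version)
    _history.append(record)

def history_of(entity_id):
    """All recorded versions of one entity, oldest first."""
    return [r for r in _history if r["id"] == entity_id]

save({"id": "user-1", "name": "alice"})
save({"id": "user-1", "name": "alice.renamed"})
print([r["name"] for r in history_of("user-1")])  # -> ['alice', 'alice.renamed']
```

The trade-off is extra storage, but in exchange no modification is ever silently lost, which is exactly what an audit trail needs.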
The Decapod workflow is rather simple, and involves a traditional user/role permission-based access model. To deploy a cluster, you need to create it, providing a name for the deployed cluster. After that, you select the management action you want to perform and the required servers, and Decapod generates sensible defaults for that action. If you're satisfied, you can execute the action in a single click; if not, you can tune the defaults.
You may find more information about using Decapod in our demo:

So what do you think? What are you using to deploy Ceph now, and how do you think Decapod will affect your workflow? Leave us a comment and let us know.
Source: Mirantis

OpenShift Commons Gathering Seattle 2016 Session Videos

The OpenShift Commons Gathering was a great opportunity to experience the momentum of the community and how it continues to drive the OpenShift Origin project forward. We are grateful for all upstream project leads, contributors, customers, and partners who contributed to making the OpenShift Commons Gathering a success. This article lists the recordings and slides of all the sessions from the event.
Source: OpenShift

How are you using RDO? (Survey results)

Over the last few weeks, we’ve been conducting a survey of RDO users, to get an idea of who is using RDO, how, and for what.

While the sample size here is small, it’s a start at understanding the makeup of our user community. And we’ll conduct the survey again around the next release, so if you missed this one, stay tuned.

The Numbers

Let’s start with the numbers.

First of all, there were only 39 responses to the survey. Hopefully next time we can do a better job of getting responses, but this is a good start for the first time.

Most of our users (i.e., more than half) are running the Mitaka release, with the next-largest number (46%) having already moved to the Newton release.

Nearly half of our users are running RDO as their production cloud.

38% of users are deploying with Packstack, with just 20% using TripleO.

There’s no clear leader in terms of what industry our users are in, however, Research, Service Providers and Telecom are the three at the top.

Finally, in the distribution of cloud size, over half of respondents were in the 1-10 node range, with the rest spread everywhere from there to more than 7,500 nodes.

There were some additional questions that will be summarized to the rdo-list over the coming days, regarding how people want to get more involved in the project, and what things they feel are missing.

Next Time

Doing surveys is hard, and invariably, as soon as you send a survey out into the wild, you realize some things you wish you'd done differently. This time, one thing we did was make the "industry" and "size" fields free-form entry rather than providing options, which made it a lot more work to tally the results.

Beyond that, if there are things you feel we should have done differently in the survey, please speak up.
Source: RDO

Five Key Takeaways from KubeCon 2016

Now that KubeCon 2016 is over, we have some time to reflect on the state of the Kubernetes project and communities, the event itself, and the marketplace going forward into 2017. Red Hat has been a part of this community since well before it was launched, but it's incredibly important to understand how a community evolves over time. If you weren't able to attend the events in Seattle, the Cloud Native Computing Foundation (CNCF) has posted all of the videos online. Looking back on the events in Seattle, here are five key takeaways that will continue to shape the community and market for the next few years.
Source: OpenShift