Cloud computing: More than just storage

While the term “cloud computing” may seem nebulous, it actually has a very simple definition, according to PC Magazine: “storing and accessing data and programs over the internet.”
Most people use some form of cloud computing already, such as a student using Google Docs to work on a paper with a classmate or anyone who accesses their email from the web instead of an application. In other words, the information for the paper or the email doesn’t exist on the computer’s hardware; it’s stored in the provider’s cloud. Storing personal data now requires far less effort.
But cloud computing isn’t just for consumers. It’s also revolutionizing the way businesses are, well, doing business.
After choosing from one of the top providers, including IBM, businesses can select from three different service models of cloud computing: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). These services are aptly named. With IaaS, the provider gives the business an entire infrastructure to work with. PaaS enables businesses to create their own custom applications that they can distribute via the cloud. SaaS provides the business with one or more cloud-based software applications to use for their purposes.
At first glance, these services seem fairly straightforward. However, cloud computing offers a number of distinct advantages:

It shifts the burden of maintenance and responsibility from the business to the provider. Local computers no longer require the capacity to run all the applications, and the company no longer needs a “whole team of experts to install, configure, test, run, secure, and update [the applications],” according to Salesforce. These differences bring both hardware and IT costs down, as local computers no longer need vast amounts of memory or cutting-edge processing power; they simply must be able to connect to the cloud system.
Costs come down further by removing the need for on-site storage. With these remote storage services, companies no longer need to rent or buy physical space and facilities or purchase expensive equipment to store servers and databases, as this HowStuffWorks report points out.
Cloud computing also offers extreme flexibility. Cloud services are pay-per-use and can be adjusted easily to accommodate a company’s needs. SkyHighNetworks provides a good example: “A sales promotion might be wildly popular, and capacity can be added quickly to avoid crashing servers and losing sales. When the sale is over, capacity can shrink to reduce costs.”
Employees across the business can access data and files at any time. Work doesn’t get trapped on users’ hard disks or flash drives, and the information accessed is always the most relevant and updated. This allows for seamless collaboration across geographies and time zones, which is transformative for multinational companies with offices all around the world.

Like other game-changing technological advances, cloud computing is here to stay.
Learn more about mobile cloud computing.
The post Cloud computing: More than just storage appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Cloud innovation in the US federal sector

As federal agencies take their first steps toward cloud implementation, they are beginning to establish their own cloud strategies.
These include laying out a framework that will identify different workload profiles and deploy each workload in the most suitable cloud environment. During this planning and design phase, they often realize a need to adopt a hybrid cloud, including their on-premises private cloud.
For instance, an agency will likely leave its mission-critical systems in a private cloud and migrate emails and public-facing websites into a public cloud. In addition, an agency may have enterprise applications which can be split into systems of engagement (SOE) and systems of record (SOR) to be separately hosted between private (SOR) and public cloud (SOE).
Once this initial design is completed, the next challenge is how to orchestrate disparate applications and enable application portability in a hybrid environment. There are a number of different solutions for this, but the best solution should be based on a twofold approach.
The first part starts with understanding if an agency’s current infrastructure virtualization platform is also available in the cloud. The key to implementing a hybrid cloud for the government is seamless integration that requires minimal effort. This will not only save implementation costs, but also ensure that cloud implementation doesn’t increase security risks, assuming the solution is FedRAMP compliant. For example, moving VM images between the same platform (such as vMotion on ESXi) would be much easier and faster than converting image formats between different platforms.
The second part is ensuring that a cloud environment is designed based on open technologies (such as OpenStack or Cloud Foundry) for easy integration with other systems, even across different cloud service providers. From an orchestration perspective, it is critical for an agency to deploy single-pane-of-glass management across disparate systems in a hybrid environment. This will mitigate the risk of losing visibility and control over their systems running in the cloud and cohesively enforce IT governance and policies.
For instance, if an agency is using the vRealize management platform as part of its VMware software stack within its own data center, it might be a good idea to select a cloud environment that provides VMware software and OpenStack services. This will accelerate the creation of a hybrid environment and orchestrate even OpenStack-based systems via VMware-integrated OpenStack APIs.
Designing a hybrid cloud based on the twofold approach described above can significantly reduce an agency’s deployment cost and time (see figure). Cloud migration is evolving into simply extending the capacity of existing virtualization platforms on-demand and moving applications using existing software stack and tools. An agency can also continue to use the same management tool (with no new skillset required) for their systems running in the cloud.

Partnering with VMware, IBM brings VMware on x86 bare metal servers, in addition to OpenStack-based cloud services, to federal cloud data centers. Understanding the federal government’s budget constraints, security requirements, cloud-first policy, and current IT environments, IBM Cloud is fully committed to accelerating hybrid adoption for federal government agencies while reducing cost and risk.

Learn more about the robust cloud services designed for the US federal government.

 
The post Cloud innovation in the US federal sector appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Service Catalogs and the User Self-Service Portal

One of the most interesting features of CloudForms is the ability to define services that can include one or more virtual machines (VMs) or instances and can be deployed across hybrid environments. Services can be made available to users through a self-service portal that allows users to order predefined IT services without IT operations getting involved, thereby delivering on one of the major promises of cloud computing.
The intention of this post is to provide you with step-by-step instructions to get you started with a simple service catalog. After you have gone through the basic concepts, you should have the skills to dive deeper into more complex setups.

Getting started with Service Catalogs
Let’s set the stage for this post: You added your Amazon Web Services (AWS) account to CloudForms as a cloud provider. Your AWS account includes a Red Hat Enterprise Linux (RHEL) image ready to use. Now you want to give your users the ability to deploy RHEL instances on AWS but you want to limit or predefine most of the options they could choose when deploying these instances.
Service Basics
Four items are required to make a service available to users from the CloudForms self-service portal:

A Provisioning Dialog which presents the basic configuration options for a VM or instance.
A Service Dialog where you allow users to configure VM or instance options.
A Service Catalog which is used to group Services Dialogs together.
A Service Catalog Item (i.e., the actual Service) which joins a Service Dialog with a Provisioning Dialog.

Provisioning Dialogs
To work with services in CloudForms it is important to understand the concept of Provisioning Dialogs. When you begin the process of provisioning a VM or instance via CloudForms, you are presented with a Provisioning Dialog where you set certain options for the VM or instance. The options presented are dependent on the provider you are using. For instance, a cloud provider might have “flavors” of instances, whereas an infrastructure provider might allow you to set the memory size or number of CPUs on a VM.
Every provider in CloudForms comes with a sample provisioning dialog covering the options specific to that provider. To have a look at some sample Provisioning Dialogs, go to Automate > Customization > Provisioning Dialogs > VM Provision and select “Sample Provisioning Dialogs”. This is a textual representation of the dialog you will get when you provision a VM or instance.
For this post, we need to make sure instance provisioning to AWS is working, so go to Compute > Clouds > Instances and create a new AWS instance by choosing “Provision Instances” from the “Lifecycle” drop-down. Select the image you are going to use, click “Continue” and walk through the Provisioning Dialog.

Service Dialogs
A Service Dialog determines which options the users get to change. The choice of options that are presented to the user is up to you. You could just give them the option to set the service name, or you could have them change all of the Provisioning Dialog options. You have to create a Service Dialog to define the options users are allowed to see and set. To help with creating a Service Dialog, CloudForms includes a simple form designer.
Anatomy of a Service Dialog
A Service Dialog contains three components:

One or more “Tabs”
Inside the “Tabs”, one or more “Boxes”
Inside the “Boxes”, one or more “Elements”
The “Elements” contain input controls, such as check boxes, drop-down lists, or text fields, to fill in the options on the Provisioning Dialog. Here is the most important part: the names of the Elements have to correspond to the options used in the Provisioning Dialog!

What are the Element Names?
Very good question. As mentioned, the options and values we provide in the Service Dialog must match those used in the Provisioning Dialog. There are some rather generic names like “vm_name” or “service_name”, while others might be specific to the provider in question.
So how do you find the options and values you can pass in a Service Dialog? The easiest way is to look at the Provisioning Dialog. In this case, for our Amazon EC2 instance:

As an administrator, go to Automate > Customization
Open the “Provisioning Dialogs” accordion and locate the “VM Provision” folder
Find the appropriate dialog, “Sample Amazon Instance Provisioning Dialog”
Now you can use your browser’s search capabilities to find options and their potential values. For practice, search for “vm_name”.

Creating a Service Dialog
Enough theory, let’s dive in and create our first simple Service Dialog. The Service Dialog should let users choose a service and instance name for an AWS instance.

As an administrator, go to Automate > Customization
Open the “Service Dialogs” accordion. You will find two example Service Dialogs.
Add a new Service Dialog: Configuration > Add a new Dialog
Type “aws_single_rhel7_instance” into the Label field; this will be the name of the Service Dialog in CloudForms. Add a description if you want; this is not mandatory but good practice.
For Buttons, check “Submit” and “Cancel”.

From this starting point, you can now add content to the Dialog:

From the drop-down with the “+” sign choose “Add a new Tab to this Dialog”.

For Label use “instance_settings”, as Description use “Instance Settings”.
With the “instance_settings” Tab selected choose “Add a new Box to this Tab” from the “+” drop-down.
Give the new Box a Label and Description of “Instance and Service Name”.
From the “+” drop-down choose “Add a new Element to this Box”.
Fill in Label and Description with “Service Name” and Name with “service_name”.
For the Type, choose “Text Box” with Value Type “String”.

Following the same procedure add a second Element to the Box. The Name field should be “vm_name” and the Label and Description fields should be “Instance Name”. Similarly, Type should be “Text Box” with Value Type “String”.

That’s it! Now you can finally hit the “Add” button at the lower right corner.
Create a Catalog
Now that you have created your Service Dialog, we can add it to a Service Catalog by creating its associated Catalog Item.
First, we will create a Catalog:

Go to Services > Catalogs and expand the “Catalogs” accordion.
Select the “All Catalogs” folder and click Configuration > Add a new Catalog.
For Name and Description fill in “Amazon EC2”.
We will assign Catalog Items to this Catalog later.

Create a Catalog Item
Now we have the Catalog without any content, the Service Dialog, and the Provisioning Dialog. To allow users to order the service from the self-service catalog, we have to create a Catalog Item. Let’s create a Catalog Item to order a RHEL instance using our Service Dialog:

Go to Services > Catalogs and expand the “Catalog Items” accordion.
Select the “Amazon EC2” catalog and click Configuration > Add a new Catalog Item.
From the “Catalog Item Type” drop-down select “Amazon”.
For Name and Description use “RHEL Instance” and check the box labelled “Display in Catalog”.
From the “Catalog” drop-down choose “Amazon EC2”.
From the “Dialog” drop-down choose “aws_single_rhel7_instance”. This is the Service Dialog you created earlier.
The three fields below point to methods used when provisioning/reconfiguring or retiring the service. For now, just configure these to use built-in methods as follows:

Click into the “Provisioning Entry Point State Machine” field; you will be taken to the Datastore Explorer.
Under the “ManageIQ” subtree, navigate to the following method and hit “Apply”: “/Service/Provisioning/StateMachines/ServiceProvision_Template/CatalogItemInitialization”
Click into the “Retirement Entry Point State Machine” field, navigate to this method and hit “Apply”: “/Service/Retirement/StateMachines/ServiceRetirement/Default”

Switch to the “Details” tab. In real life you would put a detailed description of your Service here. You could use HTML for better formatting, but for the purpose of this post “Single Amazon EC2 instance” will do.
Switch to the “Request Info” tab. Here you preset all of the options from the Provisioning Dialog. (Remember that the user is only allowed to set the Service Name and the Instance Name options via the Service Dialog):

On the “Catalog” tab, set the image Name to your AWS image name (“rhel7” in this case) and the Instance Name to “changeme”.

On the “Properties” tab set the Instance Type to “T2 Micro”. If you ever plan to access the instance you should of course select a “Guest Access Key Pair”, too.

On the “Customize” tab set the Root Password, and in Customize Template choose the “Basic root pass template” as the script for cloud-init.

Click Add at the bottom right.

As you can see, your new Catalog Item is listed with a generic icon. Let’s change this by uploading an icon in the “Custom Image” section. You can pick any image you like.
Recap, or “What have we done so far”?
We created a Provisioning Dialog that defines the options that can be set on a VM or instance. We created a Service Dialog which allows us to expose certain options to be set by the user. For our example, only the instance name and service name are configurable. Then we created a Service Catalog and finally a Catalog Item. The Catalog Item joins the Service Dialog with all of the options in the Provisioning Dialog. Now, users should be able to order a RHEL instance from the self-service catalog.
Let’s Order a RHEL Instance
To order your new service:

Access the self-service portal at https://<your_cf_appliance>/self_service. You will be greeted by the self-service dashboard.
Select “Service Catalog” on the menu bar.

You should now see your service. Select it and you will be taken to the form you have defined in your Service Dialog:

Fill in the “Service Name” and “Instance Name” fields. Recall that these are the only two options that you made available to users in your Service Dialog.
Click “Add to Shopping Cart” and access the “Shopping Cart” by clicking the icon on the top right (there should now be a number on it).
Click “Order”. You have created a new provisioning request. You can follow the request by selecting “My Requests” from the menu bar and selecting the specific request to see its progression and details.

Once the “Request State” is shown as “finished”, your AWS instance is provisioned.
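If you later want to order this catalog item programmatically rather than through the portal, CloudForms also exposes a REST API (inherited from ManageIQ). The sketch below only builds the JSON request body; the endpoint path shown in the comment, the placeholder IDs, and the assumption that dialog fields are passed under `resource` are illustrative and should be verified against the /api documentation on your appliance:

```python
# Hypothetical sketch: ordering the "RHEL Instance" catalog item via the
# CloudForms/ManageIQ REST API instead of the self-service portal UI.
# Endpoint path, IDs, and field placement are assumptions, not verified
# against a live appliance.
import json

def build_order_payload(service_name, vm_name):
    """Build the JSON body for ordering a catalog item.

    The keys under "resource" mirror the two Service Dialog elements we
    created above: 'service_name' and 'vm_name'.
    """
    return {
        "action": "order",
        "resource": {
            "service_name": service_name,
            "vm_name": vm_name,
        },
    }

payload = build_order_payload("my-rhel-service", "rhel7-test")
body = json.dumps(payload)

# To submit (untested illustration; requires credentials and a reachable
# appliance):
#   POST https://<your_cf_appliance>/api/service_catalogs/<catalog_id>/service_templates/<item_id>
#   with `body` as the request body and basic-auth headers.
print(body)
```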
Conclusion
As you can see, creating a basic service catalog and using the self-service portal in CloudForms is not rocket science. Of course, there is a lot more to learn, but there are also a lot of good resources to help you on your journey: articles on this blog, the official documentation, and of course the excellent “Mastering CloudForms Automation” book written by Peter McGowan, which I cannot recommend highly enough.
Quelle: CloudForms

Recapping OpenStack Summit Barcelona

More than 5,200 OpenStack professionals and enthusiasts gathered in Barcelona, Spain to attend the 2016 OpenStack Summit. From the keynotes to the break-out sessions to the marketplace to the evening events and the project work sessions on Friday, there was plenty to keep attendees busy throughout the week. In fact, if you were one of the lucky ones who attended OpenStack Summit, there were probably many sessions and activities you wanted to attend but couldn’t.
Red Hat was very busy throughout the week as well, as we participated in 49 sessions, staffed a booth in the marketplace with five demo stations, announced several new and exciting customers, hosted and co-hosted evening events throughout the week, and held hands-on, intensive training through OpenStack Academy. So if you weren’t able to make it to every Red Hat session, or couldn’t go to the Summit at all, here is a recap of everything we did.
Announcements
With Ericsson, we announced a new alliance to enable the adoption of open source solutions. In addition, we announced several new customers who are having great success with OpenStack in deployment:

Swisscom Guides Customers into the Digital Age with Red Hat OpenStack Platform and Red Hat Virtualization
Produban Chooses Red Hat as Technology Partner to Deliver Modern Cloud Services with Kubernetes and Containers on OpenStack
Communications Leaders Choose Red Hat OpenStack Platform for Powering Cloud Deployments to Deliver New Services
UKCloud Creates an Open Source Alternative for UK Public Sector with Red Hat OpenStack Platform

Finally, we announced the results of our second annual customer survey, gathering their thoughts on key topics related to OpenStack, including deployment, management tools, and containers.
Sessions
Dozens of Red Hat’s OpenStack experts delivered or co-delivered almost 50 sessions at OpenStack Summit. Here is a listing of them all, with links to the recorded versions.

Red Hat: Leveraging CI/CD to Improve OpenStack Operations
Maria Bracho, Daniel Sheppard (Rackspace)

Deploying and Operating a Production Application Cloud with OpenStack
Chris Wright, Pere Monclus (PLUMgrid), Sandra O’Boyle (Heavy Reading), Marcel Haerry (Swisscom)

Delivering Composable NFV Services for Business, Residential & Mobile Edge
Azhar Sayeed, Sharad Ashlawat (PLUMgrid)

Evolution of the Modern Day Service Provider Needs

Al Sadowski, Group 451
Radhesh Balakrishnan, Red Hat

I found a security bug, what happens next?
Tristan de Cacqueray and Matthew Booth

Failed OpenStack Update?! Now What?
Roger Lopez

OpenStack Scale and Performance Testing with Browbeat
Will Foster, Sai Sindhur Malleni, Alex Krzos

Mobile Edge Computing in support of IoT

Sanjay Aiyagari, Red Hat
Pierre Olivier Mathys, Red Hat

OpenStack and the Orchestration Options for Telecom / NFV
Chris Wright, Tobias Ford (AT&T), Hui Deng (China Mobile), Diego Lopez Garcia (Telefonica)

How to Work Upstream with OpenStack
Julien Danjou, Ashiq Khan (NTT), Ryota Mibu (NEC)

OpenStack and Ansible: Automation born in the Cloud
Keith Tenzer

Message Routing: a next-generation alternative to RabbitMQ
Kenneth Giusti, Andrew Smith

Deploying Containers at Scale on OpenStack

Steve Gordon, Principal Product Manager, Red Hat OpenStack Platform

Pushing your QA upstream
Rodrigo Duarte Sousa

TryStack: The Free OpenStack Community Sandbox
Will Foster, Kambiz Aghaiepour

Panel: Meeting The Largest Service Provider’s Needs with an Ecosystem Approach

Susan James, Ericsson
Darrell Jordan Smith, Red Hat
Mark McCloughlin, Red Hat
Ian Hood, Red Hat
Lew Tucker, Cisco

Kerberos and Health Checks and Bare Metal, Oh My! Updates to OpenStack Sahara in Newton
Elise Gafford, Nikita Konovalov (Mirantis), Vitaly Gridnev (Mirantis)

Red Hat discovery session: Key considerations for a successful OpenStack deployment

Bart van den Heuvel, Manager, Consulting Services
Alberto Garcia, Senior Cloud Architect

Feeling a bit deprecated? We are too. Let’s work together to embrace the OpenStack Unified CLI.
Darin Sorrentino, Chris Janiszewski

The race conditions of Neutron L3 HA’s scheduler under scale performance
John Schwarz, Ann Taraday (Mirantis), Kevin Benton (Mirantis)

Bringing Cloud Innovation to the Enterprise

Nick Barcet, Senior Director of Product Management, Red Hat OpenStack Platform

Cinder Always On – Reliability And Scalability Guide

Gorka Eguileor, Michal Dulko (Intel)

OpenStack is an Application! Deploy and Manage Your Stack with Kolla-Kubernetes
Ryan Hallisey, Ken Wronkiewicz (Cisco), Michal Jastrzebski (Intel)

OpenStack Requirements: what we are doing, what to expect, and what’s next
Swapnil Kulkarni and Davanum Srinivas

Stewardship: bringing more leadership and vision to OpenStack
Monty Taylor, Amrith Kumar (Tesora), Colette Alexander (Intel), Thierry Carrez (OpenStack Foundation)

Using OpenStack Swift to empower Turkcell’s public cloud services
Christian Schwede, Orhan Biyiklioglu (Turkcell) & Doruk Aksoy (Turkcell)

Lessons Learned from a Large-Scale Telco OSP+SDN Deployment

Guil Barros, Cyril Lopez, Vicken Krissian

KVM and QEMU Internals: Understanding the IO Subsystem
Kyle Bader

Effective Code Review
Dougal Matthews

OVN – Moving into Production
Russell Bryant, Justin Pettit (VMware), Ben Pfaff (VMware)

Anatomy Of OpenStack Neutron Through The Eagle Eyes Of Troubleshooters
Sadique Puthen

Building self-healing applications with Aodh, Zaqar and Mistral
Zane Bitter, Lingxian Kong (Catalyst IT), Fei Long Wang (Catalyst IT)

Writing A New Puppet OpenStack Module Like A Rockstar

Emilien Macchi

Ambassador Community Report
Erwan Gallen, Kavit Munshi (Aptira), Jaesuk Ahn (SKT), Marton Kiss (Aptira), Akihiro Hasegawa (Bit-isle Equinix, Inc)

VPP: the ultimate NFV vSwitch (and more!)?
Franck Baudin, Uri Elzur (Intel)

Zuul v3: OpenStack and Ansible Native CI/CD
James Blair

Container Defense in Depth
Thomas Cameron, Scott McCarty

Analyzing Performance in the Cloud: solving an elastic problem with a scientific approach
Alex Krzos, Nicholas Wakou (Dell)

One-stop-shop for OpenStack tools
Ruchika Kharwar

OpenStack troubleshooting: So simple even your kids can do it
Vinny Valdez, Jonathan Jozwiak

Solving Distributed NFV Puzzle with OpenStack and SDN
Rimma Iontel, Fernando Oliveira (VZ), Rajneesh Bajpai (BigSwitch)

Ceph, now and later: our plan for open unified cloud storage
Sage Weil

How to configure your cloud to be able to charge your users using official OpenStack components!
Julien Danjou, Stephane Albert (Objectif Libre), Christophe Sauthier (Objectif Libre)

A dice with several faces: Coordinators, mentors and interns on OpenStack Outreachy internships
Victoria Martinez de la Cruz, Nisha Yadav (Delhi Tech University), Samuel de Medeiros Queiroz (HPE)

Yo dawg I herd you like Containers, so we put OpenStack and Ceph in Containers
Sean Cohen, Sebastien Han, Federico Lucifredi

Picking an OpenStack Networking solution
Russell Bryant, Gal Sagie (Huawei)

Forget everything you knew about Swift Rings – here’s everything you need to know about Swift Rings
Christian Schwede, Clay Gerrard (Swiftstack)

3-2-1 Action! Running OpenStack Shared File System Service in Production

Sean Cohen, Tom Barron, Anika Sure (NetApp)


Hopefully we’ll see you in Boston in May 2017, for either the OpenStack Summit or the Red Hat Summit, or even both.
 

Quelle: RedHat Stack

Account Executive- North East

We are transforming the industry and you will be helping us lead the charge. As an account executive at Mirantis, you will develop and execute a strategic and comprehensive business plan for your territory, including identifying core customers and mapping the benefits of OpenStack to customers’ business requirements. You will take full responsibility for accurate forecasting, regular quarterly revenue delivery, and facilitation of sales enablement, and regulate the implementation of agreed account and business plans. Your overall focus areas will be prospecting, developing business, responding to RFPs, developing proposals for presentation to customers, and selling services and products. Cross-functional teams from Mirantis’ Marketing, Solutions Engineering, Professional Services, and Product Development functions will provide support and tools for you to leverage to attain and exceed sales performance goals.

Primary Responsibilities

Pipeline generation: acquire a new customer database by calling into high levels within prospect organizations, networking, and working various customer account lists.
Participate in campaigns and conferences, work with the marketing team to understand new offers and leads in the assigned region, generate leads independently and follow up appropriately.
Solution selling: consult with clients to determine their needs and work with application sales specialists to generate multi-product/service solutions. Take the initiative to learn new offers and products as they become available. Able to apply technology knowledge in business development efforts.
Proposal/presentation generation: incorporate executive summary, ROI analysis, and solution design to develop customer-specific proposals and presentations.
Develop scope of work: work with the customer and engineering team to define and document the project scope.
Relationship management: develop and manage relationships with current clients to develop additional business as well as ensure a high level of client satisfaction.
Accurate forecasting: capture activity information on a timely basis as client interactions occur to ensure accurate product and services forecasting.

Requirements

Advanced selling skills with a demonstrated track record of selling into complex organizations with multiple layers of decision makers.
10+ years of selling experience with telecom and other technology products and solutions such as Cisco, EMC (storage), VMware, NetApp, Oracle, and managed services.
Market knowledge (i.e., industry knowledge relevant to the geographic area) and technical knowledge are necessary; if assigned to vertical markets, knowledge of the public sector is required.
Must possess the business experience to analyze client business requirements and develop creative solutions, as well as utilize technical resources to complete an accurate and technically assured sales order.
Exceptional communication skills.
Ability to accept constructive criticism, and ability to maintain and develop positive team cohesiveness.
Work constructively across cultural boundaries in a globally distributed organization.

What We Offer

Work in Silicon Valley with established leaders in their industry.
Work with exceptionally passionate, talented and engaging colleagues.
Be a part of the cutting edge of open-source innovation since Linux.
High-energy atmosphere of a young company, competitive compensation package with a strong benefits plan and stock options.
Lots of freedom for creativity and personal growth.

The post Account Executive- North East appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Deployment Engineer

Mirantis is the leading global provider of software and services for OpenStack™, a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, HP, NASA, Dell, PayPal and many more. What Linux was to open source and operating systems, OpenStack is to cloud computing. It makes programmable infrastructure vendor-neutral and frictionless to access, not to mention it unlocks distributed applications and accelerates innovation. OpenStack transforms virtualization from an efficiency into a whole new compute paradigm.

We are looking for a talented OpenStack Deployment Engineer who is willing to work at the intersection of IT and software engineering, is passionate about open source, and is able to design and deploy cloud infrastructure built on top of open-source components.

Responsibilities:

Plan and deploy OpenStack cloud solutions for our customers.
Extend functionality for OpenStack cloud solutions.
Facilitate knowledge transfer to the customers during deployment projects.
Work with geographically distributed international teams on technical challenges and process improvements.
Contribute to Mirantis’ deployment knowledge base.
Continuously improve tooling and the technology set.

Your profile:

At least 3 years of practical administration experience with Linux (RHEL, CentOS, Ubuntu) as a server platform, covering the operating system itself as well as production-level software and hardware; practical experience organizing highly available clusters is also required.
At least 3 years of practical network administration experience, with a clear understanding of modern, currently used network protocols and the processes running at each network layer.
At least 2 years of practical experience with Puppet (IT automation tool) for medium and large environments, including writing Puppet manifests.
At least 2 years of practical administration experience with virtualized environments based on KVM.
At least 3 years of practical experience with scripting languages.
Ability to understand and troubleshoot code written in Python and Ruby.
English at an intermediate level.
Ability and willingness to travel abroad for 3-6 months.

Will be a plus:

Team management experience.
Practical experience of Python programming.
Knowledge and experience of SDN.
Knowledge of Xen.
Knowledge of OpenStack is a big plus.
Knowledge of Ruby scripting is a plus.

We offer:

High-energy atmosphere of a young company.
Build large-scale, innovative systems for mission-critical use.
Collaborate with exceptionally passionate, talented and engaging colleagues.
Competitive compensation package.
Lots of freedom for creativity and personal growth.

The post Deployment Engineer appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Introduction to Kubernetes

The objective of Kubernetes is to abstract away the complexity of managing a fleet of containers, which represent packaged applications that include everything needed to run wherever they're provisioned. By interacting with the Kubernetes REST API, you can describe the desired state of your application, and Kubernetes, aka k8s, will do whatever is necessary to make the infrastructure conform. It will deploy groups of containers, replicate them, redeploy if some of them fail, and so on.
Because it's open source, it can run almost anywhere, and the major public cloud providers all provide easy ways to consume this technology. Private clouds based on OpenStack or Mesos can also run k8s, and bare metal servers can be leveraged as worker nodes for it. So if you describe your application with k8s building blocks, you’ll then be able to deploy it within VMs or bare metal servers, on public or private clouds.
Let's take a look at the basics of how Kubernetes works so that you will have a solid foundation to dive deeper.
The Kubernetes architecture
The Kubernetes architecture is relatively simple. You never interact directly with the nodes that are hosting your application, but only with the control plane, which presents an API and is in charge of scheduling and replicating groups of containers named Pods. Kubectl is the command line interface you can use to interact with the API to share the desired application state or gather detailed information on the infrastructure's current state.
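This "describe the desired state, let the control plane converge to it" idea can be illustrated with a toy reconciliation loop. The sketch below is not Kubernetes code; every name in it (`reconcile`, the pod names, the action tuples) is invented for illustration. It only shows the core pattern: a controller compares desired state with observed state and emits the actions needed to close the gap.

```python
# Toy sketch of Kubernetes-style reconciliation: a controller
# compares the desired replica count with the pods actually
# running and decides what to create or delete.

def reconcile(desired_replicas, running_pods):
    """Return the actions a controller would take to converge."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create replacements.
        return [("create", f"pod-{i}") for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete", name) for name in running_pods[diff:]]
    return []  # Already converged; nothing to do.

# One pod out of a desired three has failed: the controller recreates it.
print(reconcile(3, ["pod-a", "pod-b"]))  # [('create', 'pod-0')]
```

The real control plane runs loops like this continuously, which is why a crashed container simply reappears without operator intervention.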
Let's look at the various pieces.
Nodes
Each node that will host part of your distributed application does so by leveraging Docker or a similar container technology, such as Rocket from CoreOS. The nodes also run two additional pieces of software: kube-proxy, which gives access to your running app, and kubelet, which receives commands from the k8s control plane. Nodes can also run flannel, an etcd-backed network fabric for containers.
Master
The control plane itself runs the API server (kube-apiserver), the scheduler (kube-scheduler), the controller manager (kube-controller-manager) and etcd, a highly available key-value store for shared configuration and service discovery implementing the Raft consensus algorithm.

Now let's look at some of the terminology you might run into.
Terminology
Kubernetes has its own vocabulary which, once you get used to it, gives you some sense of how things are organized. These terms include:

Pods: Pods are a group of one or more containers, their shared storage, and options about how to run them. Each pod gets its own IP address.
Labels: Labels are key/value pairs that Kubernetes attaches to any objects, such as pods, Replication Controllers, Endpoints, and so on.
Annotations: Annotations are key/value pairs used to store arbitrary non-queryable metadata.
Services: Services are an abstraction that defines a logical set of Pods and a policy by which to access them over the network.
Replication Controller: Replication controllers ensure that a specific number of pod replicas are running at any one time.
Secrets: Secrets hold sensitive information such as passwords, TLS certificates, OAuth tokens, and ssh keys.
ConfigMap: ConfigMaps are mechanisms used to inject containers with configuration data while keeping containers agnostic of Kubernetes.

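To make the relationship between Pods, Labels, and Services concrete, here is a hedged sketch in Python. A Pod is described by a manifest (shown as a plain dict here; in practice you would write YAML), and a Service selects the Pods it routes to by matching labels, exactly the key/value mechanism listed above. The `matches` helper is a simplified illustration, not the actual selector implementation.

```python
# A Pod manifest, modeled as a plain dict for illustration.
pod = {
    "kind": "Pod",
    "metadata": {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    "spec": {"containers": [{"name": "nginx", "image": "nginx:1.21"}]},
}

def matches(selector, pod):
    """True if every key/value pair in the selector appears in the pod's labels."""
    labels = pod["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in selector.items())

# A Service's selector picks out the Pods it sends traffic to.
service_selector = {"app": "web"}
print(matches(service_selector, pod))  # True
```

Replication Controllers use the same label-matching mechanism to decide which running Pods count toward their target replica number.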
Why Kubernetes
In order to justify the added complexity that Kubernetes brings, there need to be some benefits. At its core, a cluster manager such as k8s exists to serve developers so they can serve themselves without having to involve the operations team.
Reliability is one of the major benefits of Kubernetes; Google has over 10 years of experience in infrastructure operations with Borg, their internal container orchestration solution, and they’ve built Kubernetes based on this experience. Kubernetes can be used to prevent failures from impacting the availability or performance of your application, which is a major benefit.
Scalability is handled by Kubernetes on different levels. You can add cluster capacity by adding more worker nodes, which can even be automated in many public clouds with autoscaling functionality based on CPU and memory triggers. The Kubernetes scheduler includes affinity features to spread your workloads evenly across the infrastructure, maximizing availability. Finally, k8s can autoscale your application using the Pod autoscaler, which can be driven by custom triggers.
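The Pod autoscaler's core decision can be sketched in a few lines. The proportional formula below mirrors the documented horizontal autoscaling behavior (scale the replica count by the ratio of the observed metric to its target), but the real controller adds tolerances, cooldown windows, and min/max bounds, so treat this as a simplified illustration only.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Proportional scaling: at 80% CPU against a 50% target,
    4 replicas become ceil(4 * 80 / 50) = 7."""
    return math.ceil(current_replicas * current_metric / target_metric)

print(desired_replicas(4, 80, 50))  # 7 (scale up: over target)
print(desired_replicas(4, 25, 50))  # 2 (scale down: under target)
```

Because the formula is driven by a generic metric ratio, the same logic works whether the trigger is CPU, memory, or a custom application metric.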
Sound interesting? If you live in Austin, Texas, you're in luck; we'll be presenting Kubernetes 101 at OpenStack Austin on November 15, and at the Cloud Austin meetup on November 16, or you can dive right in and sign up for Mirantis' Kubernetes and Docker Boot Camp.
The post Introduction to Kubernetes appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Why Red Hat Chose Kubernetes for OpenShift

I often get asked why Red Hat chose to standardize on Kubernetes for OpenShift, instead of going with a competing container orchestration solution. There are a number of open source container orchestration solutions available today. The OpenShift product team explored many of them, and we even explored building our own. In this blog, I will explain why we chose Kubernetes and why more than two years later, we couldn’t be happier with our decision!
Source: OpenShift

Blog posts last week

We’ve had more followup blog posts from OpenStack Summit, along with some more from the RDO community.

Querying haproxy data using socat from CLI by Carlos Camacho

Currently, most users don’t have any way to check the haproxy status in a TripleO virtual deployment via a web browser unless they have previously created tunnels for that purpose.

Read more at http://tm3.org/c3

Keystone Domains are Projects by Adam Young

Yesterday, someone asked me about inherited role assignments in Keystone projects. Here is what we worked out.

Read more at http://tm3.org/c4

OpenStack Summit: An evening with Ceph and RDO by Rich Bowen

Last Tuesday in Barcelona, we gathered with the Ceph community for an evening of food, drinks, and technical sessions.

Read more at http://tm3.org/c5

OpenStack Summit Barcelona, 3 of N by rbowen

Continuing the saga of OpenStack Summit Barcelona …

Read more at http://tm3.org/c6

Red Hat Virtualization: Bridging the Gap with the Cloud and Hyperconverged Infrastructure by Ted Brunell

Red Hat Virtualization offers a flexible technology for performance-intensive and secure workloads. Red Hat Virtualization 4.0 introduced new features that enable customers to further extend the use case of traditional virtualization in hybrid cloud environments. The platform now easily incorporates third-party network providers into the existing environment, along with other technologies found in next-generation cloud platforms such as Red Hat OpenStack Platform and Red Hat Enterprise Linux Atomic Host. Additionally, new infrastructure models are now supported, including selected support for hyperconverged infrastructure: the native integration of compute and storage across a cluster of hosts in a Red Hat Virtualization environment.

Read more at http://tm3.org/c7

Running Tempest on RDO OpenStack Newton by chandankumar

Tempest is a set of integration tests to run against an OpenStack cluster.

Read more at http://tm3.org/bk
Source: RDO

Augmenting versus artificial intelligence at World of Watson

“Wow” is right.
More than 16,000 clients, partners, sponsors, and IBMers gathered in Las Vegas last month to witness the expanding World of Watson (WoW). Watson is five years old this year and, as highlighted in IBM CEO and Chairman Ginni Rometty’s keynote address, “A World with Watson,” it’s already making huge contributions to healthcare, automotive, music and education, as well as in the fields of Internet of Things (IoT) and big-data analytics. Watson is proving itself as the augmenting intelligence platform for business in the cognitive era.
I was privileged to attend the World of Watson conference, and while there, I attended sessions relating to IoT and spent time in discussions of augmenting intelligence versus artificial intelligence. The distinction is an important one. IBM's strategy centers on the principle that clients own their data and always will. Clients decide how, when and where data are housed, secured, viewed, accessed, used and augmented to serve their needs, their own clients’ needs and their innovative approaches to the marketplace.
Like a riptide at a metaphorical cognitive beach, the fear of artificial intelligence seemed to be on the minds of many at the conference. It’s a good idea to understand this dynamic in detail, especially given the broader implications. What I discovered was a refreshing number of sessions addressing this fear; most notably, a well-attended discussion titled “Why I Don’t Fear Artificial Intelligence” by XPRIZE Founder Peter Diamandis.
Dr. Diamandis was powerfully optimistic about the future. He described a future of augmenting, not circumventing, the human experience. In his session, Diamandis highlighted his work with human genomics and put a spotlight on the $5 million Watson AI XPRIZE competition. The competition aims to inspire participants to adopt cognitive technologies to develop creative, scalable and innovative demonstrations that address “moonshot” challenges in the global community. If you are not familiar with Diamandis’ work, I recommend you research his accomplishments and consider his 2012 book, Abundance: The Future Is Brighter Than You Think.
There was also plenty to learn about IBM Bluemix. It now provides a single access point for IBM Cloud platform as a service (PaaS), infrastructure as a service (IaaS) and Watson cognitive services with a single user ID, single invoice and consistent user experience. Cognitive services in the IBM Cloud make Watson’s advanced platform services easily accessible through application programing interfaces (API). Watch Jason McGee, IBM Fellow and vice president, talk about these innovations in his interview with theCUBE.
After attending World of Watson, I am even more optimistic about the future of Watson in the IBM Cloud and its role in the emerging cognitive era. When Watson turns six next year, I can’t wait to see what unfolds in a bright new world with Watson. Shades may be required.
Learn more about IBM cognitive solutions.
The post Augmenting versus artificial intelligence at World of Watson appeared first on news.
Source: Thoughts on Cloud