IBM and partners launch JS Foundation

This week’s announcement that IBM is a founding member of the JS Foundation further confirms something I’ve been saying for years now: there’s never been a better time to be working in open technology.
Along with the greater JavaScript ecosystem, the JS Foundation is dedicated to fostering developer engagement, collaboration and best practices under an openly governed model. This will apply to and benefit application frameworks, tools, testing and JavaScript projects that serve many markets, including Internet of Things (IoT).

Broadening the JavaScript ecosystem
I’m also excited to announce that the JS Foundation will formalize its partnership with the Node.js Foundation to collaborate and provide mentorship for frameworks that depend on Node.js. This powerful partnership benefits the entire JavaScript and Node.js community by formalizing and amplifying these foundations’ core strengths, both based on JavaScript technology.
More than that, the JS Foundation represents a giant step forward in bringing more openness to the developer community.
I often say that when organizations collaborate on open source and open standards, everyone wins. We’ve seen it with OpenStack. We’ve seen it with Cloud Foundry. We’re seeing it with the Cloud Native Computing Foundation and the Open Container Initiative. Outside of cloud, we’re seeing it with the Open API Initiative and Spark. I believe we’ll see it with the JS Foundation as well.
What IBM contributes
IBM and community partners are contributing Node-RED as one of the JS Foundation’s core projects. For anyone new to Node-RED, it’s a flow-based programming environment aimed at creating event-driven applications that can easily integrate with APIs and services.
Like many great ideas, this started as an experimental project in 2013 that quickly grew to widespread usage in the IoT community. Node-RED already enjoys a broad ecosystem of flows and nodes that the entire community shares. Its broad adoption in the IoT space has even resulted in Node-RED being shipped as part of the Raspberry Pi distribution.
Driving open collaboration
Open communities thrive when equal emphasis is given to developing code, community, and culture. By bringing together the community around core platform technologies and the application tier with the JS Foundation, the industry is establishing a center of gravity to drive innovation in the open.
In the coming months, watch for updates around the code, collaboration and community development in the JS Foundation. We expect this venture to set the bar for openly governed JavaScript projects. With this announcement, the industry has a new binary star system for JavaScript projects to orbit. You can find more information about the JS Foundation and how to get involved on its website.
IBM Cloud is open by design. Learn more.
The post IBM and partners launch JS Foundation appeared first on news.
Source: Thoughts on Cloud

To help their uphill climb, startups look to Global Entrepreneur

While there’s no one definitive source of statistics for startup activity worldwide, a quick look online reveals claims such as:

Nine out of ten startups will fail.
The startup rate has fallen sharply over the past 30 years.
Startup funding is a mess right now.

Yet with the odds stacked against them, startup companies across industries continue to emerge, and many have grown to lead and even define industry sectors. Dare I quote the WhatsApp, Uber or Airbnb examples yet again? These disruptors have grown at a rapid pace, defining new business models built on technology platforms that support innovation, change and agility.
IBM offers startups IBM Global Entrepreneur as a means to explore IBM cloud platforms and tools to support their business as they grow. IBM Global Entrepreneur is a unique program designed to connect startups to the IBM global ecosystem of clients, partners, business leaders, and enterprise-grade technology.
How startups can get a boost
IBM Global Entrepreneur benefits include:

Up to $120,000 in IBM Cloud credits for Bluemix
An extensive, global network of solutions architects
Go-to-market and mentor advice for entrepreneurs

Accompanying IBM Global Entrepreneur in 2016, IBM and LAUNCH are again hosting IBM SmartCamp. SmartCamp is a global pitch competition for early-stage startups. A global initiative, SmartCamp 2016 is already underway across a number of cities, with regional live-pitch competitions taking place. Finalists will attend the LAUNCH festival in 2017 with the winner landing a spot in the LAUNCH Incubator, which includes a $25,000 investment.
Across Europe, cities including London, Paris and Berlin are renowned for their vibrant startup scenes, entrepreneur support and incubation programs. Ireland also has its startup hotspots in major cities such as Dublin, Cork and Galway.
Making connections
At a recent event in Dublin, I had the opportunity to address a number of new businesses at The Digital Hub, a cluster of early-stage digital companies in Ireland. I shared IBM Global Entrepreneur details at a drop-in center located right outside The Hub’s coffee shop. I spoke with many of the residents about SoftLayer, Bluemix and IBM support for startup entities in Ireland.
One recently formed business I spoke with shared details of a solution focused on care of the elderly. The company delivers insights from data sourced in care homes, analyzed on the cloud and made available to care workers via mobile apps. Its business proposition is built on domain expertise, targeted user experiences and data analysis to provide better care for the elderly. While this business builds its solution, IBM Global Entrepreneur can support it, not only through access to cloud platforms, but also through access to expertise at IBM.
I also spoke with a fintech startup focused on corporate banking. The business provides a portal that allows clients to compare banking fees. Technology is the enabler for this business, but its focus is on client experience and the creation of new markets in corporate banking. This startup hadn’t come across IBM Global Entrepreneur before, a gap my visit helped to address for a number of the startups in attendance.
With a pocketful of business cards, I left a two-hour morning session at The Digital Hub enthusiastic that, despite the challenges, startups continue to invest in ideation and innovation, and that IBM support for these businesses matters.
Learn more about IBM Global Entrepreneur. Speak with a local Cloud Advisor. For more details on this article in particular, contact Ronan Dalton @daltonology.
The post To help their uphill climb, startups look to Global Entrepreneur appeared first on news.
Source: Thoughts on Cloud

Using Tags for Access Control

Most systems use Access Control Lists (ACLs) to manage users’ access to objects. Common examples are ACLs for file systems, LDAP, web servers and many more. Anyone who has had to create and maintain ACL rules knows how complicated this can be. To make access control easy again, CloudForms uses tags. If the group a user belongs to has the same tag as the accessed object, access is granted; if not, access is denied.
This sounds simple and straightforward, but there are a couple of things to know about tags which make them very powerful, but also a bit tricky.

Let’s start with a basic explanation of common objects in CloudForms:

Users: users can be created in the internal database or retrieved from external authentication. Metadata, including the full name, email address, password (in the case of database authentication) and relationship to one or more groups, is associated with the user.
Groups: every user is a member of one or more groups. Groups are used to define the scope or visibility of a user. For example, a member of the “Engineering Department” group can be granted access to all virtual machines (VMs) owned by the engineering department. Or a member of the group “Operations” could be granted access to all VMs running in the production environment.
Roles: every group is associated to exactly one role, which describes the privileges granted to that group. Roles are used to define which actions a user is allowed to perform. For example, an “Operator” role could include permissions to start or stop VMs, re-configure them, etc. A “Self Service” role could allow a user to order new VMs and manage them.

The combination of groups and roles defines which actions are allowed and on which objects. An “Operator” role in the “Engineering Department” group would have the same privileges as an “Operator” role in the “Finance Department” group because they share the same role, but they would see different objects because they are not in the same group.
Let’s discuss a couple of examples to get familiar with this model.
Setting the Stage
As an administrator, navigate to Settings > Configuration and click on “Access Control” in the pane on the left, and then click on “Roles”. Add a new role by clicking on Configuration > Add a New Role and name it “Self Service”. Granting privileges to the role is very nicely implemented. The tree on the right represents all of the menus and actions a user can perform in the UI. Enabling (checking) a feature grants the privilege to the role. By drilling down into sub folders, very fine grained permissions can be granted (e.g. power on, but not power off).
For the purpose of this demo, a role with full access to “Cloud Intel”, “Services”, “Compute”, “Settings” and “Access Rules for all Virtual Machines”, but no other privileges, is created.

Example Group “Engineering”
In the next step, a group called “Engineering” is created. All members of this group will have the “Self Service” role assigned, which was created in the previous step. For now, we skip tags and filters and keep them all unchecked.

Example User “Joe Doe”
In the last step a user “Joe Doe” is created. This will be a local user (stored in the database) and is a member of the “Engineering” group.

Results
If Joe Doe logs into the web interface and navigates to Providers > Hosts > Virtual Machines or Providers > Hosts > Service Catalogs he will see all of the existing objects. This should not be a surprise, because he is assigned to a group which doesn’t have any restrictions on visibility.
Granting Access to Individual Objects
For our next step, we want to restrict Joe Doe’s visibility to only those VMs associated to the Engineering Department. To accomplish this, we will restrict Joe Doe to only see objects tagged as Department/Engineering. But first, we will learn a little bit about tags and tag categories.
Tags and Tag Categories
Tags are arbitrary strings that describe a particular characteristic of an object. The best tags are clearly descriptive and easy for other users to understand. For example, Engineering and Finance are clearly descriptive, easy-to-understand tags that describe the part of the organization to which a user or VM belongs. Tag categories group related tags together; for example, the Engineering and Finance tags belong to the Department tag category.
CloudForms comes with a default set of tags and tag categories that you can use, or you can create your own custom taxonomy of tags. In this way, tags are very flexible. For this demonstration, we are going to work with the default set of tags and tag categories.
Assigning a Tag to an Object
Navigate to the “Engineering” group, edit it and select the Department/Engineering tag.

When changing groups, roles or tenants, the user doesn’t need to logout and login again. Changes to groups and roles are reflected immediately in CloudForms, even if the user is already logged in. If Joe now navigates to view VMs, only those VMs tagged with Department/Engineering will be shown. In this case, none!
First Gotcha!
You might have noticed, after setting the Department/Engineering tag for the group, no objects are showing up in the UI. The scope for the group, and hence the user, was just limited to objects which are tagged as Department/Engineering, and no objects have been tagged so far. We now need to tag all objects which should be visible for the user. An object, like a VM, can be tagged by using the Policy > Edit Tags menu. After tagging a VM and navigating to the VM list, the VM will show up in the user interface.
This process works the same way for all other objects. If Joe Doe should be able to order a specific item from the service catalog, the item or bundle has to be tagged with the Department/Engineering tag to make it visible.
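When many objects need the same tag, clicking through the UI does not scale; tagging can also be automated against the CloudForms REST API. The following Ansible task is a hedged sketch: the appliance URL, credentials and the VM id 42 are placeholders, and it assumes the ManageIQ-style /api/vms/:id/tags endpoint with the “assign” action.

```yaml
# Hypothetical sketch: assign Department/Engineering to VM 42 over the
# REST API. Hostname, credentials and the VM id are placeholders.
- name: Tag a VM via the CloudForms REST API
  hosts: localhost
  tasks:
    - name: Assign the Department/Engineering tag
      uri:
        url: https://cloudforms.example.com/api/vms/42/tags
        method: POST
        user: admin
        password: smartvm
        validate_certs: no
        body_format: json
        body:
          action: assign
          resources:
            - category: department
              name: engineering
```

The same pattern works for service catalog items or any other taggable resource by changing the collection in the URL.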
Working with Multiple Tags
If a VM or other object has to be visible to multiple groups, we can add all the necessary tags to the object. For instance, adding the Department/Finance tag to a VM, makes the VM available to members of the “Finance Department” group, which also has that tag.
Tags within the same tag category are processed with a logical OR: if at least one tag of the group matches at least one tag of the object, access is granted. For example, if a user is in a group with the Department/Engineering or Department/Finance tag, they will see the object. Users who are in a group with neither the Department/Engineering nor the Department/Finance tag will not see the object. The same applies if the object isn’t tagged at all, which means nobody will see it.
Second Gotcha!
Tag restrictions also apply to Super Administrators! If you restrict the visibility of a Super Administrator by assigning them tags, they will no longer see those objects which do not have matching tags! Since Super Administrators can always fix tag assignments or remove the tags for their group, they can restore full visibility, but it’s probably best to make sure you never limit Super Administrators.
Working with Multiple Tag Categories
When working in more complex environments, multiple tag categories must be used. For example, in addition to separating VMs by departments, tags can be used to separate VMs in different stages of deployment (Development, QA, Production). However, as soon as multiple tag categories are introduced, things get a bit more complicated.
Third Gotcha!
When using multiple tag categories, there is a logical AND between tags in multiple categories. This is probably best explained with an example. CloudForms comes with a default tag category called Environment with tags like Development and Production.
If the “Engineering” group, of which Joe Doe is a member, gets the additional tag Environment/Development, Joe will only see objects which have the Department/Engineering tag and the Environment/Development tag.  A VM tagged as Department/Engineering and Environment/Production will be hidden from Joe.

Object Tags | Group Tags | Visible?
Department/Engineering | Department/Engineering | Yes. Tags match.
Department/Engineering AND Department/Finance | Department/Engineering | Yes. At least one tag in the same category matches.
Department/Engineering AND Environment/Development | Department/Engineering | No. Tags from multiple categories, so both must match.
Department/Engineering AND Environment/Production | Department/Engineering AND Environment/Development | No. Tags from multiple categories, so both must match.
Department/Engineering AND Environment/Development AND Environment/Production | Department/Engineering AND Environment/Development | Yes. At least one tag in each tag category matches.

This is very important and often causes confusion. As soon as you start tagging objects with tags from different tag categories, the logical AND comes into play!
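The rules in the table can be condensed into a short sketch. This is a simplified model inferred from the behavior described above (real CloudForms also factors in roles, tenants and filters), not the actual implementation:

```python
# Simplified model of tag-based visibility: within one tag category the
# group needs at least one matching tag (OR); across all categories in
# play, every category must have a match (AND).

def visible(object_tags, group_tags):
    """Return True if a group with group_tags may see an object
    with object_tags. Tags are strings like 'Department/Engineering'."""
    object_tags, group_tags = set(object_tags), set(group_tags)
    if not group_tags:
        return True  # an unrestricted group sees everything
    categories = {tag.split("/")[0] for tag in object_tags | group_tags}

    def in_category(tags, category):
        return {tag for tag in tags if tag.startswith(category + "/")}

    # OR within a category, AND across categories
    return all(in_category(object_tags, c) & in_category(group_tags, c)
               for c in categories)

# Third row of the table: categories differ, so visibility is denied.
print(visible({"Department/Engineering", "Environment/Development"},
              {"Department/Engineering"}))  # False
```

Note how an object with no tags at all is invisible to any restricted group, exactly as described above.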
Conclusion: Think Before You Tag
There are a few rules we try to follow when we plan tagging:

Don’t use tags for information which is already available as an attribute of the object. For example, tagging all Windows VMs as Operating System Windows is in most cases not a good idea. Since this information is already stored in a VM attribute, you can use a filter to find all of the Windows VMs.
Try to minimize the number of tags and tag categories. Having a large number of categories and tags makes things more complicated.
Think before you add a new tag or tag category. Besides increasing the number of tags or tag categories, you will have to tag all of the objects already in CloudForms.
Try to use auto tagging where possible. Instead of manually tagging objects, write Automate code to do this for you or make use of the CloudForms REST API.

Tags are a very simple and yet powerful way to manage access control lists. Used properly, they can provide greater flexibility and manageability in CloudForms. For more information on tags and access control, see the following resources:
Creating and Using Tags in Red Hat CloudForms
Planning your CloudForms tagging taxonomy
Source: CloudForms

Full Stack Automation with Ansible and OpenStack

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.
In this blog we’ll cover the many use cases for Ansible, the most popular automation software, with OpenStack, the most popular cloud infrastructure software. We’ll help you understand how and why you should use Ansible to make your life easier, in what we like to call Full-Stack Automation.

Let’s begin by analyzing the layers of Full-Stack Automation, shown in the diagram above. At the bottom, we have the hardware resources (servers, storage area networks, and networking gear). Above that sits the operating system (Linux or Windows). On the Linux side, you can install OpenStack to abstract all of your datacenter resources and offer a software-defined version of your compute, network, and storage resources. On top of OpenStack are the tenant-defined services needed to create the virtual machines where the applications will reside. Finally, you have to manage the operating system (Linux or Windows) to deploy the actual applications and workloads that you really care about (databases, web servers, mobile application backends, etc.). If you use containers (like Docker or rkt), you’ll package those applications in images that will be deployed on top of your guest OS. In addition, some languages introduce the concept of application servers, which adds another layer (e.g. J2EE).
Ansible management possibilities
With Ansible, you have a module to manage every layer. This is true even for the networking hardware, although technically speaking it’s for the network operating system, like IOS or NXOS (see the full list of Ansible network modules here).

General interaction with the Operating System: install packages, change or enforce file content or permissions, manage services, create/remove users and groups, etc.

Linux and BSD via SSH (the first and most popular use-case)
Windows via PowerShell (since 1.7)

IaaS Software: install the IaaS software and its dependencies (databases, load balancers, configuration files, services, and other helper tools)

OpenStack-ansible installer https://github.com/openstack/openstack-ansible, as used in some upstream-based OpenStack distributions from other vendors. Note that the Red Hat OpenStack Platform does not use Ansible, but Heat and Puppet. Future releases will leverage Ansible to perform certain validations and to help operators perform their updates and upgrades.
CloudStack installer is also an Ansible-based project.

Virtual Resources: define the resource, like a Virtual Machine or Instance, in terms of how big it is, who can access it, what content should it have, what security profile and network access it requires, etc.

OpenStack Ansible modules (since Ansible 2.0): for instance, Nova or Neutron. They are based on the OpenStack “shade” library, a common library also used by the OpenStack CLI tools.
It can also manage not so virtual network resources, via netconf (since 2.2) https://docs.ansible.com/ansible/netconf_config_module.html
VmWare vSphere Ansible modules
RHV or oVirt or Libvirt for bare KVM
It also has modules for public cloud providers, like Amazon, Google Cloud, Azure and Digital Ocean

Guest OS: the same components as described for the Host OS. But how do you discover how many Guests you have?

Ansible Dynamic Inventory will dynamically interrogate the IaaS/VM layer and discover which instances are currently available. It detects their hostname, IPs, and security settings and replaces the static Inventory concept. This is especially useful if you leverage Auto Scaling Groups in your cloud infrastructure, which makes your list of instances very variable over time.

Containers Engine (optional)

Docker: note that the old Docker module is deprecated in favor of a new, native version since Ansible 2.1.
Kubernetes
Atomic Host

Tenant Software: databases, web servers, load balancers, data processing engines, etc.

Ansible Galaxy is the repository of recipes (playbooks) to deploy the most popular software, and it’s the result of the contributions of thousands of community members.
You can also manage web Infrastructure such as JBoss, allowing Ansible to define how an app is deployed in the application server.

How to install the latest Ansible on a Python virtual environment
As you have seen, some features are only available with very recent Ansible versions, like 2.2. However, your OS may not ship it yet. For example, RHEL 7 and CentOS 7 only ship Ansible 1.9.
Given that Ansible is a command-line tool written in Python, which supports multiple versions on a system, you may not need the security hardening in Ansible that your distribution offers, and you may want to try the latest version instead.
However, as with any other Python software, there are many dependencies, and it’s very dangerous to mix untested upstream libraries with your system-provided ones. Those libraries may be shared and used in other parts of your system, and untested newer libraries can break other applications. The quick solution is to install the latest Ansible version, with all its dependencies, in an isolated folder under your non-privileged user account. This is called a Python Virtual Environment (virtualenv), and if done properly, allows you to safely play with the latest Ansible modules for a full-stack orchestration. Of course, we do not recommend this practice for any production use case; consider it a learning exercise to improve your DevOps skills.
1) Install prerequisites (pip, virtualenv)
The only system-wide python library we need here is “virtualenvwrapper”. Other than that, you should not do “sudo pip install” as it will replace system python libraries with untested, newer ones. We only trust one here, “virtualenvwrapper”. The virtual environment method is a good mechanism for installing and testing newer python modules in your non-privileged user account.
$ sudo yum install python-pip
$ sudo pip install virtualenvwrapper
$ sudo yum install python-heatclient python-openstackclient python2-shade
2) Setup a fresh virtualenv, where we’ll install the latest Ansible release
First, create a directory to hold the virtual environments.
$ mkdir $HOME/.virtualenvs
Then, add a line like “export WORKON_HOME=$HOME/.virtualenvs” to your .bashrc. Also, add a line like “source /usr/bin/virtualenvwrapper.sh” to your .bashrc. Now source it.
$ source ~/.bashrc
At this point, wrapper links are created, but only the first time you run it. To see the list of environments, just execute “workon”. Next, we’ll create a new virtualenv named “ansible2”, which will be automatically enabled, with access to the default RPM-installed packages.
$ workon
$ mkvirtualenv ansible2 --system-site-packages
To exit the virtualenv, type “deactivate”, and to re-enter again, use “workon”.
$ deactivate
$ workon ansible2
3) Enter the new virtualenv and install Ansible2 via PIP (as regular user, not root)
You can notice your shell prompt has changed and it shows the virtualenv name in brackets.
(ansible2) $ pip install ansible
The above command will install just the Ansible 2 dependencies, leveraging your system-wide RPM-provided python packages (thanks to the --system-site-packages flag we used earlier). Alternatively, if you want to try the development branch:
(ansible2) $ pip install git+git://github.com/ansible/ansible.git@devel
(ansible2) $ ansible --version
If you ever want to remove the virtualenv, and all its dependencies, just use “rmvirtualenv ansible2”.
4) Install OpenStack client dependencies
The first command below ensures you have the latest stable OpenStack API versions, although you can also try a pip install to get the latest CLI. The second command provides the latest python “shade” library to connect to latest OpenStack API versions using ansible, regardless of the CLI tool.
(ansible2) $ sudo yum install python-openstackclient python-heatclient
(ansible2) $ pip install shade --upgrade
5) Test it
(ansible2) $ ansible -m ping localhost

localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

NOTE: you cannot run this version of Ansible outside the virtualenv, so always remember to run “workon ansible2” before using it.

Using Ansible to orchestrate OpenStack
Our savvy readers will notice that using Ansible to orchestrate OpenStack seems to ignore the fact that Heat is the official orchestration module for OpenStack. Indeed, an Ansible playbook will do almost the same as a HOT template (HOT is the YAML-based syntax for Heat, an evolution of AWS CloudFormation). However, there are many DevOps professionals out there who don’t like to learn a new syntax, and who are already consolidating all their processes for their hybrid infrastructure.
The Ansible team recognized that and leveraged Shade, the official library from the OpenStack project, to build interfaces to the OpenStack APIs. At the time of this writing, Ansible 2.2 includes modules to call the following APIs:

Keystone: users, groups, roles, projects
Nova: servers, keypairs, security-groups, flavors
Neutron: ports, network, subnets, routers, floating IPs
Ironic: nodes, introspection
Swift Objects
Cinder volumes
Glance images

From an Ansible perspective, it needs to interact with a server where it can load the OpenStack credentials and open an HTTP connection to the OpenStack APIs. If that server is your machine (localhost), then it will work locally, load the Keystone credentials, and start talking to OpenStack.
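Those credentials can come from OS_* environment variables or from a clouds.yaml file that shade reads automatically. A minimal sketch with placeholder values (auth URL, user, password and project are all illustrative; the cloud name “mycloud” can then be passed to each module via its optional cloud parameter):

```yaml
# ~/.config/openstack/clouds.yaml -- placeholder values for illustration
clouds:
  mycloud:
    auth:
      auth_url: http://controller:5000/v2.0
      username: demo
      password: secret
      project_name: demo
```

With a single cloud defined and no conflicting OS_* variables, the modules will pick it up without any extra parameters.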
Let’s see an example. We’ll use Ansible OpenStack modules to connect to Nova and start a small instance with the Cirros image. But we’ll first upload the latest Cirros image, if not present. We’ll use an existing SSH key from our current user. You can download this playbook from this github link.

# Setup according to the blog post "Full Stack Automation with Ansible and OpenStack".
# Execute with "ansible-playbook ansible-openstack-blogpost.yml -c local -vv"
- name: Execute the Blogpost demo tasks
  hosts: localhost
  tasks:
    - name: Download cirros image
      get_url:
        url: http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
        dest: /tmp/cirros-0.3.4-x86_64-disk.img
    - name: Upload cirros image to openstack
      os_image:
        name: cirros
        container_format: bare
        disk_format: qcow2
        state: present
        filename: /tmp/cirros-0.3.4-x86_64-disk.img

    - name: Create new keypair from current user's default SSH key
      os_keypair:
        state: present
        name: ansible_key
        public_key_file: "{{ '~' | expanduser }}/.ssh/id_rsa.pub"

    - name: Create the test network
      os_network:
        state: present
        name: testnet
        external: False
        shared: False
        provider_network_type: vlan
        provider_physical_network: datacentre
      register: testnet_network

    - name: Create the test subnet
      os_subnet:
        state: present
        network_name: "{{ testnet_network.id }}"
        name: testnet_sub
        ip_version: 4
        cidr: 192.168.0.0/24
        gateway_ip: 192.168.0.1
        enable_dhcp: yes
        dns_nameservers:
          - 8.8.8.8
      register: testnet_sub

    - name: Create the test router
      ignore_errors: yes # for some reason, re-running this task gives errors
      os_router:
        state: present
        name: testnet_router
        network: nova
        external_fixed_ips:
          - subnet: nova
        interfaces:
          - testnet_sub

    - name: Create a new security group
      os_security_group:
        state: present
        name: secgr
    - name: Create a new security group rule allowing any ICMP
      os_security_group_rule:
        security_group: secgr
        protocol: icmp
        remote_ip_prefix: 0.0.0.0/0
    - name: Create a new security group rule allowing any SSH connection
      os_security_group_rule:
        security_group: secgr
        protocol: tcp
        port_range_min: 22
        port_range_max: 22
        remote_ip_prefix: 0.0.0.0/0

    - name: Create server instance
      os_server:
        state: present
        name: testServer
        image: cirros
        flavor: m1.small
        security_groups: secgr
        key_name: ansible_key
        nics:
          - net-id: "{{ testnet_network.id }}"
      register: testServer

    - name: Show Server's IP
      debug: var=testServer.openstack.public_v4

After the execution, we see the IP of the instance. We write it down, and we can now use Ansible to connect to it via SSH. We assume Nova’s default network allows connections from our workstation, in our case via a provider network.

Comparison with OpenStack Heat
Using Ansible instead of Heat has its advantages and disadvantages. For instance, with Ansible you must keep track of the resources you create and manually delete them (in reverse order) once you are done with them. This is especially tricky with Neutron ports, floating IPs and routers. With Heat, you just delete the stack, and all the created resources will be properly deleted.
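As a sketch of what that manual cleanup looks like, the resources from the playbook above could be removed in reverse order with state: absent (a hypothetical teardown, assuming the same resource names):

```yaml
# Hypothetical teardown for the demo resources, deleted in reverse order;
# with Heat this would be a single stack delete.
- name: Clean up the demo resources
  hosts: localhost
  tasks:
    - name: Delete the server
      os_server:
        state: absent
        name: testServer
    - name: Delete the router
      os_router:
        state: absent
        name: testnet_router
    - name: Delete the subnet
      os_subnet:
        state: absent
        name: testnet_sub
    - name: Delete the network
      os_network:
        state: absent
        name: testnet
    - name: Delete the security group
      os_security_group:
        state: absent
        name: secgr
    - name: Delete the keypair
      os_keypair:
        state: absent
        name: ansible_key
```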
Compare the above with a similar (but not equivalent) Heat Template, that can be downloaded from this github gist:
heat_template_version: 2015-04-30

description: >
  Node template. Launch with "openstack stack create --parameter public_network=nova --parameter ctrl_network=default --parameter secgroups=default --parameter image=cirros --parameter key=ansible_key --parameter flavor=m1.small --parameter name=myserver -t openstack-blogpost-heat.yaml testStack"

parameters:
  name:
    type: string
    description: Name of node
  key:
    type: string
    description: Name of keypair to assign to server
  secgroups:
    type: comma_delimited_list
    description: List of security groups to assign to server
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for server
  availability_zone:
    type: string
    description: Availability zone for server
    default: nova
  ctrl_network:
    type: string
    label: Private network name or ID
    description: Network to attach instance to.
  public_network:
    type: string
    label: Public network name or ID
    description: Network to attach instance to.

resources:

  ctrl_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: ctrl_network }
      security_groups: { get_param: secgroups }

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_network }
      port_id: { get_resource: ctrl_port }

  instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      availability_zone: { get_param: availability_zone }
      key_name: { get_param: key }
      networks:
        - port: { get_resource: ctrl_port }

Combining Dynamic Inventory with the OpenStack modules
Now let’s see what happens when we create many instances but forget to write down their IPs. The perfect example of leveraging Dynamic Inventory for OpenStack is to learn the current state of our tenant’s virtualized resources and gather all server IPs, so we can check their kernel version, for instance. This is exactly what Ansible Tower does transparently: it periodically runs the inventory and collects the updated list of OpenStack servers to manage.
Before you execute this, make sure you don’t have stale clouds.yaml files in either ~/.config/openstack, /etc/openstack, or /etc/ansible. The Dynamic Inventory script will look for environment variables first (OS_*), and then it will search for those files.
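If you go the environment-variable route, the OS_* values usually come from a keystonerc/openstackrc file downloaded from your cloud; a minimal sketch with placeholder values (auth URL, user, password and tenant are all illustrative):

```shell
# Placeholder credentials -- in practice, source the rc file for your
# own cloud instead of exporting these by hand.
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_TENANT_NAME=demo
```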
ensure you are using latest ansible version

$ workon ansible2
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
$ chmod +x openstack.py
$ ansible -i openstack.py all -m ping
bdef428a-10fe-4af7-ae70-c78a0aba7a42 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
343c6e76-b3f6-4e78-ae59-a7cf31f8cc44 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
You can have fun looking at all the information the inventory script returns if you execute it as follows:
$ ./openstack.py --list
{
  "": [
    "777a3e02-a7e1-4bec-86b7-47ae7679d214",
    "bdef428a-10fe-4af7-ae70-c78a0aba7a42",
    "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72",
    "9d4ee5c0-b53d-4cdb-be0f-c77fece0a8b9",
    "343c6e76-b3f6-4e78-ae59-a7cf31f8cc44"
  ],
  "_meta": {
    "hostvars": {
      "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72": {
        "ansible_ssh_host": "172.31.1.42",
        "openstack": {
          "HUMAN_ID": true,
          "NAME_ATTR": "name",
          "OS-DCF:diskConfig": "MANUAL",
          "OS-EXT-AZ:availability_zone": "nova",
          "OS-EXT-SRV-ATTR:host": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.localdomain",
          "OS-EXT-SRV-ATTR:instance_name": "instance-000003e7",
          "OS-EXT-STS:power_state": 1,
          "OS-EXT-STS:task_state": null,
          "OS-EXT-STS:vm_state": "active",
          "OS-SRV-USG:launched_at": "2016-10-10T21:13:24.000000",
          "OS-SRV-USG:terminated_at": null,
          "accessIPv4": "172.31.1.42",
          "accessIPv6": "",
(...)
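To show how that output can be consumed programmatically, here is a minimal Python sketch (with the inventory data abridged and hypothetical) that extracts each server's SSH address from the `_meta.hostvars` structure above:

```python
import json

# Abridged, hypothetical sample of what `./openstack.py --list` prints.
inventory_json = """
{
  "_meta": {
    "hostvars": {
      "0a0c2f0e-4ac6-422d-8d9b-12b7a87daa72": {"ansible_ssh_host": "172.31.1.42"},
      "bdef428a-10fe-4af7-ae70-c78a0aba7a42": {"ansible_ssh_host": "172.31.1.43"}
    }
  }
}
"""

def host_ips(inventory):
    """Map every host UUID in _meta.hostvars to its SSH address."""
    hostvars = inventory.get("_meta", {}).get("hostvars", {})
    return {host: hv.get("ansible_ssh_host") for host, hv in hostvars.items()}

ips = host_ips(json.loads(inventory_json))
print(sorted(ips.values()))
# ['172.31.1.42', '172.31.1.43']
```

This is essentially what Ansible does internally with the script's JSON before running modules against each host.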

Conclusion
Even though Heat is very useful, some people may prefer to learn Ansible to do their workload orchestration, as it offers a common language to define and automate the full stack of IT resources. I hope this article has provided you with a practical example and a very basic use case for using Ansible to launch OpenStack resources. If you are interested in trying Ansible and Ansible Tower, please visit https://www.ansible.com/openstack. A good starting point would be connecting Heat with Ansible Tower callbacks, as described in this other blog post.
Also, if you want to learn more about Red Hat OpenStack Platform, you’ll find lots of valuable resources (including videos and whitepapers) on our website: https://www.redhat.com/en/technologies/linux-platforms/openstack-platform
 
Quelle: RedHat Stack

Mirantis OpenStack 9.1 – Continuing to Simplify the Day-2 Experience

The post Mirantis OpenStack 9.1 – Continuing to Simplify the Day-2 Experience appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis OpenStack 9.1 makes it easier for cloud operators to consume upstream innovation on a periodic basis, both for bug fixes and minor feature enhancements, and you can get access to this capability through an easy and reliable update mechanism. In addition, along with a number of additional features in Fuel, Mirantis OpenStack 9.1 simplifies the day-2, or post-deployment, experience for operators.
Improved Day-2 Operations
Streamline OpenStack updates
The prior mechanism of applying Maintenance Updates (MU) had several limitations. First, the MU script could only apply package updates to controller and compute nodes, and not to the Fuel Master itself. Next, the previous mechanism suffered from the inability to restart services automatically, and lacked integration with Fuel.
In 9.1, a new update mechanism has been introduced that uses Fuel’s internal Deployment Tasks to update the cloud and the Fuel Master. This new mechanism delivers the following:

Reliability: It is tested and verified as part of the Mirantis OpenStack release. This includes going through our automated CI/CD pipelines and extensive QA process.
Customizations: It provides users the ability to detect any customizations before applying an update to a cloud to enable operators to decide whether an update is safe to apply.
Automatic restart: It enables automatic restart of services so that changes can take effect. The prior mechanism required users to manually restart services.

Simplify Custom Deployment Tasks With a New User Interface
In Mirantis OpenStack 9.0, we introduced the ability to define custom deployment tasks to satisfy advanced lifecycle management requirements. Operators could customize configuration options, execute any command on any node, update packages etc. with deployment tasks. In the 9.1 release, you get access to a new Deployment Task user interface in Fuel that shows the deployment workflow history. The UI can also be used to manage deployment tasks.

Automate Deployment Tasks With Event-Driven Execution
Consider an example where you need to integrate third-party monitoring software. In that case, you would want to register a new node with the monitoring software as soon as it is deployed via Fuel. Items such as these can now be automated with 9.1, where a custom deployment task can be triggered by specific Fuel events.
Reduce Footprint With Targeted Diagnostic Snapshots
With prior releases, diagnostic snapshots could grow over time to consume multiple GB of storage per node in just a few weeks. To solve this problem, 9.1 features targeted diagnostic snapshots, which retrieve only the logs from the most recent N days (configurable) for a specific set of nodes.
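Fuel's implementation isn't shown here, but the selection logic ("last N days, specific nodes") can be sketched in Python roughly like this; the file names and naming convention are assumptions for illustration:

```python
import os
import tempfile
import time

def select_recent_logs(log_dir, nodes, days):
    """Return log files that belong to the given nodes and were
    modified within the last `days` days."""
    cutoff = time.time() - days * 86400
    selected = []
    for name in os.listdir(log_dir):
        node = name.split(".")[0]  # assumed "node-N.<service>.log" naming
        path = os.path.join(log_dir, name)
        if node in nodes and os.path.getmtime(path) >= cutoff:
            selected.append(name)
    return sorted(selected)

# Demo with temporary files: two fresh logs and one backdated one.
log_dir = tempfile.mkdtemp()
for name in ("node-1.messages.log", "node-2.messages.log"):
    open(os.path.join(log_dir, name), "w").close()
old = os.path.join(log_dir, "node-1.old.log")
open(old, "w").close()
os.utime(old, (time.time() - 10 * 86400,) * 2)  # pretend it is 10 days old

print(select_recent_logs(log_dir, {"node-1"}, days=7))
# ['node-1.messages.log']
```

The targeted snapshot is then just the selected subset instead of every log on every node, which is what keeps its footprint small.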
Enhanced Security
Mirantis OpenStack 9.1 includes a number of important security features:

SSH Brute Force protection on the Host OS
Basic DMZ Enablement to separate the API/Public Network from the Floating Network
RadosGW S3 API authentication through Keystone to enable the use of the same credentials for Ceph object storage APIs

The latest versions of StackLight and Murano are compatible with 9.1, so you will also be able to benefit from the latest features of the logging, monitoring and alerting (LMA) toolchain and application catalog and orchestration tool.
Because it’s an update, installation of the Mirantis OpenStack 9.1 update package requires you to already have Mirantis OpenStack 9.0 installed, but then you’re ready to go. All set? Then hit the 9.0 to 9.1 update instructions to get started.
Quelle: Mirantis

Cloud computing’s turning point: The innovation phase has begun

In the past, when I’ve spoken to clients about cloud, those conversations were mostly with line-of-business managers focused on cutting costs and getting new, customer-facing applications up and running quickly.
Today, the tide has turned, as chief executives and other line leaders want to know how cloud computing can help them transform their entire business, create new models and spur innovation.
While much of the attention in cloud has been focused on public infrastructure deployments, that is only part of the overall story. Now those conversations reflect an intense interest in digital transformation through open, industry-specific hybrid cloud solutions that deliver higher value with analytics and cognitive capabilities.
What business leaders want
Overwhelmingly, what we are hearing from clients and the market is the need for a clear path to extend the significant IT investments they have made by leveraging new, innovative cloud services, while ensuring their data is protected.
With hybrid cloud, business leaders don’t have to rip out and replace existing systems. They can focus on elevating their business value.
IBM has invested billions of dollars and secured key industry partnerships to grow our cloud platform and deliver unique capabilities to clients around the globe. We are mobilizing thousands of developers, many of whom are working on open cloud projects, to bring new innovations to market that deliver the visibility, control and security that enable enterprises to extend their existing IT with hybrid cloud deployments.
In a recent study, we found that companies cite their ability to extend into the cloud as a driving force. It gives them the power to expand into new industries (76 percent), create new sources of revenue (71 percent), and establish and support new business models (69 percent). While public cloud adoption continues to rise, almost half of computing workloads will remain on premises.
Solutions that work
IBM has answered the call by delivering solutions for cloud migrations that are both easy and automated. Here are a few examples:

We have partnered with VMware to tackle the daunting task of extending enterprise investments to the cloud. As a result, 1,000 joint customers now have the ability to seamlessly run their workloads on the IBM Cloud as if it were part of their own, on-premises environment. Global brands, such as Marriott and Clarion, are extending their VMware environments to the IBM Cloud.
Today we announced the launch of IBM Cloud Object Storage, the industry’s first storage-as-a-service option for hybrid clouds. This will redefine how enterprises can store, manage and access their mounting volumes of digital information in the cloud with cross-regional security. Bitly has signed on as one of the solution’s first clients and will manage up to 500 terabytes of critical workloads on the service. With tools such as Data Connect and Aspera for high speed file transfer, getting data to the cloud has never been easier.
Our long-standing partnership with SAP has deepened and we are now co-locating resources in Germany and Silicon Valley. The companies are working on cognitive extensions, enhanced customer and user experiences and industry-specific functionality. These experiences will be available both on premises and in the cloud with SAP Business Suite 4 SAP HANA (SAP S/4HANA) software.
With the IBM Connect series of offerings, thousands of IBM WebSphere, MobileFirst, MQ, DB2, and mainframe clients are already connecting and extending their enterprise apps, transaction systems and data with the cloud. Developers can easily and quickly connect to and from the cloud to fully capitalize on existing skills and applications and innovate with new services such as Watson, blockchain and analytics.

We are proud of the work we’re doing with some of the world’s most recognizable brands across a variety of industries, including 1-800-Flowers, ShopDirect, Anthem, Fleetcor and Halliburton.
Business leaders around the globe recognize that cloud computing can effect digital transformation in their industries, and for many, a hybrid strategy will be the key to unlocking all of that potential. 
The post Cloud computing’s turning point: The innovation phase has begun appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

53 new things to look for in OpenStack Newton

The post 53 new things to look for in OpenStack Newton appeared first on Mirantis | The Pure Play OpenStack Company.
OpenStack Newton, the technology’s 14th release, shows just how far we’ve come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that’s a given, and we’re talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms – virtual machines, containers, and bare metal.
There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What’s New in OpenStack Newton webinar. Meanwhile, as we do with each release, let’s take a look at 53 things that are new in OpenStack Newton.
Compute (Nova)

Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.

Network (Neutron)

802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
The ability to create VMs without an IP address means you can launch a VM with no IP address and specify complex networking later as a separate process.
Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
No downtime API service upgrades

Storage (Cinder, Glance, Swift)
Cinder

Microversions let developers add new features that you can access without breaking the main version.
Rolling upgrades let you update to Newton without having to take down the entire cloud.
enabled_backends config option defines which backend types are available for volume creation.
Retype volumes from encrypted to not encrypted, and back again after creation.
Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first.
The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.

Glance

Glare, the Glance Artifact Repository, provides the ability to store more than just images.
A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift

Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)
Keystone

Simplified configuration setup
PCI-DSS support for password configuration options
Credentials encrypted at rest

Horizon

You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, LAUNCH_INSTANCE_DEFAULTS
Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)
Magnum

Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
The API service is now protected by SSL.
You can now use Kubernetes on bare metal.
Asynchronous cluster creation improves performance for complex operations.

Kolla

You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr

Use Neutron networking capabilities in containers.
Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)
Heat

Use DNS resolution and integration with an external DNS.
Access external resources using the external_id attribute.

Ceilometer

New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
Magnum support.

Fuel

Deploy Fuel without having to use an ISO.
Improved life cycle management user experience, including Infrastructure as Code.
Container-based deployment possibilities.

Murano

Use the new Application Development Framework to build more complex applications.
Enable users to deploy your application across multiple regions for better reliability and scalability.
Specify that when resources are no longer needed, they should be deallocated.

Ironic

You can now have multiple nova-compute services using Ironic without causing duplicate entries.
Multi-tenant networking makes it possible for more than one tenant to use Ironic without sharing network traffic.
Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
Add and manage applications via the Community App Catalog website.

Did we miss your favorite project or feature?  Let us know what new features you’re excited about in the comments.
Quelle: Mirantis

Why businesses shouldn’t settle on a storage solution

To date, the business community, including startups and entrepreneurs, has had only simple storage solutions to choose from on the cloud. Or they’ve had outdated, pricey software, hardware and appliance solutions from legacy storage providers.
In today’s business world, this no longer works. Not with IDC’s predicted data growth of 44 zettabytes by 2020, fueled by the increased use of cloud, mobile, analytics, social and even cognitive to drive digital transformation. Additionally, unstructured content (images, video, audio, documents, and so on) outnumbers structured content by a factor of four.
In this world, simple storage solutions fall short. Governments and clients are applying increased pressure to assure compliance and residency requirements for content and applications. Transparency and coverage are not always strengths of cloud solutions.
Businesses shouldn’t have to settle for a simple cloud storage option. That’s why IBM Cloud Object Storage offers flexibility, scalability and simplicity. Solutions can be deployed on premises and across the IBM Cloud with more than 45 data centers around the world. Users get full transparency and control.
That’s essential because business is intrinsically hybrid. Elements of hybrid business processes require that some applications and content run on premises for performance, compliance or simply colocation with compute resources. Other business processes are well supported with either a dedicated or shared object storage deployment on IBM Cloud. IBM Cloud Object Storage supports both Amazon S3 and OpenStack Swift across deployment models, so there’s a consistent technology platform to support your applications and initiatives.
Additionally, there’s a higher level of availability and security. IBM Cloud Object Storage takes data that lands on one region on the IBM Cloud, then slices, erasure-codes and disperses the slices across three regions using something called SecureSlice.
Why does that matter? Two reasons:

If security is compromised in a region, the full content will not be exposed.
If one region is offline, your applications continue to run without disruption and without you having to intervene.

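SecureSlice itself is proprietary, so as a loose illustration only, a toy XOR-parity scheme in Python shows the underlying idea of dispersal: no single slice reveals the whole object, and any one lost slice can be rebuilt from the remaining two:

```python
def disperse(data: bytes):
    """Split data into two half-slices plus an XOR parity slice."""
    half = (len(data) + 1) // 2
    a = data[:half]
    b = data[half:].ljust(half, b"\0")  # pad so both halves align
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def rebuild(survivor: bytes, parity: bytes) -> bytes:
    """Recover the missing half-slice from the surviving one plus parity."""
    return bytes(x ^ y for x, y in zip(survivor, parity))

a, b, parity = disperse(b"object-data")
assert rebuild(a, parity) == b   # region holding slice b is offline
assert rebuild(b, parity) == a   # region holding slice a is offline
print((a + rebuild(a, parity)).rstrip(b"\0"))
# b'object-data'
```

Production erasure codes (and SecureSlice's added encryption) are far more sophisticated, but the availability property is the same: losing one of the three regions loses nothing.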
The IBM Cloud Object Storage approach translates to significantly better economics: prices are more than 25 percent lower than those of other cloud storage providers*.
But the really exciting part goes beyond IBM Cloud Object Storage and layers on other IBM capabilities. Think of the exciting technology emanating from IBM Watson, IBM Bluemix and IBM Cloud Video Services. Cognitive will be essential as data grows from tera- to peta- to exa- to zettabytes, in the process taxing our ability to manage and utilize this growing mountain of content. There is even broader value if you look at the IBM Spectrum family, with transparent cloud tiering and beyond. It is truly an exciting tapestry that you can weave together to elevate your cloud.
Our ecosystem of partners delivers even more innovation and value. Our channel is broad, but to understand what’s possible, just look at what  the likes of Panzura, Nasuni, Mark III and CTERA are doing in bringing our portfolio, along with their expertise and IP, to deliver even greater value.
Learn more about IBM Cloud Object Storage and how it can be employed in your organization.
* Comparison is between IBM Cloud Object Storage Vault Cross-Region and S3 Infrequent Access bucket in AWS US East with Cross Region Replication to S3 Infrequent Access bucket in US West Oregon. Pricing is based on published IBM and Amazon US list prices as of October 13, 2016. Price includes storage capacity, API operations, internet data transfer charges, and cross-region data replication charges (s3 only). Pricing will vary depending on workload capacity, object size, data access patterns, and configuration. Pricing for this comparison based on the following workload assumptions:

Mixed footprint of 50 percent “small” and 50 percent “large” objects (by capacity). Average object sizes: small = 1GB, large = 5GB.
Monthly access pattern for all “small” and “large” objects: 10 percent read, 50 percent written, 5 percent listed. All objects assumed retained at least 30 days.
All object reads assumed outbound to internet (internet data transfer charges apply for all GETS).

The post Why businesses shouldn’t settle on a storage solution appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Auto-remediation: making an OpenStack cloud self-healing

The post Auto-remediation: making an OpenStack cloud self-healing appeared first on Mirantis | The Pure Play OpenStack Company.
The bigger the OpenStack cloud you have, the bigger the operational challenges you will face. Things break – daemons die, logs fill up the disk, nodes have hardware issues, rabbitmq clusters fall apart, databases get a split brain due to network outages… All of these problems require engineering time to create outage tickets, troubleshoot and fix the problem – not to mention writing the RCA and a runbook on how to fix the same problem in the future.
Some of these outages will never happen again if you make the proper long-term fix to the environment, but others will rear their heads again and again. Finding an automated way to handle those issues, either by preventing or fixing them, is crucial if you want to keep your environment stable and reliable.
That’s where auto-remediation kicks in.
What is Auto-Remediation?
Auto-Remediation, or Self-Healing, is when automation responds to alerts or events by executing actions that can prevent or fix the problem.
The simplest example of auto-remediation is cleaning up the log files of a service that has filled up the available disk space. (It happens to everybody. Admit it.) Imagine an automated action that is triggered by a monitoring system to clean the logs and prevent the service from crashing. In addition, it creates a ticket and sends a notification so the engineer can fix log rotation during business hours, and there is no need to do it in the middle of the night. Furthermore, the event-driven automation can be used for assisted troubleshooting, so when you get an alert it includes related logs, monitoring metrics/graphs, and so on.
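As a rough sketch of such an action (not StackStorm code; plain Python with made-up file names), a remediation that truncates the largest log files once a size threshold is exceeded, and reports what it did so a ticket can be filed, might look like:

```python
import os
import tempfile

def remediate_full_logs(log_dir, max_bytes):
    """If the logs in log_dir exceed max_bytes in total, truncate the
    largest files first until usage is back under the threshold, and
    return what was truncated so a ticket/notification can follow."""
    files = [(os.path.getsize(os.path.join(log_dir, name)), name)
             for name in os.listdir(log_dir)]
    total = sum(size for size, _ in files)
    truncated = []
    for size, name in sorted(files, reverse=True):
        if total <= max_bytes:
            break
        open(os.path.join(log_dir, name), "w").close()  # truncate in place
        total -= size
        truncated.append(name)
    return truncated

# Demo: one oversized log and one small one.
log_dir = tempfile.mkdtemp()
with open(os.path.join(log_dir, "nova-api.log"), "w") as f:
    f.write("x" * 1000)
with open(os.path.join(log_dir, "ceilometer.log"), "w") as f:
    f.write("x" * 10)

truncated = remediate_full_logs(log_dir, max_bytes=100)
print(truncated)
# ['nova-api.log']
```

In a real setup the monitoring system would trigger this action and the returned list would feed the ticket and notification step described above.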

This is what an incident resolution workflow should look like:

Auto-remediation tooling
Facebook, LinkedIn, Netflix, and other hyper-scale operators use event-driven automation and workflows, as described above. While looking for an open source solution, we found StackStorm, which was used by Netflix for the same purpose. Sometimes called IFTTT (If This, Then That) for ops, the StackStorm platform is built on the same principles as the famous Facebook FBAR (FaceBook AutoRemediation), with “infrastructure as code” and a scalable microservice architecture, and it’s supported by a solid and responsive team. (They are now part of Brocade, but the project is accelerating.) StackStorm uses OpenStack Mistral as a workflow engine, and offers a rich set of sensors and actions that are easy to build and extend.
The auto-remediation approach can easily be applied when operating an OpenStack cloud in order to improve reliability. And it’s a good thing, too, because OpenStack has many moving parts that can break. Event-driven automation can take care of a cloud while you sleep, handling not only basic operations such as restarting nova-api and cleaning ceilometer logs, but also complex actions such as rebuilding the rabbitmq cluster or fixing Galera replication.
Automation can also expedite incident resolution by “assisting” engineers with troubleshooting. For example, if monitoring detects that keystone has started to return 503 for every request, the on-call engineer can be provided with logs from every keystone node, memcached and DB state even before starting the terminal.
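In the IFTTT-for-ops spirit, the rule table behind such assisted troubleshooting can be pictured as a simple alert-to-diagnostics mapping; the alert names and actions below are invented for illustration, and a real StackStorm rule would bind sensor events to actual actions:

```python
# Hypothetical alert -> diagnostics table. Real StackStorm rules map
# sensor events to actions, but the shape of the idea is the same.
DIAGNOSTIC_RULES = {
    "keystone_http_503": [
        "collect keystone logs from every controller",
        "dump memcached stats",
        "capture database cluster state",
    ],
    "rabbitmq_queue_overflow": [
        "list overflowed queues",
        "collect rabbitmq cluster status",
    ],
}

def on_alert(alert_name):
    """Return the diagnostics to attach to the page sent to the on-call engineer."""
    return DIAGNOSTIC_RULES.get(alert_name, ["no automated diagnostics defined"])

print(on_alert("keystone_http_503")[0])
# collect keystone logs from every controller
```

The value is that the engineer opens the alert with the evidence already attached, instead of starting from a bare "keystone is down" page.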
In building our own self-healing OpenStack cloud, we started small. Our initial POC had just 3 simple automations: cleaning logs, restarting services and cleaning rabbitmq queues. We placed them on our 1,000-node OpenStack cluster, where they have run for 3 months, taking these 3 headaches off our operators. This experience showed us that we need to add more and more self-healing actions so our on-call engineers can sleep better at night.
Here is the short list of issues that can be auto-remediated:

Dead process
Lack of free disk space
Overflowed rabbitmq queues
Corrupted rabbitmq mnesia
Broken database replication
Node hardware failures (e.g. triggering VM evacuation)
Capacity issues (remediated by adding more hypervisors)

Where to see more
We’d love to give you a more detailed explanation of how we approached self-healing of an OpenStack cloud. If you’re at the OpenStack summit, we invite you to attend our talk on Thursday, October 27, 9:00am in Room 112, or if you are in San Jose, CA, come to the Auto-Remediation meetup on October 20th and hear us share the story there. You can also meet the StackStorm team and other operators who are making the vision of self-healing a reality.
Quelle: Mirantis