SUSE drops OpenStack Cloud
The post SUSE drops OpenStack Cloud appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis
During a recent webinar titled “Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning,” we received a lot of interest and many questions on the topic. The questions came in so quickly that we were not able to address them all, so as a follow-up we have put the answers to those questions into this blog post. The questions are listed below. This is part two in a series; check out our first blog post here.
The demo in the webinar showed a combination of CloudForms/Ansible Tower to accomplish lifecycle provisioning. Is CloudForms an alternative or must it be used together with Ansible? Can you elaborate on the integration?
CloudForms and Ansible Tower can operate independently of one another for provisioning purposes. CloudForms could theoretically handle end-to-end provisioning through the use of Ruby methods to accomplish external integrations. Likewise, Ansible Tower can do this solely via the use of playbooks implemented via Job Templates.
The demo showed how the two products can very easily integrate with one another to produce a robust provisioning process. Adding the two products together gives you a few advantages over simply using them independently:
Self-service provisioning with CloudForms – this allows us to hand out a nice, neat portal and consolidate our standard services into a catalog-like functionality (e.g. standard RHEL7, RHEL8, and Apache Web Servers) to allow users to order services on-demand and reduce processes for requesting new systems.
Environment visibility with CloudForms – this allows us to do things like intelligent hostname conventions, intelligent placement via capacity/tags, and tracking of resources through their entire lifecycle.
Leverage Ansible for Post-Provisioning Actions – Ansible code is written in modules that are heavily used in the community. These modules make it much easier to integrate with other devices to accomplish end-to-end provisioning, rather than having to write the custom, one-off Ruby code that CloudForms requires as a standalone entity.
In relation to the demo in the webinar, can we use shell scripts in place of Ruby to test our provisioning and retirement? What other alternatives are available?
Yes, a shell script is a perfectly acceptable means to initiate your provision process via Jenkins. In our case, we chose Ruby due to familiarity; however, you can use whatever language you prefer to test your provisioning. We could even use Ansible or Python if that were the preferred language. In fact, there are some examples of using Ansible playbooks to initiate testing in our work-in-progress GitHub project located at https://github.com/lynndixon/redhat-cicd-tools.
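As a concrete illustration of what such a test script might look like, here is a minimal Ruby sketch of our own (not code from the webinar): the lambdas are placeholders for real calls, such as a CloudForms REST API query or an SSH probe of the newly provisioned system.

```ruby
# Minimal sketch of a provisioning smoke test a Jenkins job could run.
# The lambdas are placeholders; a real version would query the provider
# inventory and probe the VM over the network.
def run_check(name, check)
  result = check.call
  puts(result ? "PASS: #{name}" : "FAIL: #{name}")
  result
end

checks = {
  "vm exists"     => -> { true },  # placeholder: query provider inventory
  "ssh reachable" => -> { true }   # placeholder: TCP probe on port 22
}

# Retirement checks would follow the same pattern after the provision
# checks; Jenkins fails the build when the script exits non-zero.
all_passed = checks.map { |name, check| run_check(name, check) }.all?
puts(all_passed ? "BUILD OK" : "BUILD FAILED")
```

A real version would end with `exit 1` on failure so the Jenkins build is marked failed.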
If there are other tools being used in the existing environment, how will the provisioning tools (e.g. CloudForms/Ansible Tower) get integrated for CI/CD?
For testing automated provisioning, CI/CD is nothing more than a wrapper around the provisioning process. When code changes are detected, via a commit or merge request in a git-based product, the provisioning process is initiated, including anything that the provisioning process will integrate with, e.g. adding a DNS entry or registration with Red Hat Satellite. In many cases, the in-line provisioning process will validate that the integration was successful and fail if it was unsuccessful, in which case the validation happens within the process itself. This is one, very simple approach to take.
In more complex examples, you could potentially unit test the individual integrations independent of the provisioning code to validate those changes individually.
Likewise, it is important to remember that we want to test the retirement process as well as provisioning. This ensures proper cleanup of our environment and validates our end-to-end automated lifecycle process.
Can I apply this process to provisioning virtual networks?
Yes. CI/CD, in its generic form, is simply a process for testing your software, represented in the form of code, prior to the final release to an operational state. It is most commonly used for the release of software development. As it pertains to this topic, “software” is the infrastructure-as-code provisioning process which could also include virtual network provisioning. Due to the high number of infrastructure-as-code environments that we see, we also recommend the use of CI/CD processes to ensure and validate that code prior to delivery into production.
Could we trigger an Ansible workflow from GitLab after committing a change to a playbook in GitLab?
Yes, this is possible. Realistically, there are many ways to implement a CI/CD pipeline, and the proper way depends upon the needs of your business. Our demo is just one example of the tools that could be used and how they could be used in conjunction with one another. For more information, you can look at https://about.gitlab.com/2019/07/01/using-ansible-and-gitlab-as-infrastructure-for-code/.
What do you do when a build fails? Do you rollback the code? Do you delete the VM?
If the build fails, there is a possibility that a VM will be left behind, since the retirement process might not complete due to the failure. In this case, you can write a CloudForms method to clean up the VM on failure. It would also be a good idea to roll back the code, since the code caused an issue in the environment.
Can we use Chef for our post provisioning configuration?
Yes. We have integrated both CloudForms directly (using Ruby) with Chef for post-provisioning configuration, and more recently Ansible Tower (using playbooks) to call Chef for post-provisioning configuration. The key here would be to separate duties between Ansible Tower and Chef, as there are now multiple configuration management tools in play here. In reference to the question as it pertains to our demo, it would likely be Ansible Tower handling any external integrations (e.g. Infoblox, Active Directory/LDAP, DNS, etc) including Chef, while Chef handles the post-provisioning system configuration.
What are the differences in using Jenkins vs. Ansible for Windows Provisioning use cases?
Although Jenkins does have plugins that allow simple integrations with Windows, Ansible is much more mature in this regard and offers many more modules you can use to integrate with Windows environments. Additionally, we are not advocating for Jenkins as an integral part of the actual provisioning process; rather, we are using Jenkins to initiate the provisioning process to validate its stability and reliability when it detects a code change to that process.
Is Red Hat Satellite required?
Red Hat Satellite is not required. However, a CDN that hosts RPM repositories is required whenever your Ansible Tower automation needs to install packages on your provisioned system. In our case, we typically use Red Hat Satellite as our RPM repository: we register systems with Satellite, then install and/or update the appropriate packages as needed to satisfy the needs of our automation. Nearly all instances of automation will require this in some shape or form.
The post Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning: Your questions answered (Part 2) appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift
Part 1 of this series described the manageiq-automate and manageiq-vmdb roles that can be used by a playbook to interact with a CloudForms workflow.
This article will describe how to specify the inventory hosts on which to run an embedded Ansible playbook, and how to pass parameters into Ansible playbook services and methods.
Inventory Hosts
Embedded Ansible does not support the concept of inventory groups, and so a list of IPv4 addresses or resolvable hostnames must be passed to the playbook at run-time.
For playbook services a list of default target hosts can be specified when the service is created, and optionally overridden at order-time using the Hosts service dialog element, as follows:
For playbook methods one or more target hosts can be specified in the Hosts dialog when the method is created. This can be a list of IP addresses or hostnames, or a substitution string, for example:
The substitution string enables the target hosts value to be dynamically translated at run-time and passed to the playbook, so enhancing the flexibility of the playbook method.
Ansible Playbook Input Parameters
To benefit from reusability and flexibility, playbooks are often written to work with extra_var variables passed as input parameters. Embedded Ansible playbook services and playbook methods handle input parameters slightly differently.
Playbook Service Input Parameters
Input parameters for playbook services are defined in the Variables & Default Values section of the service definition WebUI page, as follows:
These variables and their string values are passed into the playbook as extra_vars when the playbook is launched.
The default values can optionally be overridden from a service dialog when the service is ordered. Any of the service dialog’s elements that are named with the prefix “param_” will be passed as extra_vars to the playbook (with the string “param_” removed).
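The prefix handling can be illustrated with a small Ruby sketch of our own (mirroring the behavior described above, not the actual CloudForms source): elements named with “param_” are passed through with the prefix stripped, and everything else is ignored.

```ruby
# Sketch of how "param_"-prefixed service dialog elements become
# extra_vars with the prefix removed (illustration only, not
# CloudForms source code).
dialog_values = {
  "param_package" => "mlocate",
  "hosts"         => "infra1.cloud.uk.bit63.com",  # no prefix: not passed
  "param_version" => "latest"
}

extra_vars = dialog_values.each_with_object({}) do |(name, value), vars|
  vars[name.sub(/\Aparam_/, "")] = value if name.start_with?("param_")
end
# extra_vars => {"package"=>"mlocate", "version"=>"latest"}
```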
If a playbook service is ordered programmatically from Ruby, then input parameters can be passed via the options hash to the create_service_provision_request call, for example:

CREDENTIAL_CLASS = "ManageIQ_Providers_EmbeddedAnsible_AutomationManager_MachineCredential"
TEMPLATE_CLASS = "ServiceTemplate"

service_template = $evm.vmdb(TEMPLATE_CLASS).where(:name => 'Install a Package').first
credential = $evm.vmdb(CREDENTIAL_CLASS).where(:name => 'Root Password').first

options = {
  "credential"    => credential.id,
  "hosts"         => "infra1.cloud.uk.bit63.com",
  "param_package" => "mlocate"
}

$evm.execute('create_service_provision_request', service_template, options)
Playbook Method Input Parameters (aka Method Parameters)
The Automate Explorer allows for input parameters (also called method parameters) to be defined when an Ansible playbook method is created. The parameters can be any of the standard data types such as string, boolean, integer, password, etc., for example:
Input parameters are passed to the playbook as extra_vars, so can be referred to in the playbook just as any other variable. As an example the first input parameter in this screenshot can be accessed using the “{{ ipam_url }}” syntax.
As can be seen from the ipam_user parameter name, the value of an input parameter for an Ansible playbook method can be a dynamic variable read from a substitution string at run-time (this substitution is not available for a playbook service).
In this example a prior Ruby method in the workflow would have saved the username into $evm.root[‘ipam_user’].
Sidebar:
The automation engine’s substitution syntax is ${object#attribute_name} where object can be “/” for the root object, or “” (or “.”) for the current object.
For example a substitution string of ${/#dialog_vm_name} would be given the value of $evm.root[‘dialog_vm_name’] at run-time. A substitution string of ${#username} would be given the value of $evm.object[‘username’] at run-time.
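To make the two rules concrete, here is a toy resolver of our own for this substitution syntax (an illustration only, not the automation engine's actual implementation):

```ruby
# Toy resolver for ${object#attribute_name}: "/" reads from the root
# object, anything else reads from the current object. Illustration
# only, not the automation engine's implementation.
def resolve(text, root, current)
  text.gsub(/\$\{([^#}]*)#([^}]+)\}/) do
    source = Regexp.last_match(1) == "/" ? root : current
    source[Regexp.last_match(2)].to_s
  end
end

root    = { "dialog_vm_name" => "webvm01" }   # stands in for $evm.root
current = { "username" => "admin" }           # stands in for $evm.object

resolve("${/#dialog_vm_name}", root, current)  # => "webvm01"
resolve("${#username}", root, current)         # => "admin"
```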
Querying Input Parameters
There are two functions in the manageiq-automate role that can be used to list or query whether an input or method parameter exists. These are:
get_method_parameters
method_parameter_exists
Their use is illustrated as follows:
get_method_parameters
The get_method_parameters function reads the full list of input parameters with their values, as illustrated below:
- name: Get the full list of method parameters (get_method_parameters)
  manageiq_automate:
    workspace: "{{ workspace }}"
    get_method_parameters: yes
  register: get_method_parameters

- debug: msg="Result:{{ get_method_parameters.value }}"
The output is as follows:
TASK [Get the full list of method parameters (get_method_parameters)] ***************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Result:{u'ipam_user': u'ipam_admin', u'default_passwd': u'password::********', u'ipam_url': u'https://ipam.company.org', u'manageiq_validate_certs': False}"
}
method_parameter_exists
The method_parameter_exists function checks if a method parameter with a given name exists, for example:
- name: Check if a method parameter called 'ipam_url' exists (method_parameter_exists)
  manageiq_automate:
    workspace: "{{ workspace }}"
    method_parameter_exists:
      parameter: "ipam_url"
  register: method_parameter_exists

- debug: msg="Result:{{ method_parameter_exists.value }}"
The output is as follows:
TASK [Check if a method parameter called 'ipam_url' exists (method_parameter_exists)] ***
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Result:True"
}
It should be noted that the return from each of these tasks (stored by the register keyword) is a hash, and the value of the return is referenced using the value key of this hash, for example method_parameter_exists.value.
Reading Service Dialog Values as Input Parameters
It is often the case that the values input into a service dialog should be passed to an Ansible playbook method somewhere in the workflow. A typical example is when calling an Ansible playbook method from the VM Provision state machine, where the VM provision has been initiated from a service catalog.
The service dialog entries are stored in the service request object’s options hash :dialog key, the value of which is itself a hash of dialog element name/value pairs.
From the VM Provision state machine this is accessible from $evm.root[‘miq_provision’].miq_provision_request.options, for example:
$evm.root['miq_provision'].miq_provision_request.options[:dialog] = {
  "dialog_service_name"                     => "New Engineering VM",
  "dialog_vm_name"                          => "pemcg-eng-03",
  "dialog_option_0_cores_per_socket"        => 2,
  "dialog_option_0_vm_memory"               => 2048,
  "dialog_option_0_hostname"                => "pemcg-eng-03.lon.redhat.com",
  "Array::dialog_tag_0_department"          => "Classification::1000000000046",
  "password::dialog_option_0_root_password" => "********"
}
To inject the cores_per_socket value from the service dialog as an input parameter using substitution syntax would therefore require the following input parameter value:
${/#miq_provision.miq_provision_request.get_option(:dialog).fetch(dialog_option_0_cores_per_socket)}
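A toy stand-in makes it clear what that fetch resolves to (the real object is a CloudForms miq_provision_request, not a plain hash; this is only an illustration of the options hash structure shown above):

```ruby
# Toy stand-in for the provision request's options hash, showing what
# get_option(:dialog).fetch(...) in the substitution string returns.
options = {
  :dialog => {
    "dialog_option_0_cores_per_socket" => 2,
    "dialog_vm_name"                   => "pemcg-eng-03"
  }
}

cores = options[:dialog].fetch("dialog_option_0_cores_per_socket")
# cores => 2, passed to the playbook as an extra_var
```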
Encrypted Input Parameters
The parameters passed to a playbook method are often encrypted, either by definition as a “password” data type in the list of input parameters, or by being encrypted elsewhere in a workflow.
Input Parameter ‘password’ Data Type
An input parameter can be defined as being of type “password”, for example:
A parameter of this type is decrypted automatically and is available to the playbook as the named extra variable, for example “{{ scrambled_this }}”. It should be noted that an input parameter that has the text string “password” anywhere in the name will not be passed as a method parameter, and so will not appear in the list of method parameters returned by the get_method_parameters function. The variable will however be available as an extra_var with the password value decrypted correctly.
Password Defined Earlier in Workflow
A variable encrypted earlier in the workflow (for example when input into a service dialog) can generally be identified as having a name prefixed by the string “password::”. This signifies that the object is of type MiqPassword.
A password object of this type can be used as an input parameter if it is passed as a string data type, also prefixed by the string “password::”. The encrypted value will be automatically decrypted and usable by the playbook as the named extra variable.
For example to inject the root_password value from the previous service dialog using substitution syntax, an input parameter should be defined with a string data type and the following input parameter value:
password::${/#miq_provision.miq_provision_request.get_option(:dialog).fetch(password::dialog_option_0_root_password)}
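To make the naming convention concrete, here is a small sketch of our own (not CloudForms source) of how a “password::” prefix can be recognized and stripped:

```ruby
# Sketch of recognizing the "password::" prefix that marks an encrypted
# MiqPassword value (illustration only, not CloudForms source code).
PASSWORD_PREFIX = "password::"

def encrypted_value?(name)
  name.start_with?(PASSWORD_PREFIX)
end

def plain_name(name)
  name.sub(PASSWORD_PREFIX, "")
end

encrypted_value?("password::dialog_option_0_root_password")  # => true
plain_name("password::dialog_option_0_root_password")
# => "dialog_option_0_root_password"
```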
A screenshot of these input parameters for illustration is as follows:
Summary
In part 2 of this series we have looked at how inventory hosts are defined for embedded Ansible playbook methods and services. We have also seen how input parameters can be passed to playbook services and methods and decrypted if necessary, and how to retrieve and use service dialog element values.
Each of these techniques is usable for standalone playbooks, however the real benefit of embedded Ansible comes when using playbook methods as part of a larger automation workflow. Part 3 will examine how a playbook can interact with other components in an automation workflow.
Quelle: CloudForms
During a recent webinar titled “Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning,” we received a lot of interest and many questions on the topic. The questions came in so quickly that we were not able to address them all. As a follow-up to our webinar, we have put the answers to those questions into this blog post. The questions are listed below. As always, if your organization needs assistance with services that pertain to this webinar, please reach out and contact your Red Hat sales team.
NOTE: The questions below have been paraphrased and ordered for flow from the list in the webinar. Additionally, similar questions have been merged together.
Is the CD in CI/CD Continuous Delivery or Continuous Deployment per se? Or is it actually both?
As CI/CD relates to automated provisioning processes, I believe the answer will always be an “it depends”. It depends on a lot of factors, mainly what the business needs of an organization are. The business needs should drive the technology, and in this case would it make the most sense to have a continuous delivery process which implements manual intervention via code reviews? Or would it make the most sense to deploy to an operational environment without any human intervention whatsoever? That being said, when we’ve implemented this at other customers, we generally see it being one or the other, but not usually both at the same time. In the case of the demo, we used a continuous delivery methodology to show how we are delivering code following manual/automated testing and making it ready to be delivered into an operational environment.
Our developers feel they leverage a lot of products and don’t end up writing a significant amount of code. Because of this, they don’t feel they would benefit from unit testing. Any advice?
Methodologies such as unit testing and CI/CD are meant to improve the reliability of code. If you are writing any amount of code that needs to be reliable, you should consider writing unit tests and putting together a CI/CD pipeline. Over time it will become more and more valuable. Code bases almost always grow in size over time. The sooner you start writing your unit tests, the better. Specifically, unit tests help with the maintenance of the code. They help to ensure that the functionality of your methods is maintained when code changes are made. This makes it easier for other developers to work with your code and ensures the functionality of the program as a whole. It is much less painful to write the unit tests as you go rather than go back and try to create them months, or even years, later.
What are the alternatives to using Jenkins as a CI/CD automation platform?
There are several standard alternatives that we’ve seen including Travis CI (for GitHub) and GitLab CI (for GitLab). There are many more available as well, some which are publicly hosted Software-as-a-Service (SaaS) offerings and some which rely on managed infrastructure provided by your organization, such as Jenkins.
Ultimately, our choice to use Jenkins was based on a few factors:
Experience and commonality of the product: we have seen it used in many different deployments as it seems to be a very common industry standard.
Dedicated infrastructure for testing: because we are testing our provisioning process, we need a separate, dedicated environment similar to production in which to test our code. This would have been very difficult to find in a SaaS type of offering, so it made sense to host our own on-premises CI tool for this purpose.
Could I use Ansible Tower instead of Jenkins as the CI/CD automation platform?
Yes. There are various ways to go about using Ansible Tower instead of Jenkins. There is a good blog post which covers this topic at https://keithtenzer.com/2019/06/24/ci-cd-with-ansible-tower-and-github/. I think the key item to be aware of is ensuring that any part of the setup covered in the blog post is communicated to Red Hat support, so you can ensure that any configuration applied to your Ansible Tower infrastructure will not void your support contract with Red Hat.
Could I use AWX or Ansible Engine instead of Tower?
Yes. AWX and Tower provide you an API that Ansible Engine (previously called Ansible Core) does not. Tower and Engine are backed by Red Hat support, whereas AWX is not, as it is the upstream version of Tower and is only community supported.
Is Jenkins open source?
Yes. Jenkins is a currently maintained, open source project managed on GitHub at https://github.com/jenkinsci/jenkins.
What editor did you use for your code in the demo? What are the options that are available for me to use for development of my codebase?
The editor being used in the demo was Visual Studio Code. Visual Studio Code is an open source editor from Microsoft that is available for Linux, Mac, and Windows. Any editor which can integrate with Git, including vim, is also a perfectly supportable editor. We are only using the editor to create branches, edit the code, and push to the Git repository. Other examples that we commonly use are Atom, Rubymine, vim, CodeReady Workspaces and Sublime.
In a continuous delivery model, what are the touchpoints where the testing (automated and manual) is done as code progresses to production?
In this example, the automated testing occurs whenever there is a merge. The merge triggers a build in Jenkins which performs the automated testing. The manual testing occurred in our QA branch during User Acceptance Testing. Once the code has passed the initial phases of testing and validation, you are ready to let users test out the feature. Jenkins is a very flexible tool and offers other triggers in addition to a merge. For example, you could also trigger automation based off of a commit or a tag as well. The process shown above might not work for all organizations; rather it was to display a small sample size, which we commonly see, and what could be done. Because of this, it will take proper planning and a well thought out process to figure out what’s right for your business goals.
Is there a specific software defined lifecycle environment process that I need to be using? Do I need to use three distinct environments (e.g. development, quality assurance, and production) as shown in the webinar?
The diagram referenced in the webinar was just an example. You may not need all of the environments shown. At a minimum, you need two environments so that you can keep your production code isolated. Production should only be for working code that has been tested.
What code review is done prior to delivery to production?
In our previous webinar demo, we used a code review process that was implemented upon the initiation of a merge request. In this case, it would be a manual review of any of the changes that were being implemented prior to initiating an automated build via webhook to Jenkins. Organizations which have more formalized and mature testing processes might not necessarily need a manual review in place and may implement a continuous deployment model instead, where the code is deployed to production as soon as all automated tests pass.
Could you use JIRA or Trello at any point in the process?
We did not use JIRA or Trello in our setup, however it should be possible to configure JIRA and Trello to trigger Jenkins builds with appropriate plugins or integrations on their respective products enabled. We also believe that JIRA or Trello could be a key in enforcing agile concepts such as creating and managing short sprints for code changes for infrastructure-as-code processes.
The post Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning: Your questions answered (Part 1) appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift
The post SUSE OpenStack is no more — but Don’t Panic appeared first on Mirantis | Pure Play Open Cloud.
“We are working closely with all affected customers and partners to support them through their remaining subscription period and as they transition to alternatives.”
There’s a phrase from a supplier you don’t want to wake up to.
But that’s what SUSE OpenStack — and by extension, HP Enterprise OpenStack, also owned by SUSE — customers heard this morning as the European company announced that it would be winding down its participation in OpenStack. “SUSE is focusing and increasing our strategic investments on the application delivery market and its opportunities in order to align with technology trends in the industry and, most importantly, with our customers’ needs.”
SUSE has long been a vital, resourceful ally in the OpenStack community and here at Mirantis we will miss their contributions and leadership. But what do you do if your OpenStack supplier suddenly leaves the business?
Fortunately, OpenStack is still a thriving community, and if you need commercial support, there are other alternatives.
Picking up the pieces
At Mirantis we’ve been in the OpenStack business since it was hot — so hot, in fact, that there were many more companies and products than the market would ultimately be able to support. So the steady drip of companies leaving the field and products getting shut down isn’t a surprise; it’s an inevitable result of the natural consolidation that comes as a technology matures.
Of course, that’s no consolation when you’re abruptly left with a product that’s no longer supported. Fortunately, because we have been there since the beginning, our organizational DNA is built on services, and we have a deep bench of upstream contributors and a support organization that can work with virtually any OpenStack on virtually any operating system.
In fact, that’s just what we’ve done for more than a dozen large organizations who have suddenly found themselves with abandoned distributions. Because we’ve always focused on vendor neutrality, we have supported these customers on their existing distributions as we’ve helped them evolve their stack to a roadmap-based solution. In other words, they didn’t have to suddenly pack up and move to a new distro.
But it does point to the importance of avoiding vendor lock-in and other traps.
Vendor Lock-In
For years, Mirantis has been beating the drum against vendor lock-in, and this is a great example of why. If you chose the SUSE operating system for some reason other than OpenStack — or even if you are just married to SUSE via technological inertia — you’re not going to want to switch to Red Hat OpenStack, with its reliance on Red Hat Enterprise Linux.
Instead, you want to make sure that whatever architecture you’re implementing, if you need to move you can. Now, that doesn’t mean that you might not want to take advantage of vendor tools for deploying or managing OpenStack, but at the end of the day you want to be certain that if you need to change providers, your clusters themselves will be unaffected.
So if you’re making the switch now, you’ll want to try and do it in a way in which your clusters see the least amount of disruption, and that means staying on SUSE hosts.
Inflexible architecture
The final “gotcha” is to make sure that your infrastructure and architecture are flexible. For example, you probably guessed that SUSE is pivoting to Kubernetes — and you’re right. This kind of change and innovation is inevitable, but it doesn’t have to be a problem.
OpenStack as a technology isn’t going away; it’s just become the “boring” infrastructure on which the cool new things like Kubernetes are deployed. The important thing is to make sure that your infrastructure can do more than just OpenStack, and even more than just OpenStack and Kubernetes. You want to make sure that you’re ready for whatever comes next, so that you’re never caught in this situation again.
Come talk to us
As you already know if you’re reading this, Mirantis has been in the OpenStack business pretty much since the beginning, and we’re not going anywhere. We serve hundreds of customers, with more than 25,000 physical nodes running on our platform. So if you value the business benefits of OpenStack, we’re ready to help support your transition.
But we’ve seen our share of pivots, which means we know what you’re going through. We’re here to help you through the stomach-clenching uncertainty an announcement like today’s can bring.
Give us a call.
The post SUSE OpenStack is no more — but Don’t Panic appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis
Last month, at OpenShift Commons Gathering Milan, Paolo Gigante and Pierluigi Sforza of Poste Italiane, showed the audience how they built a microservices based banking architecture using Apache Kafka and OpenShift. Their slides are available here. For more great in-person events like this, register for the next Commons Gathering near you! San Francisco is coming up before the end of the month, and will focus on AI/ML.
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we enable the success of customers, users, partners and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
Join us in the upcoming Commons Gatherings!
The OpenShift Commons Gatherings continue – please join us next time at:
October 28, 2019 in San Francisco, California – event is co-located with ODSC/West
November 18, 2019 in San Diego, California – event is co-located with Kubecon/NA
The post Open Banking with Microservices Architectures and Apache Kafka on OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift
The excitement has been building and Red Hat OpenShift Container Storage 4 is almost here: it is currently heading into a High Touch Beta program. Apply today if you would like to be considered for participation in this program. Red Hat works closely with customers through High Touch Beta programs to gather product feedback and requirements in a true collaborative approach.
Red Hat is focused on delivering a storage product that rounds out the Red Hat OpenShift Container Platform ecosystem through enterprise-ready data services for the hybrid cloud. Red Hat OpenShift Container Storage 4 makes it easy for applications — traditional as well as emerging workloads — to consume storage resources, enabling developers to focus on innovation and reducing Time to Market.
Take the OpenShift Storage Survey to give voice to your persistent storage requirements and come join us this month in anticipation of the Red Hat OpenShift Container Storage 4 General Availability release:
Red Hat Forum 2019
Ceph Days
OpenShift Commons
The post Red Hat OpenShift Container Storage 4: Driving Innovation through Collaboration appeared first on Red Hat OpenShift Blog.
Source: OpenShift
In this blog series, we will cover how to integrate Infoblox IPAM with Red Hat CloudForms using Ansible playbooks. Before we start, note that we already have a detailed blog on CloudForms and Infoblox integration, written by John Hardy[1], which explains how to integrate the two using Ruby scripts.
So if a detailed blog on this already exists, what is new here?
What is new is the power of Ansible, which makes the integration simpler and more effective. Not only is Ansible compatible with newer versions of CloudForms, but the addition of Infoblox modules to Ansible makes the whole process a cakewalk.
Use Case
Many organisations use IPAM software like Infoblox, which provides IP address management and services such as DHCP and DNS for networks of any size. So instead of using the out-of-the-box IP management service offered by CloudForms, we can delegate this service to Infoblox and make it part of the provisioning workflow with the help of Ansible playbooks.
After the integration, we should be able to perform the following two functions:
Get an IP address from Infoblox and assign it to the VM being provisioned.
Update the DNS record in Infoblox and assign the same host details to the provisioned VM.
Implementing the Use Case:
To start the implementation, let's divide the work into three sections:
Installing and configuring the Infoblox client on the CloudForms appliance.
Creating the playbooks and making them part of the provisioning workflow.
Executing it via Service Catalog.
Installing and configuring Infoblox client in CloudForms Appliance
In order to install and configure the Infoblox client, make sure you have Ansible 2.5 or later installed on your CloudForms appliance.
If you are implementing this solution on CloudForms 4.7, you don't need to worry about the Ansible version, as it already ships with Ansible 2.7.
The next step is to use pip to install the infoblox-client package:
# pip install infoblox-client
You also need to set the Python path on your CloudForms appliance:
export PYTHONPATH=/opt/rh/python27/root/usr/lib/python2.7/site-packages
Lastly, make sure you put the details of the Infoblox server (hostname and credentials) in your CloudForms appliance so they can be consumed by playbooks as vars_files. For example, I have added the Infoblox server details in a /vars/host.yaml file on my CloudForms appliance:
---
nios_provider:
  host: <Infoblox server IP/hostname>
  username: <username>
  password: <password>
In the next post we will cover the playbooks needed for this.
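To give a rough preview of what the next post will cover, a playbook built on the NIOS integration that ships with Ansible 2.5+ might look something like the sketch below. The network CIDR, DNS domain, and the vm_name variable are illustrative assumptions, and parameter shapes may differ between Ansible versions:

```yaml
# Illustrative sketch only; the network, domain, and vm_name are assumptions.
- hosts: localhost
  connection: local
  vars_files:
    - /vars/host.yaml          # provides nios_provider (host/username/password)
  tasks:
    - name: Reserve the next available IP from an Infoblox network
      set_fact:
        vm_ip: "{{ lookup('nios_next_ip', '10.0.0.0/24', provider=nios_provider) }}"

    - name: Create a DNS host record for the new VM
      nios_host_record:
        name: "{{ vm_name }}.example.com"
        ipv4:
          - address: "{{ vm_ip }}"
        state: present
        provider: "{{ nios_provider }}"
```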
Source: CloudForms
Introduction
On the wave of the successes attained with stateless workloads in multi-data center active/active deployments and uptime, expectations are rising for stateful workloads too.
In particular, it is becoming more and more common for IT organizations to want to deploy their stateful workloads with multi-data center, active/active, always-available and always-consistent configurations.
In this blog post we will see that if the requirement is to deploy a stateful workload so that it is available and consistent even during a disaster scenario, then it is necessary to have three data centers.
Conventionally, we define as stateful workloads all those pieces of software or applications that in some way manage a state. Typically, state is managed in storage and middleware software such as software-defined storage, databases, message queue and stream systems, key-value stores, caches, and so on. This definition is similar to the one adopted by the Storage Landscape whitepaper published by the CNCF Storage SIG.
High Availability and Failure
High Availability (HA) is a property of a system that allows it to continue performing normally in the presence of failures. Normally, HA is understood as the ability to withstand exactly one failure. The ability to withstand more than one failure, say two, can be written as HA-2; similarly, three failures can be written as HA-3.
The concept of Availability for a system has its roots in mechanics and electronics engineering (in those disciplines, it is known as reliability) and the science and math behind it is consolidated at this point. Given the Mean Time Between Failures (MTBF) of each individual component, one can calculate the MTBF of the entire system by applying a set of formulas. Redundancy of each component is the key to achieving HA.
The foundational idea of HA is that the Mean Time to Repair (MTTR) a failure must be much shorter than the MTBF (MTTR << MTBF), allowing something or someone to repair the broken component before another component breaks (two broken components would imply a degraded system for HA-1).
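As a rough numeric illustration of these formulas (the MTBF and MTTR figures below are invented, not taken from any real system), steady-state availability follows directly from MTBF and MTTR, and redundancy compounds it:

```python
# Availability of a single component from its MTBF and MTTR,
# and of a redundant group of replicas (illustrative figures only).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_availability(single: float, replicas: int) -> float:
    """The system is down only if every replica is down (independent failures)."""
    return 1 - (1 - single) ** replicas

a = availability(mtbf_hours=10_000, mttr_hours=4)
pair = redundant_availability(a, replicas=2)
print(f"single component: {a:.6f}, redundant pair: {pair:.9f}")
```

The point of the MTTR << MTBF rule shows up in the first function: shrinking MTTR is as effective as growing MTBF for pushing availability toward 1.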
It is often overlooked that something needs to promptly notify a system administrator that the system has a broken component (by the very definition of HA, one should not be able to notice this from the system's normal outputs).
This means we need good monitoring and alerting systems. Without them, an HA system would just keep working until the second failure occurs (~2xMTBF) and then still be broken, defeating the initial purpose of HA. Unfortunately, a significant number of organizations still put effort and resources into designing and building HA systems without having solid monitoring systems.
With regards to stateful workloads, HA implies that one needs multiple instances (at least two) of each workload and that the state that these instances manage needs to be replicated between them. Usually, some kind of heartbeat ensures the peers are alive and some kind of gossip protocol ensures state is synchronized and consistent across all of them.
If, for example, one builds a stateful system with two instances and instance A suddenly cannot contact instance B, instance A will have to make a decision on whether to keep working or not. Instance A cannot know whether instance B is down or healthy-but-unreachable. It could also be that instance A is the one that is unreachable. In practice, in a distributed system, failures are indistinguishable from network partitioning where the presumably failed component has become unreachable due to a network failure.
If a piece of software is designed to keep working when its peers are unreachable, its state may become inconsistent. On the other hand, if a piece of software is designed to stop when its peers are unreachable, then it will maintain consistency, but will not be available.
The CAP Theorem
This situation is formalized in the CAP theorem. Simply put, the CAP theorem states that a stateful workload in case of network partitioning (P) can choose between consistency (C) (as in the integrity of the data) or availability (A), but cannot have both.
During a network partition, the stateful workload will need to work in a degraded state: normally either read-only, if the application chooses consistency, or inconsistent, if the application chooses availability.
A corollary of the CAP theorem called PACELC (Partition? Availability or Consistency, Else? Latency or Consistency) states that under normal conditions (absence of a network partition), one needs to choose between latency (L) or consistency (C). That is to say that under normal circumstances one can optimize for either speed or consistency of the data, but not for both.
Sometimes consistency is not a strict requirement. Eventually consistent databases (mostly belonging to the NoSQL class of databases) are an example of this type of software.
Note that eventual consistency does not imply eventual correctness; it simply means that eventually, all instances will present the same state. Usually, a conflict resolution algorithm decides which state to keep if two instances have different states. While the state was inconsistent, the application using the stateful workload may have made wrong decisions that altered the state to also be incorrect, and these situations cannot be reconciled by the conflict resolution algorithm.
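A toy sketch of why a last-write-wins style of conflict resolution can silently discard a concurrent update (illustrative only; real databases use more elaborate schemes such as vector clocks):

```python
# Toy last-write-wins (LWW) reconciliation between two replicas.
# Each replica stores {key: (timestamp, value)}; on merge, the highest
# timestamp wins. The losing write is discarded, which is why eventual
# consistency does not imply the surviving state is "correct".

def lww_merge(a: dict, b: dict) -> dict:
    merged = dict(a)
    for key, (ts, value) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# During a partition, both replicas accept an update to the same account:
replica_a = {"balance": (100, 50)}   # at t=100, balance set to 50
replica_b = {"balance": (101, 70)}   # at t=101, balance set to 70

print(lww_merge(replica_a, replica_b))  # {'balance': (101, 70)} -- replica_a's update is lost
```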
The following list illustrates some stateful workloads and their choices in terms of the PACELC theorem:
DynamoDB: P+A (on network partitioning, chooses availability), E+L (else, latency)
Cassandra: P+A, E+L
MySQL Cluster: P+C, E+C
MongoDB: P+C, E+C
Source: Wikipedia; see the link for more examples.
Most SQL databases are absent from this table, but you can assume they choose consistency over the other alternatives, as this is required by the ACID properties of an RDBMS.
Most stateful workloads (and in particular databases) also employ partitions or shards to increase throughput.
Partitions
Partitions, or shards, are a way to increase the general throughput of the workload. Usually, the state space is broken into two or more partitions based on a hashing algorithm. The client or a proxy decides where to send requests based on the computed hash. This dramatically increases horizontal scalability, whereas historically for RDBMS, vertical scaling was often the only practical approach.
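A minimal sketch of this hash-based routing (illustrative; real systems typically use consistent hashing so that changing the partition count moves less data):

```python
# Hash-based partition routing: the client (or a proxy) computes a stable
# hash of the key and sends the request to the owning shard.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a key to a shard index deterministically."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

PARTITIONS = 4
for key in ["user:1001", "user:1002", "order:77"]:
    print(key, "-> shard", partition_for(key, PARTITIONS))
```

Because the hash is deterministic, every client routes a given key to the same shard without any coordination.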
From an availability perspective, partitions do not have a significant impact. Each partition is an island, and the same availability considerations that apply to a non-partitioned database also apply to each individual partition. Stateful workloads can have replicas of partitions which sync their state to increase the availability of each individual partition.
Partitions, however, while allowing for horizontal scalability, introduce an additional complication which is the need to maintain consistency between them. If a transaction involves multiple partitions, there needs to be a way to make sure that all of the involved partitions are coordinated into participating in their portion of the transaction.
Partitions are widely adopted by modern databases and while not contributing to the availability discussion, need to be taken into consideration, especially with regard to the multi-partition consistency issue.
So, with replicas and partitions running in separate processes and needing some level of synchronization, how can there be a reassurance that the workload is highly-available and the state remains consistent?
Consensus Protocols
A consensus protocol allows a set of peers to agree on a shared state. An ingredient of consensus protocols is a leader election process, which, based on the strict majority of the members of the stateful workload cluster, designates a leader that is the ultimate and undiscussed owner of the state.
As long as the strict majority of the elements of the cluster can talk to each other, the cluster can continue to operate in a non-degraded state (without violating the CAP theorem). This results in a stateful system that is both consistent and available, while sustaining a number of failures.
In a cluster of two, if a member is lost, the remaining member does not represent the strict majority. In a cluster of three, if a member is lost, the two remaining members do represent the strict majority. As a consequence, for a stateful workload that implements a leader election protocol there must be at least three nodes to preserve availability and consistency in the presence of one failure (HA-1).
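The majority arithmetic above can be sketched in a few lines of illustrative Python:

```python
# Strict-majority quorum arithmetic behind leader election:
# a cluster of n members stays available while a majority (n//2 + 1)
# can still reach each other, i.e. it tolerates (n-1)//2 failures.

def quorum_size(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return (n - 1) // 2

for n in (2, 3, 5):
    print(f"cluster of {n}: quorum={quorum_size(n)}, tolerates {tolerated_failures(n)} failure(s)")
# cluster of 2: quorum=2, tolerates 0 failure(s)
# cluster of 3: quorum=2, tolerates 1 failure(s)
# cluster of 5: quorum=3, tolerates 2 failure(s)
```

Note that a two-node cluster tolerates zero failures, which is exactly why three nodes is the minimum for HA-1 with consistency.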
As of today, there are two main generally accepted consensus algorithms based on leader election:
Paxos: generally considered very efficient, but arcane to understand and complex to deal with in real-life corner cases.
Raft: generally considered easy to understand and to implement in real-life scenarios, even though it is less efficient.
Most of the new stateful software tends to be based on Raft as it is simpler to implement.
Leader election-based consensus protocols work well when a set of peers need to agree on the same set of operations. But when the different participants need to perform different operations, and they just need to agree on committing to those operations or not, a different family of consensus protocol is used: the two-phase commit (2PC) protocol and derivatives.
In the two-phase commit protocol, a coordinator orchestrates a set of resources into either all commit or abort a transaction in which each participant needs to perform a different task. The two-phase commit protocol has several limitations amongst which:
All participants must be available for it to succeed (we needed only the strict majority for the leader election-based protocol)
The coordinator is a single point of failure and if it fails in the middle of a transaction, the participant may end up never completing (either committing or aborting).
The two-phase commit protocol is suitable to coordinate transactions that involve multiple partitions.
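A minimal sketch of the protocol (illustrative Python, not any real coordinator; it also ignores the coordinator-failure problem described above):

```python
# Minimal two-phase commit coordinator sketch. Phase 1 asks every
# participant to prepare; the transaction commits only if ALL vote yes
# (contrast with leader election, which needs only a strict majority).

class Participant:
    def __init__(self, name: str, will_prepare: bool = True):
        self.name, self.will_prepare = name, will_prepare
        self.state = "init"

    def prepare(self) -> bool:
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants) -> str:
    if all(p.prepare() for p in participants):   # phase 1: voting
        for p in participants:                   # phase 2: commit everywhere
            p.commit()
        return "committed"
    for p in participants:                       # phase 2: abort everywhere
        p.abort()
    return "aborted"

print(two_phase_commit([Participant("shard-a"), Participant("shard-b")]))                      # committed
print(two_phase_commit([Participant("shard-a"), Participant("shard-b", will_prepare=False)]))  # aborted
```

A single "no" vote aborts the whole transaction, which is the first limitation listed above: every participant must be available and willing for the commit to succeed.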
The following are examples of stateful workloads and their use of consensus algorithms (this information is not always easy to uncover, what follows is based on my personal research and could be inaccurate):
Etcd: Raft, no partitions
Consul: Raft, no partitions
Zookeeper: ZooKeeper Atomic Broadcast (a derivative of Paxos), no partitions
Elasticsearch: Intra-partition: Paxos, (no Multi-partition consistency?)
Cassandra: Intra-partition: Paxos, (no Multi-partition consistency?)
CockroachDB(1): Intra-partition: Raft, Multi-partition: 2PC
YugaByte-DB(1): Intra-partition: Raft, Multi-partition: 2PC
TiKV: Intra-partition: Raft, Multi-partition: Percolator
FaunaDB(2): Calvin (Multi-partition version of Raft)
Nats: Raft, no partitions
Spanner(1): Intra-Partition raft, Multi-Partition 2PC + high precision time service
(1) Some claims that CockroachDB and YugaByte-DB are not fully consistent can be found here. The argument basically says that in order to ensure the serializability isolation level, a mechanism to achieve a total order of events is needed. Spanner does that with a high-precision clock that is available only in Google datacenters. YugaByte-DB and CockroachDB can only approximate that service. The author proposes Calvin as a solution to these problems.
(2) A counter argument that Calvin is not usable for enterprise use cases can be found here.
In general, consistency across partitions with off-the-shelf infrastructure seems to be where the frontier of theoretical research has gotten to at this point in time.
Failure Domains
While it is useful to assume (as we do when designing for HA) that individual components fail in isolation, this is often not the case: multiple components may fail together. Failure domains are areas of an IT system in which all the components inside that area may fail at the same time.
Examples of failure domains are: pods, nodes, racks, entire kubernetes clusters, network zones and datacenters.
As one can see from these examples, failure domains exist at different scales.
When designing a highly available (HA) system, one considers the smallest scale, the component level, and, generally speaking, makes all components redundant.
However, one should not lose track of the higher scale failure domains. For example, there may be several instances of an application, but if all those instances run on the same rack, there is still a single event that can take the application down.
Applying this same line of reasoning to stateful workloads, one should ensure that workload instances run in different failure domains, irrespective of the scale. If there is a need for consistency and availability at the same time, stateful workloads should be run in at least three different failure domains.
In Kubernetes, there are standard node labels (failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone) to capture the idea of failure domains in a cluster. Designers of stateful workloads should consider creating anti-affinity rules based on those labels when packaging their software to be deployed in Kubernetes.
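As a sketch, such an anti-affinity rule might look like the following pod template fragment; the app label is an illustrative assumption:

```yaml
# Fragment of a pod template: require replicas to land in different zones.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-stateful-db          # illustrative label
        topologyKey: failure-domain.beta.kubernetes.io/zone
```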
Disaster Recovery
Disaster recovery (DR) normally refers to the strategy for recovering from the complete loss of a datacenter. The failure domain in this situation is clearly the entire datacenter.
Disaster recovery is usually associated with two metrics:
Recovery Time Objective (RTO): the time it takes to have the systems back online after a datacenter fails.
Recovery Point Objective (RPO): how far back in time the state will be from the time the datacenter fails.
In the old days, these metrics were measured in hours and users followed a set of manual steps to recover a system.
Most DR strategies employed an active/passive approach, in which one primary datacenter was handling the load under normal circumstances and a secondary datacenter was activated only if the primary went down.
But having an entire datacenter sitting idle was recognized as a waste. As a result, more active/active deployments were employed, especially for stateless applications.
With an active/active deployment, one can set the expectation that both RTO and RPO can be reduced to almost zero, by virtue of the fact that if one datacenter fails, traffic can be automatically directed to the other datacenter (through the use of health checks). This configuration is also known as disaster avoidance.
Today, some companies have the expectation to implement disaster avoidance for stateful workloads.
Based on the above discussion, if one wants to deploy a stateful application in such a way that it is both highly available and consistent, one needs three failure domains, and in the case of DR, three datacenters.
And therein lies the two data center conundrum.
The Two Data Center Conundrum
Some companies have two datacenters. In many cases, the second datacenter was built in the last fifteen years and was a considerable capital investment.
Based on my observations, these datacenters are sometimes not fully symmetrical. In some cases, the second datacenter (besides being the secondary in the active/passive DR strategy) is also used to run stateless workloads in an active/active manner or to run some lower/non-production environments.
As these companies try to deploy stateful workloads in a highly available and consistent fashion, they will realize the limitations posed by two datacenters in this context and they will have some decisions to make:
One may compromise on availability. This implies accepting an RTO and RPO greater than zero and an active/passive DR strategy most likely involving some manual steps.
One may compromise on consistency. This would result in deploying an eventually consistent stateful workload.
If both availability and consistency are needed, then a third data center is necessary. This can be achieved by building a new datacenter or using the cloud as a third datacenter.
Given how capital intensive it is to stand up a new datacenter, the last option, in which the cloud is used as a third failure domain, seems to be the most likely solution especially if there is a cloud region in proximity to existing datacenters.
Conclusions
As described, uptime and consistency requirements for stateful workloads may result in significant cost implications, such as the need to run workloads in three datacenters.
It is the hope that this article will serve as a base for discussing how to deploy a stateful workload on multiple datacenters. Some of the questions that should guide the discussion are:
Do I need consistency, availability, or both (there is a price for everything)?
Do I need partitioning for horizontal scalability?
Which consensus protocol is used, and when?
Which level of performance do I need?
The relative ease of deploying to three datacenters in the cloud is putting pressure on traditional two-datacenter infrastructures, as well as on traditional monolithic stateful software that does not use a leader election consensus protocol. A new generation of databases and stateful software (many of them offspring of Google Spanner) is gaining traction and will soon be ready for mainstream enterprise deployments.
The post Stateful Workloads and the Two Data Center Conundrum appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift
Airship is designed to enable you to reliably deploy OpenStack on Kubernetes, but with both of those systems being fairly complicated to deploy, a system that combines them can be downright confusing. Fortunately the community has created Airship in a Bottle, a simple way to create a deployment so you can get an idea of what’s going on.
In this guide, we will take a look at Airship in a Bottle and what it gives you.
Deploying Airship in a Bottle
Airship in a Bottle, or AIAB, is designed to evaluate your environment and determine most of the information that it needs, so you don't have to worry about things like determining IP addresses and network interfaces. All you need to do is download the repository and run the script. Let's get started.
Get suitable “hardware”. The instructions will tell you that you need a fresh Ubuntu 16.04 VM with a minimum 4vCPU/20GB RAM/32GB disk, but this is somewhat outdated. To avoid problems, you are actually going to need a fresh Ubuntu 16.04 VM with 8vCPU/32GB RAM.
Note that there’s a reason that the instructions specify a VM: AIAB is designed to create a “disposable” environment; it makes changes to the entire system, so you will not be able to run the script reliably more than once. By creating a VM, you can simply dispose of it and start over if you want to try again.
Log into the system and change to root:
sudo -i
Create the deployment directory and download the software from the Treasuremap repository:
mkdir -p /root/deploy && cd "$_"
git clone https://opendev.org/airship/treasuremap/
This repo actually contains multiple options, but we’re going to concentrate on AIAB for today.
Run the creation script:
cd /root/deploy/treasuremap/tools/deployment/aiab/
./airship-in-a-bottle.sh
Answer the script’s questions. Unless you have a good reason, it’s probably best to just accept the defaults. (After all, that’s what the script was designed for.)
Welcome to Airship in a Bottle
/--------------------
|
| |---| ----
| | x |
| |---| |
| | /
| ____|____/ /----
| /
--------------------/
A prototype example of deploying the Airship suite on a single VM.
This example will run through:
– Setup
– Genesis of Airship (Kubernetes)
– Basic deployment of Openstack (including Nova, Neutron, and Horizon using Openstack Helm)
– VM creation automation using Heat
The expected runtime of this script is greater than 1 hour
The minimum recommended size of the Ubuntu 16.04 VM is 4 vCPUs, 20GB of RAM with 32GB disk space.
Let’s collect some information about your VM to get started.
Is your HOST IFACE ens4? (Y/n)
Is your LOCAL IP 10.128.0.39? (Y/n) Y
Make some coffee, play with your kids, go get some fresh air … this is going to take an hour or so.
Eventually, the script will return information on the deployed installation:
…
OpenStack Horizon dashboard is available on this host at the following URL:
http://10.128.0.47:31309
Credentials:
Domain: default
Username: admin
Password: password123
OpenStack CLI commands could be launched via `./openstack` script, e.g.:
# cd /root/deploy/treasuremap/..//treasuremap/tools/
# ./openstack stack list
…
Airship itself does not have a dashboard.
Other endpoints and credentials are listed in the following locations:
/root/deploy/treasuremap/..//treasuremap/site/aiab/secrets/passphrases/
Exposed ports of services can be listed with the following command:
# kubectl get services --all-namespaces | grep -v ClusterIP
…
+ your_next_steps
+ set +x
---------------------------------------------------------------
Airship has completed deployment of OpenStack (OpenStack-Helm).
Explore Airship Treasuremap repository and documentation
available at the following URLs:
https://opendev.org/airship/treasuremap/
https://airship-treasuremap.readthedocs.io/
---------------------------------------------------------------
+ clean
+ set +x
To remove files generated during this script’s execution, delete /root/deploy/treasuremap/../.
This VM is disposable. Re-deployment in this same VM will lead to unpredictable results.
Your values will vary, of course!
[[ NOTE: At the time of this writing, the output also includes references to components that aren’t actually included in AIAB, so if you try something not shown here and it doesn’t work, it’s not you. ]]
Now let’s go ahead and explore what we’ve got.
Exploring Airship in a Bottle
The purpose of Airship is to make it possible for you to reliably deploy OpenStack on Kubernetes, and that’s what we have at this point. Let’s take a look at all that, starting at the bottom of the stack.
If you look at the Kubernetes cluster deployed by AIAB, you will see several namespaces:
root@bottle3:~# kubectl get namespaces
NAME STATUS AGE
default Active 12h
kube-public Active 12h
kube-system Active 12h
nfs Active 12h
openstack Active 11h
ucp Active 12h
The one we’re most interested in at this point is, of course, openstack:
root@bottle:~# kubectl get pods --field-selector=status.phase=Running -n openstack
NAME READY STATUS RESTARTS AGE
airship-openstack-memcached-memcached-5bd8dbff55-7mzsf 1/1 Running 0 11h
airship-openstack-rabbitmq-rabbitmq-0 1/1 Running 0 11h
airship-openstack-rabbitmq-rabbitmq-exporter-7f4c799869-xwnhp 1/1 Running 0 11h
glance-api-674b664684-8z4fm 1/1 Running 0 11h
heat-api-6959d699d6-hb67p 1/1 Running 0 11h
heat-cfn-599b4b96cd-s6kw7 1/1 Running 0 11h
heat-engine-69d5c7f947-5p6t4 1/1 Running 0 11h
horizon-549bfbf97d-w5jdm 1/1 Running 0 11h
ingress-648c85cb85-fcjss 1/1 Running 0 11h
ingress-error-pages-78665fc8df-td6lp 1/1 Running 0 11h
keystone-api-bb85bf7-49v5w 1/1 Running 0 11h
libvirt-libvirt-default-wp6vn 1/1 Running 0 11h
mariadb-ingress-744454b88d-7nlg9 1/1 Running 0 11h
mariadb-ingress-error-pages-67d44dc8-bjdvp 1/1 Running 0 11h
mariadb-server-0 1/1 Running 0 11h
neutron-dhcp-agent-default-ljj69 1/1 Running 0 11h
neutron-l3-agent-default-bzdzl 1/1 Running 0 11h
neutron-metadata-agent-default-rpj2d 1/1 Running 0 11h
neutron-ovs-agent-default-qxcdc 1/1 Running 0 11h
neutron-server-86ffb5bdd-64cr9 1/1 Running 0 11h
nova-api-metadata-785bb8cfd7-md99b 1/1 Running 1 11h
nova-api-osapi-54d5479bb9-s7f2z 1/1 Running 0 11h
nova-compute-default-s6wcg 1/1 Running 0 11h
nova-conductor-5dbd64b475-v7hqb 1/1 Running 0 11h
nova-consoleauth-75777467ff-fjqkj 1/1 Running 0 11h
nova-novncproxy-5fd78b47d7-kgflg 1/1 Running 0 11h
nova-placement-api-98859484c-glhcq 1/1 Running 0 11h
nova-scheduler-6694f657cf-wfj67 1/1 Running 0 11h
openvswitch-db-b7vhf 1/1 Running 0 11h
openvswitch-vswitchd-672qp 1/1 Running 0 11h
prometheus-mysql-exporter-67cbd476bb-pnmjd 1/1 Running 0 11h
There are a LOT of containers here — certainly more than you’d want to deploy manually! As you can see, the OpenStack components you’d expect to see, such as nova, neutron, keystone, horizon, and so on, are represented in their containerized form, which makes sense, as they’ve been deployed by OpenStack-Helm.
Other than that, though, it’s just a normal OpenStack deployment. For example, the output said that we could find our OpenStack Horizon dashboard at:
http://10.128.0.47:31309
That’s the internal IP and a randomly chosen port, so we’d actually look for it (in this case, at least) at the external version of that port,
http://35.202.211.150:31309
Note that if you don’t have that output for some reason, you can always find the available ports by asking Kubernetes directly:
root@bottle:~# kubectl get services --all-namespaces | grep -v ClusterIP
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openstack horizon-dashboard NodePort 10.96.138.222 <none> 80:31309/TCP 11h
ucp drydock-api NodePort 10.96.17.172 <none> 9000:30000/TCP 12h
ucp maas-region NodePort 10.96.97.191 <none> 80:30001/TCP,31800:31800/TCP,53:30839/UDP,514:32004/TCP 12h
As noted in the output, we can log into Horizon with
Credentials:
Domain: default
Username: admin
Password: password123
From there, we can see that we not only have a functional OpenStack cluster, but that it’s been partially populated.
There’s even a sample VM:
We can also use the CLI directly from the command line without worrying about the individual pods. To do that, grab the credentials by choosing the OpenStack RC v3 file from the admin pulldown menu:
From there, you can either run the file on the command line or copy and paste, whichever is more convenient for you. The important thing is that you’re setting the appropriate environment variables:
export OS_PROJECT_ID=1746a87ab3e8409a9be419cc1c8703d1
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_DOMAIN_ID="default"
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
The script assumes you’ll enter the Keystone password from the command line; as with the interface, the default is
password123
Before you can run any CLI commands, though, you’ll need the OpenStack client, which isn’t installed by default:
apt install python-openstackclient
From there, you can execute OpenStack commands. For example, you can see the VM we were looking at in Horizon:
root@bottle:~# openstack server list
Password:
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
| 7af72d02-4609-40aa-8333-6cd3a044d973 | test-stack-01-server-rjfkpintf3pq | ACTIVE | test-stack-01-private_net-wh3x4n3k5zkr=10.11.11.9, 172.24.8.11 |
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
You can also list the other services available:
root@bottle:~# openstack catalog list
Password:
+-----------+----------------+-----------+
| Name | Type | Endpoints |
+-----------+----------------+-----------+
| heat | orchestration | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| nova | compute | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| neutron | network | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| heat-cfn | cloudformation | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| keystone | identity | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| glance | image | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
| placement | placement | RegionOne |
| | | RegionOne |
| | | RegionOne |
| | | |
+-----------+----------------+-----------+
So as you can see, we’ve got a fully-functioning OpenStack cluster running on Kubernetes.
Next time we’ll look at the structure of the manifests that define Airship installations, or “sites”.
The post How to deploy Airship in a Bottle: A quick and dirty guide appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis