Enabling hybrid cloud apps and multi-speed IT

With the evolution of the cloud, startups seem to have it easy. They come up with an idea, implement it on the cloud, and deploy continuously right away. For companies that have developed software for years, either for internal use or to sell, things are more complicated. Those companies invested in their applications and will […]
The post Enabling hybrid cloud apps and multi-speed IT appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

How does the world consume private clouds?

The post How does the world consume private clouds? appeared first on Mirantis | The Pure Play OpenStack Company.
In my previous blog, why the world needs private clouds, we looked at ten reasons for considering a private cloud. The next logical question is how a company should go about building a private cloud.
In my view, there are four consumption models for OpenStack. Let’s look at each approach and then compare.

Approach 1: DIY
For the most sophisticated users, where OpenStack is super-strategic to the business, a do-it-yourself approach is appealing. Walmart, PayPal, and so on are examples of this approach.
In this approach, the user has to grab upstream OpenStack bits, package the right projects, fix bugs or add features as needed, then deploy and manage the OpenStack lifecycle. The user also has to “self-support” their internal IT/OPS team.
This approach requires recruiting and retaining a very strong engineering team that is adept at Python, OpenStack, and working with the upstream open-source community. Because of this, I don’t think more than a handful of companies can or would want to pursue this approach. In fact, we know of several users who started out on this path, but had to switch to a different approach because they lost engineers to other companies. Net-net, the DIY approach is not for the faint of heart.
Approach 2: Distro
For large sophisticated users that plan to customize a cloud for their own use and have the skills to manage it, an OpenStack distribution is an attractive approach.
In this approach, no upstream engineering is required. Instead, the company is responsible for deploying a known good distribution from a vendor and managing its lifecycle.
Even though this is simpler than DIY, very few companies can manage a complex, distributed and fast-moving piece of software such as OpenStack, a point made by Boris Renski in his recent blog Infrastructure Software is Dead. Therefore, most customers end up utilizing extensive professional services from the distribution vendor.
Approach 3: Managed Services
For customers who don’t want to deal with the hassle of managing OpenStack, but want control over the hardware and datacenter (on-prem or colo), managed services may be a great option.
In this approach, the user is responsible for the hardware, the datacenter, and tenant management; but OpenStack is fully managed by the vendor. Ultimately this may be the most appealing model for a large set of customers.
Approach 4: Hosted Private Cloud
This approach is a variation of the Managed Services approach. In this option, not only is the cloud managed, it is also hosted by the vendor. In other words, the user does not even have to purchase any hardware or manage the datacenter. In terms of look and feel, this approach is analogous to purchasing a public cloud, but without the “noisy neighbor” problems that sometimes arise.
Which approach is best?
Each approach has its pros and cons, of course. For example, each approach has different requirements in terms of engineering resources:

| | DIY | Distro | Managed Service | Hosted Private Cloud |
|---|---|---|---|---|
| Need upstream OpenStack engineering team | Yes | No | No | No |
| Need OpenStack IT architecture team | Yes | Yes | No | No |
| Need OpenStack IT/OPS team | Yes | Yes | No | No |
| Need hardware & datacenter team | Yes | Yes | Yes | No |

Which approach you choose should also depend on factors such as the importance of the initiative and relative cost:

| | DIY | Distro | Managed Service | Hosted Private Cloud |
|---|---|---|---|---|
| How important is the private cloud to the company? | The business depends on private cloud | The cloud is extremely strategic to the business | The cloud is very strategic to the business | The cloud is somewhat strategic to the business |
| Ability to impact the community | Very direct | Somewhat direct | Indirect | Minimal |
| Cost (relative) | Depends on skills & scale | Low | Medium | High |
| Ability to own OpenStack operations | Yes | Yes | Depends if the vendor offers a transfer option | No |

So, as a user of an OpenStack private cloud, you have four ways to consume the software.
The cost and convenience of each approach vary, as the simplified charts above show, and need to be traded off against your strategy and requirements.
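The team-requirement rows of the first chart can be collapsed into a simple decision sketch. This is a hypothetical helper for illustration only, not an official selection tool; the function name and inputs are assumptions made for this example.

```python
# Hypothetical helper that encodes the trade-off table above as code.
# The inputs and recommendations are illustrative, not an official
# Mirantis sizing tool.

def recommend_consumption_model(has_upstream_engineers: bool,
                                has_openstack_ops_team: bool,
                                owns_datacenter: bool) -> str:
    """Map team capabilities to one of the four OpenStack consumption models."""
    if has_upstream_engineers and has_openstack_ops_team and owns_datacenter:
        return "DIY"                    # full in-house engineering and ops
    if has_openstack_ops_team and owns_datacenter:
        return "Distro"                 # deploy a vendor distribution yourself
    if owns_datacenter:
        return "Managed Service"        # you own hardware; vendor runs OpenStack
    return "Hosted Private Cloud"       # vendor hosts and manages everything
```

As with the chart itself, this is a simplification; real selection also weighs cost, strategy, and how directly you want to influence the upstream community.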
OK, so we know why you need a private cloud, and how you can consume one. But there’s still one burning question: who needs it?
Source: Mirantis

A brief history of cloud computing

One of the first questions asked with the introduction of a new technology is: “When was it invented?” Other questions like “When was it first mentioned?” and “What are the prospects for its future?” are also common. When we think of cloud computing, we think of situations, products and ideas that started in the 21st […]
The post A brief history of cloud computing appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

AWS, VMware, OpenStack … what’s your opinion?

The post AWS, VMware, OpenStack … what’s your opinion? appeared first on Mirantis | The Pure Play OpenStack Company.
Amazon Web Services (AWS), VMware and OpenStack are all popular tools IT/OPS practitioners and developers use on their cloud journey. While vendors have many opinions on how these technologies stack up against each other, there is a shortage of data on how users perceive these technologies. In order to better understand the distinct advantages and drawbacks of prevalent cloud technologies, Mirantis has sponsored two surveys for those familiar with AWS or VMware and OpenStack. The survey is about 5 minutes long and our survey results will be published so the entire community can benefit.
Survey Links:

AWS/OpenStack Survey
VMware/OpenStack Survey

We would appreciate it if you could take the time to fill out the relevant survey according to your respective background.
James Chung is a summer intern for Mirantis Inc. and is currently a student at Yale University.
Photo by BillsoPHOTO (https://www.flickr.com/photos/billsophoto/4175299981)
Source: Mirantis

Getting started with managing VMware with Red Hat CloudForms

The VMworld 2016 US event is approaching and Red Hat will be there to showcase our Management portfolio. This includes Red Hat CloudForms which provides unified management for container, virtual, private, and public cloud infrastructures.
With this in mind, we thought it would be a good time to recap how easy it is to deploy Red Hat CloudForms in a VMware virtualized environment. Deploying CloudForms for VMware is very straightforward and consists of three steps to get to an implemented solution that gives full visibility of your VMware infrastructure.
Step One – Obtain the appliance image and import it into VMware
The latest CloudForms appliance is available for download from the Red Hat Customer Portal. CloudForms is provided as a virtual appliance for Microsoft Azure, Google Cloud Platform, Microsoft Hyper-V, Red Hat Virtualization, Red Hat OpenStack and VMware. In the case of VMware, CloudForms is distributed as an OVA (open virtual appliance) image template. You can find it labelled as ‘CFME VMware Virtual Appliance’ in the download section.
Once downloaded, the appliance file needs to be uploaded onto VMware. There are different ways to proceed but the most common is to use vSphere Client and its ‘Deploy OVF Template’ functionality. The associated wizard prompts for a source location, which should point to the OVA template file we downloaded. The deployment configuration options can be left as pre-configured (e.g. memory settings, number of CPUs, etc.) but we need to specify the host and cluster where the appliance will be deployed and launched. The resource pool and datastore need to be able to accommodate the appliance and associated virtual disk files. Either thin or thick storage provisioning can be used. Select your network and IP allocation as required. You are now ready to deploy the CloudForms appliance. Click Finish to proceed.
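If you prefer scripting the upload over the vSphere Client wizard, VMware’s ovftool CLI can also deploy an OVA from the command line. The sketch below only builds the command as a list of arguments; the VM name, datastore, network, and vCenter path are placeholders, and the exact flags should be verified against your ovftool version.

```python
# Sketch: assembling an ovftool command to deploy the CFME OVA without the
# vSphere Client wizard. All names and paths here are placeholders.

def build_ovftool_command(ova_path: str, vm_name: str, datastore: str,
                          network: str, target: str) -> list:
    """Return an argv list for a thin-provisioned OVA deployment."""
    return [
        "ovftool",
        f"--name={vm_name}",
        f"--datastore={datastore}",
        f"--network={network}",
        "--diskMode=thin",   # thin provisioning, as discussed above
        ova_path,            # local path to the downloaded OVA
        target,              # e.g. vi://user@vcenter.example.com/DC1/host/Cluster1
    ]

cmd = build_ovftool_command(
    "cfme-vsphere.ova", "cloudforms", "datastore1", "VM Network",
    "vi://administrator@vcenter.example.com/DC1/host/Cluster1")
# subprocess.run(cmd) would then execute the deployment
```

Note that we deliberately do not power the VM on here, since Step Two adds a second disk for the VMDB database before first boot.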
Documentation for this step is available: Installing CloudForms on VMware vSphere
Step Two – Configure the CloudForms appliance
After a few seconds, the appliance appears in the inventory. Before powering it on, we add an additional disk which will be used for the internal VMDB database. A typical size for the disk is 50GB, but further guidelines are provided as part of the CloudForms Deployment Planning Guide. We can now power on the VM.
Before accessing the CloudForms UI, we need to ensure the configuration is suitable for our environment. We will use the appliance_console configuration tool, which can be accessed from the Bash prompt. Log in to the appliance as the root user, using SSH or a remote console, and type the appliance_console command. A summary of the configuration settings is displayed. Common configuration requires setting the network configuration (e.g. static), a fully qualified hostname, timezone, date & time, as well as configuring a database. Each setting can be configured by typing the associated number and pressing Enter.
The VMDB database can be internal or external. In our case we want to use the additional disk to configure an internal PostgreSQL database on the appliance. We simply follow the prompt, creating a new key, selecting ‘internal’ database, choosing our additional disk and setting a numeric region ID (for example 99 to set this instance as the master region database). CloudForms will automatically deploy and configure the database for us.
Once all of the settings are configured, we can start the server by selecting ‘Start Server Processes’. After a few seconds, we are able to navigate to CloudForms from our browser. We can exit the appliance_console tool.
Finally, let’s change the default root password with the passwd command.
Documentation for this step is available: Installing CloudForms on VMware vSphere
Step Three – Configure vCenter as a provider in CloudForms
The last steps are performed from the CloudForms UI. Login as admin and navigate to ‘Settings > Configuration’.
From there, select our appliance server in the tree and make sure appropriate server roles are enabled in the Server Control section. A single appliance deployment is usually configured with the following roles:
[Screenshot: server role selection]
Next, we ensure Capacity & Utilization (C&U) data is captured by clicking on the CFME Region and selecting the ‘C & U Collection’ tab. Both collection for Clusters and Datastores must be selected.
We are all set for the basic CloudForms configuration. The last part is to configure a VMware provider. This will allow CloudForms to connect to VMware vCenter and its managed hypervisors.
Navigate to ‘Compute > Infrastructure > Providers’ and select ‘Configuration > Add a New Infrastructure Provider’.
From there, select the VMware vCenter provider type and fill in the administrative user credentials as required. You can validate the connection using the ‘Validate’ button. The configuration should look like the following:
[Screenshot: VMware provider configuration]
Once saved, we can start discovery by selecting ‘Configuration > Refresh Relationships & Power States’ from the provider. This will authenticate to the VMware vCenter API and query for all existing entities (e.g. ESX hosts, Clusters, Datastores, VMs, Templates, Snapshots, etc.). After a few seconds, CloudForms gets a complete view of the virtual environment and starts monitoring events.
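The same provider registration can also be scripted against the CloudForms (ManageIQ) REST API rather than clicked through the UI. The endpoint and payload shape below follow ManageIQ API conventions, but treat the exact field names as assumptions and verify them against the /api documentation on your own appliance.

```python
# Sketch: registering the VMware provider via the CloudForms REST API.
# Field names follow ManageIQ conventions; verify against your appliance.
import json

def vmware_provider_payload(name: str, hostname: str,
                            userid: str, password: str) -> dict:
    """Build the JSON body for POST /api/providers."""
    return {
        "type": "ManageIQ::Providers::Vmware::InfraManager",
        "name": name,
        "hostname": hostname,
        "credentials": [{"userid": userid, "password": password}],
    }

body = json.dumps(vmware_provider_payload(
    "vcenter01", "vcenter.example.com", "administrator", "secret"))
# With the third-party `requests` library you could then POST it:
# requests.post("https://cloudforms.example.com/api/providers",
#               auth=("admin", "password"), data=body, verify=False)
```

After the provider is created, the same ‘Refresh Relationships & Power States’ action described above populates the inventory.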
Documentation for this step is available: Managing Providers
Optional Step – Configure SmartState Analysis
SmartState Analysis is a CloudForms capability that allows you to inspect the contents of virtual machines, templates, hosts & containers without the need for an agent. The collected data (e.g. users/groups, packages/applications, files/registries) can be used with policies to validate compliance on hosts and guests. This step describes what is required to enable this functionality in CloudForms.
First, we install the VMware VDDK (Virtual Disk Development Kit) on the appliance. The VDDK is used to perform SmartState Analysis on virtual machines running on VMware. We download the installer from the VMware Support website and follow their instructions for installation on the appliance via SSH or remote console. Do not forget to run the ‘ldconfig’ command once installed to make sure CloudForms is aware of the library.
SmartState Analysis on a virtual machine requires an analysis profile named ‘default’. We can navigate to  ‘Settings > Configuration’ in the CloudForms UI and simply create a ‘default’ profile by copying the existing ‘sample’ profile and renaming it. The Analysis Profiles can be found in the Settings tree under your CFME Region. The ‘default’ profile is used as a starting point, but it can be further enhanced to capture specific files or registries.
The last configuration requirement is to specify credentials on each ESX host to perform SmartState Analysis. Navigate to ‘Compute > Infrastructure > Hosts / Nodes’ and select the ESX hypervisor(s) you want to configure. Select ‘Configuration > Edit Selected Items’ to open the configuration screen. The ESX credentials are specified under the ‘Default’ tab in the ‘Endpoints’ section. Connections can be verified using the ‘Validate’ button.
The next steps are to perform SmartState Analysis on the ESX hosts, on the attached datastores, as well as on templates and virtual machines (running or not). This can be done manually by clicking ‘Perform SmartState Analysis’ under the ‘Configuration’ button, or automatically by scheduling the task in ‘Settings > Configuration > Schedules’.
That’s it! We have just deployed a CloudForms appliance, configured it and connected a VMware provider. With SmartState Analysis enabled, we get complete visibility of the infrastructure, including guest details (e.g. installed packages, user and group configuration, file or registry content, etc.) as well as capacity and utilization consumption. All the collected data is used to provide insights and reporting on resource utilization, performance optimization, operations management, as well as compliance and governance.
Come and see us at VMworld to learn more about Red Hat CloudForms and see all the other advanced virtualization capabilities the platform can offer your VMware virtualization environment.
Source: CloudForms

A practical guide to platform as a service: PaaS benefits and characteristics

One of the major benefits of platform as a service (PaaS) is its ability to improve a developer’s productivity. PaaS provides direct support for business agility by enabling rapid development with faster and more frequent delivery of functionality. It does this through continuous integration techniques and automatic application deployment. PaaS also enables developers to realize […]
The post A practical guide to platform as a service: PaaS benefits and characteristics appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

What’s new in Mirantis OpenStack 9.0: Webinar Q&A

The post What’s new in Mirantis OpenStack 9.0: Webinar Q&A appeared first on Mirantis | The Pure Play OpenStack Company.
There’s never been a better time to adopt Mirantis OpenStack to build your cloud. The newest release, Mirantis OpenStack 9.0, offers improvements in simplicity, flexibility, and performance that make deployment, operations, and management faster and easier.
If you missed the July 14 webinar highlighting the rich new features in Mirantis OpenStack 9.0, we’ve got you covered. The webinar’s panel included three Mirantis experts: Senior Director of Product Marketing Amar Kapadia, Senior Manager of Technical Marketing Joseph Yep, and Senior Product Manager Durgaprasad (a.k.a. DP) Ayyadevara.
They talked about the ways in which MOS 9.0 improves the “Day 2” experience of operating your cloud once you’ve deployed it, as well as easier deployment of workloads, and especially improvements in the management of NFV-related features such as SR-IOV, DPDK software acceleration, and NUMA/CPU pinning.
Here’s a selection of questions and answers from those who attended.
Q: Can any plugin be added after initial deployment without disruption?
A: Not all plugins. However, the plugin framework has added metadata and developer functionality that allow developers to build and test their plugins so they can be added as “hot-pluggable.” This capability is specific to the plugins themselves, as well as to their settings; whether there will be disruption depends on the environment and the type of change. An example is StackLight’s Toolchain, which is hot-pluggable post-deployment.
Q: As far as upgrading from Mirantis OpenStack 8.0 to 9.0, is there documentation available for that?
A: Documentation is readily available and always improving. Because upgrades are challenging for large distributed infrastructure software, Mirantis continually creates tooling to make the process smoother and more automated. Feedback on the documentation is always welcome.
Q: Does the new release support SDN and Contrail?
A: Yes. Currently, the Contrail plugin is available for the Liberty-compatible release, and a Mitaka-compatible version of Contrail is available as well.
Q: The current base OS is Ubuntu 14.04, but are there any plans to upgrade to 16.04?
A: Yes. Operating systems are regularly validated, so 16.04 is on the roadmap.
Q: With the new release allowing updates to your previously-deployed OpenStack environment, can we also apply a new plugin with Fuel on a deployed environment?
A: Yes, unless it is a previous version. For example, with Fuel 9, you can’t push a new plugin deployment to a MOS 7 environment without having to upgrade the environment itself. However, Fuel can manage multiple versions of Mirantis OpenStack environments.
Q: What is the status on Ironic and VXLAN?
A: Both are supported in 9.0.
Q: Does Murano support deployment of Kubernetes clusters?
A: Yes, absolutely. We do a lot with Kubernetes work, and there’s a new set of announcements coming soon about the work.
Q: What NFV features make Mirantis’ value-add different from others, and how can enterprises benefit from this feature?
A: Mirantis’ value-add is twofold. First, we support all Intel Enhanced Platform Awareness features. Second, we have provisioned for enabling and configuring these features through Fuel. We also support partners like 6WIND, who have DPDK accelerators, and we have Fuel plugins for that. So, we focus on making it easy to operationalize, and that differentiates us.
Q: How can you differentiate Mirantis from services hosted elsewhere, AWS for example?
A: Fundamentally, this compares two different things, a private cloud to a public cloud environment. You will find similarity at the IaaS layer. However, OpenStack is an open system that allows you to choose the components you want. For example, you can add an SDN like Contrail. Thus, at the PaaS layer, the two deviate considerably. Amazon is prescriptive, choosing the software available to offer customers. Conversely, OpenStack works with a multitude of partners so customers can tailor solutions that work best for them. If they want, for example, Pivotal Cloud Foundry, they can have it. If they want Kubernetes as a container framework, they can have it. If they want a specific database or NoSQL database, they can use Murano and publish that database.
Q: How many nodes are required to deploy OpenStack in Mirantis OpenStack 9.0?
A: Depending on the function, the lower limit is three. If you run it virtualized, you could do it all on a single physical machine; the nodes would be your Fuel master node if you’re using Fuel (you don’t have to use Fuel), which would then deploy to a single controller and a single compute host. This is one of the most minimal deployments if you’re looking at playing with features and practicing deployment, and it means you could conceivably run it on a laptop, though this isn’t advised for a production deployment. There are instructions for running it in VirtualBox as well.
This is just a tiny fraction of what we covered, of course. Interested in hearing more?  You can view the whole presentation online, or download Mirantis OpenStack 9.0 for yourself.
Source: Mirantis