Recent blog posts, June 19

Using Ansible Validations With Red Hat OpenStack Platform – Part 3 by August Simonelli, Technical Marketing Manager, Cloud

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

Read more at http://redhatstackblog.redhat.com/2017/06/15/using-ansible-validations-with-red-hat-openstack-platform-part-3/

TripleO deep dive session index by Carlos Camacho

This is a brief index of all the TripleO deep dive sessions; you can watch all the videos on the TripleO YouTube channel.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-index.html

TripleO deep dive session #10 (Containers) by Carlos Camacho

This is the 10th release of the TripleO “Deep Dive” sessions.

Read more at http://anstack.github.io/blog/2017/06/15/tripleo-deep-dive-session-10.html

OpenStack, Containers, and Logging by Lars Kellogg-Stedman

I’ve been thinking about logging in the context of OpenStack and containerized service deployments. I’d like to lay out some of my thoughts on this topic and see if people think I am talking crazy or not.

Read more at http://blog.oddbit.com/2017/06/14/openstack-containers-and-logging/

John Trowbridge: TripleO in Ocata by Rich Bowen

John Trowbridge (Trown) talks about his work on TripleO during the OpenStack Ocata cycle, and what’s coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/john-trowbridge-tripleo-in-ocata/

Doug Hellmann: Release management in OpenStack Ocata by Rich Bowen

Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/06/doug-hellmann-release-management-in-openstack-ocata/

Using Ansible Validations With Red Hat OpenStack Platform – Part 2 by August Simonelli, Technical Marketing Manager, Cloud

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Read more at http://redhatstackblog.redhat.com/2017/06/12/using-ansible-validations-with-red-hat-openstack-platform-part-2/
Source: RDO

What data scientists can do with better machine learning

Smart companies are finding new ways to squeeze more value out of their massive data storehouses. They’re unlocking insights from their data that build new business models, improve customer experiences and outpace competitors. So where do these business-changing insights come from?
Data doesn’t interpret itself. A table of numbers won’t arrange itself into a pattern that spells out “here’s what your customers really want.” We look to data scientists to find meaning and value—and those insights can fuel transformation across your business.
Data science itself is undergoing rapid transformation. Early this year, Gartner predicted that nearly half of data science tasks will be automated by 2020.
That’s not alarming. In my view, machine learning won’t just automate data science; it will more profoundly transform and accelerate the businesses that embrace it.
Better machine learning doesn’t replace what data scientists do. But machine learning is building better tools to help them automate processes like discovery and visualization. There’s a huge opportunity for automation to improve the tools that will bring data and data-driven insights outside of analytics organizations. When business users can access and interpret data more effectively, data scientists can focus on more complex data analysis.
It’s no secret that IBM is invested in elevating data science. Year after year, analysts consistently rank IBM as a leader in the data science platform space. We want to give data scientists a platform to share successes and be partners in identifying and overcoming roadblocks.
I hope you’ll join us at Fast Track Your Data – Live from Munich starting June 22, where data science and the impact of machine learning are a core topic. Join IBM and industry leaders for demos, breakout sessions and panels. Highlights include:

Demo: immersive insights from 3-D visualization for the data scientist. IBM data professionals will show how to bring the power of data science tools to Augmented Reality (AR) visualizations, helping to improve user experience, data exploration and analysis.
Build smarter apps with data science and app developers. This session will explore collaboration and integration opportunities to connect processes that can fuel business decisions.
Ask a data scientist: A one-on-one experience. Data specialists from the Machine Learning Hub in Germany will be on-hand to tackle problems, answer questions and share best practices.
Mixing oil and water: Getting data scientists and business analysts to work together painlessly. This session will explore ways to help improve collaboration and speed data insights beyond the analytics organization.  

Data scientists are building the data-driven future of business. Machine learning will help them do it. I look forward to sharing ideas and best practices at Fast Track Your Data – Live from Munich. If you can’t make it in person, I hope you’ll join us at the conference through the live stream.
In the meantime, explore more about the IBM data science platform and machine learning.
A version of this article was originally published on the IBM Big Data and Analytics Hub.
The post What data scientists can do with better machine learning appeared first on Cloud computing news.
Source: Thoughts on Cloud

BMW and IBM team up for cloud-connected CarData network

BMW and IBM announced this week that they’re working together on a cloud computing project that could help as many as 8.5 million drivers diagnose and repair problems, save on car insurance, and take advantage of other third-party services.
The BMW CarData network will run on the IBM Bluemix platform, where it will have access to Watson Internet of Things (IoT) capabilities. Drivers who use the BMW ConnectedDrive app will be able to access services as data is collected. Only drivers who give permission will share telematics data, and drivers will be asked for their consent for every app and service.
IBM will act as a “neutral server,” which means it can collect data from other car manufacturers’ vehicles in addition to BMW’s.
So far, CarData is only up and running in Germany, but BMW has plans to expand to other countries soon.
For more, read AutoBlog’s full story.
The post BMW and IBM team up for cloud-connected CarData network appeared first on Cloud computing news.
Source: Thoughts on Cloud

“Not Our Software” Is No Excuse for Forklift Upgrades: CI/CD Using MCP DriveTrain — Q&A

These days, technology moves much too quickly for “forklift upgrades”, where you go for months or even years between versions of a software package. Not only are you depriving your users of features they could be using and keeping developers from getting valuable user feedback, you’re also increasing the risk that when you DO upgrade, you’re going to have problems.
Last week we spoke to Ryan Day about using Continuous Integration/Continuous Deployment (CI/CD) to keep not just your own software, but also externally produced software, up to date. It’s a subject that’s close to our hearts; Mirantis Cloud Platform is more than just OpenStack, adding not just Kubernetes but also DriveTrain, a unified cloud deployment and Infrastructure as Code tool.
You can view the whole webinar, and read Ryan’s answers to the Q&A here in this blog.
Meanwhile, if you’re interested in seeing how DriveTrain can help you get a handle on your infrastructure, contact us, and we’ll be happy to show you how it’s done.
Q: Are OpenStack and Kubernetes deployed in containers?
A: Containers provide a number of different advantages, and our goal is to eventually containerize all of the appropriate OpenStack services, but as they weren’t originally designed to be containerized, we want to make sure we proceed carefully to maintain the robustness Mirantis OpenStack is known for.  For this reason, in Mirantis Cloud Platform 1.0, some DriveTrain components are deployed in containers, but Kubernetes and OpenStack are installed with packages. Containerized OpenStack services are in Mirantis’ product roadmap.
Q: I’m currently having hardware difficulties deploying Fuel. Are there any specific hardware or network requirements for MCP?
A: If you’re having hardware difficulty with Fuel in Mirantis OpenStack 9.0 (MOS) or older, it could be related to the specific drivers packaged in the MOS ISO. However, with MCP, we rely primarily on Ubuntu’s hardware compatibility rather than packaged drivers. With regard to changing configuration options to suit various hardware choices, this is actually much easier with MCP because of the CI/CD nature of the software.
Q: Is it possible to download and test MCP without a support contract or managed services?
A: The best thing to do here is to contact us, so you can speak with an MCP expert, ask questions and see how MCP relates to your own use case.
Q: Does DriveTrain work with other OpenStack distributions?
A: Mirantis Cloud Platform is designed with a philosophy of “open cloud”, so theoretically, you can replace the “OpenStack” part of the deployment with another distribution. You would have to test/integrate to accommodate the differences in the distributions, but if you were really determined, you could do it. As you might expect, however, Mirantis has only tested DriveTrain with Mirantis OpenStack (MOS).
Q: Can I use Salt to deploy to bare-metal?
A: You can use MaaS, Ironic, Foreman, and other tools to install an operating system on bare-metal, as long as they also install a salt minion agent. From that point on, Salt is responsible for all configuration, even on bare-metal. (I should note that Salt automation of MaaS bare-metal provisioning is currently in tech-preview.)
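To illustrate, once a provisioning tool has laid down the OS and a salt minion, ongoing configuration is driven from the Salt master with commands along these lines (a minimal sketch; the minion ID kvm01 and the linux formula target are illustrative, not taken from the webinar):
# Confirm the freshly provisioned node's minion is responding:
$ salt 'kvm01*' test.ping
# Apply a base configuration state to it:
$ salt 'kvm01*' state.apply linux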
Q: Does DriveTrain follow DevOps pipeline practices for open source projects?
A: Mirantis will continue to package and harden open-source cloud platforms such as OpenStack and Kubernetes, and we’ll continue to contribute back to the various communities whose tools we use. For example, in addition to OpenStack and Kubernetes, we leverage open source components for DriveTrain and StackLight, and any time we make improvements to those projects we will contribute those patches. That also goes for Salt formulas, which are also open source and are available at http://github.com/salt-formulas.
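For example, you can fetch and inspect any of those formulas directly (the repository shown here is just one of many in that organization):
$ git clone https://github.com/salt-formulas/salt-formula-linux.git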
The post “Not Our Software” Is No Excuse for Forklift Upgrades: CI/CD Using MCP DriveTrain — Q&A appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Extending Netcool Operations Insight with Agile Service Manager for dynamic converged topology

When everything that you know is changing so quickly, how can IT and network operations teams be effective and responsive to maximize customer service?
No matter your business goals, addressing IT infrastructure operations requires reliable context to understand and resolve problems quickly. Additionally, the infrastructure you are relying on is changing more rapidly than ever before. These changes can improve your environment, have a negative impact or have no discernible effect.
When context matters, your operations team needs service and infrastructure information right away. And it is important to trust the information that you’re getting.
IBM has just delivered the Agile Service Manager feature of Netcool Operations Insight for operations teams like yours. You need to be aware of application, service and infrastructure topologies and relationships. And you need that information to be actionable in near real-time, whether by automation or manual response.
Agile Service Manager gives you dynamic, converged topology and relationship information through a combination of traditional polling discovery and active bidirectional communication with the other actors in your environment. With Agile Service Manager, you can:

Collect long-lived information from traditional discovery tools including IBM Network Management
Maintain bidirectional contact with orchestration and change engines so that Agile Service Manager is up to date with all changes that they make
Monitor applications, services and infrastructure—including network domain, server and cloud infrastructure—as well as storage to have an accurate view of the total infrastructure to speed problem identification
Maintain a historical view of resources and relationships so operations teams know not just how things are structured right now, but how they were structured at a given point in time, and when critical changes occurred. They can also compare topologies at two points in time to understand how resources, relationships and state have changed

Agile Service Manager interacts with the change agents in your environment, incrementally updating topology information based on changes whether they are intended or not. Knowing what the environment looks like right now, combined with awareness of when important changes occurred, can radically accelerate problem remediation. It’s the key to mastering automated lifecycle solutions.
IBM Netcool Operations Insight is a leader in cognitive operations and in dynamic infrastructure and service management, enabling hybrid management of hybrid cloud applications so that you can:

Effectively identify important problems
Allow teams to operate efficiently
Successfully manage operations in a fast-changing IT infrastructure

For more information on IBM’s point of view, check out this white paper from Analysys Mason or take a look at the Gartner OSS Magic Quadrant.
And take a look at the latest updates about Netcool and IBM Operations management.
The post Extending Netcool Operations Insight with Agile Service Manager for dynamic converged topology appeared first on Cloud computing news.
Source: Thoughts on Cloud

Using Ansible Validations With Red Hat OpenStack Platform – Part 3

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.
In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack director UI.
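As a quick recap, the CLI method from Part 2 ran a single validation along these lines (a sketch only; the inventory script and playbook path follow the conventions used earlier in the series):
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml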

Method 2: Mistral
Validations can be executed using the OpenStack Mistral unified CLI. Mistral is the workflow service on the director and can be used for everything from calling local scripts, as we are doing here, to launching instances.
You can easily find the available validations using Mistral from the openstack unified CLI. The command returns all the validations loaded on the director, which can be a long list. Below we have run the command, but omitted all but the ceilometerdb-size check:
[stack@undercloud ansible]$ openstack action execution run tripleo.validations.list_validations | jq '.result[]'

{
  "name": "Ceilometer Database Size Check",
  "groups": [
    "pre-deployment"
  ],
  "id": "ceilometerdb-size",
  "metadata": {},
  "description": "The undercloud's ceilometer database can grow to a substantial size if metering_time_to_live and event_time_to_live is set to a negative value (infinite limit). This validation checks each setting and fails if variables are set to a negative value or if they have no custom setting (their value is -1 by default).\n"
}
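As the description notes, the validation inspects two settings in the undercloud's ceilometer configuration. A snippet of /etc/ceilometer/ceilometer.conf that would trigger this check might look like the following (values shown are illustrative; -1 is the default and means records are kept forever):
[database]
metering_time_to_live = -1
event_time_to_live = -1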

The next step is to execute this workflow using the “id” value found in the Mistral output:
$ openstack workflow execution create tripleo.validations.v1.run_validation '{"validation_name": "ceilometerdb-size"}'
Running this on the director returns details of the new workflow execution, including the final piece of information needed to complete our check: the “Workflow ID”. Once more, run a Mistral command using it:
$ openstack workflow execution output show 4003541b-c52e-4403-b634-4f9987a326e1
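The result is returned as JSON, so as with the validation listing you can pipe it through jq to make it easier to read:
$ openstack workflow execution output show 4003541b-c52e-4403-b634-4f9987a326e1 | jq .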
On the director, the output shows that, as expected, the negative value in metering_time_to_live has triggered the check, and the returned message indicates it clearly.
Method 3: The Director GUI
The last way we will run a validation is via the director UI. The validations visible from within the UI depend on what playbooks are present in the /usr/share/openstack-tripleo-validations/validations/ directory on the director. Validations can be added and removed dynamically.
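For example, making a new validation available can be as simple as copying its playbook into that directory (the filename here is hypothetical):
$ sudo cp my-validation.yaml /usr/share/openstack-tripleo-validations/validations/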
Here is a short (60-second) video which demonstrates adding the ceilometerdb-size validation to the director via the CLI and then running it from the UI:

Pretty cool, right?
Where to from here?
As you write your own validations you can submit them upstream and help grow the community. To learn more about the upstream validations, check out the project repository on GitHub.
And don’t forget, contributing an approved commit to an OpenStack project can gain you Active Technical Contributor (ATC) status for the release cycle. So, not only do you earn wicked OpenStack developer cred, but you may be eligible to attend a Project Teams Gathering (PTG) and receive discounted entry to the OpenStack Summit for that release.
With the availability of Ansible on Red Hat OpenStack Platform you can immediately access the power Ansible brings to IT automation and management. More than 20 TripleO validation playbooks are supplied with the Red Hat OpenStack Platform 11 director, and many more are available upstream.
Ansible validations are ready now. Try them out. Join the community. Keep your Cloud happy.
Thanks!
That’s the end of our series on Ansible validations. Don’t forget to read Part 1 and Part 2 if you haven’t already.
Thanks for reading!
Further info about Red Hat OpenStack Platform
For more information about Red Hat OpenStack Platform please visit the technology overview page, product documentation, and release notes.
Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!
And don’t forget you can evaluate Red Hat OpenStack Platform for free for 60 days!
The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.
Source: RedHat Stack

3 ways to deliver better customer service with cloud

With IT budgets shrinking and customer expectations rising in a congested market, IT departments are under increasing pressure to maintain and improve customer service.
How can this be achieved when 60 to 70 percent of IT budgets are already consumed keeping existing systems running?
It’s time to optimize how IT budgets are spent and take a fresh view of IT infrastructure. There are three key tenets for considering how to reduce IT costs, release resources to focus on better customer experience and bolster competitive advantage:
1. Keep control of your data
In an industry where regulations such as the EU’s General Data Protection Regulation (GDPR) have increasing influence and significant potential ramifications, data sovereignty and compliance are vital considerations when running operations in the cloud.
The level of transparency required is only rising in support of a focus on customer needs and privacy. This means that organizations need full visibility of the location and governance of their data – on short notice – to have confidence in full accountability should an enquiry come into play. The cost of not meeting these standards is severe, and in many cases, surveys have shown that it could put brands out of business.
The important thing here is that organizations own the data, regardless of its location. Security, resilience, flexibility and processing speed are also key factors in the data location decision-making process. IBM currently has three data centers providing in-country backup, restoration and disaster recovery, with zero-cost transfer across high-bandwidth networks between data centers. This capacity will soon double to six data centers, representing the largest UK footprint and providing a full availability zone to many organizations.
Why is this important? Now more than ever, brands need the flexibility to decide where their data is stored, whether that be in public, private or hybrid solutions.
2. Remember service is king
With the rise of e-commerce comes an exponential growth in site visitors, transactions and peak trading. This retail model is expanding across different sectors, too, and with it comes rising consumer expectations. Consumers expect the same from their bank as they do from their retailer and other favorite brands. This rising experiential benchmark means that brands must provide a seamless digital experience to maintain loyalty.
Whether you represent a retailer anticipating thousands of orders each day or a finance organization looking to streamline the loan approval process, a managed service with clear service-level agreements in place removes the operational headaches of IT management and frees up resources for more innovative activity. For example, one financial organization was able to increase loan approval rates by 5 percent, increase system availability by 20 percent and cut operating costs by 15 percent.
DIY cloud is important during development, but not for production systems where reliability and consistency are key. Service-level agreements are vital and many organizations are extending their teams with managed services.
3. Don’t pay more for key licenses
It may be old news to many, but running Oracle database workloads on some clouds can be more costly, due to Oracle discontinuing cloud agreements with some major cloud vendors. This increase does not apply to all clouds, however; IBM has always designed its cloud systems with OVM and Oracle certifications on hardware for RAC, so license costs remain at prior levels.
Given the broader costs of IT, a continuing drain on resources and some companies’ lack of flexibility to invest in innovative projects, having the right infrastructural foundation can be a game changer for any organization.
Learn more about cloud managed services.
The post 3 ways to deliver better customer service with cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud