6 videos on how to install Red Hat OpenStack Platform and CloudForms

Our excellent Training & Certification team has posted some videos on our RedHatCloud YouTube channel that quickly walk through the installation procedure for Red Hat OpenStack Platform 8 and show how to boot a CloudForms instance to perform basic management functions. Kudos to our awesome video team (Jim Meegan and Ben Oliver) and to our curriculum architect (Forrest Taylor).
These videos were first developed as guided demonstrations for use in our Red Hat OpenStack Administration II (CL210) and Red Hat CloudForms Hybrid Cloud Management (CL220) courses. Now they are available for you to view for free. Remember that we also offer a free introductory course, Red Hat OpenStack Technical Overview (CL010), if you want a taste of our courses.

Phase I: Deploying the Undercloud

Phase II: Deploying the Overcloud

Phase III: Verifying the Overcloud

Phase IV: Deploying CloudForms on OpenStack

Phase V: Using CloudForms as the operator admin, to scale up your infrastructure automatically according to capacity and utilization metrics

Phase VI: Using CloudForms as a tenant, to consume resources (boot an instance, connect via VNC) using the user portal and leverage the centralized service catalog

We hope you liked this introduction. Remember, if you want to try the latest Red Hat OpenStack Platform, you can get a free 60-day evaluation for your on-premises deployment, learn the basics of OpenStack with TryStack (built from the upstream RDO project), or go with one of our many certified partners and get your own hosted private cloud, such as the Rackspace Private Cloud, Powered by Red Hat.
For more information about Red Hat OpenStack Platform, visit our product page.
Source: RedHat Stack

Blog posts last week

Here’s what RDO enthusiasts have been blogging about in the past week.

ARA 0.10, the biggest release yet, is out! by dmsimard

19 commits, 59 changed files, 2,404 additions and 588 deletions… and more than a month's on-and-off work.

Read more at http://tm3.org/cp

Upstream Contribution – Give Up or Double Down? by assafmuller

Ever since I’ve been involved with OpenStack, people have been complaining that upstream is hard. The number one complaint is that it takes forever to get your patches merged. I thought I’d take a look at some data and attempt to visualize it. I wrote some code that accepts an OpenStack project and a list of contributors and spits out a bunch of graphs. For example:

Read more at http://tm3.org/cq
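
The post's graphs boil down to time-to-merge distributions. As a rough illustration of the idea only (this is not assafmuller's actual script, and the input rows here are invented), a few lines of Python can turn per-patch submission and merge dates into a per-contributor summary:

    # Sketch only: (owner, submitted, merged) per patch. The original post
    # pulls real review data for a project; these rows are made up.
    from collections import defaultdict
    from datetime import datetime
    from statistics import median

    patches = [
        ("alice", "2016-01-04", "2016-01-20"),
        ("alice", "2016-02-01", "2016-03-15"),
        ("bob",   "2016-01-10", "2016-01-12"),
    ]

    days_to_merge = defaultdict(list)
    for owner, submitted, merged in patches:
        fmt = "%Y-%m-%d"
        delta = datetime.strptime(merged, fmt) - datetime.strptime(submitted, fmt)
        days_to_merge[owner].append(delta.days)

    for owner, days in sorted(days_to_merge.items()):
        print(f"{owner}: median {median(days)} days to merge over {len(days)} patches")

Feed the same per-owner lists into a histogram and you get the kind of visualization the post describes.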

Recorded presentations from OpenStack Canada Day by dmsimard

OpenStack Days are sort of like local, one-day mini OpenStack summits.

Read more at http://tm3.org/cr

How to Install and Run Tempest by mkopec

Tempest is a set of integration tests to run against an OpenStack cluster. In this blog post I’m going to show you how to install Tempest from its Git repository, how to install all of its requirements, and how to run the tests against an OpenStack cluster.

Read more at http://tm3.org/cs
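
In outline, that workflow looks roughly like the sketch below. Treat it as an assumption-laden summary rather than the post's exact steps: the repository URL, the CLI commands, and the need to fill in tempest.conf with your cloud's credentials all come from upstream Tempest conventions, not from the post itself.

    # Rough sketch of a Tempest install-and-run flow, scripted from Python.
    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    sh("git clone https://opendev.org/openstack/tempest.git")  # assumed repo URL
    sh("python3 -m venv tempest-venv")
    sh("tempest-venv/bin/pip install ./tempest")               # pulls in requirements
    sh("tempest-venv/bin/tempest init my-cloud")               # creates a workspace
    # Before running, create my-cloud/etc/tempest.conf with your cloud's details:
    sh("cd my-cloud && ../tempest-venv/bin/tempest run --smoke")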

On communities: Empower humans to be amazing by Flavio Percoco

When it comes to communities, a system is the set of processes you put in place…

Read more at http://tm3.org/ct

How are you using RDO? (Survey results) by Rich Bowen

Over the last few weeks, we’ve been conducting a survey of RDO users, to get an idea of who is using RDO, how, and for what.

Read more at http://tm3.org/ck

Testing composable upgrades by Carlos Camacho

This is a brief recipe for how I’m testing composable upgrades (N->O).

Read more at http://tm3.org/cl

TripleO cheatsheet by Carlos Camacho

This is a cheatsheet of some of my regularly used commands to test, develop, or debug TripleO deployments.

Read more at http://tm3.org/cm

JSON Home Tests and Keystone API changes by Adam Young

If you change the public signature of an API, or add a new API in Keystone, there is a good chance the tests that confirm the JSON home layout will break. And that test is fairly unfriendly: it compares one JSON doc with another and spews out the entirety of both docs, without telling you which section broke. Here is how I deal with it:

Read more at http://tm3.org/cn
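
The failure mode Adam describes — a test that dumps two entire JSON documents without localizing the difference — is a common one. As a generic illustration (this is not Keystone's test code), a small recursive diff that reports the path to each mismatch makes that kind of failure far easier to read:

    # Generic recursive JSON diff: reports the path of each mismatch instead of
    # dumping both documents wholesale. Illustrative only.
    def json_diff(expected, actual, path="$"):
        if type(expected) is not type(actual):
            return [f"{path}: type {type(expected).__name__} != {type(actual).__name__}"]
        if isinstance(expected, dict):
            diffs = []
            for key in sorted(expected.keys() | actual.keys()):
                if key not in expected:
                    diffs.append(f"{path}.{key}: unexpected key")
                elif key not in actual:
                    diffs.append(f"{path}.{key}: missing key")
                else:
                    diffs.extend(json_diff(expected[key], actual[key], f"{path}.{key}"))
            return diffs
        if isinstance(expected, list):
            diffs = []
            for i, (e, a) in enumerate(zip(expected, actual)):
                diffs.extend(json_diff(e, a, f"{path}[{i}]"))
            if len(expected) != len(actual):
                diffs.append(f"{path}: length {len(expected)} != {len(actual)}")
            return diffs
        return [] if expected == actual else [f"{path}: {expected!r} != {actual!r}"]

    print("\n".join(json_diff({"a": {"b": 1}}, {"a": {"b": 2}, "c": 3})))
    # -> $.a.b: 1 != 2
    #    $.c: unexpected key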
Source: RDO

Why can’t all cloud providers deliver adequate security?

IT security is a top priority for most CIOs. After all, gaps in infrastructure could leave their companies and customers vulnerable to attacks.
So when evaluating a cloud managed services provider, asking the right security questions can be critical in determining if the solution is a good fit. Choosing a cloud solution that meets a company’s unique requirements can help reduce operational costs and drive innovation while enhancing security.
With this in mind, our IBM cloud security experts highlight six questions in this short webcast that, when asked, can help you decide whether a cloud service provider can meet your security requirements.
A focus on security
Source: Redefining Connections: Insights from the Global C-Suite Study – The CIO Perspective, IBM Institute for Business Value, 2016
A recent study conducted by IBM found that 76 percent of CIOs consider IT security their biggest risk. It was far and away the top response.
To avoid potential problems, a cloud managed services provider should incorporate built-in security layers at every level, from the data center to the operating system, delivering a fully configured solution with industry-leading physical security and regular vulnerability scans performed by highly skilled specialists.
Questions to ask
When deciding whether a cloud managed services provider can meet your security requirements, start with these questions:
1. Who is responsible for security?
The answer may not be as obvious as you think.
Some cloud managed services providers might not take full responsibility for maintaining a security-rich environment for your data. After they provide the hardware, the security and compliance responsibilities could rest with you. Also, some providers may require an agreement stipulating that your company is responsible for anything you do on your system that might affect your “neighbors” on that same cloud infrastructure.
Choose a cloud managed services provider capable of taking full responsibility for the security of the infrastructure rather than placing the onus on your company or a third party.
Be certain that your data is managed with the same tools, standards and processes that the provider uses for its own systems. To avoid confusion that can lead to serious issues later on, make sure this division of responsibility is clearly defined in your agreement with the provider.
2. How do I know security is adequate?
Your cloud solution should be able to help you manage regulatory compliance standards. While some providers may use certifications as a way of demonstrating security, it’s important to know what you’re looking at. Some certifications may cover only certain services or locations.
Choose a cloud managed services provider that covers the security of the entire infrastructure as well as policies and procedures. The security section of the IBM Cloud Managed Services Comparison Guide includes a list of certifications you may want to look for when evaluating cloud providers.
3. What if something goes wrong?
Quick recovery after a disaster is crucial to your business operations. Failure to properly handle outages can lead to lost revenue, productivity challenges and a damaged reputation with your customers.
Choose a managed cloud hosting solution that includes offsite disaster recovery options to help you get back online quickly. Make sure your agreement includes production-level service level agreements (SLAs) and regular testing of emergency backup options.
To learn more about what to ask and listen for when deciding whether a cloud service provider can meet your security requirements, register for the webcast, “Six questions every CIO should ask about cloud security.”
Source: Thoughts on Cloud

Enterprise cloud strategy: Applications and data in a multi-cloud environment

Say you’ve decided to hedge your IT bets with a multi-cloud environment. That’s excellent. Except, what’s your applications and data strategy?
That’s not an idle question. The hard reality is that if you don’t coordinate your cloud environments, innovative applications will struggle to integrate with traditional systems. Cost management, security and compliance — like organizational swords of Damocles — will hover over your entire operation.
In working with clients who effectively manage multiple clouds, I see five key elements of applications and data strategy:
Data residency and locality
Data residency (sometimes called data sovereignty) defines where a company’s data physically resides, with rules for how it’s handled and transferred, including backup and disaster recovery scenarios. It’s often governed by countries or regions such as the European Union.
Data locality, on the other hand, determines how and where data should be stored for processing.
Taken together, data residency and locality affect your applications and your efforts to globally digitize more than anything else. Different cloud providers allow various levels of control over data placement. They also provide the tools to verify and ensure compliance with residency laws. In this regard, it’s crucial to have a common set of tools and processes.
Data backup and restoration across clouds are necessities. Your cloud services provider (CSP) must be able to handle this, telling you exactly where it places the data in its cloud. Likewise, you should know where the CSP stores copies of the data so you can replicate them to another location in case of a disaster or audit.
Security and compliance
You need a common set of security policies and implementations across your multi-cloud environment. This includes rules for identity management, authentication, vulnerability assessment, intrusion detection and other security areas.
In an environment with high compliance requirements, customer-managed encryption keys are also essential. You should pay attention to how and where they’re stored, as well as who has access to decrypted data, particularly CSP personnel.
Additionally, your CSP’s platform capabilities must securely manage infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), software-as-a-service (SaaS), business-process-as-a-service (BPaaS) and database-as-a-service (DBaaS) deployment models.
Also, the CSP’s cloud will invariably house multiple tenants. Data should be segregated from them with top-level access policies, segmentation and isolation.
Integration: APIs and API management
APIs are the connective tissue of your applications. They need effective lifecycle management across traditional, private cloud and public cloud applications.
Your CSP should provide an API lifecycle solution that includes “create, run, secure, manage” actions in a single offering. That solution should also have flexible deployment options — multi-cloud and on premises — managed from a single control pane. That gives you the ability to manage APIs, products, policies and users through one view across your cloud environments.
In assessing a CSP, it’s also worth knowing whether it can integrate PaaS services through a common gateway. Its API platform should be a distributed model to implement security and traffic policies, as well as proactively monitor APIs.
Portability and migration
When taking applications to a multi-cloud environment, you must choose among three migration models. You can “lift and shift” with a direct port of the application to the cloud, perform a full refactor that completely customizes the application, or choose a partial refactor in which only parts of the application are customized.
A lot rides on your CSP’s ability to support these models. Since legacy applications depend on infrastructure resiliency to satisfy uptime requirements, you may not be able to fit them to the CSP’s deployment standards. In fact, such modifications may delay cloud benefits. To address this problem, consider containers for new applications deployed across your different clouds.
Some enterprises tackle migration by installing a physical appliance on the CSP’s premises or in co-located facilities, then integrating it with their applications. If you go this route, understand what the options are, particularly the technical limits with respect to data volumes, scale and latency.
New applications and tooling
To ensure efficiency, operations such as building, testing, and deploying applications should be linked together to create a continuous integration/continuous deployment (CI/CD) pipeline. These toolchains often require customization when they are part of a multi-cloud environment. One common error: new applications are designed for scalability in public cloud IaaS or PaaS scenarios, but their performance service-level agreements (SLAs) are not addressed early enough in the cycle. Understanding your CSP’s SLAs, along with designing and testing for performance, is crucial for successful global deployment.
For more information, read IBM Optimizes Multicloud Strategies for Enterprise Digital Transformation.
Source: Thoughts on Cloud

What, exactly, is the cloud?

Just what is the cloud, and why should anyone care?
As PC Mag defines it, cloud computing is the access of online material from a proverbial “cloud” of information rather than data retrieval from hard drives or other physical memory, such as flash drives or discs.
Nevertheless, the cloud is merely a concept. It’s important to remember that information accessed via the cloud is still stored on a physical server that is susceptible to moisture, hacking, and even overheating, just like a personal hard drive. The only difference is that cloud servers are usually maintained and managed by companies and professionals at remote and secure locations, colloquially referred to as “server farms.”
At the risk of oversimplifying its impact, cloud computing affects everyone in the world, every day, and at all times. Its importance stems not only from the security and convenience it provides for companies with highly valuable data, but also from the personal services it makes increasingly simple for everyday users. Email access, video streaming, messaging, even online office software that updates in real time as a team simultaneously works on it: it’s all technically part of what we consider the cloud.
Of course, while the business-to-business, security-based services that companies such as IBM offer to large-scale clients may still be what first comes to mind when considering cloud computing, that’s not the final frontier. Cloud computing as a whole is more about an ease of access to information that was not previously possible in the era of physical drives. There’s still a long way to go before we see cloud computing integrated completely into society. However, it’s a bit misleading to assert that the cloud is the wave of the future. Frankly, the cloud is what’s happening in tech right now.
An entire industry has risen, seemingly overnight, to cater to the data-driven needs of corporations and consumers alike. It’s something to sit up and take note of.
IBM is a great example of a company that is promoting cloud-based services on a wide scale. Hosting and cloud-based services are at the forefront of the company’s offerings, and the way it communicates about its services shows a distinct strategy.
IBM does not claim to be the biggest or cheapest provider of cloud-based services — the McDonald’s of cloud computing, if you will — where each consumer is offered the same array of cheap options. On the contrary, IBM is more focused on providing unique solutions and diverse product lines. This specific orientation in the marketplace is what gives IBM a competitive advantage in cloud computing. More importantly, it’s what gives the company sustained success in web-based services.
Source: Thoughts on Cloud

What traditional enterprises should know about object storage

Object storage is often a tale of two cities.
If you run a cloud-savvy organization or develop cloud-native applications, you get it and you employ it. Think of the mobile applications we use each day: social, mobile, collaborative applications in which object storage is foundational. On the other hand, if you represent an enterprise or corporation, or are a “traditional” (whatever that means) developer, object storage is perceived as something new.
In a recent report from Gartner, one of the key recommendations is to “train developers on best practices related to application design and the operational considerations relevant to an object storage system.”
The opportunity is there. As an enterprise embraces object storage skills, techniques and perspective, a key piece of the innovation puzzle falls into place. Other pieces include, but are not limited to, cloud platform and cognitive services. With just those three ingredients, you can easily come up with exciting solutions like the ones that my colleagues Suresh Jasrasaria and Andrew Trice share here.
It doesn’t take much imagination to see how this trio of object storage, cognitive and platform services can be applied to other applications across life sciences, media and entertainment, energy, gaming, virtual reality, commerce, and other industries.
Yet for some, seeing is still believing. That’s why there is a free tier of object storage services available through the Bluemix platform. For 12 months, users of the standard, cross-region service (which is available in the US but can be activated anywhere in the world) get the first 25 gigabytes, 20,000 GET requests and 2,000 PUT requests they use each month at no charge.
For those in the world of “traditional” development, those 12 months could be a time of great discovery. A major object storage puzzle piece may click into place.
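For a first experiment against that free tier, a handful of lines is enough. The sketch below assumes the S3-compatible flavor of the service and uses the boto3 client; the endpoint URL, bucket name, and credentials are placeholders, so check the Bluemix object storage docs for the real values:

    # Minimal PUT then GET against an S3-compatible object storage endpoint.
    # Endpoint, bucket, and keys below are placeholders, not real Bluemix values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstorage.example.com",  # placeholder endpoint
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello, object storage")
    obj = s3.get_object(Bucket="my-bucket", Key="hello.txt")
    print(obj["Body"].read().decode())  # prints: hello, object storage

Each such call counts against the monthly PUT and GET allowances described above.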
Learn more about IBM Object Storage.
Source: Thoughts on Cloud

American Airlines soars into cloud with IBM

American Airlines, the largest passenger air carrier in North America, announced this week that it has chosen IBM as a cloud provider.
Specifically, the airline intends to move some of its applications to the cloud and make use of IBM infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) tools. The agreement also means American will have access to the 50 IBM data centers around the world, the Bluemix development platform and analytics capabilities.
Patrick Grubbs, vice president of travel and transportation at IBM Cloud, said the work between IBM and American will include “customer facing systems as well as back office.”
American has been looking to integrate and streamline its systems since merging with US Airways in 2013.
Robert LeBlanc, senior vice president of IBM Cloud, said, “This partnership is about delivering the flexibility to American Airlines’ business, allowing them to enhance their customer relationships and further gain a competitive advantage.”
The new IBM agreement with American Airlines extends a longstanding relationship. In the 1950s, the two companies collaborated to create the first-ever electronic reservation and ticketing system in the air travel industry.
For more, check out ZDNet’s full story.
Source: Thoughts on Cloud

Mirantis Launches First Vendor-Agnostic Kubernetes and Docker Certification

Company also adds self-paced training course to Kubernetes and Docker training offerings

SUNNYVALE, CA – Dec. 1, 2016 – Mirantis today launched the first vendor-agnostic Kubernetes and Docker certification, giving enterprises a way to identify container skills in a competitive cloud market. Professionals preparing for the certification are encouraged to take the Kubernetes and Docker bootcamp. The company also announced a new online, self-paced KD100 training for self-learners looking for economy pricing and additional flexibility.

Cloud skills have progressed from niche to mainstream to become the world’s most in-demand skill set. LinkedIn named cloud computing the hottest skill in demand in France, India, and the United States in 2015. Within cloud computing, Kubernetes and containers have grown in popularity. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

“As interest in Kubernetes and containers gains momentum across the industry, Mirantis felt it vital to add a true vendor-agnostic certification for Kubernetes and Docker,” said Lee Xie, Sr. Director, Educational Services, Mirantis. “Mirantis offers several formats to train professionals on the automated deployment, scaling, management, and running of container applications. This provides maximum flexibility to prepare for the KDC100 certification exam.”

Pricing and Availability

The proctored Kubernetes and Docker certification (KDC100) is a hands-on, 30-task exam priced at $600. This includes a certificate, a listing on Mirantis’ verification portal for prospective employers, and certification signature logos for those who pass the exam. The first session is scheduled for December 29 in Sunnyvale, California, with an attached virtual session. For those interested in a packaged offering, the KD110 bundle includes the KD100 bootcamp and the KDC100 exam for $2,395. The KD100 bootcamp, available in classroom and live virtual formats, is the official recommended training for the KDC100 certification exam.

Mirantis Online Training

The company also announced a new online, self-paced KD100 training. The online course will include one-year access to the KD100 course content and videos, 72 hours of online hands-on labs, as well as a completion certificate that will be provided upon finishing the class. The new class is coming in January 2017. For a limited time, it will be available for preregistration at the discounted price of $195 (regularly $395).

“This [KD100] class has given me the confidence to say I understand the technology behind Docker and Kubernetes. It also provided me with a lot of use cases that I will be able to use from my perspective as a CIO of a large web hosting company,” said Nickola Naous, chief information officer, TMDHosting, Inc.

For more information on these and other Mirantis training courses, visit: https://training.mirantis.com/.

About Mirantis

Mirantis helps top enterprises build and manage private cloud infrastructure using OpenStack and related open source technologies. The company is the top contributor of open source code to the OpenStack project and follows a build-operate-transfer model to deliver its OpenStack distribution and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest OpenStack clouds in the world. Its customers include iconic brands like AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.


Contact information:

Sarah Bennett

Mirantis PR Manager

sbennett@mirantis.com
Source: Mirantis