[Podcast] PodCTL #12 – Introduction to CRI-O

While the industry has standardized on Kubernetes for container orchestration, there is still quite a bit of choice and innovation happening around container standards. In this week’s episode, we talk with Dan Walsh (@rhatdan, Consulting Engineer at Red Hat, container team lead) and Mrunal Patel (@mrunalp, Principal Engineer at Red Hat, OCI/runc […]
Source: OpenShift

New data center solutions help simplify public cloud migration and speed private cloud deployment

Among the bevy of new, modernized data center solutions unveiled by IBM this week are two designed to help organizations deploy public clouds more quickly and migrate data to and from the IBM Public Cloud with greater ease.
Spectrum Virtualize not only enables simplified migration to the public cloud, it also helps with disaster recovery. Its intent is to “support mirroring between on-premises and cloud data centers or between cloud data centers.”
On the private cloud side, IBM Spectrum Access helps storage admins deploy private clouds quickly by “delivering the economics and simplicity of the cloud with accessibility, virtualization and performance of an on-premises implementation.”
Among the other new data center solutions was a cloud-based software beta program that integrates storage with artificial intelligence and machine learning to make storage infrastructure more efficient and healthier. IBM also unveiled a new, even denser FlashSystem array that can lower data costs by almost 60 percent.
Find out more about the new IBM data center solutions in TechRepublic’s full article.
The post New data center solutions help simplify public cloud migration and speed private cloud deployment appeared first on Cloud computing news.
Source: Thoughts on Cloud

Your guide to the best of Dreamforce 2017

Dreamforce 2017 is officially sold out. Everyone is heading to San Francisco in November. Now what? We’re glad you asked! This year, IBM is the top sponsor. That means we have tons of perks to pass on to you, plus a slew of sessions that will show you how to get the most from your Salesforce experience.
First off is the real reason you’re attending: the sessions! Here’s what the IBM team has to offer at Dreamforce 2017:

Good vs. bad chatbots: How to build effective intelligent apps
The state of Salesforce: AI & insights for a return on intelligence
5 steps for achieving a metrics-driven, human-centered approach to innovation
Unleash your data: Unlock the full potential of Salesforce
Weather-based decision making: The future of sales and service
Creating customers for life: Digital service transformation at Telus

Since you’re getting the best of IBM Cloud Integration, the Weather Company and Bluewolf, these sessions will feature multi-dimensional methodologies for streamlining your integrations with the click of a mouse. You’ll also receive insights for creating memorable customer experiences and discover new ways to integrate the Salesforce platform with data from external systems like SAP and MDM systems.
Speaking of those exemplary customer service experiences, don’t miss our super session on using Watson Virtual Assistant, IBM Cloud Integration and Salesforce to build the kind of interactions that make your customer service into a true business differentiator. IBM Cloud Integration supports engaging omni-channel experiences, including everything from AI-led conversations with your customers through chatbots to delivering data to Salesforce Analytics to ensure that your customer service reps are informed with all the latest context.
By the way, you’ll find out even more about how Cloud Integration forges connections among multiple clouds and on-premises systems in our breakout session on Monday. Don’t miss it!
Please be sure to stop by our booth – you can’t miss us – for demos, chats with experts and a deep dive into IBM Integration for Salesforce.
But, of course, the conference isn’t all work and no play! IBM will be hosting premier parties, as well as VIP encounters, exclusive roundtables and summits.
Visit the event page to learn more. We can’t wait to see you there!
You’ll also want to visit the IBM website to learn more about Cloud Integration for Salesforce.
The post Your guide to the best of Dreamforce 2017 appeared first on Cloud computing news.
Source: Thoughts on Cloud

CentOS Dojo @ CERN

Hi,

Alan, Matthias, Rich and I were at CERN last week, on Thursday and Friday, to attend the CentOS Dojo.
Rich also wrote a series of blog posts about the dojo.

First day: CentOS SIGs meetup

Thursday was dedicated to a SIGs meeting. I’ll give a few highlights, but you can read the full notes on his etherpad.

We managed to agree on a proposal to allow bot accounts for SIGs, which addresses one of RDO’s current pain points.
There was also progress on improving CI for SIG content, like defining a dependency matrix between SIGs to trigger tests.
Testing against CentOS Extras is also an issue. SIGs were advised to provide automated tests that CentOS QA can run and send feedback on (not blocking updates, but still an improvement), thanks to the t_functional framework.
There were many discussions around the package build workflow (signing, embargoed builds, deprecating content).
SIG process: what happens when a chair is MIA? (This happened for the Storage SIG.)
That was a very productive and focused session; we even managed not to run over schedule. Defining a proper agenda ahead of time helped.

At the end of the day, we had a tour of the datacenter (to see and touch the nodes that run RDO <3). Then, we visited the ATLAS experiment facility.

Second day: CentOS dojo

Friday was the dojo itself (see the schedule with slides attached!). We had about 100 people registered, with roughly 20 not showing up. It started with Belmiro Moreira’s talk about the OpenStack infrastructure at CERN. It is amazing to see that their RDO cloud runs on over 279k cores and has been updated to Pike. It was followed by a talk from Hervé Rousseau about CERN’s storage facilities and the challenges they are facing (a data deluge in 2026!). They are big users of Ceph and CephFS.

Afterwards, we had SIG status updates from Storage, Opstools (mrunge) and Cloud (myself). It seems the attendees were happy to discover Opstools in a new light; Matthias had many questions after his talk.
For my Cloud SIG talk (slides), I collected many stats to show the vitality of our community. I would like to thank boucher and the Software Factory team for the RepoXplorer project; it was really helpful for gathering the stats.
Then, I spoke about our contributions to cross-SIG collaboration, including amoralej’s proposal for a Ceph build pipeline inspired by ours.
I ended with our own infrastructure, showing off DLRN, WeIRDO, etc.
The day ended with a talk from kwizart (RPMFusion maintainer) about CentOS and third-party repositories.

The hallway track was also interesting, as I got to meet the Magnum PTL and the other folks maintaining it at CERN. I finally got feedback that the Magnum packaging works fine, and we spoke about adding RDO third-party CI to Magnum. We don’t ship Magnum in OSP, but it is a visible project used by RDO’s biggest use case, so helping them set it up is excellent news for RDO.

Conclusion

This was an excellent event, where SIGs were able to focus on solving our current pain points. As a community, RDO does value our collaboration with CentOS to provide a native and rock-solid experience of OpenStack, from the kernel to the API endpoints!
Source: RDO

Container Management with CloudForms – Financial Management

This blog is part 5 of our series on Container Management with CloudForms.
 
In this last post, we focus on the financial management of container environments, both for chargeback and for optimizing infrastructure resource usage and spending.

 
In these dynamic and expanding environments, it is very important to know what type of environment (public, private, etc.) a particular workload runs in and how many resources it is consuming, and, if desired, to charge the associated cost back to the user.
Similarly, we also want to know how much the infrastructure itself is actually costing us.
CloudForms gives you the ability to define cost models not only at the infrastructure layer but also at the container layer, across virtual infrastructure, private clouds and multiple public clouds. The infrastructure model gives us insight into the cost of operating the infrastructure, and the container model allows us to pass those costs on to the users.
The chargeback models in CloudForms are very flexible. For example, we can define and attach usage costs to CPU, memory, disk and network at the infrastructure level. We can also specify and report costs per project or per image for the container workloads. Multiple currencies are supported.
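To make the chargeback idea concrete, here is a minimal sketch, in plain Python rather than any actual CloudForms API, of how a rate card covering CPU, memory, disk and network might be applied to a project’s metered usage. All metric names and rates below are hypothetical, not CloudForms defaults.

```python
# Illustrative only: CloudForms configures chargeback rates through its own
# UI/API; the metric names and rates here are invented for the example.

# Hypothetical rate card: cost per unit of average usage, per hour (USD).
RATES = {
    "cpu_cores":  0.03,    # per allocated vCPU, per hour
    "memory_gb":  0.01,    # per GB of memory, per hour
    "disk_gb":    0.0005,  # per GB of storage, per hour
    "network_gb": 0.02,    # per GB transferred per hour (average rate)
}

def chargeback(usage, hours):
    """Total charge for one project's average metered usage over `hours`."""
    total = sum(usage.get(metric, 0.0) * rate * hours
                for metric, rate in RATES.items())
    return round(total, 2)

# Example: one project's average usage over a 720-hour month.
project_usage = {"cpu_cores": 4, "memory_gb": 8, "disk_gb": 100, "network_gb": 0.5}
print(chargeback(project_usage, 720))  # -> 187.2
```

In the real product, rates like these are attached to chargeback rate definitions and reports then aggregate the charges per project or per image; the sketch only shows the underlying arithmetic.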
 
The following video demonstration highlights these capabilities in CloudForms:

Resource Consumption & Reporting
Showback & Chargeback

 

 
Conclusion
 
This concludes our series of blog posts on managing containers at scale with Red Hat CloudForms. As we have seen, we can use CloudForms to address all the key challenges of running containers at scale, in the areas of operational efficiency, service health, security and compliance, and financial management.
We invite you to try CloudForms in your environment. One of the easiest ways is to instantiate the CloudForms container image directly from the Red Hat Container Catalog and get started managing your OpenShift container environment.
Furthermore, additional resources on Red Hat Management can be found on Accelerate I.T. services delivery and Automate I.T. Processes.
Source: CloudForms

Can managed cloud services accelerate your SAP/HANA deployment?

Data analytics, artificial intelligence, Internet of Things (IoT) and similar technologies are helping all sorts of companies transform their digital presence. Lots of companies want to join the fun.
Moving to SAP/HANA could help: it enables lines of business to analyze data in new ways and apply what they learn to help close sales and shorten the sales cycle. For many businesses, moving to HANA helps accelerate business process improvements that drive value. Real-time analytics enable changes that can improve customer service or speed time to market for new services.
But SAP/HANA deployments come with challenges. You may already have major investments sunk into SAP licenses for your CRM deployment that may not transfer to a HANA installation. Plus, with a new platform comes a steep learning curve. How can you change your business processes to best use HANA’s capabilities and realize a strong return on your investment? Will HANA really provide the benefits it promises?
Many IT leaders face the same questions and dilemmas when considering a move to the SAP/HANA platform:

26 percent of businesses surveyed by Frost & Sullivan have moved their SAP deployments to HANA.
44 percent are planning a move in the future.

Recognizing the benefits of HANA doesn’t help execute a migration, however. It is difficult for SAP users to move to HANA without help and support.
Managed cloud service providers can help bridge the gap between in-house expertise and the knowledge necessary to successfully move to the SAP/HANA platform. Among managed cloud users, 26 percent have moved to the HANA platform, versus only 9 percent of businesses not using managed cloud.
What can the right managed cloud provider bring to your business?

Critical expertise in planning and migrating your SAP/HANA deployment. Providers that have developed specific expertise in moving SAP installations onto the HANA platform are well-positioned to help your business plan and execute a HANA migration. Among managed cloud users, 49 percent used their provider for migration services; 82 percent found their managed cloud provider’s assistance extremely or very important to the success of their cloud initiative.
A plan to leverage the HANA platform. Many providers will help you create a transformation roadmap, showing how and when a HANA migration can provide the best returns for your business.
Application service-level agreements (SLAs). Provider SLAs ensure your critical SAP applications perform at the level your business requires, when it needs them.
Global capabilities. Providers with global managed service capabilities can ensure that your business-critical HANA databases are available around the globe, regardless of where your team needs to access them.
Security and regulatory compliance. Maintaining the security of the data in your SAP/HANA platform is also of highest importance. Since SAP applications typically involve sensitive customer or employee data, safeguarding that data is paramount. Data breaches or leaks can damage a business to the point of ruin, so being able to turn to a provider who makes security a prime focus offers a level of comfort that a do-it-yourself approach simply doesn’t offer. Keeping data compliant with applicable regulations is also key, and a managed services provider can help with oversight and auditing to help ensure you follow regulations applicable in your region and industry.
Routine application management. By offloading your SAP/HANA workloads to a managed cloud provider, you not only augment your in-house expertise, but also free your own team for more innovative projects that can drive productivity and service delivery for your business.

The move to SAP/HANA can be key to a business’s success, but difficult to plan and implement. Many enterprises today are turning to the services of a managed cloud provider, which can help plan a successful HANA migration, provide daily management capabilities, and offer a roadmap for how best to use HANA to drive results for your business.
Providers such as IBM, with deep expertise, global capabilities, and secure and compliant cloud data centers, are well-positioned to help you achieve strong business results from your SAP/HANA deployment.
For more on how you can transform your SAP/HANA deployment with cloud managed services, download the Frost & Sullivan report.
The post Can managed cloud services accelerate your SAP/HANA deployment? appeared first on Cloud computing news.
Source: Thoughts on Cloud