Project Teams Gathering interviews

Several weeks ago I attended the Project Teams Gathering (PTG) in Denver, and conducted a number of interviews with project teams and a few of the PTLs (Project Team Leads).

These interviews are now all up on the RDO YouTube channel. Please subscribe, as I’ll be doing more interviews like this at OpenStack Summit in Sydney, as well as at future events.

I want to draw particular attention to my interview with the Swift crew about how they collaborate across company lines and across timezones. Very inspiring.

Watch all the videos now.
Source: RDO

Spark disruption at the IBM Cloud and Cognitive Summits

Too often, disruption is cast in a negative light.
Global enterprises spend billions every year scrambling to find new ways to maintain the status quo, but disruption shouldn’t be feared. If those same enterprises increased the amount of time and energy they spent on creating their own game-changing innovations, they’d probably be more prepared to keep competitors at bay while pioneering in entirely new markets.
Take data science as an example. Data science is no longer an isolated, standalone job. It’s being democratized, and the tools are becoming more accessible and easier to use. Data science experts will continue to exist, but the skill of data science is becoming a standard requirement for more and more roles. That means every team, no matter how big or small, should be basing their decisions on quantifiable insights. They need to break out of what’s comfortable and embrace the new paradigm.
However hawkish it may seem, businesses must be aggressive to succeed. In that spirit, IBM is offering forward-thinking clients, prospects, and IBM Business Partners a sneak peek into upcoming offerings for data, Watson, cloud, Power Systems and Internet of Things (IoT).
Attendees at the IBM Cloud and Cognitive Summits in New York and Dallas can discover how to unleash their disruptive potential by reshaping IT, accelerating data intelligence, and creating smarter apps and services. The New York summit is 1 – 2 November 2017, and the Dallas summit is 13 – 14 November 2017.
In just two days, attendees will walk away with everything they need to disrupt not only the competition, but entire industries. Highlights include:

Leadership sessions in which attendees will work one-on-one with key IBM executives and explore cross-industry success stories.
Educational lightning talks and panels with an emphasis on leveraging IBM technology for disruption.
Strategy sessions aiming to solve tough challenges with focused Design Thinking.
Interactive demos and hands-on activations showcasing the latest IBM offerings.
Networking events where attendees will meet with industry leaders, IBM Fellows, Distinguished Engineers, and other high-level decision makers.

Learn more about the IBM Cloud and Cognitive Summits in New York and Dallas, and register before seats are gone.
The post Spark disruption at the IBM Cloud and Cognitive Summits appeared first on Cloud computing news.
Source: Thoughts on Cloud

Container Management with CloudForms – Operational Efficiency

This blog is part 2 of our series on Container Management with CloudForms.
 
In this blog, we look at how the operations team can manage container environments and ensure that workloads run securely and efficiently. This includes the containers themselves, but also the underlying infrastructure. Operators need to ensure that resources at all layers of the stack are optimized to provide the highest level of service for the container workload.

The first step in managing a containerized environment is knowing what containers and resources are available. When containers and nodes can come and go at any time, how can we maintain an accurate view of what is inside a container-based infrastructure?
 
Similarly, we also want to know where those resources are located and how they relate to each other across the different layers of the infrastructure stack. This knowledge is key to managing and troubleshooting efficiently.
 
And finally, how can we automate common tasks, for example by triggering actions when certain conditions occur? Examples include flagging containers as out of compliance when a new container image is published, or automatically scaling the underlying infrastructure up or down when a resource threshold is met.
 
First, CloudForms continuously discovers the on-premises and public cloud compute, storage, and network infrastructure needed to support containerized applications. It also provides visibility into the container platform and its workload.
 
CloudForms connects to the various management APIs, such as the Kubernetes API provided by OpenShift, as well as the underlying infrastructure APIs. It discovers the inventory and continuously monitors for the addition of new resources.
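The discovery pattern described above can be sketched in a few lines. This is not CloudForms' actual implementation, just an illustration of the idea: poll a Kubernetes-style list API, diff the result against the known inventory, and report newly added resources. The sample payloads mimic the shape of a Kubernetes API response.

```python
# Minimal sketch of continuous inventory discovery against a
# Kubernetes-style API (illustrative only, not CloudForms code).

def discover(api_response):
    """Extract (kind, namespace, name) tuples from a Kubernetes-style list response."""
    return {
        (item["kind"], item["metadata"].get("namespace", ""), item["metadata"]["name"])
        for item in api_response["items"]
    }

def find_new_resources(known, api_response):
    """Return resources present in the latest poll but not yet in the inventory."""
    return discover(api_response) - known

# Example: trimmed-down pod lists as the Kubernetes API would return them.
poll_1 = {"items": [
    {"kind": "Pod", "metadata": {"namespace": "app", "name": "web-1"}},
]}
poll_2 = {"items": [
    {"kind": "Pod", "metadata": {"namespace": "app", "name": "web-1"}},
    {"kind": "Pod", "metadata": {"namespace": "app", "name": "web-2"}},
]}

inventory = discover(poll_1)
added = find_new_resources(inventory, poll_2)
print(added)  # {('Pod', 'app', 'web-2')}
```

In a real system the poll would be replaced by a watch on the API so additions arrive as events rather than on a timer.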
 
CloudForms visualizes the discovered resources (e.g. containers, nodes, underlying virtual/physical infrastructure or cloud), and shows their relationships. This way, we can see that a certain application runs on a certain node, which is actually a virtual machine that runs in a certain resource pool. This information is key to making the environment supportable, as we know accurately the resources used by each of our container workloads across the entire stack.
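The cross-stack relationship lookup described above amounts to walking a parent chain. The following is an illustrative sketch (not CloudForms code, and the resource names are invented): each resource records what it runs on, so we can trace a container down through its node and virtual machine to the resource pool.

```python
# Illustrative parent-chain walk from a container to the resource pool
# it ultimately runs in (hypothetical resource names).

parents = {
    "container:web-1": "node:node-a",
    "node:node-a": "vm:vm-07",       # the node is actually a virtual machine
    "vm:vm-07": "pool:prod-pool",    # the VM runs in a resource pool
}

def stack_for(resource):
    """Return the chain of resources beneath `resource`, top to bottom."""
    chain = [resource]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

print(stack_for("container:web-1"))
# ['container:web-1', 'node:node-a', 'vm:vm-07', 'pool:prod-pool']
```

This is exactly the view that makes troubleshooting tractable: a slow container can be traced to a contended resource pool without manually correlating inventories from separate tools.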
 
On the automation side, CloudForms allows you to define control policies that listen to events in real time and can kick off automation if certain conditions are met. Automation can be as simple as alerting a team by email, generating a new incident in your service helpdesk application, or as complex as managing the elasticity of your underlying container based infrastructure.
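The control-policy pattern described above can be sketched as follows. The event shape, policy fields, and the `email_team` action are all hypothetical, chosen only to illustrate the listen/evaluate/act loop, not CloudForms' actual policy API.

```python
# Hypothetical sketch of event-driven control policies: match an incoming
# event against registered policies and run the action when the condition holds.

alerts = []

def email_team(event):
    # Stand-in for a real notification (email, helpdesk ticket, etc.).
    alerts.append(f"ALERT: {event['type']} on {event['target']}")

policies = [
    {
        "event": "container_image_published",
        "condition": lambda e: not e.get("scanned", False),  # e.g. unscanned image
        "action": email_team,
    },
]

def handle_event(event):
    for policy in policies:
        if policy["event"] == event["type"] and policy["condition"](event):
            policy["action"](event)

handle_event({"type": "container_image_published",
              "target": "registry/app:2.0", "scanned": False})
print(alerts)  # ['ALERT: container_image_published on registry/app:2.0']
```

Swapping `email_team` for a function that calls an infrastructure API is what turns the same loop into the elasticity automation mentioned above.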
 
You can also extend the CloudForms user interface (UI) to allow operators to trigger custom tasks. This can be used to expose day-2 management operations on both the containers and their infrastructure.
 
The following video demonstration highlights these capabilities in CloudForms:

Visibility & Inventory
Resource Relationships
Troubleshooting (across the stack)
Task Automation

 

Source: CloudForms

A look inside hybrid cloud design

User experience and design are integral to hybrid cloud products at IBM.
Designers work hard to deliver the best user experiences using the Design Thinking process that IBM Design works to instill across the entire company. However, it can be difficult for those outside the design industry to really understand what designers do and how they impact the product overall.
IBM Design has built a distinctive culture that designers then bring to all teams across the company, using their own methods, tools, and approaches to create the best user experiences for their products. For a look into how design teams at IBM Cloud operate, read the article below and follow the Bluemix Availability Monitoring design team as they give insight into what it means to be a designer at IBM, as well as how they bring that culture to the team at IBM Cloud.
Read more at Medium.
The post A look inside hybrid cloud design appeared first on Cloud computing news.
Source: Thoughts on Cloud