Final Speakers Announced for OpenShift Commons Gathering at Red Hat Summit

The OpenShift Commons Gathering at Red Hat Summit will focus on presenting real-world use cases from customers with production deployments of OpenShift. The speakers will share best practices and give direct feedback to the engineers, product managers and key contributors working on the next releases of OpenShift and its upstream projects, Kubernetes and Docker. We’ve also included three “Ask Me Anything” panels with key engineers and product managers from the OpenShift team to give attendees the opportunity to connect, interact, and ask questions. These direct peer-to-peer interactions are helping to make OpenShift the leading open source container platform.
Source: OpenShift

Technical Project Manager

The post Technical Project Manager appeared first on Mirantis | Pure Play Open Cloud.
What Linux was to open source and operating systems, OpenStack is to cloud infrastructure. It makes programmable infrastructure vendor-neutral and frictionless to access, not to mention that it unlocks distributed applications and accelerates innovation. It transforms virtualization from an efficiency play into a whole new compute paradigm.
Mirantis is looking for an energetic Technical Project Manager for OpenStack projects, who will be working on the implementation of OpenStack-based clouds for multiple customers. If you’re an ambitious Technical PM and thrive on getting tough, real-world problems solved with a smart, motivated team, you want to work here.
Your role:
Work closely with our customers to define how the implementation of a private cloud will increase their business agility;
Work daily with project stakeholders, architects and the offshore engineering team to deliver;
Mitigate risks to make sure that the current deliverables will help achieve strategic goals;
Provide complete visibility on the project through reporting and status meetings with senior management;
Define KPIs and collect metrics that show the picture of project execution;
Actively participate in the open source community, including contributions based on deliverables made during the projects.
Your profile:
Excellent communicator with a focus on achieving the customer’s business goals;
Reasonably technical, with an understanding of the cloud computing domain (primarily the IaaS and PaaS space);
Background in infrastructure, networking, data center management or related fields;
Hands-on experience with Linux and virtualization;
Detail-oriented, with the ability to track project execution;
Familiar with management practices and development methodologies, but able to adjust the process to what will work to get things done;
Understand the challenges of working in a distributed environment with multiple teams working on different problems;
Understand how to navigate the complex structure of enterprise organizations to get things done.
What we offer:
Work with exceptionally passionate, talented and engaging colleagues;
High-energy atmosphere;
Competitive compensation package and strong benefits plan;
Flexible working hours;
Lots of freedom for creativity and personal growth.
Source: Mirantis

Watson identifies the best shots at the Masters

Golf fans know great shots when they see them. And now, Watson does, too.
For this year’s Masters Tournament, IBM — which has a long history with the Masters as a technology partner — is making use of Watson’s Cognitive Highlights capability to find those memorable moments and spotlight them at Augusta National Golf Club and in cloud-based streaming video apps. It’s a first for sporting events.
“This year, they really wanted to take the Masters’ digital projects to a new level, so we began thinking about how we can have an immersive video experience and what would make that space even more impressive,” said John Kent, program manager for the IBM worldwide sports and entertainment partnership group. “That’s how Watson became involved.”
The Watson Cognitive Highlights technology uses factors including player gestures and crowd noise to pinpoint shots worthy of replay.
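The scoring model itself is not public, but the idea of fusing several excitement signals into one ranking can be sketched in a few lines. The factor names, weights, and example values below are purely illustrative assumptions, not IBM's actual model:

```python
# Hypothetical sketch: combining excitement signals into a highlight score.
# Watson Cognitive Highlights' real factors and weights are not published;
# everything here is an illustrative assumption.

def highlight_score(crowd_noise, gesture_intensity, commentary_excitement,
                    weights=(0.5, 0.3, 0.2)):
    """Weighted sum of normalized excitement signals, each in [0, 1]."""
    signals = (crowd_noise, gesture_intensity, commentary_excitement)
    return sum(w * s for w, s in zip(weights, signals))

# Rank two candidate shots by their combined score.
shots = {
    "hole-in-one on 16": highlight_score(0.95, 0.9, 0.98),
    "routine par putt": highlight_score(0.2, 0.1, 0.15),
}
best = max(shots, key=shots.get)
```

A real system would first derive each signal from raw audio and video analysis and normalize it before combining.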
For more, check out ZDNet’s full article.
The post Watson identifies the best shots at the Masters appeared first on news.
Source: Thoughts on Cloud

Mirantis Cloud Platform: Stop wandering in the desert

The post Mirantis Cloud Platform: Stop wandering in the desert appeared first on Mirantis | Pure Play Open Cloud.
There’s no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we’re not immune to it.
In fact, some would say that we’re part of that turmoil. Well, we are in the middle of a sea change in how we handle cloud deployments, moving from a model in which we focused on deploying OpenStack to one in which we focus on achieving outcomes for our customers.
And then there’s the fact that we are changing the architecture of our technology.
It’s true. Over the past few months, we have been moving from Mirantis OpenStack to Mirantis Cloud Platform (MCP), but there’s no need to panic. While it may seem a little scary, we’re not moving away from OpenStack – rather, we are growing up and tackling the bigger picture, not just a part of it. In early installations with marquee customers, we’ve seen MCP provide a tremendous advantage in deployment and scale-out time. In just a few days, we will publicly launch MCP, and you will have our first visible signpost leading you out of the desert. We still have lots of work to do, but we’re convinced this is the right path for our industry to take, and we’re making great progress in that direction.
Where we started
To understand what’s going on here, it helps to have a firm grasp of where we started.
When I started here at Mirantis four years ago, we had one product, Mirantis Fuel, and it had one purpose: deploy OpenStack. Back then that was no easy feat. Even with a tool like Fuel, it could be a herculean task taking many days and lots of calls to people who knew more than I did.
Over the intervening years, we came to realize that we needed to take a bigger hand in OpenStack itself, and we produced Mirantis OpenStack, a set of hardened OpenStack packages.  We also came to realize that deployment was only the beginning of the process; customers needed Lifecycle Management.
The Big Tent
And so Fuel grew. And grew. And grew. Finally, Fuel became so big that we felt we needed to involve the community even more than we already had, and we submitted Fuel to the Big Tent.
Here Fuel has thrived, and does an awesome job of deploying OpenStack, and a decent job at lifecycle management.
But it’s not enough.
Basically, when you come right down to it, OpenStack is nothing more than a big, complicated, distributed application. Sure, it’s a big, complicated, distributed application that deploys a cloud platform, but it’s still a big, complicated, distributed application.
And let’s face it: deploying and managing big, complicated, distributed applications is a solved problem.
The Mirantis Cloud Platform architecture
So let’s look at what this means in practice. The most important thing to understand is that where Mirantis OpenStack was focused on deployment, MCP is focused on the operations tasks you need to worry about after that deployment. MCP means:

A single cloud that runs VMs, containers, and bare metal with rich Software Defined Networking (SDN) and Software Defined Storage (SDS) functionality
Flexible deployment and simplified operations and lifecycle management through a new DevOps tool called DriveTrain
Operations Support Services in the form of enhanced StackLight software, which also provides continuous monitoring to ensure compliance to strict availability SLAs

OK, so that list is a little less confusing than an architecture diagram, but there’s still a lot of “sales” speak in there.
Let’s get down to the nitty-gritty of what MCP means.
What Mirantis Cloud Platform really means
Let’s look at each of those things individually and see why it matters.
A multi-platform cloud
There was a time when you would have separate environments for each type of computing you wanted to do. High performance workloads ran on bare metal, virtual machines ran on OpenStack, containers (if you were using them at all) ran on their own dedicated clusters.
In the last few years, bare metal was brought into OpenStack, so that you could manage your physical machines the same way you managed your virtual ones.
Now Mirantis Cloud Platform brings in the last remaining piece. Your Kubernetes cluster is part of your cloud, enabling you to easily manage your container-based applications in the same environment and with the same tools as your traditional cloud resources.
All of this is made possible by the inclusion of powerful SDN and SDS components. Software Defined Networking for OpenStack is handled by OpenContrail, providing the benefits of commercial-grade networking without the lock-in, with Calico stepping in for the container environment. Storage takes the form of powerful open source Ceph clusters, which are used by both OpenStack and container applications.
These components enable MCP to provide an environment where all of these pieces work together seamlessly, so your cloud can be so much more than just OpenStack.
Knowing what’s happening under the covers
With all of these pieces, you need to know what’s happening, and what might happen next. To that end, Mirantis Cloud Platform includes an updated version of StackLight, which gives you a comprehensive view of how each component of your cloud is performing; if an application on a particular VM acts up, you can isolate the problem before it brings down the entire node.
What’s more, the StackLight Operations Support System analyzes the voluminous information it gets from your OpenStack cloud and can often let you know there’s trouble before it causes problems.
All of this enables you to ensure uptime for your users and compliance with SLAs.
Finally solving the operations dilemma
Perhaps the biggest change, however, is in the form of DriveTrain. DriveTrain is a combination of various open source projects, such as Gerrit and Jenkins for CI/CD and Salt for configuration management, enabling a powerful, flexible way for you to both deploy and manage your cloud.
Because let’s face it: the job of running a private cloud doesn’t end when you’ve spun up the cloud; it’s just begun.
Upgrading OpenStack has always been a nightmare, but DriveTrain is designed so that your cloud infrastructure software can always be up to date. Here’s how it works:
Mirantis continually monitors changes to OpenStack and other relevant projects, providing extensive testing and making sure that no errors get introduced, in a process called “hardening”. Once we decide these changes are ready for general use, we release them into the DriveTrain CI/CD infrastructure.
Once changes hit the CI/CD infrastructure, you pull them down into a staging environment and decide when you’re ready to push them to production.
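The staged flow described above (hardening, then staging, then production) amounts to a series of gates that a change must pass in order. This is a conceptual toy model only; DriveTrain itself is built from Gerrit, Jenkins, and Salt, and the function and field names here are illustrative assumptions:

```python
# Toy model of a staged promotion pipeline: a change advances gate by
# gate and is held back at the first gate whose tests have not passed.
# This sketches the concept, not DriveTrain's actual implementation.

def promote(change, stages=("hardening", "staging", "production")):
    """Return the list of stages the change has cleared, in order."""
    cleared = []
    for stage in stages:
        if not change["tests"].get(stage, False):
            return cleared          # held back at this gate
        cleared.append(stage)
    return cleared

# A fully tested change flows all the way to production...
change = {"id": "ocata-update-42",
          "tests": {"hardening": True, "staging": True, "production": True}}
stages_passed = promote(change)

# ...while one that has only been hardened waits in staging.
held = {"id": "pending-change", "tests": {"hardening": True}}
held_at = promote(held)
```

The key property is that you, not the vendor, decide when a hardened change leaves staging for production.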
In other words, no more holding your breath every six months, or worse, running cloud software that’s a year old.
Where do you want to go?
OpenStack started with great promise, but in the last few years it’s become clear that the private cloud world is more than just one solution; it’s time for everyone, and that includes us here at Mirantis, to step up and embrace a future that includes virtual machines, bare metal and containers, but in a way that makes both technological and business sense.
Because at the end of the day, it’s all about outcomes; if your cloud doesn’t do what you want, or if you can’t manage it, or if you can’t keep it up to date, you need something better. We’ve been working hard at making MCP the solution that gets you where you want to be. Let us know how we can help get you there.
Source: Mirantis

Boosting Helm with AppController

The post Boosting Helm with AppController appeared first on Mirantis | Pure Play Open Cloud.
Helm is emerging as a standard for Kubernetes application packaging. While researching it, we discovered that its orchestration part can be improved. We did just that by injecting AppController right into the Helm orchestration engine. Check out our video from KubeCon EU to get insights into the advanced orchestration capabilities that AppController aims to introduce in Helm.

Source: Mirantis

2017 trend lines: When DevOps and hybrid collide

Author’s preface: Site Reliability Engineering (SRE) is a job function that brings DevOps into infrastructure in a powerful way. Talking about hybrid infrastructure and DevOps without mentioning SRE would be a major omission. For further reading about SRE thinking, explore this series of blog posts.
What happens when DevOps methods meet hybrid environments? Following are some emerging trends and my commentary on each.
There are two major casualties as the pace of innovation in IT continues to accelerate: manual processes (non-DevOps) and tightly-coupled software stacks (non-hybrid).
We are changing some things much too quickly for developers and operators to keep up using processes that require human intervention in routine activities like integrated testing or deployment. Furthermore, monolithic platforms—our traditional “duck-and-cover” protection from pace of change—are less attractive for numerous reasons, including slower pace, vendor lock-in and lack of choice.
The necessary complexity of hybrid development can make it harder to build robust, portable DevOps automation.
Necessary complexity? Yes, that’s 2017 in a nutshell. Traditionally, people consider hybrid to mean operating in split infrastructures, such as on-premises and cloud simultaneously. But the challenge of splitting operations is about much more than running two or more infrastructures. The reality of hybrid is that there are variations throughout the IT stack that force users to cope with hybrid issues, even without straddling clouds.
The pace of innovation guarantees that we will be constantly in a hybrid operations mode.
Robust, portable DevOps automation? We not only need it; automation will absolutely be required. DevOps is generally presented as a cultural and process transformation. But the consequence of that transformation is the need to automate processes to improve system performance.
Modern software development is built using massive collections of reusable modules and services—many open source—that developers carefully assemble into working applications. Since each component has its own release cycle and dependency graph, we must continuously integrate (CI) our applications to make sure that they continue to operate correctly.
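That per-component release cycle is exactly why integration must be continuous: a new upstream release ripples through the dependency graph, and everything downstream must be rebuilt and retested in order. A minimal sketch using Python's standard library (the module names are hypothetical examples):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Illustrative dependency graph: each key maps to the modules it depends on.
deps = {
    "app": {"auth-lib", "http-client"},
    "auth-lib": {"crypto"},
    "http-client": set(),
    "crypto": set(),
}

# When "crypto" ships a new release, everything downstream of it must be
# re-integrated and re-tested; a topological order gives a valid sequence
# in which dependencies are always rebuilt before their dependents.
order = list(TopologicalSorter(deps).static_order())
```

A CI system does this graph walk implicitly every time an upstream component releases, which is why manual integration cannot keep up.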
It’s foolish to manage your application stack as a manual integration problem. You must automate it.
A critical defense against integration challenges is to constantly patch and update your software. The faster and smaller your deployments, the more protection you have from the inevitable changes, from both external and internal sources, that cause issues. The practice of continuous deployment (CD) ensures that your development and operations teams can respond quickly, or better yet, automatically, to the inevitable issues in creating software. This means that your applications are more resilient and your teams collaborate on development-to-production pipelines.
The DevOps drive for CI and CD creates robust applications.
Hybrid creates a unique set of challenges for DevOps practices, thanks to the exploding complexity of choice. Change makes automation a moving target. I’ve been watching teams’ CI/CD pipelines grow from simple linear flows into Rube Goldberg machines that test multiple operating systems on multiple infrastructures.
These are not gratuitous tests. Teams have a very real need to manage their applications across a rapidly changing IT landscape. This creates a dilemma as they have to invest more and more time chasing issues created by hybrid requirements. But those requirements are not going away because they have commercial ROI and technical rationale.
Variation created by hybrid creates challenges for automation.
We must find a way to contain the work demanded by hybrid, but we cannot simply ignore the hybrid imperative. The clear answer for 2017 is to find good abstractions that protect teams from differences introduced by hybrid. The most popular abstraction, containers, is already revolutionizing workload portability by hiding many infrastructure details and providing the small delivery units desired by CI/CD pipelines.
We must invest in abstractions to help with hybrid DevOps because complexity is increasing.
We’ve clearly learned that DevOps automation pays back returns in agility and performance. Originally, small-batch, lean thinking was counter-intuitive. Now it’s time to make similar investments in hybrid automation so that we can leverage the most innovation available in IT today.
If you like these ideas, please subscribe to my blog RobHirschfeld.com where I explore SRE Ops, DevOps and open hybrid infrastructure challenges.
The post 2017 trend lines: When DevOps and hybrid collide appeared first on news.
Source: Thoughts on Cloud

Red Hat joins the DPDK Project

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.
“Open source communities continue to be a driving force behind technology innovation, and open networking and NFV are great examples of that. Red Hat believes deeply in the power of open source to help transform the telecommunications industry, enabling service providers to build next-generation efficient, flexible and agile networks,” said Chris Wright, Vice President and Chief Technologist, Office of Technology at Red Hat. “DPDK has played an important role in this network transformation, and our contributions to the DPDK community are aimed at helping to continue this innovation.”
DPDK, the Data Plane Development Kit, is a set of libraries and drivers that enable very fast processing of network packets by handling traffic in user space or on specialized hardware to provide greater throughput and processing performance. This ability is vital to getting the maximum performance out of network hardware under dataplane-intensive workloads. For this reason, DPDK has become key to the telecommunications industry as part of Network Functions Virtualization (NFV) infrastructure, enabling applications like wireless and wireline packet core, deep packet inspection, video streaming, and voice services.
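DPDK itself is a C library, but its central trick, poll-mode drivers that pull packets in bursts rather than taking one interrupt per packet, can be illustrated with a self-contained sketch. The function name only loosely echoes DPDK's burst-receive API; this is a toy model, not the real interface:

```python
# Conceptual illustration (not DPDK itself): poll-mode drivers amortize
# per-wakeup overhead by dequeuing packets in bursts, so 100 packets can
# be handled in a handful of polls instead of 100 interrupts.

def rx_burst(queue, burst_size=32):
    """Dequeue up to burst_size packets in a single poll."""
    batch = []
    for _ in range(min(burst_size, len(queue))):
        batch.append(queue.pop(0))
    return batch

queue = list(range(100))  # 100 pending "packets"
total = 0
polls = 0
while queue:
    pkts = rx_burst(queue)
    polls += 1
    total += len(pkts)
# 100 packets drained in 4 polls (32 + 32 + 32 + 4)
```

The real win in DPDK comes from doing this polling in user space on a dedicated core, avoiding kernel interrupt and context-switch costs entirely.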

Open source projects like DPDK have taken a leadership role in driving the transition to NFV and enabling technology innovation in the field of networking by accelerating the datapath for network traffic across virtual switching and routing infrastructure.
It is opportune that this move is announced during the Open Networking Summit, an event which celebrates the role of open source projects and open standards in the networking industry. DPDK is a critical component that enables projects like OPNFV, Open vSwitch and fd.io to provide the necessary performance to network operators.
Source: RedHat Stack

Using Clojure on OpenShift

I’ve been a Lisp guy since undergraduate days, and in the JVM phase of my career that has meant Clojure. Though it’s been many years since I coded as a day job, Clojure is my go-to for playing around to see how things work. This often makes for an extra bit of fun since there isn’t always a previously-blazed path for Clojure.
Source: OpenShift

NVIDIA Tesla P100 GPU: Be prepared for the AI revolution

We know you’ve been wanting faster and easier access to GPU computing for AI in the cloud, so today we are excited to be the first major cloud provider to globally offer the NVIDIA Tesla P100 GPU card, which is currently being revved up in our data centers and will be available in the coming weeks.
NVIDIA and IBM have been partnering since 2014 to bring you the latest GPU technology in the cloud, including being first to market with the NVIDIA Tesla K80 in 2015 and the Tesla M60 in 2016.
And now our customers can experience even more power to take on their AI and deep learning workloads with the P100s.
The landscape is changing and IBM Cloud and NVIDIA are at the forefront of the revolution.
Supercomputing was once something only large corporations could afford; by adding NVIDIA GPUs to the cloud, we are making it easily accessible to all.
We’re seeing deep learning and AI transition from traditionally research-oriented computation to workloads with virtually infinite computing needs. The advantages of running high-performance computing (HPC) in the cloud with NVIDIA GPUs span industries, today offering financial services, healthcare, and scientific research the ability to perform better and to calculate and analyze data faster. In fact, GPUs are now crossing over into business situations to attack business-oriented problems, which essentially helps customers of any size or type answer their most complex big data challenges.
Recently, IBM Cloud, together with MapD and Bitfusion, was able to scale up to 64 Tesla K80 GPUs across 32 servers to filter, query and aggregate a 40-billion-row data set in just 271 milliseconds. That’s a mind-blowing 147 billion rows per SECOND. Imagine the possibilities with the new P100.
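The arithmetic behind that figure is easy to verify:

```python
# Sanity check of the quoted throughput: 40 billion rows in 271 ms.
rows = 40e9        # 40-billion-row data set
seconds = 0.271    # 271 milliseconds
throughput = rows / seconds
billions_per_second = throughput / 1e9  # roughly 147.6 billion rows/second
```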
Watch how Bitfusion incorporated its software developed to manage deep learning and GPUs to help our customer MapD add GPUs to their cloud environment to accelerate their data analytics.

When you provision your bare metal server with two NVIDIA Tesla P100 GPU cards, you can see 50 times the performance of its predecessor, the Tesla K80. The addition of the accelerator can deliver up to 65 percent more machine learning capability, giving you higher throughput than traditional virtualized servers.
For those ready to start trying out GPUs in a cloud environment, we currently offer the NVIDIA Tesla M60 and K80, designed for high-performance acceleration of scientific computation and data analytics. The Tesla M60 (with NVIDIA GRID software), along with the GRID K2, is engineered for professional-grade virtualized graphics—all available on various pre-configured bare metal servers with hourly and monthly options.
Check out the configurations and the infographic.
A version of this article originally appeared on the IBM Bluemix blog.
The post NVIDIA Tesla P100 GPU: Be prepared for the AI revolution appeared first on news.
Source: Thoughts on Cloud

Bringing streaming video to 270 million soccer players

Broadcasting live video of games isn’t just for professional athletes anymore.
Thanks to online streaming platform Footters, which has adopted IBM Cloud Video for on-demand video content for soccer players, it’s quite the contrary.
The Spanish company aims to connect as many as 24 million soccer — or, as it’s called everywhere but the US, football — teams and 270 million players around the world with its platform designed to stream amateur matches and help professionals meet. As of now, Footters is working with 50 teams.
Footters worked with content design and creation firm 12Segundos to sort through all the streaming video options on the market and choose IBM Cloud Video. Stability and compliance were big factors, as well as the additional services Footters could offer teams, including data analysis and editing capabilities.
Footters CEO Julio Fariñas says the cognitive technology in the IBM Cloud Video service “offers functions that maximize and speed up the extraction of data from a football match. For example, in the future we will be able to know how often a player has run up and down the wing and how many times he has passed the ball.”
There’s also the pay-per-view monetization model for the football clubs to use with fans and followers. They can also place ads if they want to. Viewers can pay via a monthly or annual subscription, or they can simply pay for as much of a game as they choose to watch.
Footters’ goal isn’t just streaming games, though that is important. Its focus is chiefly on youth and amateur soccer, as well as up-and-coming leagues around the world. For example, in Spain, it aims to work with teams everywhere but in the 1st and 2nd divisions, which have their own agreements with television channels.
The company is shooting to connect teams, players, agents, scouts, institutions, tournaments, brands and even families, thereby building a closer soccer community.
Learn more about IBM Cloud Video.
The post Bringing streaming video to 270 million soccer players appeared first on news.
Source: Thoughts on Cloud