GameFly gaming as a service streams epic, console-quality game titles

Gaming can be an expensive hobby.
First, gamers must invest in a console, which could be as much as $400. Then they start building a game library, with costs of up to $70 per title. Next, there are the peripherals such as headsets, extra controllers and a charging station. Before they know it, it’s time to get the next-gen console and purchase new releases of game titles.
With the GameFly Streaming gaming-as-a-service subscription offering, gamers can play console-quality games on the connected devices they already own.
Gamers get access to more than 100 console-quality games familiar from their previous console experience. The titles range from family-friendly to epic action games and are refreshed regularly. Players can save their games to the cloud and resume playing on any device they want.

Evolution of media and entertainment
In the media and entertainment market, gaming is the last segment to begin streaming.
Twenty years ago, people consumed audio as physical CDs. When the internet became more mature and the bandwidth became better, people started to download music. It wasn’t long before people started to stream audio. The same thing happened with video. People watched movies on DVD, then they downloaded them, and now they can stream them using streaming services such as Netflix and YouTube.
Gaming is the most complex segment of the media and entertainment industry. People still consume games through physical discs made for specialized gaming consoles. Though more and more players are downloading games, the files typically still need to run on high-end PCs or game consoles.
GameFly has brought that third stage to the gaming market: streaming games without the need for a game console or physical discs. Gamers can stream and consume games much as they would video on Netflix or a VOD channel from a local telco provider.
How the magic happens
GameFly Streaming is available in app stores on smart TVs from brands such as Samsung, LG and Philips, on streaming media players, and on telco companies’ set-top boxes. Gamers connect any game controller to the TV screen and play high-end console games directly on their TVs.
It sounds so simple, but a lot goes on behind the scenes to make on-demand games work.
For the gaming as a service magic to happen, GameFly had to master latency.
When the user presses a command on the game controller, the command goes from the game controller to the TV screen, and from the TV screen over the internet to a server located maybe 1,000 miles away.
GameFly must encode the server’s output into a standard video stream, send it back to the device over the open internet, decode the video on the TV or device, and display it.
All this has to happen in less than 80 milliseconds.
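To get a feel for how tight that budget is, here is a back-of-the-envelope sketch; every stage value is a hypothetical placeholder, not a measured GameFly figure, and the point is only that the stages have to sum to less than the 80-millisecond target.

# Hypothetical per-input latency budget (milliseconds); stage values are placeholders.
BUDGET_MS = 80

stages = {
    "controller to TV app":            5,
    "network round trip to server":   30,   # compare the averages GameFly cites below
    "game update and render":         16,   # roughly one frame at 60 fps
    "video encode on the server":     10,
    "decode and display on device":   15,
}

total = sum(stages.values())
for name, ms in stages.items():
    print(f"{name:<32}{ms:>3} ms")
print(f"{'total':<32}{total:>3} ms (budget {BUDGET_MS} ms, "
      f"{'within budget' if total <= BUDGET_MS else 'over budget'})")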
GameFly Streaming needed a hosting solution that could support its global service as well as offer graphics processing unit (GPU) technology, which is required for successful game streaming. One reason GameFly chose IBM Cloud was because of its worldwide data center presence. The closer the servers are to users, the better the gaming-as-a-service delivery is.
GameFly uses 12 IBM data center locations and is getting an average latency of 25 milliseconds in the US and 32 milliseconds in Europe. That’s a key benefit of using IBM Cloud.
Also appealing about IBM Cloud is that GameFly can build servers in the data centers with the GPUs that it needs and add new servers in four hours or less. It can do that flexibly, on a month-to-month basis if necessary. It helps to control capacity and make better use of the servers.
Taking a test drive

Gamers who would like to try the service but lack a game controller can download a mobile app and pair it with their TV. It’s a good way for players to sample gaming as a service, or to try a platform or game other than what they have at home.
Learn more about GameFly Streaming.
Find out more about IBM Cloud gaming solutions.
Source: Thoughts on Cloud

Red Hat Summit 2017 – Planning your OpenStack labs

This year in Boston, MA, you can attend Red Hat Summit 2017, the event where you can catch up on open source technologies and meet the experts you follow throughout the year.
It's taking place May 2-4 and is full of interesting sessions, keynotes, and labs.
This year I was part of the process of selecting the labs you are going to experience at Red Hat Summit, and I wanted to share some of them here to help you plan your OpenStack labs experience. These labs are for you to spend time with the experts who will teach you hands-on how to get the most out of your Red Hat OpenStack product.
Each lab is a 2-hour session, so planning is essential to getting the most out of your days at Red Hat Summit.
Since you might be struggling to fit lab time in among your sessions, here is an overview of the labs; check the session catalog for exact rooms and times. Each entry includes the lab number, title, abstract and instructors, and is linked to the session catalog entry:

L103175 – Deploy Ceph Rados Gateway as a replacement for OpenStack Swift
Come learn about these new features in Red Hat OpenStack Platform 10: there is now full support for Ceph Rados Gateway, and "composable roles" let administrators deploy services in a much more flexible way. Ceph capabilities are no longer limited to block storage only. With a REST object API, you are now able to store and consume your data through a RESTful interface, just like Amazon S3 and OpenStack Swift. Ceph Rados Gateway has 99.9% API compliance with Amazon S3, and it can communicate with the Swift API. In this lab, you'll tackle the REST object API use case, and to get the most out of your Ceph cluster, you'll learn how to use Red Hat OpenStack Platform director to deploy Red Hat OpenStack Platform with dedicated Rados Gateway nodes.
Instructors: Sebastien Han, Gregory Charot, Cyril Lopez
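If you want a taste of the object-API use case before the lab, here is a minimal sketch of talking to a Rados Gateway through its S3-compatible interface with boto3; the endpoint, credentials and bucket name are placeholders you would replace with values from your own cluster.

# Minimal S3-style access to a Ceph Rados Gateway; endpoint and keys are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",   # your RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",
    aws_secret_access_key="RGW_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"stored through the S3 API on Ceph RGW")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())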
 
L104387 – Hands on for the first time with Red Hat OpenStack Platform
In this lab, an instructor will lead you in configuring and running core OpenStack services in a Red Hat OpenStack Platform environment. We'll also cover authentication, compute, networking, and storage. If you're new to Red Hat OpenStack Platform, this session is for you.
Instructors: Rhys Oxenham, Jacob Liberman, Guil Barros
 
L102852 – Hands on with Red Hat OpenStack Platform director
Red Hat OpenStack Platform director is a tool set for installing and managing Infrastructure-as-a-Service (IaaS) clouds. In this two-hour instructor-led lab, you will deploy and configure a Red Hat OpenStack Platform cloud using OpenStack Platform director. This will be a self-paced, hands-on lab, and it'll include both the command line and graphical user interfaces. You'll also learn, in an interactive session, about the architecture and approach of Red Hat OpenStack Platform director.
Instructors: Rhys Oxenham, Jacob Liberman
 
L104665 – The Ceph power show—hands on with Ceph
Join our Ceph architects and experts for this guided, hands-on lab with Red Hat Ceph Storage. You'll get an expert introduction to Ceph concepts and features, followed by a series of live interactive modules to gain some experience. This lab is perfect for users of all skill levels, from beginners to experienced users who want to explore advanced features of OpenStack storage. You'll get some credits to the Red Hat Ceph Storage Test Drive portal that can be used later to learn and evaluate Red Hat Ceph Storage and Red Hat Gluster Storage. You'll leave this session with a better understanding of Ceph architecture and concepts, experience on Red Hat Ceph Storage, and the confidence to install, set up, and provision Ceph in your own environment.
Instructors: Karan Singh, Kyle Bader, Daniel Messer
As you can see, there is plenty of OpenStack in these hands-on labs to get you through the week, and we hope to welcome you to one or more of them!
Source: RedHat Stack

Intelligent NFV performance with OpenContrail

The private cloud market has changed in the past year, and our customers are no longer interested in just getting an amazing tool for installing OpenStack; instead, they are looking more at use cases. Because we see a lot of interest in NFV cloud use cases, Mirantis includes OpenContrail as the default SDN for its new Mirantis Cloud Platform. In fact, NFV has become a mantra for most service providers, and because Mirantis is a key player in this market, we work on a lot of testing and performance validation.
The most common value for performance comparison between solutions is bandwidth, which shows how much capacity a network connection has for supporting data transfer, measured in bits per second. In this domain, the OpenContrail vRouter can reach near line speed (about 90%, in fact). However, performance also depends on other factors, such as latency or packets per second (pps), which are as important as bandwidth. The packets-per-second rate is a key factor for VNF instances (firewalls, routers, etc.) running on top of NFV clouds. In this article, we'll compare the PPS rate for different OpenContrail setups so you can decide what will work best for your specific use case.
The simplest way to test PPS rate is to run a VM to VM test. We will provide a short overview of OpenContrail low-level techniques for NFV infrastructure, and perform a comparative analysis of different approaches using simple PPS benchmarking. To make testing fair, we will use only a 10GbE physical interface, and will limit resource consumption for data plane acceleration technologies, making the environment identical for all approaches.
OpenContrail vRouter modes
For different use cases, Mirantis supports several ways of running the OpenContrail vRouter as part of Mirantis Cloud Platform 1.0 (MCP). Let's look at each of them before we go ahead and take measurements.
Kernel vRouter
OpenContrail has a module called vRouter that performs data forwarding in the kernel. The vRouter module is an alternative to Linux bridge or Open vSwitch (OVS) in the kernel, and one of its functionalities is encapsulating packets sent to the overlay network and decapsulating packets received from the overlay network. A simplified schematic of VM to VM connectivity for 2 compute nodes can be found in Figure 1:

Figure 1: A simplified schematic of VM to VM connectivity for 2 compute nodes
The problem with a kernel module is that packets-per-second is limited by various factors, such as memory copies, the number of VM exits, and the overhead of processing interrupts. Therefore vRouter can be integrated with the Intel DPDK to optimize PPS performance.
DPDK vRouter
Intel DPDK is an open source set of libraries and drivers that perform fast packet processing by enabling drivers to obtain direct control of the NIC address space and map packets directly into an application. The polling model of NIC drivers helps to avoid the overhead of interrupts from the NIC. To integrate with DPDK, the vRouter can now run in a user process instead of a kernel module. This process links with the DPDK libraries and communicates with the vrouter host agent, which runs as a separate process. The schematic for a simplified overview of vRouter-DPDK based nodes is shown in Figure 2:

Figure 2: The schematic for a simplified overview of vRouter-DPDK based nodes
vRouter-DPDK uses user-space packet processing and CPU affinity, dedicating particular CPU cores to serve the poll mode drivers. This approach enables packets to be processed in user space during their complete lifetime, from the physical NIC to the vhost-user port.
Netronome Agilio Solution
Software and hardware components distributed by Netronome provide an OpenContrail-based platform to perform high-speed packet processing. It’s a scalable, easy to operate solution that includes all server-side networking features, such as overlay networking based on MPLS over UDP/GRE and VXLAN. The Agilio SmartNIC solution supports DPDK, SR-IOV and Express Virtio (XVIO) for data plane acceleration while running the OpenContrail control plane. Wide integration with OpenStack enables you to run VMs with Virtio devices or SR-IOV Passthrough vNICs, as in Figure 3:

Figure 3:  OpenContrail network schematic based on Netronome Agilio SmartNICs and software
A key feature of the Netronome Agilio solution is deep integration with OpenContrail and offloading of lookups and actions for vRouter tables.
Compute nodes based on Agilio SmartNICs and software can work in an OpenStack cluster based on OpenContrail without changes to orchestration. That means it’s scale-independent and can be plugged into existing OpenContrail environments with zero downtime.
Mirantis Cloud Platform can be used as an easy and fast delivery tool to set up Netronome Agilio-based compute nodes and provide orchestration and analysis of the cluster environment. Using Agilio and MCP, it is easy to set up a high-performance cluster with a ready-to-use NFV infrastructure.
Testing scenario
To make the test fair and clear, we will use an OpenStack cluster with two compute nodes. Each node will have a 10GbE NIC for the tenant network.
As we mentioned before, the simplest way to test the PPS rate is to run a VM to VM test. Each VM will have 2 Virtio interfaces to receive and transmit packets, 4 vCPU cores, 4096 MB of RAM and will run Pktgen-DPDK inside to generate and receive a high rate of traffic. For each VM a single Virtio interface will be used for generation, and another interface will be used for receiving incoming traffic from the other VM.
To make an analytic comparison of all technologies, we will not use more than 2 cores for the data plane acceleration engines. The results of the RX PPS rate for all VMs will be considered as a result for the VM to VM test.
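As a rough illustration only (the actual test rig was provisioned with Mirantis tooling, and the flavor name, image and network IDs below are placeholders), booting a pair of test VMs matching that description with the openstacksdk could look like this:

# Hypothetical sketch: create the test flavor and boot two Pktgen-DPDK VMs with openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")   # credentials come from clouds.yaml

flavor = conn.compute.create_flavor(name="pktgen.test", vcpus=4, ram=4096, disk=20)

for name in ("pktgen-vm-1", "pktgen-vm-2"):
    conn.compute.create_server(
        name=name,
        image_id="IMAGE_UUID",                 # placeholder image with Pktgen-DPDK installed
        flavor_id=flavor.id,
        networks=[{"uuid": "TX_NET_UUID"},     # Virtio port used to generate traffic
                  {"uuid": "RX_NET_UUID"}],    # Virtio port used to receive traffic
    )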
First of all, we will try to measure kernel vRouter VM to VM performance. Nodes will be connected with Intel 82599 NICs. The following results were achieved for a UDP traffic performance test:
As you can see, the kernel vRouter is not suitable for providing a high packets-per-second rate, mostly because the interrupt-based model can't handle a high rate of packets per second. With 64-byte packets we can only achieve 3% of line rate.
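The line-rate figures used throughout this comparison come from a simple calculation: on 10GbE, each 64-byte frame also carries 20 bytes of preamble and inter-frame gap on the wire, which caps the theoretical rate at roughly 14.88 million packets per second.

# Theoretical packet rate for 64-byte frames on a 10GbE link.
link_bps    = 10e9       # 10 Gigabit Ethernet
frame_bytes = 64         # minimum Ethernet frame size
overhead    = 8 + 12     # preamble/SFD plus inter-frame gap, in bytes

line_rate_pps = link_bps / ((frame_bytes + overhead) * 8)
print(f"64-byte line rate: {line_rate_pps / 1e6:.2f} Mpps")   # ~14.88 Mpps
# The percentages quoted below compare each measured Mpps figure against this number.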
For the DPDK-based vRouter, we achieved the following results:

Based on these results, the DPDK-based solution is better at handling high-rate traffic composed of small UDP packets.
Lastly, we tested the Netronome Agilio SmartNIC-based compute nodes:

With only 2 forwarder cores, we are able to achieve line-rate speed on Netronome Agilio CX 10GbE SmartNICs at all packet sizes.
You can also see a demonstration of the Netronome Agilio Solution here.
Since we achieved line-rate speed on the 10GbE interface using Netronome Agilio SmartNICs, we wanted to find the maximum possible PPS rate with 2 CPUs. To determine the maximum performance for this deployment, we upgraded the existing nodes with Netronome Agilio CX 40GbE SmartNICs and repeated the maximum-PPS scenario one more time, using a direct wire connection between the 40GbE ports and 64-byte UDP traffic. Even with hard resource limitations, we achieved:

                                        Rate        Packet size, bytes
Netronome Agilio CX 40GbE SmartNIC      19.9 Mpps   64

What we learned
Taking all of the results together, we can see a pattern:

Based on 64-byte UDP traffic, we can also see where each solution stands compared to the 10GbE line rate:

                    Rate        % of line rate
Netronome Agilio    14.9 Mpps   100
vRouter DPDK        4.0 Mpps    26
Kernel vRouter      0.56 Mpps   3

OpenContrail remains the best production-ready SDN solution for OpenStack clusters, but to provide NFV-related infrastructure, OpenContrail can be used in different ways:

The Kernel vRouter, based on interrupt model packet processing, works, but does not satisfy the high PPS rate requirement.
The DPDK-based vRouter significantly improves the PPS rate, but due to high resource consumption and the resource limits defined for this test, it can't reach the required performance. We can also assume that using a more modern DPDK library would improve performance and optimize resource consumption.
The Netronome Agilio SmartNIC solution significantly improves OpenContrail SDN performance, focusing on saving host resources and providing a stable high-performance infrastructure.

With Mirantis Cloud Platform tooling, it is possible to provision, orchestrate and destroy high performance clusters with various networking features, making networking intelligent and agile.
Source: Mirantis

Thoughts and Perspectives from OpenShift Commons Berlin

Last week the cloud native, containers and Kubernetes communities converged on Berlin, Germany for OpenShift Commons Gathering, CloudNativeCon and KubeCon. Berlin was the perfect location for this intersection of events because it is historically defined by its transition from the past to the present, and culturally by its diversity of activities and fields of knowledge. Berlin sits at […]
Source: OpenShift

How Woodside Energy outthinks uncertainty with Watson

Seventy miles off the coast of Australia, in the deep waters of the North West Shelf, you’ll find what looks like a gas platform. But don’t be fooled. This massive tapestry of pipes and shafts, gauges and steel is more than a piece of machinery — it’s a prototype for a cognitive business.

Saving and sharing tribal knowledge with Watson
Woodside, Australia’s largest independent energy company, has been a global leader in oil and gas for over half a century. Their secret? Hire and develop heroes.
This formula has helped Woodside build some of the largest structures on the planet, in some of the most remote parts of the ocean, and safely transport the energy they produce to people around the globe.
To ensure the next generation could successfully carry the torch, Woodside knew they had to harness the instinctual know-how of their best employees. This goal — to create a cognitive business to augment and share their tribal knowledge — is what led Woodside into an industry-first partnership with IBM and Watson.
That industry — offshore energy — is as expensive as it is intensive. A gas platform can cost $500,000 per day to operate and requires real-time monitoring of thousands of inputs 24/7 by a crew living on top of 100,000 tons of steel in the middle of the ocean. The stakes — from human lives to environmental safety to shareholder results — are incredibly high.

To meet these demands, Woodside hires and nurtures “heroes” — highly intelligent employees whose natural instincts are honed by years of experience. While Woodside has been archiving their employees’ reports, decision logs, and technical evaluations for decades, they’ve also been losing swaths of irreplaceable corporate memory as older engineers retired, taking their instincts and experience with them.
To remain competitive, Woodside knew they had to streamline corporate-wide access to their archives and tribal knowledge, spreading not just information but also its contextual relevance. They knew they had to apply deep learning to shorten employee learning curves and add cognitive capabilities to their decision making.
Watson could enable Woodside to mine their millions of documents, serve up relevant insights to their engineers instantly, and do it all in language that engineers spoke and understand. Most importantly, Woodside would do more than just save time and money: Finding the right advice faster would give them a competitive head start, leading to safer and better decisions across the company, and ultimately more energy getting to more people in more effective ways.
But before Watson could help a new wave of “heroes,” Woodside and IBM needed to teach Watson to think like one.
Reinventing your business (not your IT infrastructure) with Watson
The term “cognitive business” might sound ambitious, especially from an IT perspective, but IBM showed Woodside how digital reinvention could happen without reinventing their infrastructure.
One reason is that Watson’s services and APIs are hosted on the cloud and designed to be deployed across various corporate platforms. Another is the fact that Watson learns a business the way people do.
Like any new hire that needs to be on-boarded, Watson needs large quantities of industry information coupled with quality insights from experienced employees. Thanks to their massive archives and world-class workforce, Woodside had both.
To start, Woodside worked with IBM to create the corpus: the body of materials Watson would learn from now and continue to collect into the future. Woodside compiled a stack of structured and unstructured data from their archives, all of which was uploaded and ingested by Watson.

A core group of Woodside’s current and former employees began to test Watson on what it had learned, guiding Watson’s answers and teaching the system to think like one of them.
Based on a series of questions posed to Watson, the group used their expertise to rate the responses, which were fed back into the system, allowing Watson to learn and become smarter. By combining years of data with years of experience, Watson was able to tap into Woodside’s tribal knowledge and discover the best advice of thousands of engineers, as well as learn how to process new information as it was added to the corpus.
When Woodside and IBM were confident in Watson’s abilities, it was time to deploy it in the real world — in the hands of engineers on a gas platform in the middle of the Indian Ocean.
Deploying and building with Watson
Woodside developers had a challenging task — how do you take something as powerful and complex as Watson and deploy it in a way that is useful and understandable to non-developers?
Working with IBM and the Watson Developer Cloud, Woodside developers identified the APIs needed to craft an architecture and build an intuitive design that lets engineers find the advice they need.
Natural Language Classifier: This API allows users to search a corpus by asking questions as if they were talking to another person. Watson uses Natural Language Classifier to parse out the intent of a question even if it is asked in different ways.
Retrieve and Rank: After understanding the question, Watson retrieves all relevant information from the corpus, ranks them in terms of relevance, and responds with the best matches, as well as related points of inspiration.
Conversation: By incorporating a human tone, Conversation creates a better user experience and allows Watson to interact with engineers in their own language.
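To make that flow concrete, here is a minimal sketch of querying a trained Natural Language Classifier instance over its REST interface; the URL pattern follows the Watson Developer Cloud conventions of the time, and the classifier ID, credentials and question text are all placeholders.

# Hypothetical query to a trained Natural Language Classifier; ID and credentials are placeholders.
import requests

url = ("https://gateway.watsonplatform.net/natural-language-classifier/api"
       "/v1/classifiers/CLASSIFIER_ID/classify")

resp = requests.post(
    url,
    auth=("SERVICE_USERNAME", "SERVICE_PASSWORD"),   # Bluemix service credentials
    json={"text": "What did previous crews report about vibration on the riser bolts?"},
)
resp.raise_for_status()
for item in resp.json().get("classes", []):
    print(item["class_name"], item["confidence"])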

Seeing results with Watson
While Woodside’s partnership with IBM and Watson is ongoing, some results have appeared almost immediately.
Watson has ingested tens of thousands of Woodside documents related to project development in the system, each typically over 100 pages in length. It would take a human being, working 24 hours a day, more than five years to read all this information. Watson can process it and produce meaningful answers in seconds.
Woodside is just one example of how becoming a cognitive business can positively impact any business — in almost any industry — by leveraging existing tribal knowledge to augment employee skills and improve the bottom line with easily deployed applications.
Think your company might be the next Woodside? Find which Watson solution is ideal for your company.
A version of this article originally appeared on Medium.

Source: Thoughts on Cloud

Momentum mounts for Kubernetes, cloud native

For any new technology, there are few attributes more valuable than momentum. In the open tech space, few projects have as much momentum as Kubernetes and cloud native application development.
The Cloud Native Computing Foundation (CNCF) kicked off the European leg of its biannual CloudNativeCon/KubeCon event in Berlin by welcoming five new member organizations and two new projects.
CNCF has pulled in rkt and containerd as its eighth and ninth open projects, joining Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC and CoreDNS.
IBM senior technical staff member Phil Estes is one of the open source maintainers for containerd. He explained a bit about the project and the role of IBM in the video below:

This week, containerd joined the @CloudNativeFdn. @estesp explains what it means for the community. Details: https://t.co/AQigsrXzqY pic.twitter.com/oC9XAOjO9D
— IBM Cloud (@IBMcloud) March 30, 2017

Meanwhile, CNCF announced that SUSE, HarmonyCloud, QAware, Solinea and TenxCloud have joined as contributing member organizations.
"The cloud native movement is increasingly spreading to all parts of the world," CNCF executive director Dan Kohn told a sellout crowd of 1,500, triple the attendance of CloudNativeCon in London a year prior.
We reported last fall that Kubernetes adoption was on the cusp of catching a giant wave. That wave has evolved into a groundswell among developers. There are now 4,000 projects based on Kubernetes, more than 50 products supporting it and more than 200 meetups around the world.
Even more significant has been the IBM announcement in March that Kubernetes is available on IBM Bluemix Container Service.
Linux Foundation Vice President Chris Aniszczyk and IBM Fellow, VP and Cloud Platform CTO Jason McGee discussed the move by IBM to Kube (and much more) on a podcast recorded from the venue. You can listen to it here:

A few more highlights from Berlin:
• 17-year-old Lucas Käldström, the youngest core Kubernetes maintainer, wowed the crowd with his talk on autoscaling a multi-platform Kubernetes cluster built with kubeadm.

Listening to Lucas talk about multi-architecture cluster support for containers/k8s. Oh, he's in high school too! pic.twitter.com/V8G3qAylzz
— Phil Estes (@estesp) March 30, 2017

• Docker’s Justin Cormack delivered one of the conference’s most popular sessions with his talk on containerd:

Now @justincormack from @Docker talking containerd in SRO room @CloudNativeFdn Kubecon Berlin. Hey @chanezon open a window, it's hot! pic.twitter.com/SlVHCyTwH6
— Jeffrey Borek (@jeffborek) March 30, 2017

• An update on the Open Container Initiative from Jeff Borek (IBM), Chris Aniszczyk (Linux Foundation), Vincent Batts (Red Hat) and Brandon Philips (CoreOS)

An update on @OCI_ORG and container standards from @Cra, @JeffBorek, @vbatts, @sauryadas_ & @BrandonPhilips. … https://t.co/MqqBKxwjBU
— Kevin J. Allen (@KevJosephAllen) March 29, 2017

More information about Bluemix.
Source: Thoughts on Cloud

Introducing Microservice Builder (Beta)

Application development needs to adapt to enable faster innovation and business agility. According to an IDC survey, 60 percent of new applications will use cloud-enabled continuous delivery microservice architectures, DevOps, and containers.
Why are microservice architectures gaining traction? Because they provide an agile and stable way for companies to develop and deliver modern, lightweight and composable workloads. With microservices, businesses have the freedom to deploy across public, private and hybrid application environments. And they can effectively eliminate long-term commitments to any single technology.
IBM can help you get ahead of the microservices revolution with a new offering, Microservice Builder (Beta). The tool can help companies build and deploy containerized applications in a microservice framework.
Introducing Microservice Builder (Beta)
Microservice Builder (Beta) provides a complete approach with simple, step-by-step support that details how to create applications in a microservices framework. Our aim is to greatly simplify the software delivery pipeline that enables continuous delivery of Dockerized microservices. It provides developers with a frictionless application lifecycle from development through production.
With Microservice Builder (Beta), developers can easily learn about the intricacies of microservice apps, quickly compose and build innovative services and then rapidly deploy them to various stages through a pre-integrated DevOps pipeline.
Microservice Builder (Beta) includes programming model extensions to Java EE, defined by the Microprofile.io collaboration, for writing Java-based microservices. Containerized apps created using Microservice Builder can then be deployed to Kubernetes-orchestrated Docker environments either on-premises or in the public cloud.
How Microservice Builder (Beta) can benefit your business
Your business is likely looking for ways to shorten development cycles and reduce costs. Here are five things Microservice Builder (Beta) can help you accomplish:

Leverage a continuous delivery pipeline to accelerate software delivery from weeks to days, and days to minutes
Support rapid hybrid and cloud-native application development and testing cycles with greater agility, scalability and security
Reduce costs and complexity with portability across IBM and other cloud providers including public, private and hybrid clouds
Diagnose and resolve application infrastructure issues, minimizing downtime and maintaining service level agreements (SLAs)
Easily connect existing applications to new cloud services—such as Watson Cognitive services—to discover actionable insights

Microservice Builder (Beta) features to explore
Ready to dive in and build? Microservice Builder (Beta) makes it easy to get started. We’ve included beta binaries to support building and testing environments for microservice applications. Using container services, you can easily deploy applications to Bluemix. And if something goes wrong, there’s log analytics and monitoring to help you diagnose problems more easily.
Finally, Microservice Builder (Beta) also provides access to IBM Spectrum Conductor for Containers, a Kubernetes-based Docker management system, to simplify deployment on a single server or a prebuilt Kubernetes cluster.
Don’t wait for the microservices revolution to reach your competitors. Get ahead of the revolution. Get started building hybrid and cloud-native applications in a microservices framework with Microservice Builder (Beta) by clicking here.
Source: Thoughts on Cloud

Cloud-based transcoding system delivers low-latency video

How can thousands of people with different kinds of devices and different network connections watch the same video at the same time?
The answer lies in transcoding, the process of converting media files from one format to another.
Vantrix Corporation develops media-processing software for cable operators, broadcasters and content owners. The Vantrix Transcoder is a powerful, software-defined solution that cost-effectively meets the demand for video on any screen. The glass-to-glass system allows for capture, cloud processing and viewing in a low-latency manner. It adapts the video stream to users’ devices to ensure videos run seamlessly.
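To illustrate what adapting a stream to different devices involves in the simplest case (this is generic ffmpeg usage, not Vantrix's product code, and the rendition ladder is arbitrary), a transcoder produces several versions of one source at different resolutions and bitrates:

# Generic adaptive-bitrate transcode sketch using ffmpeg; not Vantrix's implementation.
import subprocess

RENDITIONS = [          # (height, video bitrate) -- an arbitrary example ladder
    (1080, "5000k"),
    (720,  "2800k"),
    (480,  "1200k"),
]

for height, bitrate in RENDITIONS:
    subprocess.run([
        "ffmpeg", "-y", "-i", "source.mp4",
        "-vf", f"scale=-2:{height}",          # keep aspect ratio, set output height
        "-c:v", "libx264", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        f"out_{height}p.mp4",
    ], check=True)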
Watching broadcasts in near-real time
The worst thing for a video is latency — stalling or stopping. People get very frustrated.
Imagine watching a sporting event in which the real-time factor is of utmost importance. If there’s a delay, you might hear your neighbor screaming “Goal!” when you haven’t seen it yet. It’s a spoiler.
The ultra-high-density transcoding solution from Vantrix reduces the latency as much as possible because it’s scalable and deployed in the IBM Cloud.
A main driver for selecting IBM was global reach. Vantrix can deliver a packet, frame or video element within 200 milliseconds across the world. For viewers, it’s almost like being there.
A virtual-reality camera, no image stitching required
Vantrix developed its own 360-degree camera, which delivers a video feed that’s half spherical, similar to what people see with their eyes. The same cloud-based transcoding process is used to adapt this very large video stream to any user device, but to see the full video, viewers would use a virtual reality (VR) headset.
Most VR broadcasts use multiple lenses and have to “stitch” images together, which means they lose quality and in the live use case, introduce latency.
The Vantrix camera doesn’t require image stitching, so it can deliver higher-quality video, more quickly.
Watching sports with a VR headset
If viewers are watching sports with a VR headset, not only can they see the field or court, but they can also look sideways and see the bench with the players, or see fans. They’re not confined to the choice of the broadcast. It’s a more natural way to watch a game, because viewers can control their own experiences. They can look anywhere they want as long as they want.
With this format, broadcasters can deliver additional information. For example, viewers can call up a digital overlay on top of the video with statistics about players or another game being played concurrently.

Many big broadcasters are exploring ways to personalize their broadcasts. For example, they can customize ads. When someone is watching in VR, broadcasters have 100 percent of that person’s attention, so ads have much more impact.
With analytics, broadcasters could determine where viewers are looking, so they can tell an advertiser where to move a logo to get more eyeball seconds than the original spot. That means more people see the ad and more revenue for the broadcaster.
Remote security monitoring
Another common application for Vantrix’s 360-degree camera is security. The camera captures far more activity than a traditional camera that doesn’t have “peripheral vision.”
The recording system keeps a buffer leading up to a trigger event, and the footage is recorded and stored safely in the cloud, meaning that even if a burglar destroys the camera, the video prior to the incident is already preserved.
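The pre-event buffer described here is essentially a fixed-size ring buffer of recent frames that gets flushed to cloud storage when a trigger fires; a minimal sketch of the idea (frame capture and upload are stubbed out) looks like this:

# Sketch of a pre-event ring buffer: keep the last N seconds of frames in memory
# and push them to cloud storage when a trigger fires. Capture and upload are stubs.
from collections import deque

FPS = 30
PRE_EVENT_SECONDS = 10
frames = deque(maxlen=FPS * PRE_EVENT_SECONDS)   # oldest frames fall off automatically

def on_new_frame(frame):
    frames.append(frame)

def on_trigger(upload_to_cloud):
    # The footage leading up to the trigger is already in memory, so it survives
    # even if the camera itself is destroyed moments later.
    for frame in list(frames):
        upload_to_cloud(frame)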
Any trigger or event can be played back securely by whoever has the access and rights to look at the video, from wherever they are.
If the event happened in New York, for example, the transcoding service can stream it to L.A. With a fast internet connection and strong cloud services, viewers won’t experience much delay between when the event happened and when they can watch it.
Transcoding as a service
The Vantrix Media Platform software already has a very robust set of application programming interfaces (APIs). Vantrix used the IBM Bluemix hybrid cloud development platform both to develop a multitenant version of its software and to fully integrate its APIs into the Bluemix ecosystem, enabling video transcoding as a service for Bluemix developers.
Learn more about IBM Cloud streaming video.
Source: Thoughts on Cloud

Using Post Hook to Initialize a Database

In the OpenShift v2 days, we used Action Hooks to initialize a database with test data. OpenShift 3.x also provides pod lifecycle hooks that can be leveraged to initialize the database after it starts inside a pod. This blog explains the approach using pod lifecycle hooks. In this blog, I am using a MySQL database as an example; a similar approach can be used with other databases.
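The original post walks through the OpenShift lifecycle-hook wiring itself; the payload the hook runs is typically just a small seeding script. A minimal sketch of such a script (connection details, schema and sample rows are placeholders) using PyMySQL might look like this:

# Hypothetical seed script that a post lifecycle hook could run against the MySQL pod.
import pymysql

conn = pymysql.connect(host="mysql", user="user", password="password", database="sampledb")
try:
    with conn.cursor() as cur:
        cur.execute("""CREATE TABLE IF NOT EXISTS customers (
                           id INT AUTO_INCREMENT PRIMARY KEY,
                           name VARCHAR(100) NOT NULL)""")
        cur.executemany("INSERT INTO customers (name) VALUES (%s)",
                        [("Alice",), ("Bob",), ("Carol",)])
    conn.commit()
finally:
    conn.close()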
Source: OpenShift

The trends fueling hybrid cloud management

What a week at IBM Interconnect 2017. After a busy and productive week at our signature cloud event, I’d like to recap some of the key trends and directions that IBM is seeing in the hybrid cloud management space.
Not long ago, the cloud was seen primarily as a cost-cutting tool. Many saw it as a means of improving agility through easy access to low-cost infrastructure. But today, the cloud is gaining traction for its potential to change the dynamics of innovation and digital disruption.
Unlocking more value in the cloud
One way to drive the new experiences clients are after is to bring together value from across the business. This can include a combination of existing data and applications with new methods of engagement in the form of mobile apps, APIs and microservices.
This approach leverages current investments while extending the reach of business right into the hand of their customers. Businesses can also augment internal data with external insights coming from social media, weather forecasts, and Internet of Things (IoT) devices to enrich customer experiences and interactions.
As companies move to more strategic consumption of the cloud, they often uncover opportunities to reimagine business processes and entire industry models. Cognitive services, powered by artificial intelligence, can help businesses learn from individual customer interactions over time. They can adapt automatically to accommodate changing preferences and buying patterns, and even understand tone of voice to tailor interactions to the person’s mood. On a broader scale, the ability to combine cognitive insights with cloud services can determine the course of industry leaders and laggards.
Setting cloud strategy today for long-term gains
The extent to which business leaders consider the implications of the cloud strategy they select today could have the greatest impact on long-term success. Is the strategy centered solely on scalable infrastructure? Is it adaptable to your business model and investment levels? Does it deliver higher-value business applications and industry functions? Will it enable game-changing business models equipped with blockchain, cognitive services and new data insights to optimize customer experiences?
One key factor in the race to outpace competitors is the ability to innovate with speed. Companies want to select the best of their capabilities and combine it with the latest of what’s available externally from vendors, partners and communities. That’s a critical benefit of using the cloud.
Public clouds enable companies to get new value outside-in from third parties, allowing for rapid adoption of new services, including data, applications, DevOps toolchains, and community innovations.
Private clouds help companies extract more value inside-out from their business, allowing them to securely extract and analyze data on their customers, transactions and products.
Combining public cloud and private cloud enables business advantage by driving business outcomes more quickly, more effectively and at lower cost. This is why multicloud environments are rapidly becoming the new norm for the enterprise.
Introducing IBM Cloud Automation Manager
The right multicloud strategy can allow companies to combine the delivery and consumption models that best suit their unique business and industry requirements. They can mix and match in a way that optimizes speed, flexibility and business value. To optimize the benefits of these diverse environments requires a unified cloud management platform.
This week, at InterConnect, we were thrilled to unveil Cloud Automation Manager, a purpose-built, cloud-agnostic, multicloud management platform created together with clients to provide significant business value. With Cloud Automation Manager, you can rapidly automate provisioning of cloud applications and resources on any cloud, while leveraging cognitive insights to manage your multicloud environment with ease.
Cloud Automation Manager includes pre-built automation packs spanning infrastructure to full stack apps and helps companies optimize workload placement without lock-in or loss of control. It combines speed, flexibility, control and intelligence so that IT operations managers can more easily and efficiently provision and automate workloads across multiple clouds. At the same time, they can provide developers and DevOps teams with self-service access to a catalog of cloud resources and applications—all from their cloud of choice.
To find out more, please visit our website. To get started with Cloud Automation Manager today at no cost, visit us at ibm.biz/tryIBMCAM.
Source: Thoughts on Cloud