6 Java developer highlights at InterConnect 2017

Java developers, listen up. By now, you may have heard about the sessions, labs, roundtables, activities and events being offered at IBM InterConnect 2017. With more than 2,000 sessions and 200 labs to choose from, it can be a daunting task to create your agenda for the conference. Luckily, we’ve done some of the work for you.
Here are six things that Java developers shouldn’t miss at InterConnect.
1. Hit the road with Code Rally
Code Rally is an open-source racing game where you play by programming a vehicle in Java—or Node.js if you prefer—to race around virtual tracks. Code Rally is an example of a microservice architecture, and each vehicle is its own self-contained microservice.  When deployed, each microservice works within our race simulation service to compete against other coders. Head over to the DevZone in the Concourse during the conference to give Code Rally a test drive.
2. DevZone
As a developer, the DevZone is the place to be. Located in the Concourse, you can hang out throughout the week with other developers to learn and share technical knowledge that will help you create the next generation of apps and services. While you’re at the DevZone, you can also talk to an IBM expert at an Ask Me Anything Expert Station, or learn a new skill in a short 20-minute Hello World Lab.
3. Session: The rise of microservices
Microservices are a hot topic in the world of software development. They help teams divide and conquer to solve problems faster and deliver more rapidly. In this session, RedMonk analyst and co-founder James Governor will discuss the rise of microservices with IBM Fellow and Cloud Platform CTO Jason McGee. James and Jason will explore the concept of microservices and how cloud has enabled their rise. They will cover the capabilities needed to be successful combined with real-world examples, lessons learned, and insights on how to get started from where you are today.
4. Open Tech Summit
Mobile, cloud, and big data are all trends that are changing the way we interact with people. Capturing the value from these interactions requires rapid innovation, interoperability and scalability enabled by an open approach. At the Open Tech Summit on Sunday, March 19th from 4:00 PM to 7:00 PM, leaders of the most game-changing open technology communities will share their perspectives on the benefits of open technology. Come network and engage directly with experts across the industry.
5. Lab: Agile development using MicroProfile and IBM WebSphere Liberty
MicroProfile and Java EE 7 make developing and deploying microservice-style applications quick and efficient. In this lab, you will learn how to use MicroProfile and Java EE 7's application development capabilities to create a microservice that uses CDI, JAX-RS, WebSockets, Concurrency Utilities for Java and a NoSQL database running on WebSphere Liberty.
6. Session: Building cloud-native microservices with Liberty and Node.js, a product development journey
In addition to talking about the benefits of developing applications as microservices, and showing you how to build them, IBM teams have also been building new microservice-based offerings. Head over to this session where I will discuss the latest IBM offerings with senior technical staff member Brian Pulito. We’ll cover how this was developed as a collection of cloud native microservices built on WebSphere Application Server and Node.js technologies. Learn about the tools, team structure, and development practices used when building the IBM Voice Gateway.
There will be no shortage of Java activity at the conference. You don’t want to miss this opportunity to train, network, and learn about developing with Java. Register for IBM InterConnect today.   
Source: Thoughts on Cloud

What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify

The post What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify appeared first on Mirantis | Pure Play Open Cloud.
As Network Functions Virtualization (NFV) technology matures, multiple NFV orchestration solutions have emerged, and 2016 was a busy year. While some commercial products were already available on the market, multiple open source initiatives were also launched, with most delivering initial code releases and others planning to roll out software artifacts later this year.
With so much going on, we thought we'd provide you with a technical overview of some of the various NFV orchestration options, so you can get a feel for what's right for you. In particular, we'll cover:

Open Source MANO (OSM)
OPEN-O
CORD
Gigaspaces Cloudify

In addition, multiple NFV projects have been funded under European Union R&D programs. Projects such as OpenBaton, T-NOVA/TeNor and SONATA have their codebases available in public repos, but industry support, involvement of external contributors and long-term sustainability might be challenging for these projects, so for now we'll consider them out of scope for this post, where we'll review and compare orchestration projects across the following areas:

General overview and current project state
Compliance with NFV MANO reference architecture
Software architecture
NSD definition approach
VIM and VNFM support
Capabilities to provision End to End service
Interaction with relevant standardization bodies and communities

General overview and current project state
We’ll start with a general overview of each project, along with its ambitions, development approach, involved community, and related information.
OSM
The Open Source MANO project was officially launched at Mobile World Congress (MWC) in 2016. Starting with several founding members, including Mirantis, Telefónica, BT, Canonical, Intel, RIFT.io, Telekom Austria Group and Telenor, the OSM community now includes 55 different organisations. The OSM project is hosted at ETSI facilities and targets delivering an open source management and orchestration (MANO) stack closely aligned with the ETSI NFV reference architecture.
OSM issued two releases, Rel 0 and Rel 1, during 2016. The most recent at the time of this writing, OSM Rel 1, has been publicly available since October 2016 and can be downloaded from the official website. Project governance is managed via several groups, including the Technical Steering group, responsible for OSM's technical aspects, the Leadership group, and the End User Advisory group. You can find more details about the OSM project at the official wiki.
OPEN-O
The OPEN-O project is hosted by the Linux Foundation and was also formally announced at MWC 2016. Initial project advocates were mostly Asian companies, such as Huawei, ZTE and China Mobile. The project later gained further support from Brocade, Ericsson, GigaSpaces, Intel and others.
The main project objective is to enable end-to-end service agility across multiple domains using a unified platform for NFV and SDN orchestration. The OPEN-O project delivered its first release in November 2016 and plans to roll out future releases on a six-month cycle. Overall project governance is managed by the project Board, with technology-specific issues managed by the Technical Steering Committee. You can find more general details about the OPEN-O project at the project website.
CORD/XOS
Originally, CORD (Central Office Re-architected as a Datacenter) was introduced as one of the use cases for the ONOS SDN Controller, but it grew into a separate project under ON.Lab governance. (ON.Lab recently merged with the Open Networking Foundation.)
The ultimate project goal is to combine NFV, SDN and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. The reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. CORD Rel 1 and Rel 2 integrate a number of open source projects, such as ONOS to manage SDN infrastructure, OpenStack to deploy NFV workloads, and XOS as a service orchestrator. To reflect the uniqueness of each use case, CORD introduces a number of service profiles, such as Mobile (M-CORD), Residential (R-CORD), and Enterprise (E-CORD). You can find more details about the CORD project at the official project website.
Gigaspaces Cloudify
Gigaspaces’ Cloudify is an open source, TOSCA-based cloud orchestration software platform. Originally introduced as a pure cloud orchestration solution (similar to OpenStack Heat), the platform was later expanded to include NFV-related use cases, and the Cloudify Telecom Edition emerged.
Considering its original platform purpose, Cloudify has an extensible architecture and can interact with multiple IaaS/PaaS providers such as AWS, OpenStack, Microsoft Azure and so on. Overall, Cloudify software is open source under the Apache 2 license and the source code is hosted in a public repository. While the Cloudify platform is open source and welcomes community contributions, the overall project roadmap is defined by Gigaspaces. You can find more details about the Cloudify platform at the official web site.
Compliance with ETSI NFV MANO reference architecture
While a number of alternative and specialized approaches, such as Lifecycle Service Orchestration (LSO) from the Metro Ethernet Forum, have emerged at the time of this writing, huge industry support and involvement have helped to promote ETSI NFV Management and Orchestration (MANO) as the de-facto reference NFV architecture. From this standpoint, NFV MANO provides comprehensive guidance for the entities, reference points and workflows to be implemented by appropriate NFV platforms (fig. 1):

Figure 1 – ETSI NFV MANO reference architecture
OSM
As this project is hosted by ETSI, the OSM community tries to be compliant with the ETSI NFV MANO reference architecture, respecting the appropriate IFA working group specifications. Key reference points, such as Or-Vnfm and Or-Vi, can be identified within OSM components. The VNF and Network Service (NS) catalogs are explicitly present in the OSM Service Orchestrator (SO) component. Meanwhile, significant further development is planned to reach parity with the currently specified features and interfaces.
OPEN-O
While the OPEN-O project in general has no objective to be compliant with NFV MANO, the NFVO component of OPEN-O is aligned with the ETSI reference model, and all key MANO elements, such as the VNFM and VIM, can be found in the NFVO architecture. Moreover, the scope of the OPEN-O project goes beyond just NFV orchestration, and as a result beyond the scope identified by the ETSI NFV MANO reference architecture. One important piece of this project relates to SDN-based networking service provisioning and orchestration, which might be used either in conjunction with NFV services or as a standalone feature.
CORD
Since its inception, CORD has defined its own reference architecture and cross-component communication logic. The reference CORD implementation is very OpenFlow-centric, built around ONOS, the orchestration component (XOS), and whitebox hardware. Technically, most of the CORD building blocks can be mapped to the MANO-defined NFVI, VIM and VNFM, but this is incidental; the overall architectural approach defined by ETSI MANO, as well as the appropriate reference points and interfaces, were not considered in scope by the CORD community. As with OPEN-O, the scope of this project goes beyond just NFV service provisioning; instead, NFV service provisioning is considered one of several possible use cases for the CORD platform.
Gigaspaces Cloudify
The original focus of the Cloudify platform was orchestration of application deployment in a cloud. Later, when the NFV use case emerged, the Telecom Edition of the Cloudify platform was delivered. This platform combines both the NFVO and generic VNFM components of the MANO-defined entities (fig. 2).

Figure 2 – Cloudify in relation to the NFV MANO reference architecture
By its very nature, Cloudify Blueprints can be considered the NS and VNF catalog entities defined by MANO. Meanwhile, some interfaces and actions specified by the NFV IFA subgroup are not present or are considered out of scope for the Cloudify platform. From this standpoint, you could say that Cloudify is aligned with the MANO reference architecture but not fully compliant.
Software architecture and components
As you might expect, all NFV Orchestration solutions are complex integrated software platforms combined from multiple components.
OSM
The Open Source MANO (OSM) project consists of 3 basic components (fig. 3):

Figure 3 – OSM project architecture

The Service Orchestrator (SO), responsible for end-to-end service orchestration and provisioning. The SO stores the VNF definitions and NS catalogs, manages the workflow of service deployment, and can query the status of already deployed services. OSM integrates the Rift.io orchestration engine as the SO.
The Resource Orchestrator (RO) is used to provision services over a particular IaaS provider in a given location. At the time of this writing, the RO component is capable of deploying networking services over OpenStack, VMware, and OpenVIM. The SO and RO components can be jointly mapped to the NFVO entity in the ETSI MANO architecture.
The VNF Configuration and Abstraction (VCA) module performs the initial VNF configuration using Juju Charms. Considering this purpose, the VCA module can be considered as a generic VNFM with a limited feature set.

Additionally, OSM hosts the OpenVIM project, which is a lightweight VIM layer implementation suitable for small NFV deployments as an alternative to heavyweight OpenStack or VMware VIMs.
Most of the software components are developed in Python, while the SO, as a user-facing entity, relies heavily on JavaScript and the Node.js framework.
OPEN-O
From a general standpoint, the complete OPEN-O software architecture can be split into 5 component groups (Fig.4):

Figure 4 – OPEN-O project software architecture

Common service: Consists of shared services used by all other components.
Common TOSCA:  Provides TOSCA-related features such as NSD catalog management, NSD definition parsing, workflow execution, and so on; this component is based on the ARIA TOSCA project.
Global Service Orchestrator (GSO): As the name suggests, this group provides overall lifecycle management of the end-to-end service.
SDN Orchestrator (SDN-O): Provides abstraction and lifecycle management of SDN services; an essential piece of this block are the SDN drivers, which provide device-specific modules for communication with a particular device or SDN controller.
NFV Orchestrator (NFV-O): This group provides NFV services instantiation and lifecycle management.

The OPEN-O project uses a microservices-based architecture and consists of more than 20 microservices. The central platform element is the Microservice Bus, the core microservice of the Common Service components group. Each platform component registers with this bus, and during registration each microservice specifies its exposed APIs and endpoint addresses. As a result, the overall software architecture is flexible and can be easily extended with additional modules. OPEN-O Rel. 1 consists of both Java- and Python-based microservices.
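The registration pattern described above can be sketched as follows. This is an illustrative stand-in only: the class, field names, and endpoints are assumptions for the sake of the example, not the actual OPEN-O Microservice Bus API.

```python
# Illustrative sketch of a service-registration bus: each microservice
# announces its name, version, endpoint, and exposed APIs on startup,
# and other components discover it through the registry.
# All names here are invented for illustration, not the OPEN-O schema.

class MicroserviceBus:
    """Minimal in-memory stand-in for a microservice registration bus."""

    def __init__(self):
        self._registry = {}

    def register(self, name, version, base_url, apis):
        # Key services by (name, version) so multiple versions can coexist.
        key = (name, version)
        self._registry[key] = {"url": base_url, "apis": list(apis)}
        return key

    def lookup(self, name, version):
        entry = self._registry.get((name, version))
        if entry is None:
            raise KeyError(f"no service {name} {version} registered")
        return entry


bus = MicroserviceBus()
bus.register("nfvo-driver-openstack", "v1",
             "http://nfvo-driver:8080", ["/instances", "/health"])
print(bus.lookup("nfvo-driver-openstack", "v1")["url"])
```

A pluggable architecture like this is what lets new drivers (for example, additional VIM drivers) be added without touching the core platform.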
CORD/XOS
As mentioned above, CORD was introduced originally as an ONOS application, but grew into a standalone platform that covers both ONOS-managed SDN regions and service orchestration entities implemented by XOS.
Both ONOS and XOS provide a service framework to enable the Everything-as-a-Service (XaaS) concept. Thus, the reference CORD implementation consists of both a hardware Pod (consisting of whitebox switches and servers) and a software platform (such as ONOS or XOS with appropriate applications). From the software standpoint, the CORD platform implements an agent or driver-based approach in which XOS ensures that each registered driver used for a particular service is in an operational state (Fig. 5):

Figure 5 – CORD platform architecture
The CORD reference implementation consists of Java (ONOS and its applications) and Python (XOS) software stacks. Additionally, Ansible is heavily used by CORD for automation and configuration management.
Gigaspaces Cloudify
From a high-level perspective, the platform consists of several different pieces, as you can see in figure 6:

Figure 6 – Cloudify platform architecture

Cloudify Manager is the orchestrator that performs deployment and lifecycle management of the applications or NSDs described in the templates, called blueprints.
The Cloudify Agents are used to manage workflow execution via an appropriate plugin.

To provide overall lifecycle management, Cloudify integrates third-party components such as:

Elasticsearch, used as a data store for the deployment state, including runtime data and log data coming from various platform components.
Logstash, used to process log information coming from platform components and agents.
Riemann, used as a policy engine to process runtime decisions about availability, SLA and overall monitoring.
RabbitMQ, used as an async transport for communication among all platform components, including remote agents.

The orchestration functionality itself is provided by the ARIA TOSCA project, which defines the TOSCA-based blueprint format and deployment workflow engine. Cloudify “native” components and plugins are python applications.
Approach for NSD definition
The Network Service Descriptor (NSD) specifies the components, and the relations between them, to be deployed on top of the IaaS during NFV service instantiation. Orchestration platforms typically use a templating language to define NSDs. While the industry in general considers TOSCA the de-facto standard for defining NSDs, alternative approaches are also available across various platforms.
OSM
OSM follows the official MANO specification, which has definitions for both NSDs and VNF Descriptors (VNFDs). NSD templates are defined as YAML-based documents. An NSD is processed by the OSM Service Orchestrator to instantiate a Network Service, which itself might include VNFs, forwarding graphs, and the links between them. A VNFD is a deployment template that specifies a VNF in terms of its deployment and operational behaviour requirements. Additionally, a VNFD specifies connections between Virtual Deployment Units (VDUs) using internal Virtual Links (VLs). Each VDU in the OSM representation corresponds to a VM or a container. OSM uses an archive format for both NSDs and VNFDs; the archive consists of the service/VNF description, initial configuration scripts and other auxiliary details. You can find more information about the OSM NSD/VNFD structure at the official website.
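The NSD/VNFD nesting described above can be sketched with plain Python dictionaries standing in for the YAML documents OSM actually consumes. The field names below are simplified illustrations, not the official OSM schema.

```python
# Illustrative model of the descriptor hierarchy: an NSD references
# VNFDs by id; a VNFD contains VDUs (each mapping to a VM or container)
# and internal virtual links between them. Field names are invented.

vnfd = {
    "id": "firewall-vnf",
    "vdus": [
        {"id": "fw-vdu", "image": "fw-image", "vcpus": 2},
    ],
    "internal-vlds": [
        {"id": "mgmt-vl", "connects": ["fw-vdu"]},
    ],
}

nsd = {
    "id": "edge-security-ns",
    "constituent-vnfds": ["firewall-vnf"],  # VNFs composing the service
    "vlds": [{"id": "data-vl"}],            # links between VNFs
}


def referenced_vnfds(nsd, catalog):
    """Resolve the VNFDs an NSD refers to, as an SO would at instantiation."""
    return [catalog[ref] for ref in nsd["constituent-vnfds"]]


catalog = {vnfd["id"]: vnfd}
print([v["id"] for v in referenced_vnfds(nsd, catalog)])
```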
OPEN-O
In OPEN-O, TOSCA-based templates are used to describe the NS/VNF package. Both the general TOSCA service profile and the more recent NFV profile can be used for the NSD/VNFD, which is further packaged according to the Cloud Service Archive (CSAR) format.
A CSAR is a zip archive that contains at least two directories: TOSCA-Metadata and Definitions. The TOSCA-Metadata directory contains information that describes the content of the CSAR and is referred to as the TOSCA metafile. The Definitions directory contains one or more TOSCA Definitions documents, which contain the definitions of the cloud application to be deployed during CSAR processing. More details about OPEN-O NSD/VNFD definitions may be found at the official website.
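Since a CSAR is just a zip archive with those two directories, building a minimal one is straightforward. The file contents below are placeholders for illustration; only the directory layout follows the description above.

```python
# Build a minimal CSAR-shaped archive in memory with the two directories
# the CSAR format requires: TOSCA-Metadata (the metafile) and
# Definitions (the TOSCA definitions documents).
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as csar:
    csar.writestr(
        "TOSCA-Metadata/TOSCA.meta",
        "TOSCA-Meta-File-Version: 1.0\n"
        "Entry-Definitions: Definitions/service.yaml\n",
    )
    csar.writestr("Definitions/service.yaml",
                  "# TOSCA definitions would go here\n")

with zipfile.ZipFile(buf) as csar:
    names = csar.namelist()
print(names)
```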
CORD/XOS
To define a new CORD service, you need to define both TOSCA-based templates and Python-based software components. In particular, when adding a new service, depending on its nature, you might alter one or more of the following platform elements:

TOSCA service definition files and the appropriate models, specified as YAML text files
REST APIs models, specified in Python
XOS models, implemented as a Django application
Synchronizers, used to ensure the service is instantiated correctly and transitions to the required state.

The overall service definition format is based on the TOSCA Simple Profile language specification and presented in the YAML format.
Gigaspaces Cloudify
To instantiate a service or application, Cloudify uses templates called “Blueprints”, which are effectively orchestration and deployment plans. Blueprints are specified as TOSCA YAML files and describe the service topology as a set of nodes, relationships, dependencies, instantiation and configuration settings, monitoring, and maintenance. Beyond the YAML itself, a Blueprint can include multiple external resources, such as configuration and installation scripts (or Puppet manifests, Chef recipes, and so on) and basically any other resource required to run the application. You can find more details about the structure of Blueprints here.
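One consequence of describing a topology as nodes and relationships is that the orchestrator must derive a deployment order in which each node's dependencies are created first. The sketch below shows that step with a topological sort; the node and relationship names are invented for illustration, not taken from a real Blueprint.

```python
# Derive a deployment order from blueprint-style node dependencies.
# Each node lists the nodes it must be created after (its predecessors).
from graphlib import TopologicalSorter

blueprint_nodes = {
    "network": [],           # no dependencies: created first
    "server": ["network"],   # server is attached to the network
    "app": ["server"],       # app is installed on the server
}

# static_order() yields nodes with all predecessors before them.
order = list(TopologicalSorter(blueprint_nodes).static_order())
print(order)  # ['network', 'server', 'app']
```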
VNFM and VIM support
NFV service deployment is performed on an appropriate IaaS, which itself is a set of virtualized compute, network and storage resources. The ETSI MANO reference architecture identifies a component to manage these virtualized resources, referred to as the Virtual Infrastructure Manager (VIM). Traditionally, the open source community treats OpenStack/KVM as the de-facto standard VIM. However, an NFV service might span various VIM types and various hypervisors, so multi-VIM support is a common requirement for an orchestration engine.
Additionally, a separate element in the NFV MANO architecture is the VNF Manager (VNFM), which is responsible for lifecycle management of a particular VNF. The VNFM component might be generic, treating the VNF as a black box and performing similar operations for various VNFs, or there might be a vendor-specific VNFM with unique capabilities for managing a given VNF. Both VIM and VNFM communication are performed via the appropriate reference points defined by the NFV MANO architecture.
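The generic-versus-vendor-specific split above amounts to a shared lifecycle interface with interchangeable implementations. This is an illustrative sketch only: the method names are assumptions, not a MANO-specified API.

```python
# Sketch of the VNFM split: a common lifecycle interface, with a generic
# implementation that treats every VNF as a black box. A vendor-specific
# VNFM would subclass the same interface with vendor-aware behaviour.
from abc import ABC, abstractmethod


class VnfManager(ABC):
    """Lifecycle operations any VNFM, generic or vendor-specific, provides."""

    @abstractmethod
    def instantiate(self, vnfd): ...

    @abstractmethod
    def configure(self, vnf_id, params): ...

    @abstractmethod
    def terminate(self, vnf_id): ...


class GenericVnfm(VnfManager):
    """Applies the same steps to every VNF, regardless of vendor."""

    def instantiate(self, vnfd):
        return f"vnf-{vnfd['id']}"

    def configure(self, vnf_id, params):
        return {"vnf": vnf_id, "applied": params}

    def terminate(self, vnf_id):
        return True


vnfm = GenericVnfm()
vnf = vnfm.instantiate({"id": "firewall"})
print(vnf)
```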
OSM
The OSM project was conceived as a multi-VIM platform, and at the time of this writing it supports OpenStack, VMware and OpenVIM. OpenVIM is a lightweight VIM implementation that is effectively a Python wrapper around libvirt and a basic host networking configuration.
At the time of this writing, the OSM VCA has limited capabilities, but it can still be considered a generic VNFM based on Juju Charms. Further, it is possible to introduce support for vendor-specific VNFMs, but additional development and integration effort might be required on the Service Orchestrator (Rift.io) side.
OPEN-O
Release 1 of the OPEN-O project supports only OpenStack as a VIM. This support is available as a Java-based driver for the NFVO component. Support for VMware as a VIM is planned for future releases.
The OPEN-O Rel. 1 platform has a generic VNFM based on Juju Charms. Furthermore, the pluggable architecture of the OPEN-O platform can support any vendor-specific VNFM, but additional development and integration effort will be required.
CORD/XOS
At the time of this writing, the reference implementation of the CORD platform is architected around OpenStack as the platform to spawn NFV workloads. While there is no direct relationship to the NFV MANO architecture, the XOS orchestrator is responsible for VNF lifecycle management, and thus can be thought of as the entity that provides VNFM-like functions.
Gigaspaces Cloudify
When Cloudify was adapted for the NFV use case, it inherited the plugins for OpenStack, VMware, Azure and others that were already available for general-purpose cloud deployments. So we can say that Cloudify has multi-VIM support, and support for any arbitrary VIM may be added via an appropriate plugin. Following Gigaspaces’ reference model for NFV, there is a generic VNFM that can be used with the Cloudify NFV orchestrator out of the box. Additional vendor-specific VNFMs can be onboarded, but appropriate plugin development is required.
Capabilities to provision end-to-end service
NFV service provisioning consists of multiple steps, such as VNF instantiation, configuration, underlay network provisioning, and so on. Moreover, an NFV service might span multiple clouds and geographical locations. This kind of architecture requires complex workflow management by an NFV Orchestrator, and coordination and synchronisation between infrastructure entities. This section provides an overview of the various orchestrators' abilities to provision end-to-end service.
OSM
The OSM orchestration platform supports NFV service deployment spanning multiple VIMs. In particular, the OSM RO component (openmano) stores information about all VIMs available for deployment, and the Service Orchestrator can use this information during the NSD instantiation process. Meanwhile, underlay networking between VIMs must be preconfigured. There are plans to enable end-to-end network provisioning in the future, but OSM Rel. 1 has no such capability.
OPEN-O
By design, the OPEN-O platform considers both NFV and SDN infrastructure regions that might be used to provision end-to-end service. So technically, you can say that a multisite NFV service can be provisioned by the OPEN-O platform. However, the OPEN-O Rel. 1 platform implements just a couple of specific use cases, and at the time of this writing, you can't use it to provision an arbitrary multisite NFV service.
CORD/XOS
The reference implementation of the CORD platform defines the provisioning of a service over a defined CORD Pod. To enable multisite NFV service instantiation, an additional orchestration level on top of CORD/XOS is required. So from this perspective, at the time of this writing, CORD is not capable of instantiating a multisite NFV service.
Gigaspaces Cloudify
As Cloudify originally supported application deployment over multiple IaaS providers, it is technically possible to create a blueprint to deploy an NFV service that spans multiple VIMs. However, underlay network provisioning might require specific plugin development.
Interaction with standardization bodies and relevant communities
Each of the reviewed projects has strong industry community support. Depending on the nature of each community and the priorities of the project, there is a different focus on collaboration with an industry, other open source projects and standardization bodies.
OSM
Being hosted by ETSI, the OSM project closely collaborates with the ETSI NFV working group and follows the appropriate specifications, reference points and interfaces. At the time of this writing there is no formal collaboration between OSM and the OPNFV project, but it is under consideration by the OSM community. The same applies to other relevant open source projects, such as OpenStack and OpenDaylight; these projects are used "as-is" by the OSM platform without cross collaboration.
OPEN-O
The OPEN-O project aims to integrate both SDN and NFV solutions to provide end-to-end service, so there is formal communication with the ETSI NFV group, though the project itself doesn't strictly follow the interfaces defined by the ETSI NFV IFA working group. On the other hand, there is a strong integration effort with the OPNFV community via the initiation of the OPERA project, which aims to integrate the OPEN-O platform as a MANO orchestrator for the OPNFV platform. Additionally, there is strong interaction between OPEN-O and the MEF as part of the OpenLSO platform, and with the ONOS project towards seamless integration and enabling end-to-end SDN orchestration.
CORD/XOS
Having originated at ON.Lab (recently merged with the ONF), this project follows the approach and technology stack defined by the ONF. As of the time of this writing, the CORD project has no formal presence in OPNFV. Meanwhile, there is communication with the MEF and ONF towards gathering requirements and use cases for the CORD project. In particular, the MEF explicitly refers to E-CORD and its applicability in defining their OpenCS MEF project.
Gigaspaces Cloudify
While the Cloudify platform is an open source product, it is mostly developed by a single company; thus the overall roadmap and community strategy are defined by Gigaspaces. This also applies to collaboration with standardisation bodies: GigaSpaces participates in ETSI-approved NFV PoCs where Cloudify is used as a service orchestrator, in an MEF-initiated LSO proof of concept where Cloudify is used to provision E-Line EVPL service, and so on. Additionally, the Cloudify platform is used separately by the OPNFV community in the FuncTest project for vIMS test cases, but this mostly relates to Cloudify use cases rather than vendor-initiated community collaboration.
Conclusions
Summarising the current state of the NFV orchestration platforms, we may conclude the following:
The OSM platform is already suitable for evaluation purposes and has a relatively simple and straightforward architecture. Several sample NSDs and VNFDs are available for evaluation in the public Gerrit repo. As a result, the platform can be easily installed and integrated with an appropriate VIM to evaluate basic NFV capabilities, trial use cases and PoCs. The project is relatively young, however, and a number of features still require development and will arrive in upcoming releases. Furthermore, the lack of support for end-to-end NFV service provisioning across multiple regions, including underlay network provisioning, should be weighed against your desired use case. Considering the mature OSM community and its close interaction with the ETSI NFV group, this project might emerge as a viable option for production-grade NFV orchestration.
At the time of this writing, the main visible benefit of the OPEN-O platform is its flexible and extensible microservices-based architecture. The OPEN-O approach has considered end-to-end service provisioning spanning multiple SDN and NFV regions from the very beginning. Additionally, the OPEN-O project actively collaborates with the OPNFV community toward tight integration of the orchestrator with the OPNFV platform. Unfortunately, at the time of this writing, the OPEN-O platform requires further development to be capable of arbitrary NFV service provisioning. Additionally, a lack of documentation makes it hard to understand the microservice logic and interaction workflow. Meanwhile, the recent merge of OPEN-O and ECOMP under the ONAP project creates a powerful open source community with strong industry support, which may reshape the overall NFV orchestration market.
The CORD project is the right option when OpenFlow and whiteboxes are the primary choice for computing and networking infrastructure. The platform considers multiple use cases, and a large community is involved in platform development. Meanwhile, at the time of this writing, the CORD platform is a relatively "niche" solution built around OpenFlow and related technologies pushed to the market by the ONF.
Gigaspaces Cloudify is a platform that already has a relatively long history, and at the time of this writing it emerges as the most mature orchestration solution among the reviewed platforms. While the NFV use case wasn't originally considered for the Cloudify platform, Cloudify's pluggable and extensible architecture and embedded workflow engine enable arbitrary NFV service provisioning. However, if you do consider Cloudify as an orchestration engine, be sure to weigh the risk that the decision-making process for the overall platform strategy is controlled solely by Gigaspaces.
References

OSM official website
OSM project wiki
OPEN-O project official website
CORD project official website
Cloudify platform official website
Network Functions Virtualisation (NFV); Management and Orchestration
Cloudify approach for NFV Management & Orchestration
ARIA TOSCA project
TOSCA Simple Profile Specification
TOSCA Simple Profile for Network Functions Virtualization
OPNFV OPERA project
OpenCS project   
MEF OpenLSO and OpenCS projects
OPNFV vIMS functional testing
OSM Data Models: NSD and VNFD format
Cloudify Blueprint overview

The post What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

3 statistics that prove enterprise video is on the rise

The landscape of video content is changing, and not just at home.
New viewing patterns have created a dramatic shift in the media world as consumers move to cloud-based streaming video to watch their favorite shows on laptops, tablets and mobile phones. Companies have taken note and are ramping up their use of streaming video for the enterprise to keep employees, customers and business partners engaged and informed.
Data pooled on the IBM Cloud Video service from almost a billion viewers over the past two years illustrate the growth of streaming video within the enterprise, highlighting a significant increase in mobile viewing, content quality and global activity.
The data tracks videos streamed by organizations for both internal and external communications, taking advantage of new cloud technologies to produce and deliver their content directly to audiences around the world.
Here are three key takeaways:
1. Mobile viewership of enterprise cloud-based video increased five-fold
People are all but attached to their phones, so it’s no surprise those habits extend to the workplace. Consumers are accustomed to accessing information on the go, and employees now expect the same level of convenience at work.
Accordingly, the percentage of enterprise streaming on mobile devices was almost five times higher in 2016 than in 2015. Views coming from mobile devices (rather than desktop computers) increased from 5.85 percent in 2015 to 28.82 percent in 2016.
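The “almost five times” figure follows directly from the two percentages quoted above; a quick sanity check:

```python
# Ratio of the 2016 mobile viewing share to the 2015 share, per the figures above.
share_2015 = 5.85   # percent of views from mobile devices in 2015
share_2016 = 28.82  # percent of views from mobile devices in 2016
ratio = share_2016 / share_2015
print(round(ratio, 2))  # ~4.93, i.e. "almost five times higher"
```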
2. Video quality within the enterprise is improving
When they’re streaming video for entertainment, viewers expect a seamless experience. Within the enterprise, employees also demand high-quality video. Low-resolution video when live-streaming a presentation or trying to complete employee training can create frustrations that distract from the video’s intended purpose.
The average video file size for the enterprise increased by 29 percent from 2015 to 2016, rising from 0.77 gigabytes to 1 gigabyte, according to the IBM Cloud Video data. This is even more remarkable given the average video length decreased by 8 percent. Enterprise videos are getting shorter, but the files are getting larger, which means companies are making quality a priority.
3. Enterprise viewership outside of the US is rising
The data show that enterprise viewership outside of the US grew by more than 25 percent from 2015 to 2016, as international businesses increasingly integrate cloud-based video into their strategies. With many employees working from remote locations, video is essential for communicating effectively and making everyone feel part of the team.
Live-streamed video fosters a strong company culture and enables employees to share information with colleagues more efficiently, without sacrificing the visual components or face-to-face communication. It makes one-to-many meetings, such as employee town halls, more engaging for everyone, regardless of where they are located.
Trends support expanded use of enterprise video
From these figures, it’s clear employees and managers alike increasingly rely on streaming video within the enterprise. Enterprise use is a key driver in making video one of the fastest-growing areas of data in the cloud. Within the workplace, cloud-based video has clearly transitioned from an optional technology to an essential communications tool.
As the value of enterprise video increases, more companies will turn to on-the-go streaming through mobile devices and use video to connect remote employees. Just like media and entertainment firms, companies across all industries will look for streaming technologies that meet viewer demands for high-quality video streamed reliably on any connected device, anywhere employees are.
Learn more about streaming video business solutions.
The post 3 statistics that prove enterprise video is on the rise appeared first on news.
Source: Thoughts on Cloud

Creating connections with the HatsOff mobile app

This is the third part in a series about a group of competitors in the Connect to Cognitive Build contest. Read the first part to find out how the team developed the idea for the app and the second part about design.
Our app that rewards kindness, HatsOff, is on the verge of becoming something big.
As the team continued to work on the prototype, we needed a Bluemix development organization set up with sufficient runtimes and resources, as well as a code repository for the app and API prototype.
For the showcase mobile application, we debated between developing a purely native iOS app and a hybrid app using the IBM MobileFirst platform. We chose the hybrid option, which let us implement the application across platforms, reaching users on different operating systems while taking advantage of the capabilities MobileFirst provides.
We then started to develop the user interface screens using the Ionic framework. I did the initial work, then handed it off to our colleague Sai. He made it come to life by adding the new logo and color scheme, then implementing the Agent screens.
For the backend architecture, we had to choose a NoSQL database to store our data from among the many that can handle geospatial queries. Cloudant was an easy choice. We also used API Connect and its LoopBack technology to model and implement our back-end services and APIs.
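Cloudant exposes its geospatial indexes over HTTP against a design document. As a minimal sketch of the kind of radius query involved (the account, database, design-document and index names here are illustrative assumptions, not the ones HatsOff used):

```python
from urllib.parse import urlencode

def geo_query_url(account, db, ddoc, index, lat, lon, radius_m):
    """Build a Cloudant Geo radius-query URL (all names are illustrative)."""
    base = "https://{0}.cloudant.com/{1}/_design/{2}/_geo/{3}".format(
        account, db, ddoc, index)
    # Cloudant Geo takes lat/lon plus a radius in meters as query parameters.
    params = urlencode({"lat": lat, "lon": lon, "radius": radius_m,
                        "format": "json"})
    return base + "?" + params

# Hypothetical example: find HatsOff events within 500 m of a point.
url = geo_query_url("acme", "hatsoff", "geodd", "geoidx", 42.36, -71.06, 500)
print(url)
```

The returned URL would then be fetched with the usual authenticated HTTP client of your choice.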
We created APIs to mine the data we collected to find characteristics. We had to determine which cognitive services were best suited for user interaction with HatsOff, which server runtime framework to use for our APIs exposed through API Connect, and how best to use a serverless, function-based programming environment such as OpenWhisk. We used Watson APIs such as Speech to Text, Text to Speech, Alchemy Language and Watson Virtual Agent. There might also be uses for other services, such as Personality Insights, which analyzes customer conversations and provides personalized services.

OpenWhisk triggers actions for events such as a user reaching a HatsOff point threshold. This action could be a notification to an agent, or it could be more complex and kick off a workflow to re-evaluate the user’s premium.
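An OpenWhisk action is just a function that takes a parameter dictionary and returns one. The threshold check described above might be sketched like this (the field names and the threshold value are assumptions for illustration):

```python
# Minimal OpenWhisk-style Python action: fired by a trigger when a user's
# HatsOff points change, it decides whether an agent should be notified.
def main(params):
    points = params.get("points", 0)
    threshold = params.get("threshold", 100)  # assumed default threshold
    if points >= threshold:
        return {"notify": True,
                "message": "User reached {0} HatsOff points".format(points)}
    return {"notify": False}
```

A trigger bound to this action via a rule would pass the event payload in as `params`; the returned dictionary becomes the action's JSON result.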
Choosing to use blockchain
Kicking off a workflow in the enterprise from Bluemix is possible with the Secure Gateway or Business Operations Connect. The idea can go a bit further by connecting businesses to their customers through blockchain networks with the use of smart contracts.
We explored use cases for how blockchain could fit in with our solution as applied to the automobile insurance industry. Our idea was that insurance policies could be represented as smart contracts, with rules that adjust premiums. These adjustments could be triggered by HatsOff calling a non-validating member node in the network. It was a viable and appealing fit for blockchain, and we experimented with a smorgasbord of new technologies. From there, Neil Delima and Ron Lynn stepped in to wire the user interface to the backend services.
We’re not only developing an application to show random acts of kindness, but also providing a set of APIs that any company can use for integration. An insurance company interested in peer feedback for decision making can subscribe to and integrate these APIs with their existing apps. Their apps would call the HatsOff user registration APIs with existing user data, such as a car’s make, model, license plate number, and so on. The API offering consists of a bundle of existing services which include API Connect, OpenWhisk and cognitive services such as Text to Speech and Alchemy Language. Providers could use these services as part of the HatsOff bundle or as a standalone with their existing solutions.
An example would be a web application that uses Speech to Text. There are other options, such as bundling APIs with Watson Internet of Things (IoT) driver behavior. Integration with social media sites enables augmenting a user’s profile information and posting a HatsOff to someone’s Facebook page.
All these things — innovative application of technology, a team of volunteer contributors, never losing sight of the business perspective, grounding our efforts around the business case — are why we were successful. It’s so much more than a kindly driver allowing me to make a left turn in rush-hour traffic.
What can we do with this data? Who benefits, and even more importantly, where is the value for businesses? Our big moonshot is to create multimillion-dollar value for IBM with API Connect, Watson and the whole world of IoT.
The HatsOff app’s core team and several incredible volunteers are hard at work putting together a working prototype in which all the details come together. We are asking the right questions, open to learning, and focused on the customers who will take our app to the next level.
Learn more about how IBM is helping clients take advantage of the digital economy.
HatsOff team members Ron Lynn, Padma Chukka and Neil Delima contributed to this story.
The post Creating connections with the HatsOff mobile app appeared first on news.
Source: Thoughts on Cloud

Creative Service Catalog Descriptions in CloudForms

In this post, we will show you how to make your service catalog descriptions more elegant and flexible in Red Hat CloudForms. If you just type a description, along with a long description, you’ll get something like this:

 
This is fine: it’s informative and simple. But we could improve on it.
The Long Description field in a CloudForms catalog item can take raw HTML. This means we can apply additional formatting such as font sizes and bold text. Here, for example, is a more complex catalog item:

 
The Implementation
Really, there isn’t much limit on what you can do with these other than your imagination and HTML skills. Just be aware that global style tags will not function in the self-service UI, but inline formatting works just fine. Some options, like the one above, might just add a bit of aesthetic sugar, but more complex services, especially those that will be presented to customers, can be complemented by an informative and attractive description.
For example, let’s say we have a service that provisions two virtual machines and a load balancer. We could use the following HTML:
<!DOCTYPE html>
<html>
<body>
<h1>Create 2 Virtual Machines under a Load balancer and configure Load Balancing rules
for the VMs</h1>
<p>This template allows you to create 2 Virtual Machines under a Load balancer and
configure a load balancing rule on Port 80. This template also deploys a Storage
Account, Virtual Network, Public IP address, Availability Set and Network
Interfaces.</p>
<p>In this template, we use the resource loops capability to create the network
interfaces and virtual machines.</p>
</body>
</html>
Which would give us something like this:

 
Take a look at how it’s displayed in the self service UI, both when hovering over the information link:

 
And on the order page itself:

 
As you develop more complex services, the value of these features will become more and more apparent.
A More Complex Example
Let’s take a look at a service that deploys and configures multiple virtual machines in Microsoft Azure and sets up Ansible to manage them. Everything you need to create this service can be found on the GitHub page with the Orchestration Template, which is especially handy since we can automatically generate service dialogs from these templates. This code:
<!DOCTYPE html>
<html>
<body>
<h1>Advanced Linux Ansible Template: Setup Ansible to efficiently manage N Linux VMs</h1>
<p>This advanced template deploys N Linux VMs (Ubuntu) and configures Ansible so you
can easily manage all the VMs. Don’t suffer more pain configuring and managing all
your VMs, just use Ansible! Ansible is a very powerful masterless configuration
management system based on SSH.</p>
<p>This template creates a storage account (Standard or Premium storage), a Virtual
Network, an Availability Set (3 Fault Domains and 10 Update Domains), one private
NIC per VM, one public IP and a Load Balancer, and you can specify SSH keys to access
your VMs remotely from your laptop. You will need an additional certificate / public
key for the Ansible configuration; before executing the template you must upload them
to a private Azure storage account in a container named ssh.</p>
<p>The template uses two custom scripts:</p>
<ul>
<li>The first script configures SSH keys (public) in all the VMs for the root user
so you can manage the VMs with Ansible.</li>
<li>The second script installs Ansible on an A1/DS1 jumpbox VM so you can use it as a
controller. The script also deploys the provided certificate to /root/.ssh. Then,
it will execute an Ansible playbook to create a RAID with all the available
disks.</li>
<li><p>Before you execute the script, you will need to create a PRIVATE storage
account and a container named ssh, and upload your certificate and public keys
for Ansible/SSH.</p>
<p>Once the template finishes, SSH into the AnsibleController VM (by default the
load balancer has a NAT rule using port 64000); then you can manage your VMs
with Ansible and the root user. For instance:</p>
<pre><code>sudo su root
ansible all -m ping (to ping all the VMs)
or
ansible all -m setup (to show all VMs' system info)
</code></pre></li>
</ul>
<p>This template also illustrates how to use Outputs and Tags.</p>
<ul>
<li>The template will generate an output with the FQDN of the new public IP so you
can easily connect to the Ansible VM.</li>
<li>The template will associate two tags with all the VMs: ServerRole (web server,
database, etc.) and ServerEnvironment (DEV, PRE, INT, PRO, etc.)</li>
</ul>
<h2>Known Issues and Limitations</h2>
<ul>
<li>Fixed number of data disks. This is due to a current limitation of the resource
manager; this template creates 2 data disks with ReadOnly caching.</li>
<li>Only the Ansible controller VM will be accessible via SSH.</li>
<li>Scripts are not yet idempotent and cannot handle updates.</li>
<li>Current version doesn’t use secured endpoints. If you are going to host
confidential data, make sure that you secure the VNET by using Security
Groups.</li>
</ul>
</body>
</html>
Renders the following:

 
And this is how it looks in the self service UI order page:

 
Additional Notes
When developing descriptions like these, it can be a bit frustrating to have to edit and save your long descriptions just to see how your work is coming along. I like to use an online editor, such as the Try It editor from w3schools. That way you can see your results quickly and get close to what you’re looking for before building the catalog item in CloudForms. The site is also a great reference for HTML syntax. You can use these editors to build things like tables that describe your services to the service consumer as efficiently as possible, as in this example:

 
That’s the basic idea. The built-in CloudForms features that allow service designers to use HTML in their service descriptions give us the tools we need to create more informative, as well as professional-looking, catalog items. Try it out with some of your existing services and I suspect you’ll find it’s quite easy to improve your overall presentation with very little effort.
Source: CloudForms

Cloud makes high-performance gaming on any device possible

If you want to do gaming right, it can be a very expensive hobby.
Every new game that comes out has more demanding hardware specs. Even if you buy a bleeding-edge gaming computer, in six or nine months it will be dated. You’ll be contemplating hardware upgrades or even an entire system upgrade. And playing Call of Duty or Halo on your Android or iOS device was once unimaginable.
That is the major pain point we are solving with LiquidSky.
With cloud technology, we at LiquidSky offer users a powerful Windows or Steam environment on any web-enabled device so they can install and play games. Gamers who use a low-end PC, Mac, Linux machine, or Android phone or tablet can give it the performance of a $1,500 gaming desktop computer.
How it works
Let’s say you use an Android phone. Normally, you would be restricted to playing Angry Birds, Candy Crush and other apps with relatively low-end graphics.
If you open the LiquidSky app, you can play all the latest AAA gaming titles. The game runs on a server within IBM Cloud data centers and is streamed as video from IBM Bluemix Virtual Servers to the user’s device. It’s the same concept as watching a streaming movie or TV show on Netflix.
User inputs are sent back to the server, similar to virtual desktop technology such as Citrix or VMware, except LiquidSky has less than 30 milliseconds of latency from server to client.
Server optimization and cost savings
LiquidSky uses IBM Cloud data centers because they offer the performance and global presence we need. We looked at other providers, but they do not offer auto-scalability at a bare metal level as IBM does. With Bluemix Virtual Servers, LiquidSky can take over the entire server to run our own software, and can chop up, divide and hypervise the resources to meet our custom needs. We can also fit more users on a server, which saves money.
We save money on bandwidth because IBM owns the entire network of servers. That means we can move users among servers and manage users across a global ecosystem.
We like the IBM network. It’s clean and well-optimized, so there are not a lot of obstacles to get around or hoops to go through with different routers. Then there are points of presence (PoPs) from each region, pretty much worldwide, so we can get very close to our users with low latency, clean delivery and little packet loss.
Dynamic resources benefit gamers
A $1,500 computer is an expensive investment, so when gamers can simply sign on with LiquidSky and download whatever game they want, it’s exciting.
LiquidSky delivers gaming resources on an as-needed basis. A user’s entire computer is essentially on our server, which can be reached from almost any device. There will be no latency and no network issues with high-definition, streaming games. This is what we mean by “liquid” in LiquidSky: dynamic resources. It’s the flowing of resources freely from user to user.
You can read more about LiquidSky at TIME.com or FORTUNE.com.
Learn more about IBM Cloud gaming solutions.
The post Cloud makes high-performance gaming on any device possible appeared first on news.
Source: Thoughts on Cloud

IBM and Aricent partner for client success

For more than eight years, Aricent has been one of IBM’s strategic digital product transformation partners. Now the companies are expanding that relationship with a new 10-year product partnership.
IBM and Aricent will introduce expanded digital capabilities to meet customer expectations and performance objectives. The two companies are committed to enhancing IBM’s cloud product portfolio through these new digital experiences and through intelligent automation.
This agreement marks a significant milestone in how IBM is leveraging its long-term partnership with Aricent, where our goals are aligned and together we have the capacity to extend high value products in IBM’s systems management portfolio to the hybrid cloud.
IBM and Aricent will collaborate to accelerate product roadmaps, modernize user experiences and continue to enrich cloud capabilities. They will also accelerate time to market and increase customer value and revenue growth.
To this partnership, Aricent brings deep engineering expertise in cloud, SaaS and digital platform technologies, as well as investments in cloud-native software frameworks. Aricent is a global design and engineering company with more than 12,000 talented employees.
The IBM-Aricent partnership will also introduce product capabilities into new geographies, platform eco-systems and deployment models with the goal of enabling our clients’ digital transformation and helping them succeed in the cognitive era.
The post IBM and Aricent partner for client success appeared first on news.
Source: Thoughts on Cloud

Blogs, week of March 6th

There are lots of great blog posts this week from the RDO community.

RDO Ocata Release Behind The Scenes by Haïkel Guémar

I have been involved in 6 GA releases of RDO (From Juno to Ocata), and I wanted to share a glimpse of the preparation work. Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, moved to Software Factory.

Read more at http://tm3.org/ec

Developing Mistral workflows for TripleO by Steve Hardy

During the newton/ocata development cycles, TripleO made changes to the architecture so we make use of Mistral (the OpenStack workflow API project) to drive workflows required to deploy your OpenStack cloud.

Read more at http://tm3.org/ed

Use a CI/CD workflow to manage TripleO life cycle by Nicolas Hicher

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

Read more at http://tm3.org/ee

Red Hat Knows OpenStack by Rich Bowen

Clips of some of my interviews from the OpenStack PTG last week. Many more to come.

Read more at http://tm3.org/ef

OpenStack Pike PTG: TripleO, TripleO UI – Some highlights by jpichon

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.

Read more at http://tm3.org/eg

OpenStack PTG, trip report by rbowen

Last week, I attended the OpenStack PTG (Project Teams Gathering) in Atlanta.

Read more at http://tm3.org/eh
Source: RDO

IBM and Salesforce partner to unlock data across clouds and enterprises

A recent Bain & Company survey found that 80 percent of companies say they deliver superior service, but only 8 percent of customers find that to be true. While many companies preach about great customer service, not many actually practice it. We want to change that.
Think about your own experiences when it comes to the challenge of providing great customer service. How many times have you had to provide the same information over and over? How often have you been bounced from department to department or “expert” to “expert” because they didn’t have the right information to solve your issue?  How many times have you dealt with an automated voice system that couldn’t understand you or repeatedly sent you to the wrong place?
For many organizations, the issue is not that they don’t have the right data. It’s that they don’t have access to the right data when and where they need it.
As companies grow, expand and evolve, in many cases their IT environment has not been able to keep pace. The result is a myriad of applications and data scattered across mainframes, cloud applications, servers in other divisions, business partner records and personal spreadsheets. Even when organizations can get access to the data, they can’t work with it in a format that helps them to drive insights.
The best companies are focused on using cognitive solutions to deliver incredible customer moments.  That’s why IBM and Salesforce, the world’s number one CRM company, have entered a strategic partnership to accelerate how customers unlock and monetize data and intelligence with joint solutions.  With new integration patterns designed specifically for Salesforce, IBM Application Suite for Salesforce helps organizations realize their full potential by unlocking access to data across multiple clouds and enterprises for use by Salesforce clouds.
The crux of these solutions is their simplicity. Now, it’s easy for anyone to make the right connections in minutes without needing any technical support. Business users are empowered to be even more responsive to customers through do-it-yourself interfaces that enable integrating Salesforce data with other business systems to quickly analyze, manipulate and act upon customer data held in Marketo, Asana, SAP and more.
For organizations looking to get even more functionality out of Salesforce, the solution expands to enable powerful interactions between Salesforce and enterprise IT systems. IT professionals can broadcast Salesforce events to enterprise applications for real-time updates. They can easily keep data in sync across the CRM and other applications through pre-built templates to popular software-as-a-service (SaaS) and on-premises apps. Developers building Lightning or Apex applications have OData 4.0 support to ensure fast, virtual access to any record in any enterprise system, whether off the shelf or homegrown.
The key advantage of the IBM and Salesforce partnership is the ability to connect enterprise and external data that sits outside a company’s CRM system to its CRM data to gain better insights on how those elements will impact clients. Organizations will have a full picture where and how events are impacting their customers, and consequently, their bottom line.
Through this partnership, IBM and Salesforce customers will create more engaging customer interactions. For example, a financial advisor will be able to more easily consider outside factors such as news and financial market reports that may affect individual clients. The advisor can use that knowledge to take a preemptive and more personalized approach to managing those portfolios and relationships. Insurers will have better insights into adverse weather events that can impact clients in a particular region, helping them to proactively engage with those who might be affected. That is the vision we are bringing to our customers.
And the best part is you can get started today. If you want to learn more, join the webcast or visit our website.
Follow @IBMIntegration on Twitter to find out all the latest news.
The post IBM and Salesforce partner to unlock data across clouds and enterprises appeared first on news.
Source: Thoughts on Cloud

InterConnect Inside Scoop: video games, cars and beer

Do you have a friend who always has tickets to see your favorite football team? Or who always seems to have the inside scoop on the best bands coming into town? Well, I may not have gotten you tickets this year, but for those planning on going to InterConnect, I would love to share my insider perspective on demos where you will have a blast.
If you’re like me, you have a short attention span. You don’t want to sit through sessions that don’t pique your interest, and you enjoy breaks from all the hard work. And InterConnect can be daunting. Celebrity guests, renowned engineers, executive speakers, more than 2,000 training sessions and over $8,000 in education—it’s a lot. But along with all these serious business opportunities, there are also plenty of chances to have a good time. As your friend in the know, I would love to share the inside scoop on three of my favorite demos.
1: Watson picks your beer
Do you love beer but struggle to find something that fits your taste? It’s time to do away with trial and error. At this demo, you will share a few of your preferences with IBMers, and Watson will customize a perfect flight of three beers based on your tastes. An IoT device will use machine learning and algorithms to give you a fascinating personal beer profile. Compare your tastes with friends and enjoy beer chosen just for you. Welcome to the future of craft beer.
2: Video games and cognitive intelligence
We professionals in the tech industry have something of a reputation for loving video games. While no hobby is for everyone, lots of us geeks will no doubt lose it when we hear that artificial intelligence and StarCraft will be featured at the same demo.
IBM will bring in professional e-sports gamers to compete for an audience with screens showing the game from every angle. Comfy lounge seating and standing areas will allow fans to watch, cheer and socialize during the game. iPads on stands will display data analytics in real time. You will have a chance to see how your gaming skills and reflexes compare to the pros. It’s a great way to unwind after a long day of networking and professional training.
3. The future of cars
Do you ever wonder what the future of driving will look like? Come see how cognitive capabilities from IBM can help automakers change the game. Enjoy a cool interactive demo and get a chance to chat with the experts.
These demos are by no means the only way to have a good time at the conference. In my last blog, I shared that improv legend and future Hamilton star Wayne Brady will be speaking. Comedians, cars, beer and video games—this is going to be a lot of fun. Make sure to register today.
If you want to chat more about InterConnect, continue the conversation by leaving a comment, connecting with me on LinkedIn or following @IBMCloud on Twitter. I look forward to hearing from you.
The post InterConnect Inside Scoop: video games, cars and beer appeared first on news.
Source: Thoughts on Cloud