6 Java developer highlights at InterConnect 2017

Java developers, listen up. By now, you may have heard about the sessions, labs, roundtables, activities and events being offered at IBM InterConnect 2017. With more than 2,000 sessions and 200 labs to choose from, it can be a daunting task to create your agenda for the conference. Luckily, we’ve done some of the work for you.
Here are six things that Java developers shouldn’t miss at InterConnect.
1. Hit the road with Code Rally
Code Rally is an open-source racing game where you play by programming a vehicle in Java—or Node.js if you prefer—to race around virtual tracks. Code Rally is an example of a microservice architecture, and each vehicle is its own self-contained microservice.  When deployed, each microservice works within our race simulation service to compete against other coders. Head over to the DevZone in the Concourse during the conference to give Code Rally a test drive.
2. DevZone
As a developer, the DevZone is the place to be. Located in the Concourse, you can hang out throughout the week with other developers to learn and share technical knowledge that will help you create the next generation of apps and services. While you’re at the DevZone, you can also talk to an IBM expert at an Ask Me Anything Expert Station, or learn a new skill in a short 20-minute Hello World Lab.
3. Session: The rise of microservices
Microservices are a hot topic in the world of software development. They help teams divide and conquer to solve problems faster and deliver more rapidly. In this session, RedMonk analyst and co-founder James Governor will discuss the rise of microservices with IBM Fellow and Cloud Platform CTO Jason McGee. James and Jason will explore the concept of microservices and how cloud has enabled their rise. They will cover the capabilities needed to be successful combined with real-world examples, lessons learned, and insights on how to get started from where you are today.
4. Open Tech Summit
Mobile, cloud, and big data are all trends that are changing the way we interact with people. Capturing the value from these interactions requires rapid innovation, interoperability and scalability enabled by an open approach. At the Open Tech Summit on Sunday, March 19th, from 4:00 PM to 7:00 PM, leaders of the most game-changing open technology communities will share their perspectives on the benefits of open technology. Come network and engage directly with experts across the industry.
5. Lab: Agile development using MicroProfile and IBM WebSphere Liberty
MicroProfile and Java EE 7 make developing and deploying microservice style applications quick and efficient. In this lab, you will learn how to use MicroProfile and Java EE 7’s  application development capabilities to create a microservice that uses CDI, JAX-RS, WebSockets, Concurrency Utilities for Java and a NoSQL database running on WebSphere Liberty.
6. Session: Building cloud-native microservices with Liberty and Node.js, a product development journey
In addition to talking about the benefits of developing applications as microservices, and showing you how to build them, IBM teams have also been building new microservice-based offerings. Head over to this session where I will discuss the latest IBM offerings with senior technical staff member Brian Pulito. We’ll cover how this was developed as a collection of cloud native microservices built on WebSphere Application Server and Node.js technologies. Learn about the tools, team structure, and development practices used when building the IBM Voice Gateway.
There will be no shortage of Java activity at the conference. You don’t want to miss this opportunity to train, network, and learn about developing with Java. Register for IBM InterConnect today.   
Source: Thoughts on Cloud

Increasing PolyBase Row width limitation in Azure SQL Data Warehouse

Azure SQL Data Warehouse (SQL DW) is a SQL-based, fully managed, petabyte-scale cloud solution for data warehousing. SQL DW is highly elastic: you can provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you're using instead of being locked into predefined cluster configurations.

In the latest release of PolyBase in SQL DW, we have increased the row width limit to 1MB from 32KB. This will allow you to ingest your wide columns directly from Windows Azure Storage Blob or Azure Data Lake Store into SQL DW.

When thinking about loading data into SQL DW via PolyBase, you need to take into consideration a couple of key points regarding the data size of strings.

For character types (char, varchar, nchar, nvarchar), the 1MB data size is based on memory consumption of data in UTF-16 format. This means that each character is represented by 2 bytes.
When importing variable length columns ((n)varchar, varbinary), the loading tool pads the buffer to the width of the schema in the external table definition regardless of data type. This means that a varchar(8000) has 8000 bytes reserved regardless of the size of the data in the row.

To help improve performance, define your external table with a minimal amount of padding on the schema data types to maximize the amount of data transferred per internal buffer.
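For illustration, here is a minimal sketch of a tightly sized external table followed by a CTAS load. All object names are hypothetical, and the external data source and file format objects are assumed to already exist:

```sql
-- Size character columns to the actual data so PolyBase reserves a smaller
-- buffer per row (an NVARCHAR(100) column reserves 200 bytes in UTF-16,
-- rather than the 8000 an NVARCHAR(4000) column would).
CREATE EXTERNAL TABLE dbo.ext_Orders (
    OrderId      INT           NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    OrderNote    VARCHAR(500)  NULL
)
WITH (
    LOCATION    = '/orders/',
    DATA_SOURCE = AzureBlobStore,   -- assumed existing external data source
    FILE_FORMAT = TextFileFormat    -- assumed existing external file format
);

-- Load the data into the warehouse with CTAS:
CREATE TABLE dbo.Orders
WITH (DISTRIBUTION = HASH(OrderId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM dbo.ext_Orders;
```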

Additionally, it is a best practice to use a medium or large resource class and to scale up to a larger DWU instance to take advantage of the additional memory needed for importing data, especially into clustered columnstore index (CCI) tables. More information can be found in our documentation for memory allocation by DWU and resource class.
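As a quick sketch (the user name here is hypothetical), you can assign a loading user to a larger resource class with sp_addrolemember:

```sql
-- Add the loading user to the 'largerc' resource class so its load
-- statements are granted more memory:
EXEC sp_addrolemember 'largerc', 'LoadUser';
```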

Next Steps

Give loading with External Tables into SQL DW a try with our loading tutorial.

Learn More

What is Azure SQL Data Warehouse?

What is Azure Data Lake Store?

SQL Data Warehouse best practices

MSDN forum

Stack Overflow forum
Source: Azure

Announcing Azure SQL Database Premium RS, 4TB storage options, and enhanced portal experience

Today we are happy to announce the preview of the latest addition to our service tiers, Premium RS, an increase of the storage limit to 4TB for Premium P11 and P15, and, along with it, a new, enhanced portal experience for selecting and managing service tiers and performance levels.

Adding more choices to our service tiers and increasing the available storage is a crucial step toward our long-term commitment of providing more flexibility, for compute as well as storage, across all performance tiers.

Premium RS

Premium RS is designed for IO-intensive workloads that need Premium performance but do not require the highest availability guarantees. This tier is ideal for workloads that can replay the data in case of a severe system error, such as analytical workloads where the database is not the system of record. In addition, Premium RS is great for non-production databases, such as development using in-memory technologies or pre-production performance testing. For more details, refer to the documentation.

4TB storage option in Premium P11 and P15

You can now use up to 4TB of included storage with P11 and P15 Premium databases at no additional charge. Until we reach worldwide availability later in CY 2017, the 4TB option can be selected for databases located in the following regions: East US 2, West US, Canada East and Southeast Asia (all starting March 9th), and West Europe, Japan East, Australia East and Canada Central (available today). For more details, refer to the documentation.

Enhanced pricing tier portal experience

We have simplified the pricing tier management experience for databases in the portal. Configuring your database can now be done in three simple steps, reflecting the additional options we are providing such as Premium RS and additional storage configurations:

Select the service tier which corresponds to your workload needs.
Select the performance limits (DTU) required by your database.
Select the maximum storage required by your database. This added option makes it simpler for you to manage the growth of your databases.

Next steps:

Review the pricing page for our new offers.
Create a new Premium RS database or elastic pool.
Create a P11 or P15 Premium database with 4TB of storage.

Source: Azure

Azure Service Bus Premium Messaging now available in UK

We’re pleased to announce that Azure Service Bus Premium Messaging is now available in the UK.

Service Bus Premium Messaging supports a broader array of mission-critical cloud apps, and provides all the messaging features of Service Bus queues and topics with predictable, repeatable performance and improved availability – now generally available in the UK.

For more general information about Service Bus Premium Messaging, see this July 2016 blog post and this January 2017 article "Service Bus Premium and Standard messaging tiers".

We are excited about this addition, and invite customers using this Azure region to try Azure Service Bus Premium Messaging today!
Source: Azure

FBI Director Comey: "There Is No Such Thing As Absolute Privacy In America"

Darren McCollester / Getty Images

At a cybersecurity conference hosted by Boston College, FBI director James Comey did not discuss his part in the Clinton email scandal, the firestorm over the Russian dossier, President Trump's alarming wiretapping allegations, or unprecedented Russian meddling in the presidential election. Instead, he stuck to that other national controversy in which he maintains a starring role: encryption.

“There is no such thing as absolute privacy in America,” Comey said Wednesday. “That's the bargain. And we made that bargain over two centuries ago to achieve two goals. To achieve the very, very important goal of privacy and to achieve the important goal of security. Widespread default encryption changes that bargain. In my view it shatters the bargain.”

Comey's remarks come just a day after Wikileaks published 9,000 documents and files that it says came from the CIA’s Center for Cyber Intelligence and allegedly detail the agency's ability to hack into phones, laptops, and “smart” TVs.

“It is not the FBI's job to tell the American people how to live.”

To support his argument that ubiquitous, default encryption is limiting the FBI's lawful surveillance powers, Director Comey said the agency received 2,800 devices that it had lawful authority to access in the final three months of 2016. The FBI was not able to open 1,200 of those devices, about 43 percent, Comey said. The devices were linked to an array of criminal cases as well as counterintelligence and terrorism investigations, and could not be accessed using any technique available to the FBI, Comey added. Comey did not, however, explain how the inability to access those devices impacted investigations.

Comey disputed claims that he is advocating for weaker encryption or so-called encryption backdoors into our phones. He insisted, contrary to arguments made by prominent computer scientists and much of Silicon Valley, that firms can retain access to a person's communications while also providing strong encryption. “Here's the deal though: it is not the FBI's job to tell the American people how to live,” Comey said. “I also don't think it's the job of tech companies to tell the American people how to live.”

During the contentious legal dispute last year between Apple and the FBI, many saw the use of metadata and the FBI developing its own in-house hacking expertise as reasonable alternatives to a controversial legal ruling or new legislation on encryption. But Comey said Wednesday that metadata is generally too limited to prove guilt in criminal cases and that building FBI hacking tools would be overly expensive and impractical for broader use.

Comey acknowledged that Americans enjoy a reasonable expectation of privacy in our homes, cars, and devices. But, he added, with good reason and a court's permission, law enforcement should be allowed to invade our private spaces.

“The advent of default ubiquitous strong encryption is making more and more of the room in which the FBI investigates dark,” Comey said. According to the FBI Director, sophisticated criminals, nation states, and spies have had access to encryption technology for decades, limiting the FBI's ability to monitor their actions. The problem the agency faces now, since the disclosures of Edward Snowden, Comey said, is that encryption tools are widely available, hiding a much larger portion of the criminal world from the FBI's view.

“You're stuck with me for another six and a half years.”

Over the weekend, Director Comey asked officials at the Justice Department to publicly reject President Trump's claims that President Obama ordered his phones wiretapped at Trump Tower. But the FBI Director did not address Trump's allegations.

After the election, then President-elect Trump told CBS 60 Minutes that he was not sure if he would ask Comey to resign. In January, however, Trump asked Comey to stay on as FBI Director. At the Boston conference, Comey said he intends to complete his 10-year term. In his opening remarks, Comey said, “You're stuck with me for another six and a half years. And so I'd love to be invited back again.”

Director Comey has been invited by the House Judiciary Committee to speak as a witness during the first public hearing on Russian interference in the presidential election. The hearing is scheduled for March 20.

Source: BuzzFeed

Azure Data Factory February new features update

Azure Data Factory allows you to bring data from a rich variety of locations in diverse formats into Azure for advanced analytics and predictive modeling on top of massive amounts of data. We have been listening to your feedback and strive to continuously introduce new features and fixes to support more data ingest and transformation scenarios. Moving into the new year, we would like to start a monthly feature summary blog series so our users can easily keep track of new feature details and use them right away.

Here is a complete list of the Azure Data Factory updates for February. We will go through them one by one in this blog post.

New Oracle driver bundled with Data Management Gateway with performance enhancements
Service Principal authentication support for Azure Data Lake Store
Automatic table schema creation when loading into SQL Data Warehouse
Zip compression/decompression support
Support extracting data from arrays in JSON files
Ability to explicitly specify cloud copy execution location
Support updating the new Azure Resource Manager Machine Learning web service

New Oracle driver bundled with Data Management Gateway with performance enhancements

Introduction: Previously, to connect to an Oracle data source through Data Management Gateway, users were required to install the Oracle provider separately, which caused them to run into various issues. Now, with the Data Management Gateway version 2.7 update, a new Microsoft driver for Oracle is installed, so no separate Oracle driver installation is required. The new bundled driver provides better load throughput, with some customers observing a 5x-8x performance increase. Refer to the Oracle connector documentation page for details.

Configuration: The Data Management Gateway periodically checks for updates. You can check its version from the Help page as shown below. If you are running a version lower than 2.7, you can get the update directly from the Download Center. With Data Management Gateway version 2.7, the new driver is used automatically in the Copy Wizard when Oracle is the source. Learn more about Oracle linked service properties.
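For reference, a minimal sketch of an Oracle linked service that routes through the gateway might look like the following; all placeholder values are illustrative, and the driverType property is how the connector documentation selects the new Microsoft driver:

```json
{
  "name": "OracleLinkedService",
  "properties": {
    "type": "OnPremisesOracle",
    "typeProperties": {
      "driverType": "Microsoft",
      "connectionString": "Host=<host>;Port=<port>;Sid=<sid>;User Id=<user>;Password=<password>;",
      "gatewayName": "<gateway name>"
    }
  }
}
```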

Service Principal authentication support for Azure Data Lake Store

Introduction: In addition to the existing user-credential authentication, Azure Data Factory now supports Service Principal authentication for access to Azure Data Lake Store. The token used in the user-credential authentication mode can expire after 12 hours to 90 days, so periodically reauthorizing the token manually or programmatically is required for scheduled pipelines. Learn more about token expiration when moving data from Azure Data Lake Store using Azure Data Factory. Now, with Service Principal authentication, the key expiration threshold is much longer, so we suggest using this mechanism going forward, especially for scheduled pipelines. Learn more about Azure Data Lake Store and Service Principals.

Configuration: In the Copy Wizard, you will see a new Authentication type option with Service Principal as default, shown below. 
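A minimal sketch of an Azure Data Lake Store linked service using Service Principal authentication might look like this (all placeholder values are illustrative):

```json
{
  "name": "AzureDataLakeStoreLinkedService",
  "properties": {
    "type": "AzureDataLakeStore",
    "typeProperties": {
      "dataLakeStoreUri": "https://<account name>.azuredatalakestore.net/webhdfs/v1",
      "servicePrincipalId": "<service principal application ID>",
      "servicePrincipalKey": "<service principal key>",
      "tenant": "<tenant ID>"
    }
  }
}
```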

Automatic table schema creation when loading into SQL Data Warehouse

Introduction: When copying data from on-premises SQL Server or Azure SQL Database to Azure SQL Data Warehouse using the Copy Wizard, if the table does not exist in the destination SQL Data Warehouse, Azure Data Factory can now automatically create the destination table using the schema from the source.

Configuration: From the Copy Wizard, in the Table mapping page, you now have the option to map to existing sink tables or create new ones using the source tables' schema. Proper data type conversion may happen, if needed, to resolve incompatibilities between the source and destination stores. Users are warned on the Schema mapping page, as shown in the second image below, about potential incompatibility issues. Learn more about auto table creation.

Zip compression/decompression support

Introduction: The Azure Data Factory Copy Activity can now unzip/zip your files with the ZipDeflate compression type, in addition to the existing GZip, BZip2, and Deflate compression support. This applies to all file-based stores, including Azure Blob, Azure Data Lake Store, Amazon S3, FTP/s, File System, and HDFS.

Configuration: You can find the option in Copy Wizard pages as shown below. Learn more from the specifying compression section in each corresponding connector topic.
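For example, a dataset that tells the Copy Activity to decompress ZIP input might be sketched like this (the linked service name, folder path, and file name are illustrative):

```json
{
  "name": "ZippedBlobDataset",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "AzureBlobLinkedService",
    "typeProperties": {
      "folderPath": "input/",
      "fileName": "data.zip",
      "compression": { "type": "ZipDeflate" }
    },
    "availability": { "frequency": "Day", "interval": 1 }
  }
}
```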

Extracting data from arrays in JSON files

Introduction: The Copy Activity now supports parsing arrays in JSON files. This addresses the feedback that an entire array could previously only be converted to a string or skipped. You can now extract data from the array, or cross-apply objects in the array with data under the root object.

Configuration: The Copy Wizard provides you with the option to choose how the JSON array should be parsed, as shown below. In this example, the elements in the “orderlines” array are parsed as “prod” and “price” columns. For more details on configuration and examples, check the specifying JSON format section in each file-based data store topic.
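Using the “orderlines” example above, the format section of the input dataset might be sketched as follows (column names and JSON paths are illustrative):

```json
"format": {
  "type": "JsonFormat",
  "jsonNodeReference": "$.orderlines",
  "jsonPathDefinition": {
    "order_id": "$.id",
    "prod": "prod",
    "price": "price"
  }
}
```

Here each element of the “orderlines” array becomes a row, cross-applied with the “id” property under the root object.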

Ability to explicitly specify cloud copy execution location

Introduction: When copying data between cloud data stores, Azure Data Factory, by default, detects the region of your sink data store and picks the geographically closest service to perform the copy. If the region is not detectable, or the service that powers the Copy Activity doesn’t have a deployment available in that region, you can now explicitly set the Execution Location option to specify the region of the service used to perform the copy. Learn more about globally available data movement.

Note: Your data will go through that region over the wire during copy.

Configuration: The Copy Wizard will prompt for the Execution Location option on the Summary page if you fall into the cases mentioned above.
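In the underlying Copy Activity JSON, this corresponds to the executionLocation property in typeProperties; a sketch with an illustrative region value:

```json
"typeProperties": {
  "source": { "type": "BlobSource" },
  "sink": { "type": "BlobSink" },
  "executionLocation": "West US"
}
```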

Support updating the new Azure Resource Manager Machine Learning web service

Introduction: You can use the Machine Learning Update Resource Activity to update the Azure Machine Learning scoring service, as a way to operationalize Machine Learning model retraining for scoring accuracy. In addition to supporting the classic web service, Azure Data Factory now supports the new Azure Resource Manager-based Azure Machine Learning scoring web service using a Service Principal.

Configuration: The Azure Machine Learning Linked Service JSON now supports Service Principal, so you can access the new web service endpoint. Learn more about updating the Azure Resource Manager-based scoring web service.

Above are the new features we introduced in February. Have more feedback or questions? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.
Source: Azure

Beta Docker Community Edition for Google Cloud Platform

Today we’re excited to announce the beta of Docker Community Edition (CE) for Google Cloud Platform (GCP). Users interested in helping test and improve Docker CE for GCP should sign up at beta.docker.com. We’ll let users into the beta as the product matures and stabilizes, and we’re looking forward to your input and suggestions.
Docker CE for GCP is built on the same principle as Docker CE for AWS and Docker CE for Azure and provides a Docker setup on GCP that is:

Quick and easy to install in a few minutes
Released in sync with other Docker releases and always available with the latest Docker version
Simple to upgrade from one Docker CE version to the next
Configured securely and deployed on minimal, locked-down Linux maintained by Docker
Self-healing and capable of automatically recovering from infrastructure failures

Docker CE for GCP is the first Docker edition to launch using the InfraKit project. InfraKit helps us configure cloud infrastructure quickly, design upgrade processes and self-healing tailored to Docker’s built-in orchestration, and smooth out infrastructure differences between cloud providers to give Docker users a consistent container platform that maximizes portability.
Installing Docker CE for GCP
Once you have access to the beta, the simplest way to set up Docker CE is using the Cloud Shell feature of the Google Cloud Console:
gcloud deployment-manager deployments create docker \
  --config https://docker-for-gcp-templates.storage.googleapis.com/v8/Docker.jinja \
  --properties managerCount:3,workerCount:1,zone:us-central1-f

Setup takes a few minutes and the install output includes instructions on how to connect to the fully operational swarm.
michael_friis@docker:~$ gcloud compute ssh --project my-project --zone us-central1-f friism-test-manager-1

Welcome to Docker!
friism-test-manager-1:~$
You can now start deploying apps and services on Docker. Docker CE for GCP has the same load-balancer integration as Docker CE for AWS and Azure, so any service that publishes ports is immediately available. For example, if you start an nginx service exposed on port 80, it will be immediately available on port 80 at the IP address of the load balancer displayed in the deployment output:
docker service create -p 80:80 nginx
You can use Docker CE for GCP directly from the Cloud Shell or use the `gcloud` command-line tools to set up an SSH tunnel to more easily deploy projects from your local machine.
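As a rough sketch of such a tunnel (the manager host name and local port are assumptions, and forwarding a local port to the remote Docker socket requires a recent OpenSSH):

gcloud compute ssh --zone us-central1-f docker-manager-1 -- -N -L localhost:2374:/var/run/docker.sock &
export DOCKER_HOST=localhost:2374
docker node ls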
An even simpler way to access your GCP Docker install is by using the new beta Docker Cloud fleet management feature. Simply register the swarm with Docker Cloud from the Cloud Shell:

Now the swarm is available for use on Docker for Mac and Windows, and you can easily share access with team members by adding their Docker IDs.

To try out Docker CE for GCP, sign up at https://beta.docker.com. We’re busy improving the beta based on user input and we’re looking forward to your feedback. Later in the year, we’ll also add Docker Enterprise Edition (EE) support, so stay tuned for more!
Learn More about Docker Community Edition (CE) for Google Cloud Platform (GCP):

Sign up here at beta.docker.com
Check out the docs for Docker for GCP
Check out Docker CE for AWS and Azure
Learn More about Docker Community and Enterprise Edition

Source: https://blog.docker.com/feed/

What is the best NFV Orchestration platform? A review of OSM, Open-O, CORD, and Cloudify

As Network Functions Virtualization (NFV) technology matures, multiple NFV orchestration solutions have emerged, and 2016 was a busy year. While some commercial products were already available on the market, multiple open source initiatives were also launched, with most delivering initial code releases and others planning to roll out software artifacts later this year.
With so much going on, we thought we'd provide you with a technical overview of some of the various NFV orchestration options, so you can get a feel for what's right for you. In particular, we'll cover:

Open Source MANO (OSM)
OPEN-O
CORD
Gigaspaces Cloudify

In addition, multiple NFV projects have been funded under European Union R&D programs. Projects such as OpenBaton, T-NOVA/TeNor and SONATA have their codebases available in public repos, but industry support, involvement of external contributors, and further sustainability might be challenging for these projects, so for now we'll consider them out of scope for this post, where we'll review and compare orchestration projects across the following areas:

General overview and current project state
Compliance with NFV MANO reference architecture
Software architecture
NSD definition approach
VIM and VNFM support
Capabilities to provision end-to-end service
Interaction with relevant standardization bodies and communities

General overview and current project state
We’ll start with a general overview of each project, along with its ambitions, development approach, the involved community, and related information.
OSM
The OpenSource MANO project was officially launched at Mobile World Congress (MWC) in 2016. Starting with several founding members, including Mirantis, Telefónica, BT, Canonical, Intel, RIFT.io, Telekom Austria Group and Telenor, the OSM community now includes 55 different organisations. The OSM project is hosted at ETSI facilities and targets delivering an open source management and orchestration (MANO) stack closely aligned with the ETSI NFV reference architecture.
OSM issued two releases, Rel 0 and Rel 1, during 2016. The most recent at the time of this writing, OSM Rel. 1, has been publicly available since October 2016 and can be downloaded from the official website. Project governance is managed via several groups, including the Technical Steering group, responsible for OSM's technical aspects; the Leadership group; and the End User Advisory group. You can find more details about the OSM project at the official wiki.
OPEN-O
The OPEN-O project is hosted by the Linux Foundation and was also formally announced at the 2016 MWC. Initial project advocates were mostly Asian companies, such as Huawei, ZTE and China Mobile. Eventually, the project got further support from Brocade, Ericsson, GigaSpaces, Intel and others.
The main project objective is to enable end-to-end service agility across multiple domains using a unified platform for NFV and SDN orchestration. The OPEN-O project delivered its first release in November 2016 and plans to roll out future releases on a 6-month cycle. Overall project governance is managed by the project Board, with technology-specific issues managed by the Technical Steering Committee. You can find more general details about the OPEN-O project at the project website.
CORD/XOS
Originally CORD (Central Office Re-architected as a Datacenter) was introduced as one of the use cases for the ONOS SDN Controller, but it grew up into a separate project under ON.Lab governance. (ON.Lab recently merged with the Open Networking Foundation.)
The ultimate project goal is to combine NFV, SDN and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. The reference implementation of CORD combines commodity servers, white-box switches, and disaggregated access technologies with open source software to provide an extensible service delivery platform. CORD Rel.1 and Rel.2 integrate a number of open source projects, such as ONOS to manage SDN infrastructure, OpenStack to deploy NFV workloads, and XOS as a service orchestrator. To reflect the uniqueness of different use cases, CORD introduces a number of service profiles, such as Mobile (M-CORD), Residential (R-CORD), and Enterprise (E-CORD). You can find more details about the CORD project at the official project website.
Gigaspaces Cloudify
Gigaspaces’ Cloudify is an open source TOSCA-based cloud orchestration software platform. Originally introduced as a pure cloud orchestration solution (similar to OpenStack Heat), the platform was further expanded to include NFV-related use cases, and the Cloudify Telecom Edition emerged.
Considering its original purpose, Cloudify has an extensible architecture and can interact with multiple IaaS/PaaS providers such as AWS, OpenStack, Microsoft Azure and so on. The Cloudify software is open source under the Apache 2 license, and the source code is hosted in a public repository. While the Cloudify platform is open source and welcomes community contributions, the overall project roadmap is defined by Gigaspaces. You can find more details about the Cloudify platform at the official website.
Compliance with ETSI NFV MANO reference architecture
At the time of this writing, a number of alternative and specific approaches, such as Lifecycle Service Orchestration (LSO) from the Metro Ethernet Forum, have emerged, but huge industry support and involvement have helped to promote ETSI NFV Management and Orchestration (MANO) as the de-facto reference NFV architecture. From this standpoint, NFV MANO provides comprehensive guidance for the entities, reference points and workflows to be implemented by appropriate NFV platforms (fig. 1):

Figure 1 – ETSI NFV MANO reference architecture
OSM
As this project is hosted by ETSI, the OSM community tries to be compliant with the ETSI NFV MANO reference architecture, respecting the appropriate IFA working group specifications. Key reference points, such as Or-Vnfm and Or-Vi, can be identified within OSM components. The VNF and Network Service (NS) catalogs are explicitly present in the OSM Service Orchestrator (SO) component. Meanwhile, a lot of further development effort is planned to reach feature parity with the currently specified features and interfaces.
OPEN-O
While the OPEN-O project in general has no objective to be compliant with NFV MANO, the NFVO component of OPEN-O is aligned with the ETSI reference model, and all key MANO elements, such as the VNFM and VIM, can be found in the NFVO architecture. Moreover, the scope of the OPEN-O project goes beyond just NFV orchestration, and as a result goes beyond the scope identified by the ETSI NFV MANO reference architecture. One important piece of this project relates to SDN-based networking service provisioning and orchestration, which might be used either in conjunction with NFV services or as a standalone feature.
CORD
Since its invention, CORD has defined its own reference architecture and cross-component communication logic. The reference CORD implementation is very OpenFlow-centric, built around ONOS, the orchestration component (XOS), and whitebox hardware. Technically, most of the CORD building blocks can be mapped to the MANO-defined NFVI, VIM and VNFM, but this is incidental; the overall architectural approach defined by ETSI MANO, as well as the appropriate reference points and interfaces, were not considered in scope by the CORD community. Similar to OPEN-O, the scope of this project goes beyond just NFV service provisioning. Instead, NFV service provisioning is considered one of several possible use cases for the CORD platform.
Gigaspaces Cloudify
The original focus of the Cloudify platform was orchestration of application deployment in a cloud. Later, when the NFV use case emerged, the Telecom Edition of the Cloudify platform was delivered. This platform combines both the NFVO and generic VNFM components of the MANO-defined entities (fig. 2).

Figure 2 – Cloudify in relation to the NFV MANO reference architecture
By its very nature, Cloudify Blueprints might be considered the NS and VNF catalog entities defined by MANO. Meanwhile, some interfaces and actions specified by the NFV IFA subgroup are not present or are considered out of scope for the Cloudify platform. From this standpoint, you could say that Cloudify is aligned with the MANO reference architecture, but not fully compliant.
Software architecture and components  
As you might expect, all NFV orchestration solutions are complex integrated software platforms composed of multiple components.
OSM
The Open Source MANO (OSM) project consists of 3 basic components (fig. 3):

Figure 3 – OSM project architecture

The Service Orchestrator (SO) is responsible for end-to-end service orchestration and provisioning. The SO stores the VNF definitions and NS catalogs, manages the workflow of service deployment, and can query the status of already deployed services. OSM integrates the Rift.io orchestration engine as the SO.
The Resource Orchestrator (RO) is used to provision services over a particular IaaS provider in a given location. At the time of this writing, the RO component is capable of deploying networking services over OpenStack, VMware, and OpenVIM. The SO and RO components can be jointly mapped to the NFVO entity in the ETSI MANO architecture.
The VNF Configuration and Abstraction (VCA) module performs the initial VNF configuration using Juju Charms. Considering this purpose, the VCA module can be considered a generic VNFM with a limited feature set.

Additionally, OSM hosts the OpenVIM project, a lightweight VIM implementation suitable for small NFV deployments as an alternative to the heavyweight OpenStack or VMware VIMs.
Most of the software components are developed in Python, while the SO, as a user-facing entity, relies heavily on JavaScript and the Node.js framework.
OPEN-O
From a general standpoint, the complete OPEN-O software architecture can be split into 5 component groups (Fig.4):

Figure 4 – OPEN-O project software architecture

Common service: Consists of shared services used by all other components.
Common TOSCA:  Provides TOSCA-related features such as NSD catalog management, NSD definition parsing, workflow execution, and so on; this component is based on the ARIA TOSCA project.
Global Service Orchestrator (GSO): As the name suggests, this group provides overall lifecycle management of the end-to-end service.
SDN Orchestrator (SDN-O): Provides abstraction and lifecycle management of SDN services; an essential piece of this block is the SDN drivers, which provide device-specific modules for communication with a particular device or SDN controller.
NFV Orchestrator (NFV-O): This group provides NFV services instantiation and lifecycle management.

The OPEN-O project uses a microservices-based architecture and consists of more than 20 microservices. The central platform element is the Microservice Bus, the core microservice of the Common Service component group. Each platform component registers with this bus; during registration, each microservice specifies its exposed APIs and endpoint addresses. As a result, the overall software architecture is flexible and can be easily extended with additional modules. OPEN-O Rel. 1 consists of both Java and Python-based microservices.
CORD/XOS
As mentioned above, CORD was introduced originally as an ONOS application, but grew into a standalone platform that covers both ONOS-managed SDN regions and service orchestration entities implemented by XOS.
Both ONOS and XOS provide a service framework to enable the Everything-as-a-Service (XaaS) concept. Thus, the reference CORD implementation consists of both a hardware Pod (whitebox switches and servers) and a software platform (ONOS and XOS with appropriate applications). From the software standpoint, the CORD platform implements an agent- or driver-based approach in which XOS ensures that each registered driver used for a particular service is in an operational state (Fig. 5):

Figure 5 – CORD platform architecture
The CORD reference implementation consists of Java (ONOS and its applications) and Python (XOS) software stacks. Additionally, Ansible is heavily used by CORD for automation and configuration management.
Gigaspaces Cloudify
From a high-level perspective, the platform consists of several different pieces, as you can see in figure 6:

Figure 6 – Cloudify platform architecture

Cloudify Manager is the orchestrator that performs deployment and lifecycle management of the applications or NSDs described in the templates, called blueprints.
The Cloudify Agents are used to manage workflow execution via an appropriate plugin.

To provide overall lifecycle management, Cloudify integrates third-party components such as:

Elasticsearch, used as a data store for the deployment state, including runtime data and log data coming from various platform components.
Logstash, used to process log information coming from platform components and agents.
Riemann, used as a policy engine to process runtime decisions about availability, SLA and overall monitoring.
RabbitMQ, used as an async transport for communication among all platform components, including remote agents.

The orchestration functionality itself is provided by the ARIA TOSCA project, which defines the TOSCA-based blueprint format and the deployment workflow engine. Cloudify “native” components and plugins are Python applications.
Approach for NSD definition
The Network Service Descriptor (NSD) specifies the components, and the relations between them, to be deployed on top of the IaaS during NFV service instantiation. Orchestration platforms typically use some templating language to define NSDs. While the industry in general considers TOSCA the de-facto standard for defining NSDs, alternative approaches are also available across the various platforms.
OSM
OSM follows the official MANO specification, which has definitions both for NSDs and VNF Descriptors (VNFDs). NSD templates are YAML-based documents. The NSD is processed by the OSM Service Orchestrator to instantiate a Network Service, which itself might include VNFs, Forwarding Graphs, and Links between them. A VNFD is a deployment template that specifies a VNF in terms of deployment and operational behaviour requirements. Additionally, the VNFD specifies connections between Virtual Deployment Units (VDUs) using internal Virtual Links (VLs). Each VDU in the OSM representation relates to a VM or a container. OSM uses an archive format for both NSDs and VNFDs; this archive consists of the service/VNF description, initial configuration scripts and other auxiliary details. You can find more information about the OSM NSD/VNFD structure at the official website.
OPEN-O
In OPEN-O, TOSCA-based templates are used to describe the NS/VNF package. Both the general TOSCA Simple Profile and the more recent NFV profile can be used for the NSD/VNFD, which is further packaged according to the Cloud Service Archive (CSAR) format.
A CSAR is a zip archive that contains at least two directories: TOSCA-Metadata and Definitions. The TOSCA-Metadata directory contains information that describes the content of the CSAR and is referred to as the TOSCA metafile. The Definitions directory contains one or more TOSCA Definitions documents, which define the cloud application to be deployed during CSAR processing. More details about OPEN-O NSD/VNFD definitions can be found at the official website.
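A minimal CSAR layout, with illustrative file names, looks like this:

my-service.csar               (a zip archive)
├── TOSCA-Metadata/
│   └── TOSCA.meta            (the TOSCA metafile describing the CSAR content)
└── Definitions/
    └── service-template.yaml (one or more TOSCA Definitions documents)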
CORD/XOS
To define a new CORD service, you need to provide both TOSCA-based templates and Python-based software components. In particular, when adding a new service, depending on its nature, you might alter one or several platform elements:

TOSCA service definition files and appropriate models, specified as YAML text files
REST API models, specified in Python
XOS models, implemented as a Django application
Synchronizers, used to ensure the service is instantiated correctly and transitioned to the required state.

The overall service definition format is based on the TOSCA Simple Profile language specification and is presented in YAML format.
Gigaspaces Cloudify
To instantiate a service or application, Cloudify uses templates called “Blueprints”, which are effectively orchestration and deployment plans. Blueprints are specified as TOSCA YAML files and describe the service topology as a set of nodes, relationships, dependencies, and instantiation, configuration, monitoring, and maintenance settings. Beyond the YAML itself, a Blueprint can include multiple external resources, such as configuration and installation scripts (or Puppet manifests, Chef recipes, and so on), and basically any other resource required to run the application. You can find more details about the structure of Blueprints here.
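To give a feel for the format, here is a minimal blueprint sketch that models a single VNF host on OpenStack; the DSL and plugin versions, import URLs, and input values are illustrative and would need to match your Cloudify installation:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml

inputs:
  image:  {}   # Glance image ID for the VNF
  flavor: {}   # Nova flavor ID

node_templates:
  vnf_host:
    type: cloudify.openstack.nodes.Server
    properties:
      image:  { get_input: image }
      flavor: { get_input: flavor }
```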
VNFM and VIM support
NFV service deployment is performed on an appropriate IaaS, which is itself a set of virtualized compute, network and storage resources. The ETSI MANO reference architecture identifies a component to manage these virtualized resources, referred to as the Virtual Infrastructure Manager (VIM). Traditionally, the open source community treats OpenStack/KVM as the de-facto standard VIM. However, an NFV service might span various VIM types and various hypervisors, so multi-VIM support is a common requirement for an orchestration engine.
Additionally, a separate element in the NFV MANO architecture is the VNF Manager (VNFM), which is responsible for lifecycle management of a particular VNF. The VNFM might be generic, treating the VNF as a black box and performing similar operations for various VNFs, or it might be vendor-specific, with unique capabilities for managing a given VNF. Communication with both the VIM and the VNFM is performed via the appropriate reference points defined by the NFV MANO architecture.
OSM
The OSM project was conceived as a multi-VIM platform, and at the time of this writing it supports OpenStack, VMware and OpenVIM. OpenVIM is a lightweight VIM implementation that is effectively a Python wrapper around libvirt plus basic host networking configuration.
At the time of this writing, the OSM VCA has limited capabilities, but it can still be considered a generic VNFM based on Juju Charms. It is possible to introduce support for vendor-specific VNFMs, but additional development and integration effort might be required on the Service Orchestrator (Rift.io) side.
OPEN-O
Release 1 of the OPEN-O project supports only OpenStack as a VIM. This support is available as a Java-based driver for the NFVO component. Support for VMware as a VIM is planned for future releases.
The OPEN-O Rel.1 platform has a generic VNFM based on Juju Charms. Furthermore, the pluggable architecture of the OPEN-O platform can support any vendor-specific VNFM, but additional development and integration effort will be required.
CORD/XOS
At the time of this writing, the reference implementation of the CORD platform is architected around OpenStack as the platform to spawn NFV workloads. While there is no direct relationship to the NFV MANO architecture, the XOS orchestrator is responsible for VNF lifecycle management and thus might be thought of as the entity that provides VNFM-like functions.
Gigaspaces Cloudify
When Cloudify was adapted for the NFV use case, it inherited plugins for OpenStack, VMware, Azure and others that were already available for general-purpose cloud deployments. So we can say that Cloudify has multi-VIM support, and support for any arbitrary VIM may be added via an appropriate plugin. Following Gigaspaces’ reference model for NFV, a generic VNFM can be used with the Cloudify NFV orchestrator out of the box. Additional vendor-specific VNFMs can be onboarded, but appropriate plugin development is required.
Capabilities to provision end-to-end service
NFV service provisioning consists of multiple steps, such as VNF instantiation, configuration, underlay network provisioning, and so on. Moreover, an NFV service might span multiple clouds and geographical locations. This kind of architecture requires complex workflow management by the NFV Orchestrator, plus coordination and synchronisation between infrastructure entities. This section provides an overview of the various orchestrators' abilities to provision end-to-end service.
OSM
The OSM orchestration platform supports NFV service deployments spanning multiple VIMs. In particular, the OSM RO component (openmano) stores information about all VIMs available for deployment, and the Service Orchestrator can use this information during the NSD instantiation process. Meanwhile, underlay networking between VIMs must be preconfigured. There are plans to enable end-to-end network provisioning in the future, but OSM Rel. 1 has no such capability.
OPEN-O
By design, the OPEN-O platform considers both NFV and SDN infrastructure regions that might be used to provision end-to-end service, so technically, you can say that a multisite NFV service can be provisioned by the OPEN-O platform. However, the OPEN-O Rel.1 platform implements just a couple of specific use cases, and at the time of this writing, you can't use it to provision an arbitrary multisite NFV service.
CORD/XOS
The reference implementation of the CORD platform defines the provisioning of a service over a defined CORD Pod. To enable multisite NFV service instantiation, an additional orchestration level on top of CORD/XOS is required. From this perspective, at the time of this writing, CORD is not capable of instantiating a multisite NFV service.
Gigaspaces Cloudify
As Cloudify originally supported application deployment over multiple IaaS providers, it is technically possible to create a blueprint that deploys an NFV service spanning multiple VIMs. However, underlay network provisioning might require specific plugin development.
Interaction with standardization bodies and relevant communities
Each of the reviewed projects has strong industry community support. Depending on the nature of each community and the priorities of the project, there is a different focus on collaboration with industry, other open source projects and standardization bodies.
OSM
Being hosted by ETSI, the OSM project closely collaborates with the ETSI NFV working group and follows the appropriate specifications, reference points and interfaces. At the time of this writing there is no collaboration between OSM and the OPNFV project, but it is under consideration by the OSM community. The same applies to other relevant open source projects, such as OpenStack and OpenDaylight; these projects are used “as-is” by the OSM platform without cross-collaboration.
OPEN-O
The OPEN-O project aims to integrate both SDN and NFV solutions to provide end-to-end service, so there is formal communication with the ETSI NFV group, while the project itself doesn’t strictly follow the interfaces defined by the ETSI NFV IFA working group. On the other hand, there is a strong integration effort with the OPNFV community via the initiation of the OPERA project, which aims to integrate the OPEN-O platform as a MANO orchestrator for the OPNFV platform. Additionally, there is strong interaction between OPEN-O and MEF as part of the OpenLSO platform, and with the ONOS project toward seamless integration and enabling end-to-end SDN orchestration.
CORD/XOS
Having originated at ON.Lab (which recently merged with the ONF), this project follows the approach and technology stack defined by the ONF. As of the time of this writing, the CORD project has no formal presence in OPNFV. Meanwhile, there is communication with MEF and ONF on requirements gathering and use cases for the CORD project. In particular, MEF explicitly refers to E-CORD and its applicability in defining their OpenCS MEF project.
Gigaspaces Cloudify
While the Cloudify platform is an open source product, it is mostly developed by a single company, so the overall roadmap and community strategy are defined by Gigaspaces. This also applies to collaboration with standardisation bodies: GigaSpaces participates in ETSI-approved NFV PoCs where Cloudify is used as a service orchestrator, in an MEF-initiated LSO Proof of Concept where Cloudify is used to provision E-Line EVPL service, and so on. Additionally, the Cloudify platform is used separately by the OPNFV community in the FuncTest project for vIMS test cases, but this mostly relates to Cloudify use cases rather than vendor-initiated community collaboration.
Conclusions
Summarising the current state of the NFV orchestration platforms, we may conclude the following:
The OSM platform is already suitable for evaluation purposes and has a relatively simple and straightforward architecture. Several sample NSDs and VNFDs are available for evaluation in the public Gerrit repo. As a result, the platform can be easily installed and integrated with an appropriate VIM to evaluate basic NFV capabilities, trial use cases and PoCs. The project is relatively young, however, and a number of features still require development and will be available in upcoming releases. Furthermore, the lack of support for end-to-end NFV service provisioning across multiple regions, including underlay network provisioning, should be weighed against your desired use case. Considering the mature OSM community and its close interaction with the ETSI NFV group, this project might emerge as a viable option for production-grade NFV orchestration.
At the time of this writing, the main visible benefit of the OPEN-O platform is its flexible and extendable microservices-based architecture. The OPEN-O approach considers end-to-end service provisioning spanning multiple SDN and NFV regions from the very beginning. Additionally, the OPEN-O project actively collaborates with the OPNFV community toward tight integration of the orchestrator with the OPNFV platform. Unfortunately, at the time of this writing, the OPEN-O platform requires further development to be capable of arbitrary NFV service provisioning. Additionally, a lack of documentation makes it hard to understand the microservice logic and the interaction workflow. Meanwhile, the recent OPEN-O and ECOMP merge under the ONAP project creates a powerful open source community with strong industry support, which may reshape the overall NFV orchestration market.
The CORD project is the right option when OpenFlow and whiteboxes are the primary choice for computing and networking infrastructure. The platform addresses multiple use cases, and a large community is involved in platform development. Meanwhile, at the time of this writing, the CORD platform is a relatively “niche” solution built around OpenFlow and related technologies pushed to the market by the ONF.
Gigaspaces Cloudify is a platform that already has a relatively long history, and at the time of this writing it emerges as the most mature orchestration solution among the reviewed platforms. While the NFV use case for the Cloudify platform wasn’t originally considered, Cloudify's pluggable and extendable architecture and embedded workflow engine enable arbitrary NFV service provisioning. However, if you do consider Cloudify as an orchestration engine, be sure to weigh the risk of having the decision-making process for the overall platform strategy controlled solely by Gigaspaces.
References

OSM official website
OSM project wiki
OPEN-O project official website
CORD project official website
Cloudify platform official website
Network Functions Virtualisation (NFV); Management and Orchestration
Cloudify approach for NFV Management & Orchestration
ARIA TOSCA project
TOSCA Simple Profile Specification
TOSCA Simple Profile for Network Functions Virtualization
OPNFV OPERA project
OpenCS project   
MEF OpenLSO and OpenCS projects
OPNFV vIMS functional testing
OSM Data Models; NSD and VNFD format
Cloudify Blueprint overview

Source: Mirantis