Unified Edge Cloud Infrastructure for PNFs, VNFs, Mobile Edge — Webinar Q&A

One of Mirantis’ most popular webinars in 2018 was one we presented with Cloudify as part of our launch of MCP Edge, a version of Mirantis Cloud Platform software tuned specifically for edge cloud infrastructure. In case you missed the webinar, you can watch the recording and view the Q&A below.
Is the low latency characteristic of the edge cloud mainly a function of the cloud being close to the user?
Satish Salagame (Mirantis): The user’s proximity to the edge and avoiding multiple network hops is certainly a key component. However, the edge infrastructure design should also ensure that unnecessary delays are not introduced, especially in the datapath. This is where EPA (Enhanced Platform Awareness) features like NUMA-aware scheduling, CPU pinning, and huge pages help, along with data plane acceleration techniques such as SR-IOV and DPDK. This is why the edge cloud infrastructure has a lot in common with NFVI.
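For concreteness, here is a minimal sketch (not from the webinar) of how these EPA features are typically requested on an OpenStack-based edge cloud: they are attached to a Nova flavor as extra specs, so any VNF launched with that flavor gets pinned CPUs, huge pages, and a constrained NUMA topology. The endpoint, credentials, and flavor sizing below are illustrative assumptions.

```python
# Minimal sketch: requesting EPA features via Nova flavor extra specs.
# Endpoint, credentials, flavor name, and sizing are illustrative.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='https://keystone.example.com:5000/v3',   # assumed endpoint
    username='admin', password='secret',
    project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2', session=session.Session(auth=auth))

flavor = nova.flavors.create(name='edge.vnf.small', ram=8192, vcpus=4, disk=40)
flavor.set_keys({
    'hw:cpu_policy': 'dedicated',   # CPU pinning
    'hw:mem_page_size': 'large',    # huge pages
    'hw:numa_nodes': '1',           # keep the VM within one NUMA node
})
```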
Shay Naeh (Cloudify): There are many use cases that require low latency, and the central cloud as we see it today is going to be broken into smaller edge clouds for use cases like connected cars and augmented reality, which require latency of less than 20ms. Latency is only one reason for the edge, though. The second reason is that you don’t want to transfer all of those enormous volumes of data points to the central clouds; I call it a data tsunami of information from IoT, connected cars, etc.
Satish: So you want to process everything locally, aggregate it, and send only the results to the central cloud for learning, and from there the learned information can be propagated from edge to edge. Say a special use case shows up at one of the edges; you can teach the other edges about it, and they will be informed even though that use case was learned at another edge. So the two main reasons are the new application use cases that require low latency and the enormous volume of data points that will be generated with 5G, IoT, and new scenarios.
Does Virtlet used in Mirantis Cloud Platform Edge solve all the problems associated with VNFs?
Satish: Virtlet is certainly one critical building block in solving some of the VNF problems we talked about. It allows a VM-based VNF to run unmodified in a k8s environment. However, it doesn’t solve all the problems. For example, if we have a complex VNF with multiple components, each running as a separate VM, and a proprietary VNFM designed for OpenStack or some other VIM, it takes some effort to adapt this VNF to the k8s/Virtlet environment. However, there will be many use cases where Virtlet can be used to design a very efficient, small-footprint k8s edge cloud. It also provides a great transition path as more and more VNFs become containerized and cloud-native.
How does Virtlet compare with Kubevirt?
Satish: See our blog on the topic.
How does the MCP Master of Masters work with Cloudify?
Satish: The MCP Master of Masters is focused on the deployment and lifecycle management of infrastructure. The key differentiation here is that the MCP Master of Masters is focused on infrastructure orchestration and infrastructure management, whereas Cloudify is more focused on workload orchestration. In the edge cloud case, that includes edge applications and VNFs. That’s the fundamental difference between the two, and working together, they complement each other and make a powerful edge stack.
Shay: It’s not only VNFs, it can be any distributed application that you would like to run, and you can deploy it on multiple edges and manage it using Cloudify. The MCP Master of Masters will provide the infrastructure, and Cloudify will run on top of it and provision the workloads on the edges.
Satish: Obviously the MCP Master of Masters will have to communicate with Cloudify in terms of providing inventory information to the orchestrator and providing profile information for each edge cloud being managed by MCP, so that the orchestrator has all the required information to launch the edge applications and VNFs appropriately in the correct edge environment.
What is the business use case for abstracting away environments with Cloudify?
Ilan Adler (Cloudify): The use cases are reducing transformation cost; reusing existing investments and components (software and hardware) to enable a cloud-native Edge; and using a hybrid stack to allow a smoother transition to cloud-native Edge by integrating existing network services with new cloud-native edge management based on Kubernetes.
How is this solution different from an access/core cloud management solution for a telco?
Satish: The traditional access/aggregation telco networks focused on carrying the traffic to the core for processing. However, with Edge computing, there are two important aspects:

Edge clouds close to the user process data at the edge itself
Backhauling the data to the core cloud is avoided

Both are critical as we move to 5G.
Have you considered using a lightweight (small footprint), fast, containerized VM approach like Kata Containers? The benefit is VMs with the speed of containers that act and look like a container in K8s.
Satish: We briefly looked at Kata Containers. Our focus was on key networking capabilities and the ability to handle VNF workloads that need to run as VMs. Based on our research we found Virtlet to be the best candidate for our needs.
What’s the procedure to import a VM into Virtlet?
Nick Chase (Mirantis): Virtlet creates VM pods that run regular qcow2 images, so the first step is to create a qcow2 image for your VM. Next, host it at an HTTPS URL, then create a pod manifest just as you would for a Docker container, specifying that the pod should run on a machine that has Virtlet installed. Also, the image URI has a virtlet.cloud prefix indicating that it’s a VM pod. Watch a demo of MCP Edge with Virtlet.
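To make the procedure concrete, here is a minimal sketch that generates such a VM pod manifest from Python. Only the virtlet.cloud image prefix comes from the answer above; the annotation key, node selector label, image URL, and resource values are illustrative assumptions, so check the Virtlet documentation for the exact keys in your MCP Edge release.

```python
# Minimal sketch of a Virtlet VM pod manifest, generated as YAML from Python.
# Only the virtlet.cloud image prefix is taken from the answer above; the
# annotation key, node selector label, image URL, and sizing are illustrative.
import yaml  # pip install PyYAML

vm_pod = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': 'my-vm-pod',
        # Hypothetical annotation marking this as a VM pod; verify the exact
        # key against the Virtlet docs for your release.
        'annotations': {'kubernetes.io/target-runtime': 'virtlet.cloud'},
    },
    'spec': {
        # Schedule only onto nodes where Virtlet is installed (label assumed).
        'nodeSelector': {'extraRuntime': 'virtlet'},
        'containers': [{
            'name': 'my-vm',
            # virtlet.cloud prefix + HTTPS location of the qcow2 image
            'image': 'virtlet.cloud/images.example.com/my-vm.qcow2',
            'resources': {'limits': {'cpu': '2', 'memory': '2Gi'}},
        }],
    },
}

print(yaml.safe_dump(vm_pod, default_flow_style=False))
```

The resulting manifest is applied with kubectl just like any container pod manifest.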
Regarding the networking part, do you still use OvS or proceed with the SR-IOV since it supports interworking with Calico (as of the new version of MCP)?
Satish: In the architecture we showed today, we are not using OvS. It’s a pure Kubernetes cluster with CNI-Genie, which allows us to use multiple CNIs: the SR-IOV CNI for data plane acceleration, plus Calico or Flannel. Our default for networking is Calico.
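As a rough illustration of what per-pod CNI selection looks like with CNI-Genie, the sketch below annotates a data-plane pod so that it is wired by the SR-IOV CNI while unannotated pods fall back to the cluster default (Calico). The annotation key and plugin name follow CNI-Genie’s pod-annotation convention but are assumptions here and should be verified against the CNI-Genie documentation.

```python
# Minimal sketch: choosing a CNI per pod with a CNI-Genie annotation.
# The 'cni' annotation key and 'sriov' plugin name are assumptions.
import yaml

dataplane_pod = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': 'vnf-dataplane',
        'annotations': {'cni': 'sriov'},  # accelerated data plane interface
    },
    'spec': {
        'containers': [{
            'name': 'vnf',
            'image': 'registry.example.com/vnf:latest',  # illustrative image
        }],
    },
}

print(yaml.safe_dump(dataplane_pod, default_flow_style=False))
```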
From your experience in real-world scenarios, is the placement, migration (based on agreed-on SLAs and user mobility), and replication of VNFs a challenging task? If yes, why? Which VNF type is more challenging?
Satish: Yes, these are all challenging tasks, especially with complex VNFs that:

Contain multiple VNF components (VMs)
Require multiple tenant networks (Control, Management, Data planes)
Have proprietary VNF managers
Require complex on-boarding mechanisms.

Does the Cloudify entity behave as an NFVO or an OSS/BSS?
Shay: Cloudify can work as an NFVO, VNFM, and Service Orchestrator. In essence, it’s all a function of which blueprints you choose to utilize. Cloudify is not an OSS/BSS system.
Does the Service Orchestrator include NFVO?
Shay: Yes
In the “Edge Computing Orchestration” slide, there is a red arrow pointing to the public cloud. What type of things is it orchestrating in a public cloud?
Satish: It could orchestrate pretty much everything in the public cloud as well: applications, networking, managed services, infrastructure, etc.
Are the SO and the end-to-end orchestrator the same?
Satish: Yes
In the ETSI model, is Mirantis operating as the NFVI and VIM, and Cloudify acting as the VNFM and NFVO?
Shay: Yes. Mirantis provides the infrastructure and the capability to run workloads on top of it. Cloudify manages the lifecycle operations of each of the VNFs (this is the role of the VNFM, or VNF Manager), and it also creates the workloads and the service chaining between the VNFs. This translates into a service, which is the responsibility of the NFVO: to stitch together multiple capabilities to provide a service. This service can be complex and span multiple edges and multiple domains and, if needed, connect to core backends, etc.
Satish: As we move to 5G and start dealing with network slicing and complex applications, this becomes even more critical: having an intelligent orchestrator like Cloudify orchestrating the required VNFs and doing the service function chaining in a very dynamic fashion. That will be an extremely powerful thing to combine with MCP.
What is your view on other open source orchestration platforms like ONAP, OSM?
Satish: See our blog comparing different NFV orchestration platforms. Also see SWOT analyses and scorecards in our Open NFV Executive Briefing Center.
What is the function of the end to end orchestrator?
Shay: When you’re going to have multiple edges and different types of edges, you’d like to have one easy, centralized way to manage all those edges. In addition to that, you need to run different operations on different edges, and there are different models to do this. You can have a master orchestrator that can talk to a local orchestrator, and just send commands, and the local orchestrator is a control point for the master orchestrator, but still you need the master orchestrator.
Another, more advanced way to do it is to have an autonomous orchestrator to which the master only delegates work; when there is no connection to the master orchestrator, it will work on its own and manage the lifecycle operations of the edge, including healing, scaling, etc., autonomously and independently. When there is no connection, it will run as a local orchestrator, and when the connection resumes, it can aggregate all the information and send it to the master orchestrator.
So you need to handle many edges, possibly hundreds or thousands, and you need to do it in a very efficient way that is acceptable for the use case you are trying to orchestrate.
For the OpenStack edge deployment, what is the minimal footprint? A single node?
Satish: A single node is possible, but it is still a work in progress. Our initial goal for MCP Edge is to support a minimum of 3 to 6 nodes.
With respect to service design (say using TOSCA model), can we define a service having a mix of k8s pods and VM pods?
Nick: I would assume yes because the VM pods are treated as first-class containers, right?
Shay: Yes, definitely. Moreover, Cloudify can actually be the glue that creates a service chain between Kubernetes workloads (pods and VMs) as well as external services like databases and others. We implement the service broker interface, which provides a way for cloud-native Kubernetes services and pods to access external services as if they were internal native services. This uses the service broker API, and tomorrow you can bring the service into Kubernetes and it will be transparent, because you implemented it in a cloud-native way. The service provider exposes a catalog, which can access an external service, for example one on Amazon that runs a database. That should be very easy.
How is a new edge site provisioned/introduced? Is some automation possible by the Master of Masters?
Satish: Yes, provisioning of a new edge cloud and subsequent LCM will be handled by the Master of Masters in an automated way. The Master of Masters will have multiple edge cloud configurations and using those configurations (blueprints), it will be able to provision multiple edge clouds.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with Edge cloud?
Satish: Depending on the use cases, an edge cloud may consist of any of the following:

Pure k8s cluster with Virtlet
Pure OpenStack cluster
Combination of OpenStack + k8s clusters

The overall architecture will depend on the use cases and edge applications to be supported.
Can the NFVO be part of the end-to-end orchestrator?
Satish: Yes
Is the application orchestration dynamic?
Satish: Yes, you can have it be dynamic based on inputs in the TOSCA blueprint.
How do you ensure an End-to-end SLA for a critical application connecting between Edge clouds?
Satish: One way to do this is by creating a network slice with the required end-to-end SLA characteristics and launching the critical edge application in that slice.
The post Unified Edge Cloud Infrastructure for PNFs, VNFs, Mobile Edge — Webinar Q&A appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

python-tempestconf’s journey

For those who are not familiar with python-tempestconf, it’s a tool for generating a Tempest configuration file, which is required for running Tempest tests against a live OpenStack cluster. It queries a cloud and automatically discovers the cloud settings that weren’t provided by the user.
Internal project
In August 2016 the config_tempest tool was decoupled from the Red Hat Tempest fork, and the python-tempestconf repository was created under the GitHub redhat-openstack organization. The tool became an internal tool used for generating tempest.conf in downstream jobs that run Tempest.
Why we like `python-tempestconf`
The reason is quite simple. We at Red Hat were (and still are) running many different OpenStack jobs, with different configurations, which execute Tempest. That’s where python-tempestconf stepped in. We didn’t have to implement the logic for creating or modifying tempest.conf within the job configuration; we just used python-tempestconf, which did that for us. It’s not only about generating tempest.conf itself: the tool also creates basic users, uploads an image, and creates basic flavors, all of which are required for running Tempest tests.
python-tempestconf was also beneficial for engineers who liked the idea of not struggling to create a tempest.conf file from scratch, but rather using a tool that could generate it for them. The generated tempest.conf was sufficient for running simple Tempest tests.
Imagine you have a fresh OpenStack deployment and you want to run some Tempest tests to make sure that the deployment was successful. In order to do that, you can run python-tempestconf, which will do the basic configuration for you and generate a tempest.conf, and then execute Tempest. That’s it. Isn’t that easy?
I have to admit, when I joined Red Hat, and more specifically the OpenStack team, I kind of struggled with all the information about OpenStack and Tempest; it was too much new information. Therefore I really liked being able to generate a tempest.conf that I could use for running just the basic tests. If I had had to generate the tempest.conf myself, my learning process would have been a little slower, so I’m really grateful that we had the tool at that time.
Shipping in a package
At the beginning of 2017 we started to ship the python-tempestconf RPM package. It’s available in the RDO repositories from Ocata onward. The python-tempestconf package is also installed as a dependency of the openstack-tempest package, so if a user installs openstack-tempest, python-tempestconf will be installed as well. At this time we also changed the entrypoint, and the tool is now executed via the discover-tempest-config command. However, you may have already read all about that in this article.
Upstream project
By the end of 2017 python-tempestconf became an upstream project and moved under the OpenStack organization.
We have significantly improved the tool since then, not only its code but also its documentation, which contains all the required information for a user (see here). In my opinion, every project designed for a wider audience of users (python-tempestconf is an upstream project, so this condition is fulfilled) should have proper documentation. Following python-tempestconf’s documentation, any user should be able to execute it, set the desired arguments, and set special Tempest options without any major problems.
I would say there are three big improvements. One of them is the user documentation, which I’ve already mentioned. The second and third are improvements to the code itself: os-client-config integration, and refactoring of the code to simplify adding new OpenStack services the tool can generate configuration for.
os-client-config is a library for collecting client configuration for using an OpenStack cloud in a consistent way. By importing the library, a user can specify OpenStack credentials in two different ways (see the sketch after the list):

Using OS_* environment variables, which is perhaps the most common way. It requires sourcing credentials before running python-tempestconf; in a Packstack environment that’s the keystonerc_admin/demo file, and in DevStack there is the openrc script.
Using the --os-cloud parameter, which takes one argument: the name of the cloud that holds the required credentials. Those are stored in a clouds.yaml file.
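For readers who want to see how this looks programmatically, below is a minimal sketch of resolving credentials with os-client-config itself; the cloud name is an illustrative assumption and must match an entry in your clouds.yaml.

```python
# Minimal sketch: the two ways os-client-config can resolve credentials.
import os_client_config

config = os_client_config.OpenStackConfig()

# 1) From OS_* environment variables (source keystonerc_admin or openrc first).
cloud_from_env = config.get_one_cloud()

# 2) From a named clouds.yaml entry, equivalent to passing --os-cloud my-cloud.
cloud_from_yaml = config.get_one_cloud(cloud='my-cloud')  # 'my-cloud' is illustrative

# Either object can hand back an authenticated keystoneauth session.
session = cloud_from_yaml.get_session()
```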

The second code improvement was simplifying the addition of new OpenStack services the tool can generate tempest.conf for. If you want a service added, just create a story in our StoryBoard (see python-tempestconf’s contributor guide). If you feel like it, you can also implement it yourself. Adding a new service requires creating a new file representing the service and implementing a few required methods.
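To give a rough idea of what that new file looks like, here is a hypothetical sketch of a service definition; the class name, base-class expectations, and method names are illustrative only, so follow the contributor guide for the actual interface.

```python
# Hypothetical sketch of a new service definition for python-tempestconf.
# Names are illustrative; the real base class and hooks are described in the
# project's contributor guide.

class ExampleService:  # in the real tool this would subclass the common service base class
    def __init__(self, name, service_url, token, disable_ssl_validation=False):
        self.name = name
        self.service_url = service_url
        self.token = token
        self.disable_ssl_validation = disable_ssl_validation
        self.extensions = []
        self.versions = []

    def set_extensions(self):
        """Query the live service and record which API extensions it enables."""
        self.extensions = []  # e.g. parsed from GET <service_url>/extensions

    def set_versions(self):
        """Query the live service and record which API versions it exposes."""
        self.versions = []  # e.g. parsed from GET <service_url>/

    def set_default_tempest_options(self, conf):
        """Write service-specific defaults into the tempest.conf being generated."""
        conf.set('example-feature-enabled', 'api_extensions', ','.join(self.extensions))
```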
To conclude
The tool has gone through major refactoring and has been significantly improved since it was moved to its own repository in August 2016. If you’re a Tempest user, I’d recommend trying python-tempestconf if you haven’t already.
Source: RDO

Merlin uses IBM Cloud Garage to fast-track new cybersecurity technology

The cybersecurity marketplace is crowded. There are hundreds of vendors with an amazing array of solutions flooding the space, yet many organizations still struggle to stay ahead.
In many companies, everyone is working as hard as they can to plug holes, but there is still a lack of knowledge about how to manage and understand all the tools and how they interact. IT executives are finding that the tools are confusing, too diverse and susceptible to attack. They are left evaluating dozens of tools without the ability to look toward a future roadmap of how they might integrate them.
Building a comprehensive cybersecurity offering
Merlin International, which provides software and solutions for the US federal government, saw the frustration of its clients in spending tremendous capital and time without getting any better at protecting themselves. Clients witnessed the number of tools quadruple without a commensurate ability to see what was coming, prioritize activity or tie back to overall security remediation processes.
That is why Merlin is building a comprehensive cybersecurity offering to improve how security operation centers (SOCs) respond to threats. The solution is based on security operations and analytics platform architecture (SOAPA) and will translate relationships with security software vendors into an ecosystem that will incrementally go after the gaps that exist in most large scale SOCs.
The platform architecture helps with flexibility and speed across multiple applications to address and solve legacy problems.
Building with the IBM Cloud Garage
Merlin partnered with the IBM Cloud Garage to define and build the first minimum viable product (MVP) on IBM Cloud Private (ICP). We chose ICP because it had ready-built functionality the company could use. Future components of the cybersecurity solution will incorporate the resident automation and AI functionality of ICP.
The cybersecurity solution focuses on user-centric designs to provide improved access to actionable data. For instance, the solution started with the concept of augmented asset visibility to enable a security supervisor to quickly gain understanding of the protection status (current and historical) of key threats and vulnerabilities such as anti-virus, malware, DNS, firewall and privileged access. 
The IBM Cloud Garage provided a venue for Merlin to ideate and hypothesize with a talented team of experts that included architects and designers along with stakeholders from across our company. The IBM Design Thinking approach used agile methodology and lean startup techniques to help us visualize our ideas, and our own product development team was able to adopt the tools we learned.
Six weeks to MVP
The MVP build engagement lasted just six weeks and focused on laying a solid foundation for both the user experience and the technical underlying framework. Merlin developed a browser-based dashboard to display data of near-real-time and historical cybersecurity events through various metrics and data visualizations. Users can also drill down into specific data points using dynamic graphs and charts.
In building the cybersecurity solution, we aimed to create scaffolding for an ecosystem that will use clients’ existing toolsets against each other to solve specific use cases. Instead of boiling the ocean, we started with endpoint security, thereby making a junior analyst confident in what is a threat, what action they need to take and how best to take it while leaving a detailed history for compliance.
The IBM Cloud Garage engagement helped us bring a very new, difficult and previously unvalidated technology to market. The cybersecurity solution is expected to be announced and available in the first quarter of 2019.
Explore how the IBM Cloud Garage can help your company.
The post Merlin uses IBM Cloud Garage to fast-track new cybersecurity technology appeared first on Cloud computing news.
Source: Thoughts on Cloud

CloudForms 4.7 announcement

CloudForms 4.7 is ready to be deployed on your systems. You can find more information in the press release, and you can download the packages from the official Red Hat site.
It provides the following key enhancements:

Improved Ansible Tower Integration: support for Workflows as a Catalog Item, support for Organization Structure.
Updates for managing OpenStack platforms: Storage refresh improvement, OverCloud Dashboard, Private Key Download.
Additional management capabilities for Red Hat Virtualization: Refresh performance improvements, Dynamic Sysprep support.
User experience enhancements: multi-appliance configuration sharing, global region reporting for tenants with the same name.
Lenovo XClarity update: inventory of racks, blades, chassis, and switches; topology updates; better handling of status; an out-of-the-box dashboard.
New provider: Redfish-compliant standard hardware can be discovered and inventoried, with detailed views and relationships, power operations, and topology views (tech preview).
New provider for Nuage, which gives the ability to manage its networking services from CloudForms.

 
Source: CloudForms

New IBM Cloud agreement with Smart Energy Water set to provide global infrastructure

IBM and Smart Energy Water, a water and energy cloud platform provider serving more than 150 utilities around the world, have signed a multi-million-dollar, five-year agreement to bring IBM Cloud infrastructure to SEW’s enterprise web and mobile applications.
IBM Cloud will facilitate common global infrastructure to help with “customer engagement and mobile workforce engagement apps, online bill processing, energy efficiency and demand response apps, reward programs as well as tools for real-time data collection and management in the field,” Computer Business Review reports.
Some of the goals of the agreement include SEW gaining more value from its data and speeding up the development of new tools to help improve efficiency and better engage customers.
“Together with IBM, we can deliver solutions at large scale that help utilities lower the cost-to-serve by moving customers from the call center to lower-cost digital channels, personalize service to increase overall customer satisfaction, and target customers for the right opportunities for value-added programs and services,” said Harman Sandhu, president of Smart Energy Water.
The SEW agreement news comes just as IBM also unveiled new technology that can help energy companies predict where falling vegetation may threaten power lines, thereby preventing outages. The Weather Company Vegetation Management – Predict was built using IBM PAIRS Geoscope and in collaboration with Oncor, the largest utility company in Texas.
To find out more about the IBM Cloud agreement with Smart Energy Water, read the full story at Computer Business Review.
The post New IBM Cloud agreement with Smart Energy Water set to provide global infrastructure appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mirantis Joins OpenStack Foundation’s Airship to Bring Kubernetes to Telcos

Company extends its upstream collaboration with AT&T and announces Airship integration into Mirantis Cloud Platform to support NFV infrastructure based on Kubernetes

CAMPBELL, CA — February 7, 2019 — Today Mirantis announced that it is joining Airship, a project originally founded by AT&T, SKT and Intel and launched as a pilot Open Infrastructure Project under the OpenStack Foundation in May 2018. One key use case for Airship is enabling telcos to take advantage of on-premises Kubernetes infrastructure to support their SDN infrastructure builds.

Mirantis will collaborate with AT&T and other core contributors to develop critical features in support of the Airship community roadmap. This work will be rapidly deployed in production at scale via AT&T’s Airship, Kubernetes and OpenStack based Network Cloud infrastructure.

Mirantis will primarily focus on:

Integration between Drydock and Ironic to provision bare metal Kubernetes clusters
Streamlining the initial configuration experience of deploying Kubernetes-native services on premises, making it simpler for telcos to adopt
Support for multiple operating systems, to broaden the choice of VNFs and minimize lock-in

“Replacing VM-based infrastructure with cloud-native, open technologies based on containers and Kubernetes yields order-of-magnitude efficiency improvements for telco network environments and beyond,” said Adrian Ionel, Mirantis Co-Founder and CEO. “Working so closely with AT&T in the Airship community will accelerate the delivery of the benefits of Kubernetes to the broad ecosystem of telecommunications providers.”

Airship takes advantage of Kubernetes to define a unified, declarative and cloud-native way for operators to manage containerized software delivery of cloud infrastructure services. At the OpenStack Summit in Berlin, AT&T shared its plans to roll out Kubernetes on premises, based on Airship, to underpin its 5G network infrastructure.

“As we roll out Network Cloud for 5G, our goal at AT&T is to run infrastructure based on open standards like Kubernetes and OpenStack,” said Ryan van Wyk, AVP — Network Cloud Software Engineering at AT&T. “Mirantis has a long track record of contributing to open source and we are glad to have them collaborate with us on the Airship project.”

Aside from contributing to Airship upstream and collaborating with AT&T on key roadmap features, Mirantis is integrating much of the code into Mirantis Cloud Platform (MCP), Mirantis’s core product that empowers telcos and enterprises to efficiently run Kubernetes on-premises. Mirantis will be demonstrating the benefits of adopting Kubernetes-based open infrastructure for 5G in a series of customer workshops at Mobile World Congress in Barcelona.

About the OpenStack Foundation (OSF)

The OpenStack Foundation (OSF) supports the development and adoption of open infrastructure globally, across a community of 100,000 individuals in 187 countries, by hosting open source projects and communities of practice, including datacenter cloud, edge computing, NFV, CI/CD and container infrastructure.

About Mirantis

Mirantis is the flexible infrastructure company harnessing open source to free application owners from operations concerns. The company employs a unique build-operate-transfer approach to deliver two distinct products:

Mirantis Cloud Platform, which is based on Kubernetes and OpenStack and helps service providers and enterprises run highly tunable private clouds powered by infrastructure-as-code and based on open standards.

Mirantis Application Platform, which is based on Spinnaker and helps enterprises adopt cloud native continuous delivery to realize cloud ROI at scale.

To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, AT&T, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.

Contact information:

Joseph Eckert for Mirantis

jeckertflak@gmail.com
The post Mirantis Joins OpenStack Foundation’s Airship to Bring Kubernetes to Telcos appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis