Mirantis Boosts NFV Efforts, Joins Open Network Automation Platform Project to Accelerate Adoption of Open Standards for SDN/NFV Automation

With customers like AT&T, Telstra, Vodafone, Saudi Telecom and China Mobile, Mirantis extends leadership in real-world NFV implementations

SUNNYVALE, CA – June 7, 2017 – Mirantis, the managed open cloud company, announced today that it has joined the Open Network Automation Platform (ONAP) project to help accelerate adoption of open standards for Software-Defined Networking (SDN)/Network Functions Virtualization (NFV) automation. With customers like AT&T, Telstra, Vodafone, Saudi Telecom and China Mobile, Mirantis is a recognized leader in real-world NFV implementations.

Recently formed through the merger of open source ECOMP and the Open Orchestrator Project (OPEN-O), two of the largest open source networking initiatives, the Open Network Automation Platform project is focused on creating a harmonized and comprehensive framework for real-time, policy-driven software automation of virtual network functions. The ONAP Project, which includes participation by prominent networking suppliers and industry-leading service providers from around the world, enables software, network, IT, and cloud providers and developers to rapidly create new services.

“SDN/NFV is the future of telecom, and is actually rapidly becoming the present with companies like Telstra already using SDN/NFV in production,” said Randy DeFauw, Senior Director of Product Management at Mirantis. “ONAP is doing important work bringing the industry together to support open standards in SDN/NFV.”

Enhanced functionality in Mirantis Cloud Platform (MCP), combined with third-party technology integrations, enables optimized NFV deployments. Dataplane acceleration features such as DPDK, SR-IOV, CPU pinning and NUMA awareness increase performance to meet demanding NFV requirements at scale.
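To make that concrete, here is a minimal sketch of how an operator might define a pinned, NUMA-aware, hugepage-backed flavor for a demanding VNF. It is not Mirantis-specific: the flavor name and sizes are invented, while the extra specs are standard OpenStack Nova properties.

```python
import subprocess

def openstack(*args):
    """Invoke the standard OpenStack CLI and fail loudly on errors."""
    subprocess.run(["openstack", *args], check=True)

# Hypothetical flavor for a VNF. hw:cpu_policy, hw:numa_nodes and
# hw:mem_page_size are standard Nova extra specs for CPU pinning,
# NUMA placement and hugepages (commonly paired with a DPDK vswitch).
openstack("flavor", "create", "vnf.pinned",
          "--ram", "8192", "--vcpus", "8", "--disk", "40")
openstack("flavor", "set", "vnf.pinned",
          "--property", "hw:cpu_policy=dedicated",
          "--property", "hw:numa_nodes=1",
          "--property", "hw:mem_page_size=1GB")
```

SR-IOV, by contrast, is typically requested per Neutron port rather than per flavor.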

Mirantis departs from the traditional software-centric model that revolves around licensing and support subscriptions. Instead, the company is pioneering an operations-centric approach, in which open infrastructure is continuously delivered with an operations SLA, either as a managed service or operated by the customers themselves. As a result, software updates no longer happen once every 6-12 months; they are introduced in minor increments on a weekly basis, with no downtime.

Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain increases Day 1 flexibility to customize the reference architecture and configurations during initial software installation. It also makes Day 2 operations easier, including post-deployment configuration, functionality and architecture changes, and seamless version updates, all delivered through an automated pipeline to a virtualized control plane to minimize downtime.

About ONAP
The Open Network Automation Platform (ONAP) Project brings together top global carriers and vendors with the goal of allowing end users to automate, design, orchestrate and manage services and virtual functions. ONAP unites two major open networking and orchestration projects, open source ECOMP and the Open Orchestrator Project (OPEN-O), with the mission of creating a unified architecture and implementation and supporting collaboration across the open source community. The ONAP Project is a Linux Foundation project. For more information, visit https://www.onap.org.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

The post Mirantis Boosts NFV Efforts, Joins Open Network Automation Platform Project to Accelerate Adoption of Open Standards for SDN/NFV Automation appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Ready for data to change your world? Join us in Munich.

We’re living through the third great revolution in modern business. First came economies of scale, which we harnessed with the Industrial Revolution, the assembly line, and the creation of global markets. Second was network effects, seen most obviously in the rise of the Internet and the Web. Third—right now—we are entering the age of data.
At our Fast Track Your Data event, coming up on June 22 in Munich, Germany, we’ll help you become one of the winners in this revolution.
Disrupt yourself before others disrupt you
We’ve been doing serious analytics with data for 20+ years, but our abilities have increased radically in just the past few years as we’ve made data science pervasive. Today, forward-looking organizations are using new ways to handle their proprietary data—the crown jewels that give them unique competitive advantages—as they embed machine learning and other data science applications into their businesses.

We at IBM are giving them the tools and support they need to help them do that. At the Munich event, we are bringing together top business and technology leaders to share success stories and discuss how to use data to disrupt your company and your industry—before others disrupt you.
Sharing success stories from data leaders worldwide
These leaders are driving great results across a wide range of industries and contexts. A big healthcare provider is creating better patient outcomes while improving cost-effectiveness. A transportation giant has reinvented its data retention to radically streamline back-office operations. Our European customers are using cognitive analytics to get ahead of their sweeping GDPR compliance requirements coming in 2018. And on and on.
This event will show you the areas where the data revolution is enabling companies as they:

Optimize current operations to intelligently automate processes, reduce costs, and increase speed
Increase customer intimacy with greater personalization to make each interaction more relevant and engaging
Speed up customer service with smart feedback loops that improve the entire customer experience continuously

In Munich, we’ll go into detail about how you can achieve these results via the four pillars of:
 

Hybrid Data Management
Unified Governance
Analytics & Visualization
Data Science & Machine Learning

Working alongside our customers in these areas, we are using open source technologies and increasing the simplicity and elegance of tools for use across your organization, not just by data scientists, coders, or other technical pros.
How to Participate
The breakthroughs just keep coming, and we want your success story to be next on the list. That’s why we invite you to join the conversation in Munich. There are three ways to participate:

Attend in person to hear from leaders from IBM and our client companies who are driving real-world results with data and analytics
Join the interactive livestream
Access the online archive of the event

The journey into the age of data is just beginning. Register now so you can Fast Track Your Data with us in Munich on June 22.

A version of this article was originally published on the IBM Big Data and Analytics Hub.
The post Ready for data to change your world? Join us in Munich. appeared first on Cloud computing news.
Source: Thoughts on Cloud

Behind the scenes with IBM Cloud Automation Manager

A glimpse of the overall functional architecture of IBM Cloud Automation Manager.
Many businesses have adopted a hybrid cloud approach to manage cloud-enabled and cloud-native workloads spanning across multiple locations, from on-premises data centers to private, dedicated and public clouds.
On a recent webcast, we took a deep dive into the architectural principles of IBM Cloud Automation Manager (CAM). IBM experts shared the architectural principles of CAM and how it supports the complete lifecycle management of both cloud-enabled and cloud-native workloads. Here are the most important takeaways.
Your complex multicloud environment requires central management so you can effortlessly manage workloads and their underlying resources across all your clouds. IBM Cloud Automation Manager is a hybrid cloud management platform that helps automate, accelerate and ultimately enhance your overall delivery of cloud infrastructure.
Simplifying the user experience for multicloud deployments

IBM CAM has a powerful provisioning engine that can deploy composite workloads on public clouds as well as on-premises virtual environments. The engine is built on Terraform, a rich workload-provisioning technology that supports IBM Cloud as well as other cloud vendors as external "manage-to" clouds. It includes a rich library of content with high-quality application packs to automate deployment and ongoing operation of many production workloads.
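For a rough sense of what Terraform-driven provisioning involves, here is a minimal sketch; it is not CAM's actual internals. The template directory and variables are hypothetical, while terraform init and terraform apply -auto-approve are standard Terraform CLI commands.

```python
import subprocess

def provision(template_dir, variables):
    """Drive a Terraform template the way a provisioning engine might:
    initialize providers, then apply the plan non-interactively."""
    var_args = []
    for key, value in variables.items():
        var_args += ["-var", f"{key}={value}"]
    subprocess.run(["terraform", "init"], cwd=template_dir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve", *var_args],
                   cwd=template_dir, check=True)

# Hypothetical composite workload: a template directory plus its inputs.
provision("./templates/web-tier", {"instance_count": 3, "flavor": "m1.medium"})
```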
The orchestration platform enables automation of workflows. With its authoring environment, you can easily create automation content and self-service offerings. The self-service portal can help standardize the deployment of cloud services. CAM also has an operational console for the running instances of deployed services, so you can manage ongoing operations. The operational dashboard provides visibility into cloud resources, complemented by cognitive insights.
These capabilities will all be delivered as IBM CAM evolves through the year. Want to explore more? Watch the webcast replay to get a detailed understanding of the functional architecture of IBM Cloud Automation Manager.
The post Behind the scenes with IBM Cloud Automation Manager appeared first on Cloud computing news.
Source: Thoughts on Cloud

A 5-stage model for shared blockchain processes

In our previous post, we outlined four major challenges of the interorganizational processes blockchain enables. To address those challenges, organizations should consider these five steps, which in many cases directly extend existing industry practice for private processes.

Let’s return to our example of health records on a blockchain. An electronic health record (EHR) stored on a blockchain would securely share only the data that each party is permitted to see. Adding new test results to the record might automatically schedule an appointment with the patient depending on the test result. Once the appointment is completed, the notes and follow-up actions are added indelibly to the EHR for future reference. This example can illustrate each of the patterns:
1. Discover network processes
With processes that exist across organizational boundaries, there is a need for participants to work together to design flows so that there is an agreement about their responsibilities and points of interaction. This is very similar to the way that process discovery happens within a single organization today. For example, IBM Blueworks Live can equally be applied to interorganizational processes. In particular, the collaborative modeling capabilities of a cloud-based platform fit well with the needs of process discovery between organizations.
2. Control blockchain interaction
There are many scenarios in which a private process — one that is internal to a single organization — needs to interact with a blockchain. Typically, these interactions involve submitting a transaction to the ledger or responding to a change in state. For example, how does a medical lab attach test results to a patient record, and how is a hospital notified of new results? Technically, a process automation platform, such as IBM Business Process Manager, interacts with the blockchain as it would any other enterprise application or system of record. Integrating IBM BPM with IBM Blockchain is performed by remote system calls or by responding to event notifications.
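To make the integration pattern concrete, here is a minimal sketch that assumes the ledger is fronted by a simple REST gateway. The endpoint, payload shapes and event names are hypothetical; they are not an actual IBM Blockchain or IBM BPM API.

```python
import requests

LEDGER_API = "https://ledger.example.com/api"  # hypothetical REST gateway

def attach_test_result(patient_id, result):
    """Lab-side process: submit a transaction appending a result to the EHR."""
    resp = requests.post(
        f"{LEDGER_API}/transactions",
        json={"type": "AddTestResult", "patient": patient_id, "payload": result},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["transaction_id"]

def new_results_for(hospital_id, since):
    """Hospital-side process: poll for state changes it should react to."""
    resp = requests.get(
        f"{LEDGER_API}/events",
        params={"subscriber": hospital_id, "type": "TestResultAdded", "since": since},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["events"]
```

In a BPM integration, calls like these would typically be made from a service task or triggered by an inbound event rather than by hand-written polling.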
3. Automate with smart contracts
This class of solutions deals with situations where the smart contract encodes a digital business process that is executed on the blockchain. Many of the existing process automation paradigms are directly applicable to this idea. For example, decision automation technologies, such as IBM Operational Decision Manager, allow a business policy to be defined in a pseudo-natural language as opposed to code. This means that business experts or lawyers can author and amend process logic. Using this idea, the rule encoding that determines what to do for certain medical test results can be understood by non-programmers.
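A deliberately simplified sketch of that kind of rule logic follows; the thresholds and action names are invented, and a decision service such as ODM would express the same policy in its rule language rather than in code.

```python
# Each rule pairs a readable condition with the action to take.
# Thresholds and action names are invented for illustration only.
RULES = [
    {"when": lambda r: r["psa"] > 10.0,        "then": "schedule_urgent_consultation"},
    {"when": lambda r: 4.0 < r["psa"] <= 10.0, "then": "schedule_follow_up_test"},
    {"when": lambda r: r["psa"] <= 4.0,        "then": "notify_patient_no_action_needed"},
]

def decide(test_result):
    """Return the first action whose condition matches the test result."""
    for rule in RULES:
        if rule["when"](test_result):
            return rule["then"]
    return "refer_to_clinician"

print(decide({"psa": 6.2}))  # -> schedule_follow_up_test
```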
4. Monitor the whole network
Accepting that a business process now spans both internal systems and a blockchain, there is a need to connect the dots. Suppose an organization wants to monitor the average time from when a biopsy is taken at a hospital until the results are shared with the patient. The end-to-end view of a process, spanning hospital and lab systems, can be achieved by using a process analytics capability. This enables identification of exceptional situations that require action, along with real-time management information about the status of a process.
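A minimal sketch of that end-to-end measurement, assuming events from the hospital and lab systems have already been correlated by case ID (the event data and the seven-day threshold are invented):

```python
from datetime import datetime, timedelta

# Hypothetical correlated events, keyed by case ID.
cases = {
    "case-001": {"biopsy_taken": "2017-05-01T09:00:00",
                 "result_shared": "2017-05-04T15:30:00"},
    "case-002": {"biopsy_taken": "2017-05-02T11:15:00",
                 "result_shared": "2017-05-10T08:00:00"},
}

SLA = timedelta(days=7)
FMT = "%Y-%m-%dT%H:%M:%S"

def turnaround(case):
    """Elapsed time from biopsy to the result being shared with the patient."""
    return (datetime.strptime(case["result_shared"], FMT)
            - datetime.strptime(case["biopsy_taken"], FMT))

durations = {case_id: turnaround(c) for case_id, c in cases.items()}
average = sum(durations.values(), timedelta()) / len(durations)
overdue = [case_id for case_id, d in durations.items() if d > SLA]

print(f"Average turnaround: {average}; cases over threshold: {overdue}")
```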
5. Optimize and govern the network
Finally, blockchains introduce some new business processes that didn’t exist before. First of all, the ledger is a trusted source that can be used to optimize the network or one’s private interactions with it. For example, one might be able to tweak task routing rules based on prior performance. Secondly, new blockchain support processes are expected. For example, how does a new medical lab join the network? One would expect certain legal and professional proofs to be supplied before they could start work.
Enterprises should learn this new model of ecosystem-based processes. It’s important to start now to develop a suitable process architecture that is ready to embrace blockchain.
Learn more about IBM Blockchain.
The post A 5-stage model for shared blockchain processes appeared first on Cloud computing news.
Source: Thoughts on Cloud

The reality of DevOps: ECommerce on OpenStack using Mirantis Cloud Platform

We can talk about the benefits of a DevOps environment all we want, but for the people who are directly involved, the reality is much more complicated.

The Internet Mall, Mall.cz, is an ecommerce site located in the Czech Republic, selling all manner of things, from children’s toys to dog food to auto supplies, and pretty much everything in between.  As such, the site is a complicated piece of machinery, an application called eShop consisting of dozens of microservices that all must work together flawlessly so the whole thing doesn’t come crashing down.
From VMware to open source
For years, Internet Mall’s head of the Linux platform team, Martin Olexa, has been running the site mainly on a VMware cluster. This arrangement had all of the benefits of virtualization, of course, including the ability to make effective use of their hardware.
It also had a significant price tag.
The price wasn’t necessarily the main obstacle, however.  “Our big problems were that with the VMware cluster, we can’t effectively create and destroy resources as we need them, so it has the effect of slowing down development. Also, because it’s a closed system, we can’t make modifications the way we might like to.”
What Internet Mall needed, IT managers realized, was an open source system that developers could use, but one they could also modify if necessary. This way, developers could create resources as they needed them.
This kind of self-service arrangement would enable developers to move at their own speed, without being constrained by operations and what they could get done at any given time — and with a 10-1 developer-to-operator ratio, that was an important consideration, particularly after the company was acquired by Rockaway group in 2016, almost tripling the size of their infrastructure.
In the fast-moving world of ecommerce, they knew that if they couldn’t find a way to solve this problem, competitors would be able to exploit their vulnerability, perhaps opening a gap in the market from which Internet Mall wouldn’t be able to recover.
Docker or OpenStack?
To start, Martin and his team decided to experiment with several options.
First, they tried Docker containers. After all, containers are convenient for developers to use, they’re well-suited to microservices, the team was familiar with the purpose of containers, and let’s face it: they’re hot right now.
Unfortunately, containers turned out to be “just too big a leap from our current processes,” Martin said.  “Docker just didn’t have enough of an ecosystem.  We couldn’t imagine implementing everything we needed on our own.”
After checking out several options, Martin and his team decided they needed an OpenStack cloud. OpenStack provided the self-serve nature that was missing in their original VMware cluster, it was open source so they could make any modifications they needed, and the ecosystem was large enough to provide everything that they needed.
Moving forward with OpenStack
Now they needed an implementer.
Local to Internet Mall was Mirantis Czech (then known as tcp cloud), which was providing MK, a very early version of what would become Mirantis Cloud Platform. MK would enable them not only to use OpenStack, but to customize it for what they needed. Mirantis and Internet Mall worked out the various terms and assumptions, and together they built a Proof of Concept to make sure that this was the direction in which the company wanted to go.
With the new cloud in place, it was time to make use of it. This process, however, was easier said than done.
The tough part of moving to DevOps
The hardest part, Martin says, was moving all of their development and operations to the “cloud way” of doing things.  “All of our people have to change their mindset,” he says.  “We’re moving tasks to developers that they wouldn’t normally do, such as choosing technology. For example, developers now have to think about networking and so on, where they used to just rely on operations.”
The company was very open with their developers.  “We told them, ‘we’re giving this to you, and we’re here to help you if you need us.  You can choose your own architecture; you’re not dependent on ops, here are the benefits of various ways to do this, and so on.’”
Fortunately, the advantages, such as increased development velocity and greater control for developers, proved to greatly outweigh the discomfort some developers had in shifting to the DevOps model.  For example, while developers may not be comfortable creating infrastructure at first, because the DevOps model treats infrastructure as code, it’s easier for them to adjust to — and easy to recover from should there be a problem.
After several months of running the PoC, Internet Mall decided to go ahead and build a production cloud and later they initiated the upgrade from MK to MCP.  “We did experience the typical issues that go with a major upgrade,” Martin says, “but there was nothing really critical, and Mirantis made sure that everything got fixed.”
The people factor
That’s not to say that all of Internet Mall’s challenges are resolved.  Despite having a developer staff of more than 40 people, the company has only 4 operators — and not all of them were entirely on board with the change.
“Of the four of us, two were all in favor, and two are sure that the whole thing is going to come crashing down,” Martin laughs.  “We don’t know for sure yet who’s right, but so far there haven’t been major problems.”
It’s a fundamentally different paradigm; these operators, who are used to controlling the infrastructure, see developers spinning up processes for development, test, and so on.  There’s a reduced dependency on operations from developers, and some ops feel like the devs are “stealing” their stuff and will misuse it, even if it’s unintentional.  The organization is also adjusting to this new chain of command, and who’s responsible for what.
To get people on board, Martin says, they basically compared it to changes that have already happened with regard to old school processes and lines of control.  First you cut out physical machines to get to virtual.  Now, he explains, you’re cutting the next line of control, which was the operations obstacle.
It’s said that developers are in the business of promoting progress by creating change, while operators are in the business of promoting stability by preventing change. In this respect, they work against each other, but now that the new cloud is in place and developers are using it — and DevOps practices — Internet Mall staff finds that they are working in the same direction, all looking at the big picture.
What comes next
Internet Mall has created a situation where they can now shift away from having a Single Point of Failure into more of a “cattle” situation, in which software runs using open source on commodity hardware. Nothing is vendor-dependent, and when components die they can simply be replaced — whether they’re hardware or software.
Even though there was originally some concern on the part of some of the operations team, the most important change Internet Mall has seen is still people-related. “The biggest benefit of this project,” Martin says, “is cutting that dependency between developers and operations, making the developers more self-sustaining.”
These days, eShop is one big microservices-based application with approximately 50 individual services. Of those, 20 or so run on OpenStack, 10 or so run only on VMware, and the rest run on both, with approximately 90% of all appropriate production eShop code running on OpenStack. Going forward, Internet Mall is adding more resources and learning more about the platform. The plan is that once they’ve stabilized their current operations, they will look at moving everything over to the MCP cloud, with a focus on more containerized components to run on the Kubernetes portion that runs alongside OpenStack.
The post The reality of DevOps: ECommerce on OpenStack using Mirantis Cloud Platform appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Multiple Deployment Methods for OpenShift

Deploying an application to OpenShift is easy. The challenge is finding the best way to do it. Because OpenShift is so adaptable and flexible, it offers multiple ways to deploy an application. Learn about 6 different methods to deploy the same application.
Source: OpenShift

Containers aren’t a game: industry gets serious about Kubernetes development with Draft and Istio

As the infrastructure market settles down, more attention is being paid to what happens after you have your cloud up and running. This week we saw the announcement of not one, but two frameworks aimed at developers of Kubernetes-based applications. Microsoft announced Draft, which gives developers an easy way to build and deploy Kubernetes and other cloud-based applications, while Google, IBM, and Lyft announced Istio, a service mesh framework for running, monitoring, and controlling the multiple microservices that make up a cloud-native application.

The industry’s focus is starting to move up the stack; things are getting serious.
Draft: Simplifying development and deployment of cloud-native apps
When you think of Kubernetes, the first name to come to mind probably isn’t “Microsoft”, but Redmond’s been trying to catch up lately. One of the ways they’ve been doing that is with the acquisition in April of Deis, the container platform developed by Engine Yard.

Although there was concern — Microsoft hasn’t exactly been a bastion of open source championship — the Deis team, which is also behind the Helm deployment tool, this week announced Draft, an open source tool for developing and deploying cloud-native applications.

Draft simplifies the job of developers who want to write containerized applications by building the appropriate scaffolding for them so they don’t have to worry about anything but the application itself.  Brendan Burns, Director of Engineering at Microsoft Azure and Kubernetes co-founder, wrote, “When you first run the draft tool, it automatically discovers the code that you are working on and builds out the scaffolding to support containerizing your application. Using heuristics and a variety of pre-defined project templates draft will create an initial Dockerfile to containerize your application, as well as a Helm Chart to enable your application to be deployed and maintained in a Kubernetes cluster. Teams can even bring their own draft project templates to customize the scaffolding that is built by the tool.”

TechCrunch reports that Draft automatically detects whether code is written in Python, Node.js, Java, Ruby, PHP or Go. “It should be pretty easy to integrate this code with existing continuous integration pipelines,” they add.

The software can also keep code synchronized with Kubernetes, enabling developers to edit code locally but still have it run on the server.
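Putting the pieces above together, the developer-facing surface is essentially two commands. Here is a minimal sketch of scripting that workflow; the project directory is hypothetical, and draft create and draft up are the commands described in the announcement coverage.

```python
import subprocess

def scaffold_and_deploy(app_dir):
    """Script the Draft workflow: 'draft create' detects the language and
    writes a Dockerfile plus Helm chart; 'draft up' builds the image and
    deploys the chart to the configured Kubernetes cluster."""
    subprocess.run(["draft", "create"], cwd=app_dir, check=True)
    subprocess.run(["draft", "up"], cwd=app_dir, check=True)

scaffold_and_deploy("./my-node-service")  # hypothetical project directory
```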
Istio: A service mesh framework enhancing microservices
Istio (Greek for “sailing”, to keep with the Kubernetes theme) provides a way for developers to monitor, secure, and control microservices-based applications.  It builds on the Kubernetes Service construct, proxying network traffic with the Lyft-created Envoy proxy, so it can even be used with existing applications — no rewriting required.

The idea is that developers create their application as usual, then run its deployment through Istio, which takes care of setting up all of the pieces. Once that’s done, you can see what’s going on with the application by simply looking at the Grafana dashboard.

Istio currently works with Kubernetes, and can be installed locally or on a public or private cloud.  The eventual goal, however, is to make it work with non-Kubernetes-related clouds, including those running on Mesos, as well as Google’s Cloud Endpoints.

TechCrunch points out that “It’s worth noting that this isn’t all that different from linkerd, a similar project that is now part of the Cloud Native Computing Foundation, the home of the Kubernetes project.” Linkerd supports Kubernetes, Docker, and Mesosphere’s DC/OS.
Reading between the lines
For the past several years, we’ve been focused on getting infrastructure up and running, but the industry has now reached the point where that infrastructure is simply assumed; now the attention is on outcomes.

In the case of Draft and Istio, it’s about making development easier, enabling companies to get to those outcomes faster, but it’s about more than that: it’s about disaggregation.

In a move that echoes a greater emphasis on managed open cloud, both of these tools separate the developer from the worry of dealing with the fine points of architecture that aren’t directly related to what they’re doing. In this way, developers can more easily create applications that are not just more cloud-native, but also more cloud-independent, opening the door for hybrid cloud applications and a further emphasis on what needs to be done, rather than how to do it.

The post Containers aren’t a game: industry gets serious about Kubernetes development with Draft and Istio appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis