LogDNA and IBM find synergy in cloud

You know what they say: you can’t fix what you can’t find. That’s what makes log management such a critical element in the DevOps process. Logging provides key information for software developers on the lookout for code errors.
While working on their third startup in 2013, Chris Nguyen and Lee Liu realized that traditional log management was wholly inadequate for addressing data sprawl in the modern, cloud-native development stack. That epiphany was the impetus for LogDNA, a twist on a logging platform that could respond and scale in dynamic cloud environments.
Pivoting to meet DevOps need
What was straightforward when writing code for one server became unwieldy as virtualization, with multiple servers on a single machine, moved into the data center and exploded the number of log files. As applications grow, so do code issues. And as pressure mounts on IT for zero downtime, developers need real-time visibility and an easy way to chase data activity. Containers help isolate issues, but logging for them still largely falls to IT infrastructure teams unfamiliar with the applications, and that can tax limited resources.
Building on top of the popular Elasticsearch, LogDNA set out to create a solution modern enough to provide DevOps intelligence and automatically organize all that data. Fortunately, the LogDNA team saw the writing on the wall: Kubernetes. The timing was perfect. Developers were starting to adopt the lightweight, open source platform for managing containerized workloads. LogDNA seized on the advanced orchestration capabilities of Kubernetes in cloud environments and spun out an integrated, managed software-as-a-service (SaaS) solution that began to gain traction.
Looking for a strong partner
This work caught the attention of the IBM Cloud team, which itself was shifting focus to Kubernetes services and DevOps and working actively in the open source community. We quickly found synergy between our efforts. That synergy helped weave the LogDNA logging platform into the IBM global ecosystem, binding the two companies together as partners to deliver innovative, managed Kubernetes services.
While LogDNA is in the business of logging, it is also a storage and big data company. When the team initially looked at the IBM Cloud Kubernetes Service, they worried that it wasn’t going to meet their demands. Fortunately, several IBM distinguished engineers introduced the LogDNA team to the IBM Cloud Kubernetes Service bare metal offering.
The flexibility of a bare metal option allowed LogDNA to get the IOPS (input/output operations per second) it needs to read and write quickly out of storage, at a lower price than network-based storage.
The IBM-LogDNA relationship is an example of true collaboration. IBM has quickly responded to product developments recommended by the LogDNA team. However, it’s not only IBM delivering enhancements; LogDNA has adjusted some process and infrastructure planning based on suggestions from the IBM team. The refinement of the logging-related offerings is a chance for the LogDNA and IBM teams to work hand in hand to build better services.
Getting to the heart of DevOps
LogDNA’s product is heavily focused on the DevOps space. It provides better insights, better observability into development stacks and better building tools that help developers. It offers the convenience of a very robust log management tool without the inconvenience of having to manage or configure anything.
In a world that churns out disruptive direct-to-consumer applications (think Uber), there is a growing market for a level of scalability not being addressed by others in the market today.
With the combination of IBM and LogDNA, we can help clients no matter where they are on their journey — public clouds, on-premises or hybrid. The goal is to provide a log management tool that optimizes a developer’s data. The LogDNA tool focuses heavily on things like automatic parsing. Any data that comes into LogDNA is automatically taken care of, since the tool can recognize the exact kind of incoming logs. Our tool bundles its services for simplicity and ease of use, so developers don’t need to worry about logs.
Partnering around the world
The IBM global footprint is enabling LogDNA to deliver this service consistently around the world. As the preferred logging service, LogDNA is available in the IBM Cloud Service Catalog today and will be available in all IBM service regions. IBM customers can select and order LogDNA services by identifying that they want logging; this pulls LogDNA directly into their order.
In addition to customer-requested orders, IBM plans to use LogDNA for all of its internal systems, which further extends the relationship.
Ultimately, the joint effort of IBM and LogDNA is helping our customers stay focused on their priorities, and leave the logging to us.
To learn the full story, read the case study.
The post LogDNA and IBM find synergy in cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Microservices, and the Observability Macroheadache

Moving to a microservice architecture, deployed on a cloud platform such as OpenShift, can have significant benefits. However, it does make understanding how your business requests are being executed, across the potentially large numbers of microservices, more challenging.
If we wish to locate where problems may have occurred in the execution of a business request, whether due to performance issues or errors, we are potentially faced with accessing metrics and logs associated with the many services that may have been involved. Metrics can provide a general indication of where problems have occurred, but nothing specific to individual requests. Logs may contain errors or warnings, but cannot necessarily be correlated to the individual requests of interest.
Distributed tracing is a technique that has become indispensable in helping users understand how their business transactions execute across a set of collaborating services. A trace instance documents the flow of a business transaction, including interactions between services, internal work units, relevant metadata, latency details and contextualized logging. This information can be used to perform root cause analysis to locate the problem quickly.
How does OpenShift Service Mesh help?
The OpenShift Service Mesh simplifies the implementation of services by delegating some capabilities, such as circuit breaking and intelligent routing, to the platform. These capabilities include the ability to report tracing data associated with the HTTP interactions between services.
This means that the service is not required to support distributed tracing directly itself – the sidecar proxy will handle sampling decisions, creation of spans (the building blocks of a trace instance) and ensuring that consistent metadata is reported.
The only responsibility that cannot be handled by the OpenShift Service Mesh is the propagation of the trace context between inbound and outbound requests within the service itself. This needs to be implemented by the service – either by copying relevant headers from the inbound request to the outbound request, or using a suitable library to handle it.
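As a minimal sketch of the header-copying approach (not taken from the original post; the framework, service name and URL here are hypothetical), a Python service might forward the B3/Jaeger headers used by Envoy-based sidecars like this:

from flask import Flask, request
import requests

app = Flask(__name__)

# Trace context headers propagated by Envoy-based meshes (B3/Jaeger conventions).
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

@app.route("/order")
def order():
    # Copy the trace context from the inbound request...
    headers = {h: request.headers[h] for h in TRACE_HEADERS if h in request.headers}
    # ...and propagate it on the outbound call, so both spans join the same trace.
    resp = requests.get("http://inventory:8080/check", headers=headers)
    return resp.text

Alternatively, an OpenTracing-compatible client library can manage this propagation automatically.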
Jaeger to the Rescue
Instrumenting the service mesh and your business application is only one part of the story. Presenting this data in a way that is easy to consume and understand is the role of a tracing solution. That’s why OpenShift Service Mesh bundles a component called Jaeger, which can be used to collect, store, query and visualize the tracing data.
The Jaeger UI/console allows users to search for trace instances that meet certain criteria, including service name, operation name, tag names/values, a time frame, and whether they contain spans with a given maximum/minimum duration.

The UI shows a scatter plot of trace instance durations to enable users to focus on performance issues. The list also highlights trace instances that represent error situations.
Once a trace instance of interest is selected, the UI will show the individual spans in a Gantt chart style. Each line represents a unit of work, typically called a ‘span’ in the distributed tracing world, color coded based on the service it represents, with a length that indicates its duration. This enables a user to focus on the services and operations where most time is spent for the business transaction.

When a span is selected, it will be expanded to show further details, including tag names/values and log entries. This can provide additional information that may help diagnose issues.
It is also possible to compare the structure of trace instances against each other, by selecting multiple trace instances on the search page and pressing the “Compare Traces” button.

This feature is useful for narrowing down the search space for traces with a large number of spans. The visualization highlights operations that are added or missing between the two trace instances.
One Less Headache for your Microservices Journey
While distributed tracing on its own is not the monitoring panacea that DevOps teams require, it is a prerequisite for understanding the root cause of problems that will arise in complex and distributed architectures. When used in conjunction with other observability signals, such as metrics and logging, it can help diagnose problems and provide a more comprehensive view of the health of our business applications.
The post Microservices, and the Observability Macroheadache appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

RDO is ready to ride the wave of CentOS Stream

The announcement and availability of CentOS Stream has the potential to improve RDO’s feedback loop to Red Hat Enterprise Linux (RHEL) development and smooth out transitions between minor and major releases. Let’s take a look at where RDO interacts with the CentOS Project and how this may improve our work and releases.
RDO and the CentOS Project
Because of its tight coupling with the operating system, the RDO project joined the CentOS SIGs initiative from the beginning. CentOS SIGs are smaller groups within the CentOS Project community, each focusing on a specific area or type of software. RDO was a founding member of the CentOS Cloud SIG, which focuses on cloud infrastructure software stacks and uses the CentOS Community BuildSystem (CBS) to build final releases.
In addition to the Cloud SIG OpenStack repositories, during release development the RDO Trunk repositories provide packages for new commits in OpenStack projects soon after they are merged upstream. After a commit is merged, a new package is created and a YUM repository is published on the RDO Trunk server, including this new package build and the latest builds for the rest of the packages in the same release. This enables packagers to identify packaging issues almost immediately after they are introduced, shortening the feedback loop to the upstream projects.
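As a concrete illustration (the exact URL layout here is a sketch; see the RDO project documentation for the current repository paths), a developer who wants to test against the newest builds can point yum at the published repository with a file such as:

[rdo-trunk-current]
name=RDO Trunk (current)
baseurl=https://trunk.rdoproject.org/centos7/current/
enabled=1
gpgcheck=0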
How CentOS Stream can help
A stable base operating system, on which continuously changing upstream code is built and tested, is a prerequisite. While CentOS Linux did come close to this ideal, there were still occasional changes in the base OS that broke OpenStack CI, especially after a minor CentOS Linux release, when it was not possible to catch those changes before they were published.
The availability of the rolling-release CentOS Stream, announced alongside CentOS Linux 8, will enable our developers to provide earlier feedback to the CentOS and RHEL development cycles before breaking changes are published. When breaking changes are necessary, it will help us adjust for them ahead of time.
A major release like CentOS Linux 8 is even more of a challenge. RDO managed the transition from EL6 to EL7 during the OpenStack Icehouse cycle by building two distributions in parallel, but that was five years ago, with a much smaller package set than RDO carries now.
For the current OpenStack Train release in development, the RDO project started preparing for the Python 3 transition using Fedora 28, which helped get this huge migration effort going. At the same time, Fedora 28 was only a rough approximation of RHEL 8/CentOS Linux 8, and the work required complete re-testing on RHEL.
Since CentOS Linux 8 is being released very close to the OpenStack Train release, the RDO project will initially provide RDO Train only on the EL7 platform and will add CentOS Linux 8 support soon after.
For future releases, the RDO project looks forward to being able to test and develop against CentOS Stream updates as they are developed, to provide feedback, and to help stabilize the base OS platform for everyone!
About The RDO Project
The RDO project provides a freely available, community-supported distribution of OpenStack that runs on Red Hat Enterprise Linux (RHEL) and its derivatives, such as CentOS Linux. RDO also makes the latest OpenStack code available for continuous testing while the release is under development.
In addition to providing a set of software packages, RDO is also a community of users of cloud computing platforms on Red Hat-based operating systems where you can go to get help and compare notes on running OpenStack.
Quelle: RDO

PaaS vs. KaaS: A Primer

The post PaaS vs. KaaS: A Primer appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Embedded Ansible – Part 1, Built-In Ansible Roles

The introduction of Embedded Ansible with CloudForms 4.5 enabled administrators to create CloudForms services that use Ansible playbooks instead of Ruby methods to perform tasks. Successive versions of CloudForms have expanded this capability, and automation developers can now construct complex CloudForms Automate workflows using Ansible playbooks alongside traditional Ruby methods.

Part 1 of this series will discuss two Ansible roles that have been included in CFME 5.9.2 and later. The roles enable an embedded Ansible playbook to integrate closely with CloudForms and participate in an automation workflow. They are:
 

manageiq-core.manageiq-automate – this role provides a simple way of accessing Automate workspace objects. It is usable from playbook methods only.

manageiq-core.manageiq-vmdb – this role provides a simple way of manipulating VMDB objects that are exposed via the API. It is usable from both playbook methods and playbook services.

 
Usage of both roles is demonstrated throughout the remainder of this series of articles.
The manageiq-automate Role
The manageiq-core.manageiq-automate role can be used by playbook methods (but not services) to interact with CloudForms automation. This allows a playbook method to participate in an automation workflow and share data with other objects in the workflow’s workspace.
 
The following is an example of how the role can be used:
 
vars:
- manageiq_validate_certs: false

roles:
- manageiq-core.manageiq-automate

tasks:
- name: Get the "rhel_subscription_pool" attribute from a configuration domain instance
  manageiq_automate:
    workspace: "{{ workspace }}"
    get_attribute:
      object: "/Configuration/OpenShift/Parameters"
      attribute: "rhel_subscription_pool"
  register: get_attribute_result
 
The role uses a manageiq_validate_certs internal variable to determine whether to validate the SSL connection back to the API. The default for this value is true, but it must be overridden to false if the default CloudForms self-signed SSL certificate is used. This can be done using a playbook var or an input parameter (i.e. extra_var) called manageiq_validate_certs with the boolean value of false, for example:
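vars:
- manageiq_validate_certs: false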

Functions
The manageiq-automate module contains many functions that can be broadly divided into four categories, as follows.
Working with Input Parameters
A playbook method’s input parameters can be queried and retrieved using the following functions:
method_parameter_exists
get_method_parameters
get_method_parameter
get_decrypted_method_parameter
Working with Workspace Objects
Workspace objects (instances that are already loaded in the $evm workspace) can be queried using the following functions:
object_exists
get_object_names
get_object_attribute_names
get_vmdb_object
Working with Object Attributes
Workspace object attributes can be queried, retrieved and set using the following functions:
attribute_exists
get_attribute
get_decrypted_attribute
set_attribute
set_attributes
set_encrypted_attribute
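As an illustrative sketch of the set_* pattern (the argument names mirror those of the get_attribute example shown earlier; the attribute name and value here are hypothetical, so check the role documentation before relying on them), setting an attribute might look like this:

- name: Set the "migration_phase" attribute on the root object
  manageiq_automate:
    workspace: "{{ workspace }}"
    set_attribute:
      object: "root"
      attribute: "migration_phase"   # hypothetical attribute name
      value: "pre_flight"            # hypothetical value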
Working with State Machines
A state machine’s state variables can be retrieved or set using the following functions when a playbook method is running in the state machine:
get_state_var_names
state_var_exists
get_state_var
set_state_var
set_retry
Return Values
The get_* and *_exists functions return their values into a variable using the register command, for example:
- name: Get the "rhel_subscription_pool" attribute from a configuration domain
  manageiq_automate:
    workspace: "{{ workspace }}"
    get_attribute:
      object: "/Configuration/OpenShift/Parameters"
      attribute: "rhel_subscription_pool"
  register: get_attribute_result

- debug: msg="Result:{{ get_attribute_result }}"
 
The return value from these commands is typically a hash, and the value of interest is stored under the value key in this hash. For example, the debug line above will print the following:
 
ok: [localhost] => {
    "msg": "Result:{'changed': False, 'value': 'Red Hat OpenShift Container Platform, Premium*', 'failed': False}"
}

If the variable is to be used elsewhere in the playbook it should therefore be referenced using the value key, for example {{ get_attribute_result.value }}.
 
Some of the functions return an object. For example, get_decrypted_attribute returns a hash whose value key itself points to a further object hash:
    
- name: Get the "ldap_user_password" encrypted attribute from a configuration domain
  manageiq_automate:
    workspace: "{{ workspace }}"
    get_decrypted_attribute:
      object: "/Configuration/OpenShift/Parameters"
      attribute: "ldap_user_password"
  register: get_decrypted_attribute_result

- debug: msg="Result:{{ get_decrypted_attribute_result }}"

ok: [localhost] => {
    "msg": "Result:{'changed': False, 'value': {'object': '/Configuration/OpenShift/Parameters', 'attribute': 'ldap_user_password', 'value': 'so_secret'}, 'failed': False}"
}

If the variable is to be used elsewhere in the playbook it should be referenced with the value.value key, for example {{ get_decrypted_attribute_result.value.value }}.
The manageiq-vmdb Role
VMDB objects that are exposed via the RESTful API can be manipulated using functions defined in the manageiq-core.manageiq-vmdb role. The role is usable from both playbook services and playbook methods.
 
Like the manageiq-automate role, this role also uses a manageiq_validate_certs internal variable to determine whether to validate the SSL connection back to the API. The default for this value is true, and this must be overridden to false if the default CloudForms self-signed SSL certificate is used. 
 
The following is an example of how the role can be used:
 
vars:
- manageiq_validate_certs: false

roles:
- manageiq-core.manageiq-vmdb

tasks:
- name: Get the service object
  manageiq_vmdb:
    href: "{{ manageiq.service }}"
  register: service

- name: Get the VM object
  manageiq_vmdb:
    href: "vms/18"
  register: vm

- name: Rename the service
  manageiq_vmdb:
    vmdb: "{{ service }}"
    action: edit
    data:
      name: "New Engineering VM"
      description: "VM created on {{ lookup('pipe', 'date +%Y-%m-%d %H:%M') }}"

- name: Add the VM to the service
  manageiq_vmdb:
    vmdb: "{{ service }}"
    action: add_resource
    data:
      resource:
        href: "{{ vm.href }}"
 
The actions available to an object are those described in the API reference guide.
CFME Appliance Hostname
If no appliance hostname has been set from appliance_console, the embedded Ansible provider may fail to connect back to itself when running these roles. An error such as the following may be logged:
 
MIQ(ManageIQ::Providers::EmbeddedAnsible::Provider#authentication_check_no_validation) type: [“default”] for [2] [Embedded Ansible] Validation failed: error, Failed to open TCP connection to https:443 (getaddrinfo: Name or service not known)
 
To prevent this error, always ensure that the appliance hostname has been defined.
Summary
This article has described two very useful Ansible roles that are supplied out-of-the-box with CloudForms. The following articles in this series illustrate further how the roles can be used by a playbook that is run as part of a CloudForms automation workflow.
 
Quelle: CloudForms

Quantum-safe cryptography: What it means for your data in the cloud

Quantum computing holds the promise of delivering new insights that could lead to medical breakthroughs and scientific discoveries across a number of disciplines. It could also become a double-edged sword, as quantum computing may create new exposures, such as the ability to quickly solve the difficult math problems that are the basis of some forms of encryption. But while large-scale, fault-tolerant quantum computers are likely years if not decades away, organizations that rely on cloud technology will want cloud providers to take steps now to help ensure they can stay ahead of these future threats. IBM Research scientists and IBM Cloud developers are working at the forefront to develop new methods to stay ahead of malicious actors.
Hillery Hunter, an IBM Fellow, Vice President and CTO of IBM Cloud, explains how IBM is bringing together its expertise in cloud and quantum computing with decades of cryptographic research to ensure that the IBM Cloud is providing advanced security for organizations as powerful quantum computers become a reality.
It’s probably best to start this conversation with a quick overview of IBM history in cloud and quantum computing.
IBM offers one of the few clouds that provide access to real quantum hardware and simulators. Our quantum devices are accessed through the IBM Q Experience platform, which offers a virtual interface for coding a real quantum computer through the cloud, and Qiskit, our open source quantum software development kit. We first made these quantum computers available in 2016. As of today, users have executed more than 30 million experiments across our hardware and simulators on the quantum cloud platform and published over 200 third-party research papers.
As a pioneer in quantum computing, we are taking seriously both the exciting possibilities and the potential consequences of the technology. This includes taking steps now to help businesses keep their data secure in the cloud and on premises.
How does security play into this? Why is it important to have a cloud that has security for quantum-based threats?
An organization’s data is one of their most valuable assets, and studies show that a data breach can cost $3.92 million on average. We recognized early that quantum computing could pose new cybersecurity challenges for data in the future. Specifically, the encryption methods used today to protect data in motion and at rest could be compromised by large quantum computers with millions of fault tolerant quantum bits or qubits. For perspective, the largest IBM quantum system today has 53 qubits.
To prepare for this eventuality, IBM researchers are developing a lattice cryptography suite called CRYSTALS. The algorithms in that suite are based on mathematical problems that have been studied since the 1980s and have not yet succumbed to any algorithmic attacks (that have been made public), either through classical or quantum computing. We’re working on this with academic and commercial partners.
These advancements build on the leading position of IBM in quantum computing, as well as decades of research in cryptography to protect data at rest and in motion.
How is IBM preparing its cloud for the post-quantum world?
We can advise clients today on quantum security and we’ll start unveiling quantum-safe cryptography services on our public cloud next year. This is designed to better help organizations keep their data secured while it is in-transit within IBM Cloud. To accomplish this, we are enhancing TLS and SSL implementations in IBM Cloud services by using algorithms designed to be quantum-safe, and leveraging open standards and open-source technology. IBM is also evaluating how we can provide services that include quantum-safe digital signatures, a high expectation in e-commerce.
While that work is underway, IBM Security is also offering a quantum risk assessment to help businesses discern how their technology may fare against threats and steps they can take today to prepare for future threats.
IBM also contributed CRYSTALS to the open source community. How will this advance cryptography?
Open-source technology is core to the IBM Cloud strategy. That’s why IBM developers and researchers have long been working with the open-source community to develop the technology that’s needed to keep data secured in the cloud.
It will take a community effort to advance quantum-safe cryptography and we believe that, as an industry, quantum-safe algorithms must be tested, interoperable and easily consumable in common security standards. IBM Research has joined OpenQuantumSafe.org and is contributing CRYSTALS to further develop open standards implementations of our cryptographic algorithms. We have also submitted these algorithms to the National Institute of Standards and Technology for standardization.
Some organizations might not worry about these security risks until quantum computing is widespread. Why should they be acting now?
Although large-scale quantum computers are not yet commercially available, tackling quantum cybersecurity issues now has significant advantages. Theoretically, data can be harvested and stored today and potentially decrypted in the future with a fault-tolerant quantum computer. While the industry is still finalizing quantum-safe cryptography standards, businesses and other organizations need to get a head start.
To get a head start, who better to partner with than a cloud company with real quantum hardware, leading cryptographers, open-source technology and an open-source standards commitment?
 
Resources:

Read the quantum security announcement in the IBM Newsroom.
Learn more about the quantum data risk assessment service from IBM Security.
Check out the seminar in the IBM Research Security Subscription service.
Download the IBM Institute for Business Value report: Wielding a double-edged sword: Preparing cybersecurity now for a quantum world.

 
IBM statements regarding its plans, directions and intent are subject to change or withdrawal without notice at the sole discretion of IBM. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release and timing of any future features or functionality described for our products remains at our sole discretion.
 
The post Quantum-safe cryptography: What it means for your data in the cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Democratizing Connectivity with a Containerized Network Function Running on a K8s-Based Edge Platform — Q&A

Last week, Mirantis’ Boris Renski joined Meghana Sahasrabudhe, Product Lead at Facebook Connectivity, for a webinar about the Magma project, which makes it possible to run Containerized Network Functions in an Edge Cloud environment, opening up a whole range of capabilities for providers that may otherwise be limited in what they can offer. We had so many questions that we didn’t get to answer them all, so we’re providing all of the questions and answers here.
You can also view the complete webinar and see the discussion of how Magma works and what it does.
Who is maintaining the Magma code?
The Connectivity team at Facebook is actively working to develop and maintain the code base.
Over time, we hope to engage more developers and partners to contribute to the project and add new features.
Where can I find the Magma code base?
Magma is available on GitHub at https://github.com/facebookincubator/magma.
Have you deployed Magma with other partners?
We’re currently working with multiple operators on this effort. We’re looking forward to expanding our work with more partners over time.
Are you trying to replace incumbent vendors?
No. Our goal with Magma is to expand the range of tools and systems available to new and incumbent mobile operators. With Magma, we are helping to enable operators to bring more people online who may not have access to affordable high-quality connectivity.
Why did you choose to open source the Magma project? Did you consider trying to sell it?
We are not in the business of becoming a hardware or software vendor. By sharing our code, we’re helping move the industry forward while giving other companies and individuals an opportunity to contribute to the project, and thereby ultimately drive greater, industry-wide impact.
How much Docker implementation efficiency is lost when running Virtlet for VM-based VNFs? Wouldn’t it be more efficient to re-write the VNFs natively using containers?
Yes, in a perfect world, all VNFs would use the microservices architecture and run in containers. But in reality most VNFs are still VMs. Sadly, a large portion of VNFs really replicate the software that ran in the proprietary physical appliance. So it’s not easy to containerize all the VNFs and run them in containers.
So at least for the short term, we have to live with VNFs running in VMs and requiring very specific environments. That’s why Virtlet becomes an important part of this overall architecture. Also, it provides a very nice transition path, so customers can use Virtlet in a Kubernetes environment, run VM-based VNFs, and then as more and more VNFs transition to containers, then the overall edge architecture doesn’t have to change.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with Edge cloud?
Depending on the use cases, an edge cloud may consist of any of the following:

Pure k8s cluster with Virtlet
Pure OpenStack clusters
A combination of OpenStack and k8s clusters

The overall architecture will depend on the use cases and edge applications to be supported.
What impact will this architecture have for usage with Virtlet once we introduce containerd instead of the dockerd/dockershim layer?
Virtlet already provides support for containerd, so there will be no impact.
Is Magma evolving to 5G SBA?
Yes, Magma is evolving to support 5G as well. That’s something that we’re evaluating right now; it’s on our roadmap.
Does the Magma solution have a containerized version as well?
It’s partially containerized. The orc8r is fully containerized, while the FeG and AGW are partially containerized: each runs as a VM, but the services inside the VM, such as radius and aaa, are containerized.
If PCRF is not available in the core network, what element can I use to replace it, is it the Access Gateway?
Magma provides a PCRF-like database which implements the PCRF function, with the only caveat that instead of Gx, which is based on Diameter, it supports an optimized interface between the Federation Gateway and this PCRF function. For talking to a PCRF, the Federation Gateway supports the Gx interface.
It’s not clear from the Magma architecture, is the orchestrator solution shown here a centralized deployment for all other edge clouds, or does each edge cloud use 3 servers for the control plane?
Each edge location has a minimum of 3 servers for the control plane. So if you have 10 edge locations, each with an instance of Magma running, then you would have 10 edge locations each running a minimum of 3 control plane servers.
What CNI are we using here for providing SCTP services?
It is the bridge CNI plugin. https://github.com/containernetworking/plugins/tree/master/plugins/main/bridge
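For reference, a bridge plugin configuration typically looks like the following (an illustrative sketch, not taken from the webinar; the network name and subnet are hypothetical):

{
  "cniVersion": "0.3.1",
  "name": "magma-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}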
What about the resiliency of federation and the orchestrator in case that particular node goes down? How are you handling node label changes for other nodes, and which component handles that?
The orchestrator component of Magma is basically a K8s pod, and the entire orchestrator app is actually a stateless app, so it’s probably the easiest one to manage the resilience for. So as long as your Kubernetes is running, which should be the case, because it’s running HA across three minimum nodes, then the K8s controller will be responsible for scaling and maintaining the resilience for orchestrator component of Magma.
For the FeG, Magma currently supports Active-Standby. On failover, the orc8r will switch over to the standby FeG.
Does the Orchestrator use some sort of ETSI framework for LCM functions of container VNFs?
I don’t think there is a well-established ETSI framework for what a CNF should look like. Something as close to a consensus as is possible is now emerging around what a CNF-friendly NFVi layer should look like: basically K8s running on bare metal, with OpenStack and other components running on top via Helm charts. There are a bunch of diagrams from ETSI, CNCF, OPNFV (basically all of the bodies that dictate the telco standards). The diagrams look very similar, but when it comes to the actual architecture of the CNF, I don’t think there is a common ETSI standard for what it should look like.
How does the client detect the degradation of service and when it is time to fall back?
There are two layers of orchestration happening here. There’s the orchestration done by the orchestrator component of Magma, which monitors the Magma components such as CWAGs or Access Gateways; the orchestrator is responsible for any degradation happening at that layer. And then there’s infrastructure-level degradation. For example, if a physical node died, or a component such as the Virtlet service hung up (things that are invisible to the orchestrator), that is managed by MCP Edge, specifically by the StackLight component.
StackLight deploys monitoring probes for every single service that runs in MCP Edge and continuously reports anything that is wrong. Moreover, there are automatic triggers that are configured for restarts or redeploys that DriveTrain is capable of executing to proactively remediate some of the problems at the infrastructure level.
Have you measured the data plane throughput between SGW – PGNGW and PGW-Internet? If yes, what is it and how do you simulate the traffic?
We have a collapsed S&PGW in Magma, with the goal of distributing the entire EPC function rather than scaling out parts of the EPC function independently. We have an extensive lab where we baseline performance with industry-standard tools like the TM-500 and IXIA.
So the Federation Gateway and Access Gateway are VM-based Pods managed by Kubernetes?
Yes, Virtlet is a CRI implementation, so all VMs are defined as Kubernetes Pods.
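As a hedged illustration (modeled on the upstream Virtlet examples rather than on Magma’s actual manifests; the image and selector values are illustrative), a VM is declared as an ordinary Pod that the Virtlet runtime picks up:

apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Tells the CRI proxy to route this pod to the Virtlet runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet
  containers:
  - name: cirros-vm
    # The image reference names a VM image rather than a Docker image
    image: virtlet.cloud/cirros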
What is the roadmap for Magma to support NB-IoT?
A number of 5G features are currently on the roadmap for Magma, including Narrow Band Internet of Things.
Magma is described as “5G ready.” What does that mean and, more importantly, what specifically is still missing in Magma to make it possible to deploy with 5G RAN?
By “5G ready” we mean that Magma is already built with 5G principles of distributed core, control plane/user plane separation, service based architecture, and so on. Magma can today already work in the 5G NSA mode. Once the 5G SA architecture is out at the end of the year, we will look into adding 5GC features to Magma. Those will be on the roadmap in 2020.
The post Democratizing Connectivity with a Containerized Network Function Running on a K8s-Based Edge Platform — Q&A appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Quay V3.1 Release Update with Bill Dettelback and Tom McKay – OpenShift Commons Briefing

 
In this briefing, Bill Dettelback, Red Hat’s Quay Engineering Manager, and Tom McKay, Engineering Lead for Quay, walk through Quay v3.1’s features, give a short demo of the new features and discuss the roadmap for future Quay releases, including a progress update on the open sourcing of Quay.
Slides: Quay 3.1 OpenShift Briefing
About Quay
Quay is a container image registry that enables you to build, organize, distribute and deploy containers. Quay gives you security over your repositories with image vulnerability scanning and robust access controls. Fully OCI compliant, Quay provides a scalable platform to host container images for organizations of any size.
Quay works out of the box as a standalone container registry, requiring only a database and reliable storage for your container images. With minimal infrastructure requirements, Quay is designed to scale easily as your container image volumes grow and can run on everything from a laptop to enterprise-class servers. If you are looking for enterprise-level support, Red Hat offers Quay.io as a hosted service, or Red Hat Quay for on-premise or private-cloud deployments.
Quay was first released in 2013 as the first enterprise hosted registry. Six years later, we’re celebrating the first major release of the container registry since it joined the Red Hat portfolio of products through the acquisition of CoreOS in 2018. Watch for an announcement soon of the coming open source release of Project Quay!
Join the Quay Community
Join the conversation and collaborate with the Quay Open Source community by joining the QUAY Special Interest Group on Google group here: https://groups.google.com/forum/#!forum/quay-sig
This group is for discussion, best practices, and help with deploying, integrating and implementing Quay.
Additional Resources:
Quay documentation
About OpenShift Commons
OpenShift Commons builds connections and collaboration across OpenShift and OKD communities, upstream projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. OpenShift Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this, we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
To stay abreast of all the latest announcements, briefings and events, please join the OpenShift Commons and join our mailing lists & slack channel.
Join OpenShift Commons today!
The post Quay V3.1 Release Update with Bill Dettelback and Tom McKay – OpenShift Commons Briefing appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Announcing Speaker Line-Up for Openshift Commons Gathering on Artificial Intelligence and Machine Learning (Oct 29/San Francisco)

Co-located with ODSC/West, this OpenShift Commons Gathering brings together the OpenShift, Kubernetes, Open Data Hub, OpenShift Machine Learning and AIOps SIG and Operator communities for a day-long OpenShift Commons Gathering on Artificial Intelligence and Machine Learning on October 28th, 2019 at the San Francisco Airport Marriott Waterfront.
Register Now!
The Bay Area event will focus on enabling Machine Learning and AI workloads on OpenShift, Red Hat’s Kubernetes platform, and will feature guest speakers from across the AI/ML ecosystem as well as key Red Hat AI/ML project leads and data scientists.
 
Speaker Line Up Announced!

Keynote: Baking AI Ethics into Your AI Infrastructure Today

Futurist Daniel Jeffries (Pachyderm)

Machine-Learning-As-A-Service Platform Deep Dive into OpenDataHub.io

Sherad Griffin (Red Hat)

Delivering On-Demand Analytics Environments for Data Scientists on OpenShift

Brandon Harris and Anirudh Pathe (Discover Financial Services)

Deploying Deep Learning Workloads with NVIDIA GPUs on OpenShift

Mehnaz Mahbub (Super Micro) and Mayur Shetty (Red Hat)

AI/ML and Operators Case Study: Databricks, Azure and Kubernetes

Azadeh Khojandi (Microsoft)

and that’s just the morning!
In the afternoon, we’ll be hosting another Round Robin of AI/ML Workload Production Case Studies; a panel on “Building Operators for Machine Learning Workloads”; a panel on “The Future and Ethics of AI”; and, closing the day, the ever popular “Ask-Me-Anything” (AMA) live Q&A session with Red Hat engineers, upstream project leads and data scientists.
The day includes an Evening Reception and Networking Event hosted by members of the OpenShift Commons community.
Full Schedule | Venue | Register Now!
 
About OpenShift Commons Gatherings
The OpenShift Commons Gatherings bring together experts from all over the world to discuss container technologies, best practices for cloud native application developers and the open source software projects. These events are designed for optimal peer-to-peer networking as ample time is given for networking with speakers and all the attendees.
To learn more about OpenShift Commons, please visit https://commons.openshift.org and join us soon!

And One More Thing! Take Part in a UX Design Research Workshop!

 
Co-located at this OpenShift Commons Gathering, the Red Hat UX Research team will be hosting a hands-on OpenShift UX Design workshop focused on the OpenShift Console. So if you are interested in participating, reach out soon and request an invite; space is limited to 20 existing OpenShift users. To request an invite, email schizari@redhat.com.
The post Announcing Speaker Line-Up for Openshift Commons Gathering on Artificial Intelligence and Machine Learning (Oct 29/San Francisco) appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Which button should I click on?

As a CloudForms user, do you ever get frustrated, wondering which button to click on or how to get to your selected location?
 
If your answer to any of these or similar questions is yes, then let me tell you that your life is going to get easier. Why does CloudForms need to be that complicated? The truth is that it doesn’t and we (with your help) are working on it.
 
Have you already noticed any design changes in CloudForms? Has your menu changed, or do some reports make more sense now? Sometimes small and at first sight irrelevant changes can bring you peace of mind without you even noticing. We are working on getting rid of all the unnecessary steps in your workflow to let you focus on your tasks. We are eliminating painful, illogical constraints, unnecessary information inputs and more, to reduce questions like which button to click on. Our job may be invisible, but that means we are doing it right.
 
Who are we? 
We are part of the Red Hat User Experience Design team (UXD team) which is working very hard to make CloudForms and other Red Hat products more user-friendly. We are interaction designers, visual designers, user researchers, use-case architects, and front end developers from all around the world. We promote and enable a consistent user experience and are continuously working on improvements.
 
How do we do it? 
“Design in the open” 
We apply open source ideas to the design, testing, and development of products and technology solutions across our entire portfolio, not just CloudForms. We work in upstream communities, like ManageIQ, where innovation benefits from this open collaboration. We also created PatternFly, an open-source design system that provides common resources to design and build responsive, accessible user experiences.
PatternFly design system
Creating a cohesive user experience within a product as complex as CloudForms (not to mention an entire portfolio) is a major challenge. With PatternFly we can increase the consistency of the user experience in CloudForms and other Red Hat products while decreasing the time spent on UX design and front-end development.
 

 
Customer feedback
Great user experiences are never designed inside a bubble. A crucial part of the iterative design process is collaboration. Without input and feedback from our users about requirements, designs, or implementation, we would be creating in the dark. That is why transparency and collaboration with our users and engineers are key factors in creating a successful user experience. 
 
Let’s make CloudForms better together!
If you would be interested in joining us for user testing or user interviews, let us know: https://forms.gle/xqPpyc6XUCjAWKkUA.
 
Quelle: CloudForms