Running Istio Service Mesh on OpenShift
Learn how to install and run Istio.io, an open platform that provides a uniform way to connect, manage, and secure microservices, on OpenShift.
Source: OpenShift
Does your business understand process automation on the cloud? You’ve probably heard the buzz around concepts like machine learning, but are these advancements a threat to your business or an opportunity for future success?
It depends on you. The history of business technology is in many ways a history of automation. From the first assembly lines to revolutionary cognitive systems, people have always seen the potential for automating processes to do things faster, smarter and better.
Let’s take a look at the current revolution in business process automation and consider three key points for understanding this opportunity.
1. We’re entering a new wave in process automation
A combination of three forces is pushing a new wave of process automation across multiple industries: technological advancements, increased pressures from businesses and growing awareness of automation best practices.
While the spotlight often shines on areas like machine learning, other technologies such as big data and predictive analytics also play an important role. Cognitive intelligence’s ability to adapt to changes and respond to unstructured data can enable nothing short of a revolution.
2. Process automation lets you focus on your mission
Even companies in the same space can have vastly different missions. The real benefit of automation is that by improving processes it frees your team to work on what’s important to your business.
For example, the UK National Health Service Blood and Transplant automated parts of the system managing its organ donation and selection process to ramp up efficiency. Not only did this improve the system, it also freed the agency to refocus on bringing care to the UK. What can you accomplish when automation frees up more of your resources?
3. Automation needs a strategy
The innovation in automation technology has brought numerous new tools to the market, many of which have the potential to reshape your industry. However, without a clear strategy for making the most of these tools, identifying the best tasks to automate and managing changes within your organization, you may not be able to capture the full value of the opportunity. A good automation strategy can be as important as the tools themselves.
To learn more about process automation, take a look at a few IBM clients’ success stories. If you’d like to continue the conversation or ask me a question, please leave a comment or connect with me on LinkedIn.
The post Three things businesses should know about process automation appeared first on Cloud computing news.
Source: Thoughts on Cloud
Trust in cloud computing is essential for it to reach its full potential.
Surveys conducted on the European cloud market by IDC and others have identified concerns with security and data protection as the main inhibitors to cloud adoption. The European Commission was very much aware of the need to build trust when it launched the 2012 Communication on cloud computing. The commission rightly identified that data protection is core to developing trust and embarked on a strategy to build that trust. One action the Commission took, working with industry, was to create industry working groups on different topics. Among those was a working group on data protection and the development of a European Code under the Cloud Select Industry Group Code of Conduct.
The EU Code of Conduct has been developed to align with the EU’s General Data Protection Regulation (GDPR). As the May 2018 deadline approaches for EU member states to implement GDPR, signing up to the code sends a strong signal that an organization is well on its way to preparing for the new regulation.
The EU Data Protection Code of Conduct for cloud service providers is the result of more than four years of hard work. It is a voluntary code of conduct foreseen under the GDPR, providing guarantees over and above the minimum legal requirements for the protection of data in the cloud.
The principle behind the code is to improve and simplify the relationship between cloud vendors and cloud users. When cloud service providers sign up to the EU Code of Conduct, they commit to implementing robust data privacy and security policies that will stand up to the changing privacy landscape ahead. The EU Code of Conduct is a quality seal: “trusted cloud made in Europe.”
Interestingly, such codes are now being perceived in the market as a way to approach GDPR: one way for cloud service providers to prepare is to start early, and once GDPR is implemented the code will be one of the ways to demonstrate compliance.
Such codes are more than mere marketing slogans. They are developed with transparency and independent governance in mind, just like an internationally accredited standard such as ISO. They must ensure access for all to avoid anti-competitive behavior, for example. They also have to be accessible to small and medium-sized companies who may not be able to afford full-blown certification. And rather like open standards, they have to be transparent in the way they are governed.
On the other hand, a code such as the EU Code of Conduct actually sits at a level above such technical certification requirements. It requires that objectives such as providing adequate security based on the risk profile are already met, for instance, through the ISO 27001 security standard or the ISO 27018 privacy norms.
Of course, the code does not replace the service contracts that cloud providers draw up with their customers. If you like, it sits alongside the contract as a sort of health check for the user, ensuring that important topics relating to data protection in the cloud are properly addressed. If the cloud provider puts items into the contract that are against the code, they violate the code.
The EU Code of Conduct is uniquely positioned:
It is the only code that covers the full spectrum of cloud services, from software and platform through to infrastructure.
It’s the only code which EU authorities have been involved in developing. It is the result of four years of collaboration between the European Commission and the cloud community, including industry. The EU’s Article 29 Working Party, representing national Data Protection Authorities, gave input to the code.
It’s independently governed. Declarations are overseen by SCOPE Europe, an independent code monitoring body. SCOPE Europe will scrutinize cloud service provider applications to the code to check that they are compliant and will continually monitor services that are certified against the code, in line with GDPR requirements.
The EU Code of Conduct is open to cloud service providers of all sizes and from all cloud sectors who can commit to adhering to scrupulous data protection safeguards. Three of the global top five cloud service providers are members of the General Assembly and are working towards declaring services as soon as the latest version of the code is published. A prominent European SME cloud services provider also sits at the heart of the committee. We cater for excellence, whatever your organization’s cloud delivery model or size.
Learn more about IBM regulatory compliance solutions.
The post How the EU Cloud Code of Conduct helps build cloud trust in Europe appeared first on Cloud computing news.
Source: Thoughts on Cloud
Get started using OpenShift on Windows using the CLI (Command Line Interface) with the OpenShift Origin Client Tools (OC Tools). You can access powerful commands with this alternative to the web console for working with OpenShift locally or remotely.
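For a flavor of the workflow once the tools are installed, here are a few typical oc commands (a minimal sketch; the server URL, token, project name and image are placeholders, and the available flags depend on your client version):

$ oc login https://openshift.example.com:8443 --token=<token>   # authenticate against a cluster
$ oc new-project demo                                            # create a project to work in
$ oc new-app openshift/hello-openshift                           # deploy a sample application image
$ oc status                                                      # review what was created
$ oc get pods                                                    # confirm the pod is running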
Source: OpenShift
In this session, Brian Brazil, Founder of RobustPerception.io and core Prometheus developer, explains the core ideas behind Prometheus and describes how to get useful metrics from your applications, process that data, get alerts on what matters, and create dashboards to aid debugging on OpenShift.
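As a taste of what such a setup involves, the fragment below shows a minimal Prometheus scrape configuration (a sketch only; the job name and target address are placeholders rather than values from the session):

# prometheus.yml fragment: scrape an application's /metrics endpoint every 15 seconds
scrape_configs:
  - job_name: 'my-app'
    scrape_interval: 15s
    static_configs:
      - targets: ['my-app.example.svc:8080']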
Source: OpenShift
Doug Hellmann talks about release management in OpenStack Ocata, at the OpenStack PTG in Atlanta.
Source: RDO
Next year, IBM is bringing together InterConnect, World of Watson, Edge, Amplify, Connect and Vision to form one massive global conference: think. This new event merges IBM AI, Cloud, Data, Internet of Things (IoT), Systems, Security and Services to reflect the way we all work.
As part of this unification, IBM is hosting an online jam to give clients, IBM Business Partners, IBMers and tech enthusiasts of all types the opportunity to share their opinions and ideas on everything from exposition to sessions, entertainment, keynotes and so much more.
How would you describe your ideal tech conference? This is your chance to make it a reality.
What is a jam?
An online jam is a business collaboration tool that engages everyone from interns all the way up to the CEO, unleashing the brainpower of an entire enterprise, and in this case across enterprises, to solve the hardest problems.
Only a jam enables decision makers to derive actionable insights from thousands of participants contributing ideas, evolving perspectives, taking polls and forging new relationships. When you’re looking for collective intellect, there are few better options.
Get involved
Register now for the jam, then join at any time from 20 through 22 June to make your voice heard. You can participate for just three minutes, three hours or all three days. It’s up to you.
Once the jam is over, event organizers will analyze the findings to build a better event that truly fits the needs and desires of the community.
In the meantime, learn more about think.
The post Join the jam to help the IBM think conference meet your needs appeared first on Cloud computing news.
Source: Thoughts on Cloud
Explore three patterns for integrating monitoring agents with your OpenShift applications and learn how Init Containers can help you manage your applications in almost any imaginable scenario.
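As one illustration of the Init Container idea, the sketch below stages a monitoring agent into a shared volume before the application container starts (the image names are hypothetical placeholders, not a specific agent):

# Pod spec sketch: an init container copies an agent into an emptyDir volume
# that the application container then mounts.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent
spec:
  initContainers:
    - name: install-agent
      image: example/monitoring-agent-installer:latest   # placeholder image
      command: ['sh', '-c', 'cp -r /agent/. /opt/agent/']
      volumeMounts:
        - name: agent
          mountPath: /opt/agent
  containers:
    - name: app
      image: example/my-app:latest                        # placeholder image
      volumeMounts:
        - name: agent
          mountPath: /opt/agent
  volumes:
    - name: agent
      emptyDir: {}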
Source: OpenShift
[NOTE: This article is an excerpt from Understanding OPNFV, by Amar Kapadia. You can also download an ebook version of the book here.]
The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions (VNFs) that in turn constitute network services. We will look at two major considerations: how to write VNFs, and how to onboard them. We’ll conclude by analyzing how a vIMS VNF, Clearwater, has been onboarded by OPNFV.
Writing VNFs
We looked at three types of VNF architectures in Chapter 2: cloud hosted, cloud optimized, and cloud native. As a VNF creator or a buyer, your first consideration is to pick the architecture.
Physical network functions that are simply converted into a VNF without any optimizations are likely to be cloud hosted. Cloud hosted applications are monolithic and generally stateful. These VNFs require a large team that may or may not be using an agile development methodology. These applications are also dependent on the underlying infrastructure to provide high availability, and typically cannot be scaled out or in. In some cases, these VNFs may also need manual configuration.
Some developers refactor cloud hosted VNFs to make them more cloud friendly, or “cloud optimized”. A non-disruptive way to approach this effort is to take easily separable aspects of the monolithic application and convert them into services accessible via REST APIs. The VNF state may then be moved to a dedicated service, so the rest of the app becomes stateless. Making these changes allows for greater velocity in software development and the ability to perform cloud-centric operations such as scale-out, scale-in and self-healing.
While converting an existing VNF to be fully cloud native may be overly burdensome, all new VNFs should be written exclusively as cloud native if possible. (We have already covered cloud native application patterns in Chapter 2.) By using a cloud native architecture, developers and users can get much higher velocity in innovation and a high degree of flexibility in VNF orchestration and lifecycle management. In an enterprise end-user study conducted by Mirantis and Intel, the move to cloud native programming showed an average increase of iterations/year from 6 to 24 (4x increase) and a typical increase in the number of user stories/iteration of 20-60%. Enterprise cloud native apps are not the same as cloud native VNFs, but these benefits should generally apply to NFV as well.
Ultimately, there is no right or wrong architecture choice for existing VNFs (new VNFs should be designed as cloud native). The chart below shows VNF app architecture trade-offs.
Trade-offs between VNF Architectures
VNF Onboarding
The next major topic to consider when integrating VNFs into OPNFV scenarios is VNF onboarding. A VNF by itself is not very useful; the MANO layer needs associated metadata and descriptors to manage these VNFs. The VNF Package, which includes the VNF Descriptor (VNFD), describes what the VNF requires, how to configure the VNF, and how to manage its lifecycle. Along with this information, the VNF onboarding process may be viewed in four steps.
VNF Onboarding Steps
A detailed discussion of these steps is out of scope for this book; instead, we will focus on the VNF package.
For successful VNF onboarding, the following types of attributes need to be specified in the VNF package. This list is by no means comprehensive; it is meant to be a sample, and a brief sketch of how a few of these attributes might appear in a descriptor follows the list. The package may include:
Basic information such as:
Pricing
SLA
Licensing model
Provider
Version
VNF packaging (tar or CSAR etc.)
VNF configuration
NFVI requirements such as:
vCPU
Memory
Storage
Data plane acceleration
CPU architecture
Affinity/anti-affinity
VNF lifecycle management:
Start/stop
Scaling
Healing
Update
Upgrade
Termination
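To give a flavor of how a few of these attributes can be expressed, here is a minimal, hypothetical descriptor fragment loosely based on the TOSCA-NFV simple profile (type and property names vary between profile versions and MANO implementations):

# Hypothetical VNFD fragment: NFVI compute requirements for a single VDU
node_templates:
  example_vdu:
    type: tosca.nodes.nfv.VDU
    capabilities:
      nfv_compute:
        properties:
          num_cpus: 4        # vCPU requirement
          mem_size: 8 GB     # memory requirement
          disk_size: 40 GB   # storage requirement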
Currently, the industry lacks standards in the areas of VNF packaging and descriptors. Each MANO vendor or MANO project and each NFV vendor has its own format. By the time you add VIM-specific considerations, you get an unmanageably large development and interop matrix. It could easily take months of manual work for a user to onboard VNFs to their specific MANO+VIM choice because the formats have to be adapted and then tested. Both users and VNF providers find this process less than ideal. Both sides are always wondering which models to support, and what components to proactively test against.
The VNF manager (VNFM) might further complicate the situation. For simple VNFs, a generic VNFM might be adequate. For more complex VNFs such as VoLTE (voice over LTE), a custom (read: proprietary) VNFM might be needed, and would be provided by the VNF vendor. Needless to say, the already complex interop matrix becomes even more complex in this case.
In addition to manual work and wasted time, there are other issues exposed by the lack of standards. For example, there is no way for a VNF to be sure it will be provided resources that match its requirements. There may also be gaps in security, isolation, scaling, self-healing and other lifecycle management phases.
OPNFV recognizes the importance of standardizing the VNF onboarding process. The MANO working group, along with the Models project (see Chapter 5), is working on standardizing VNF onboarding for OPNFV. These projects address multiple issues including VNF package development, VNF package import, VNF validation/testing (basic and in-service), VNF import into a catalog, service blueprint creation, and VNFD models. The three main modeling languages being considered are UML, the TOSCA-NFV simple profile, and YANG:
UML: The Unified Modeling Language (UML) is standardized by the Object Management Group (OMG) and can be used for a variety of use cases. ETSI is using UML for standardizing their VNFD specification. At a high level, UML could be considered an application-centric language.
TOSCA-NFV simple profile: TOSCA is a cloud-centric modeling language. A TOSCA blueprint describes a graph of node templates, along with their connectivity. Next, workflows specify how a series of actions occur, which can get complex when considering various dependencies. Finally, TOSCA also allows for policies that trigger workflows based on events. The TOSCA-NFV simple profile specification covers an NFV-specific data model using the TOSCA language.
YANG: YANG is a modeling language standardized by IETF. Unlike TOSCA or UML, YANG is a network-centric modeling language. YANG models the state data and configurations of network elements. YANG describes a tree of nodes and relationships between them.
OPNFV is considering all three approaches, and in some cases hybrid approaches with multiple modeling languages, to solve the VNF onboarding problem. Given the importance of this issue, there is also considerable collaboration with outside organizations and projects such as ETSI, TMForum, OASIS, ON.Lab, and so on.
Clearwater vIMS on OPNFV
Clearwater is a virtual IP Multimedia Subsystem (vIMS) software VNF project, open-sourced by Metaswitch. It is a complex cloud native application with a number of interconnected virtual instances.
Clearwater vIMS
For OPNFV testing, TOSCA is used as the VNFD modeling language. The TOSCA blueprint first describes each of the nodes and their connectivity. A snippet of this code is shown below:
VNF Descriptor for Homestead HSS Mirror
homestead_host:
  type: clearwater.nodes.MonitoredServer
  capabilities:
    scalable:
      properties:
        min_instances: 1
  relationships:
    - target: base_security_group
      type: cloudify.openstack.server_connected_to_security_group
    - target: homestead_security_group
      type: cloudify.openstack.server_connected_to_security_group

homestead:
  type: clearwater.nodes.homestead
  properties:
    private_domain: clearwater.local
    release: { get_input: release }
  relationships:
    - type: cloudify.relationships.contained_in
      target: homestead_host
    - type: app_connected_to_bind
      target: bind
Next, the TOSCA blueprint describes a number of workflows. The workflows cover full lifecycle management of Clearwater. Finally, the blueprint describes policies that trigger workflows based on events.
Clearwater TOSCA Workflows
The TOSCA code fragment below shows a scale-up policy based on a threshold that triggers a workflow to scale the Sprout SIP router instances up from the initial 1 to a maximum of 5.
TOSCA Scaleup Policy and Workflow for Sprout SIP
policies:
  up_scale_policy:
    type: cloudify.policies.types.threshold
    properties:
      service: cpu.total.user
      threshold: 25
      stability_time: 60
    triggers:
      scale_trigger:
        type: cloudify.policies.triggers.execute_workflow
        parameters:
          workflow: scale
          workflow_parameters:
            scalable_entity_name: sprout
            delta: 1
            scale_compute: true
            max_instances: 5
Initial Deployment of the Clearwater VNF
Once the blueprint is complete, an orchestrator needs to interpret and act upon the TOSCA blueprint. For purposes of testing Clearwater, OPNFV uses Cloudify, a MANO product from Gigaspaces available in both commercial and open source flavors. Cloudify orchestrates each of the workflows described in the above blueprint. Specifically, the workflow to deploy the VNF looks like this:
Running this entire series of steps in an automated fashion in Functest requires the following:
Step 1: Deploy VIM, SDN controller, NFVI
Step 2: Deploy the MANO software (could be Heat, Open-O or Cloudify, which is the current choice). For testing purposes, it is possible to use the full MANO stack (NFVO + VNFM) or just the VNFM.
Step 3: Test the VNF. For project Clearwater, Functest runs more than 100 default signaling tests covering most vIMS test cases (calls, registration, redirection, busy, and so on).
We have talked about a specific VNF, but this approach is pragmatic enough to be applied to other VNFs – open source or proprietary. Using OPNFV as a standard way to onboard VNFs brings great value to the industry because of the complexity of the VNF onboarding landscape. No one vendor or user has the resources or time to perform testing against a full interop matrix. But as a community, this is eminently possible.
At this point, it is worth taking a bit of a detour to illustrate the power of open source. The initial project Clearwater testing work was done by an intern at Orange. The work became quite popular, and has been adopted by numerous vendors, influenced the OPNFV MANO working group, and even convinced some operators to use OPNFV as a VNF onboarding vehicle.
In summary, we saw how VNFs can target different application architectures, what is involved in onboarding VNFs, and a concrete example of how the Clearwater vIMS VNF has been onboarded by OPNFV for testing purposes. In the next chapter, we will discuss how you can benefit from and get involved with the OPNFV project.
[NOTE: This article is an excerpt from Understanding OPNFV, by Amar Kapadia. You can also download an ebook version of the book here.]
The post Writing VNFs for OPNFV appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis
In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).
Next, in Part 2 we demonstrate how to use that dynamic inventory with included, pre-written Ansible validation playbooks from the command line.
Time to Validate!
The openstack-tripleo-validations RPM provides all the validations. You can find them in /usr/share/openstack-tripleo-validations/validations/ on the director host. Here’s a quick look, but check them out on your deployment as well.
With Red Hat OpenStack Platform we ship over 20 playbooks to try out, and there are many more upstream. Check the community often as the list of validations is always changing. Unsupported validations can be downloaded and included in the validations directory as required.
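If you want to experiment with writing your own, the shipped playbooks follow a common shape, sketched below (the metadata values and the check itself are hypothetical; compare against the playbooks in the validations directory before relying on the exact fields):

# Sketch of a tripleo-validations-style playbook
- hosts: undercloud
  vars:
    metadata:
      name: Example custom validation
      description: Check that an example setting has a sane value
      groups:
        - pre-deployment
  tasks:
    - name: Fail if the example condition is not met
      fail: msg="Example check failed"
      when: false   # replace with a real condition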
A good first validation to try is the ceilometerdb-size validation. This playbook ensures that the ceilometer configuration on the Undercloud doesn’t allow data to be retained indefinitely. It checks the metering_time_to_live and event_time_to_live parameters in /etc/ceilometer/ceilometer.conf to see if they are either unset or set to a negative value (representing infinite retention). Indefinite Ceilometer data retention can lead to decreased performance on the director node and degraded capabilities for third-party tools that rely on this data.
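For reference, a configuration that passes this check might look like the following in /etc/ceilometer/ceilometer.conf (a sketch; 259200 seconds is three days, and your retention requirements may differ):

[database]
metering_time_to_live = 259200
event_time_to_live = 259200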
Now, let’s run this validation using the command line in an environment where we have one of the values it checks set correctly and the other incorrectly. For example:
[stack@undercloud ansible]$ sudo awk '/^metering_time_to_live|^event_time_to_live/' /etc/ceilometer/ceilometer.conf
metering_time_to_live = -1
event_time_to_live=259200
Method 1: ansible-playbook
The easiest way is to run the validation using the standard ansible-playbook command:
$ ansible-playbook /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml
So, what happened?
Ansible output is colored to help read it more easily. The green “OK” lines for the “setup” and “Get TTL setting values from ceilometer.conf” tasks represent Ansible successfully finding the metering and event values, as per this task:
- name: Get TTL setting values from ceilometer.conf
  become: true
  ini: path=/etc/ceilometer/ceilometer.conf section=database key={{ item }} ignore_missing_file=True
  register: config_result
  with_items:
    - "{{ metering_ttl_check }}"
    - "{{ event_ttl_check }}"
And the red and blue outputs come from this task:
- name: Check values
  fail: msg="Value of {{ item.item }} is set to {{ item.value or "-1" }}."
  when: item.value|int < 0 or item.value == None
  with_items: "{{ config_result.results }}"
Here, Ansible will issue a failed result (the red) if the “Check Values” task meets the conditional test (less than 0 or non-existent). So, in our case, since metering_time_to_live was set to -1 it met the condition and the task was run, resulting in the only possible outcome: failed.
With the blue output, Ansible is telling us it skipped the task. In this case this represents a good result. Consider that the event_time_to_live value is set to 259200. This value does not match the conditional in the task (item.value|int < 0 or item.value == None). And since the task only runs when the conditional is met, and the task’s only output is to produce a failed result, it skips the task. So, a skip means we have passed for this value.
For even more detail, you can run ansible-playbook in verbose mode by adding -vvv to the command:
$ ansible-playbook -vvv /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml
You’ll find that a wealth of interesting information is returned, and it is well worth the time to review. Give it a try on your own environment. You may also want to learn more about Ansible playbooks by reviewing the full documentation.
Now that you’ve run your first validation, you can see how powerful validations are. But the CLI is not the only way to run them.
Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!
In the final part of the series we introduce validations with both the OpenStack workflow service, Mistral, and the director web UI. Check back soon!
The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.
Source: RedHat Stack