How to put the IBM built-in data scientist to work for you

In modern IT operations teams, one of the biggest challenges is monitoring an increasingly complex environment—across many different tools—with fewer people. On top of that, teams face more pressure to avoid outages. And due to the immediacy of social media, outages can become very public, very quickly, negatively affecting customer sentiment toward the company’s brand.
Some companies are choosing to employ data scientists to help them overcome challenges like these. The data scientist can use machine learning libraries to build a custom solution to help monitor their environment for potential problems.
There’s a better option if you do not want to be in the business of building and maintaining custom tools: an automated data scientist. This is a tool that can learn the normal behavior of your time series data to help you avoid service-impacting outages. It can also unify your performance monitoring systems into a single pane of glass, discover mathematical relationships to help perform root cause analysis and consolidate multiple anomalies into one problem.
With IBM Operations Analytics, a cognitive data scientist is essentially built into the product. The cognitive data scientist automatically creates and maintains the most suitable data models for your monitoring data. It intercepts analytic output, tests it and only notifies your team of high-confidence anomalies. To help operations teams take action, the technology delivers insights that include forecasts, discovered relationships, correlations and anomaly history.
How does the built-in data scientist help IT operations?
First, the team doesn’t need to focus on how the insights were achieved (no new hires, no new skill-sets, no statistical headaches). They can focus on what they do best: delivering great services, assisted by machine learning. Because the “data scientist” is in the code, actionable insights can be achieved in real-time and at scale. When IT environments change, the IBM technology will simply adapt and learn the “new normal,” avoiding the need to manually adapt data models and thresholds.
Perhaps the biggest bang for your buck is what IBM calls the “performance manager of managers.” Typically, centralized operations teams have between 20 and 40 performance managers, each requiring domain knowledge and configuration settings to create alerts. The IBM technology takes feeds from any performance manager and provides a single solution to dynamically set and maintain thresholds across your entire infrastructure and applications. And because the baselines can be highly seasonal, they are consistently more effective than traditional manual methods. The IBM technology can actually reduce noise while delivering increased efficiency.
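The idea of dynamically learned, seasonal baselines can be sketched in a few lines of code. The snippet below is a simplified illustration of the general technique, not IBM's actual implementation: it learns a per-hour-of-week baseline from historical samples and flags only values that fall well outside that slot's normal range.

```python
import statistics
from collections import defaultdict

def learn_baseline(samples):
    """Group historical (hour_of_week, value) samples into per-slot baselines."""
    buckets = defaultdict(list)
    for hour_of_week, value in samples:
        buckets[hour_of_week].append(value)
    # For each hour-of-week slot, remember the mean and standard deviation.
    return {h: (statistics.mean(v), statistics.pstdev(v)) for h, v in buckets.items()}

def is_anomaly(baseline, hour_of_week, value, tolerance=3.0):
    """Flag a value deviating more than `tolerance` std-devs from its seasonal mean."""
    mean, stdev = baseline[hour_of_week]
    return abs(value - mean) > tolerance * max(stdev, 1e-9)

# Weekday mornings are busy, weekend mornings quiet: a single static threshold
# would either miss weekday spikes or page the team on normal weekend load.
history = [(9, 100.0), (9, 110.0), (9, 105.0),     # hour 9 on a weekday
           (129, 10.0), (129, 12.0), (129, 11.0)]  # the same hour on a weekend
baseline = learn_baseline(history)
print(is_anomaly(baseline, 129, 100.0))  # True: weekday-level load is abnormal on a weekend
print(is_anomaly(baseline, 9, 100.0))    # False: normal weekday load
```

A production system would of course refresh these baselines continuously so that, as the environment changes, the "new normal" is learned automatically.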
The data scientist in practice: Banking
One real-world example comes from the banking industry. One IBM banking client is using IBM Operations Analytics technology to manage their online banking application. The solution helps them identify performance anomalies, which the bank’s operations team uses to take action.
Over a three-month period, the team reduced major incidents on the banking application by 85 percent, from 20 to three. Think about the value this team achieved through machine-assisted proactive operations:

85 percent fewer interruptions to the online banking service
85 percent fewer chances of revenue loss
85 percent less chance of brand-damaging feedback circulating on social media

Stay tuned for more IBM Operations Analytics insights
In this post I highlighted one of my favorite client value stories and explained how the unique IBM approach can help you achieve similar results without specialized skill sets.
In the next post, Ian Manning, lead developer for IBM Operations Analytics, will take us under the hood. He will explain how IBM differs from competitors, and most importantly how scalable proactive operations is enabled through actionable insights on performance data.
In the third post, Kristian Stewart, senior technical staff member for IBM Analytics and Event Management, will explain how our approach delivers effectiveness and efficiency gains, at massive scale, through actionable insights from event data.
Finally, to complete the series, Jim Carey, offering manager for Netcool and BSM products, will discuss how IBM is meeting the need to shift to DevOps. He’ll demonstrate strong new value for cognitive and agile operations.
Interested in learning more? Check out what’s possible for your business with IBM Operations Analytics.
The post How to put the IBM built-in data scientist to work for you appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mirantis Doubles Down on NFV; Optimizing Mirantis Cloud Platform for Telcos

The post Mirantis Doubles Down on NFV; Optimizing Mirantis Cloud Platform for Telcos appeared first on Mirantis | Pure Play Open Cloud.
AT&T, Vodafone, Saudi Telecom, China Mobile rely on Mirantis to easily deploy and update NFV via DriveTrain

SUNNYVALE, Calif., July 27, 2017 (GLOBE NEWSWIRE) — Mirantis today announced a series of innovative NFV-focused updates to Mirantis Cloud Platform (MCP), optimized for easy deployment, operations and updates via DriveTrain.

“MCP now includes significant new enhancements for NFV, available for customers to consume via the DriveTrain toolchain,” said Boris Renski, Mirantis co-founder and CMO. “Leading Communications companies are selecting Mirantis to enable their VNFs and unlock a ‘disaggregated’ NFV stack that’s tuned for high performance and based on open source standards and non-proprietary infrastructure hardware.”

Mirantis continues to add capabilities supporting NFV for telecom operators, cable providers and enterprises. These new capabilities include significant new functionality for NFV, providing the VIM (including SDN controller) + NFVi layers of the ETSI NFV reference architecture. Specifically, they include:

OVS-DPDK over bonded interfaces: Allows users to consume higher bandwidth over a single link aggregated interface.
VLAN-aware VMs: Enables users to consume significantly fewer vNICs, where previously a separate vNIC was required for each VLAN. This dramatically reduces the networking complexity of the virtualized environment.
Per-VF QoS: Bandwidth capping on a per-virtual-function level permits fine-grained traffic shaping and prevents noisy-neighbor syndromes.

With MCP, Mirantis departs from the traditional software-centric method that revolves around licensing and support subscriptions. Instead, the company is pioneering an operations-centric approach, where open infrastructure is continuously delivered with an operations SLA, either through a managed service or by the customer’s own team. This way, software updates no longer happen once every 6 to 12 months, but are introduced in minor increments on a bi-weekly basis, with no downtime.

Announced in April, Mirantis Cloud Platform includes leading open source software such as OpenStack and Kubernetes, continuously delivered via the DriveTrain Continuous Integration / Continuous Delivery (CI/CD) pipeline and provided to customers in a unique build-operate-transfer delivery model that ensures successful hybrid cloud operations at scale.

Mirantis Cloud Platform is:

Open Cloud Software — provides a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis Cloud Platform to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN).
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain — sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility in customizing the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight — enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stacks through a unified set of software services and dashboards. StackLight:

Avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
Includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
Focuses on SLA. The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

The build-operate-transfer model provides a turnkey experience, with Mirantis operating the cloud for customers for a period of at least six months with up to four nines SLA prior to offboarding operational responsibility to the customer’s team, if desired. This delivery model ensures that not just the software, but also the customer’s team and process are aligned with DevOps best practices.

To learn more about Mirantis Cloud Platform, watch an overview video and sign up for a live demo at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Increasing NFV agility in Mirantis Cloud Platform

The post Increasing NFV agility in Mirantis Cloud Platform appeared first on Mirantis | Pure Play Open Cloud.
With the onset of the digital age, the need for agility has become paramount. The network connectivity offered by telecoms is no longer a premium service; it has become an on-demand service with instantaneous setup and tear down. The premium service is now the set of services offered on top of the network as an application. With flexibility now essential, all of that purpose-built equipment, which was a strength in the past, has become the biggest hurdle to competing in the digital age, contributing to decreasing revenues and increasing costs.
So change was a necessity, and thus was born Network Functions Virtualization (NFV). NFV makes it possible to do most of what the telcos were doing using specialized hardware, but with disaggregated network functions made up of software that could be adapted for new situations on COTS hardware.
All of this requires fast-evolving infrastructure resource management, of course, and for that reason, NFV has become virtually inseparable from the OpenStack cloud platform. Mirantis, along with partners and competitors, has been working steadily with early adopters to make sure that OpenStack has what it needs to be well suited for NFV deployments. In particular, for the past several years, Mirantis has worked within the telco community to support adoption of OpenStack as the primary Network Functions Virtualization Infrastructure (NFVi) and Virtualized Infrastructure Manager (VIM).
These early adopters of NFV and SDN for telco network transformation have spoken about the initial successes of this approach, as well as the areas that need to be addressed in order for NFV to reach the next level — in other words, massive adoption across the globe.
Some areas our customers and partners identified as crucial include:

CI/CD for the infrastructure layer: One of the biggest hurdles to NFV is that NFV and its ecosystem are continuously evolving, so operators need a proven path to seamlessly absorb new innovations into every component of NFV, including the infrastructure layer. To solve that problem, we need to build Infrastructure as Code to enable infrastructure lifecycle management (LCM).
Future proof the infrastructure layer: It’s not enough to be able to manage the infrastructure; we need to make sure that we’re avoiding the need for forklift upgrades when major changes come along, such as the move to support container based, cloud native VNFs.
End-to-End automation including VNF onboarding and monitoring: This is a key requirement for business agility which is critical for lowering time to market and revenue acceleration. It enables optimal resource utilization and prevents stranded/stolen assets.
Strong open source communities:  No single organization can afford to innovate at speeds essential for the transformation of extremely complex telco network infrastructures, so our customers recognize the need for strong community support for the components of the NFV architecture, especially the management and orchestration (MANO) and virtualization layers of NFVI within the ETSI NFV reference architecture.

Our customers include service providers all over the world, so these problems have been top of mind for us for some time, and we’ve been working to solve them.  For example:

Mirantis Cloud Platform (MCP) includes DriveTrain, which provides a platform for managing virtualized networks using infrastructure as code.
In addition to DriveTrain, MCP includes the ability to easily add Kubernetes and containers, making it an ideal future-proof platform for telcos. What’s more, DriveTrain makes it possible to add the “next big thing” in a manageable way.
Mirantis is actively contributing to both ONAP and OPNFV, and currently working on solutions for VNF onboarding and monitoring.
Mirantis is an open source company, and as such, all the components are built on open source tools, so service providers can lean on a global pool of resources for innovation in the infrastructure area.

In fact, the latest version of MCP focuses on the specific needs of NFV workloads, including their operationalization, or orchestration and automation within the context of a telco network. For example, MCP includes:

Capacity management of SR-IOV NICs through QoS controls. Bandwidth capping on a per virtual function level permits fine-grained traffic shaping and prevents noisy-neighbor syndrome.
Better reliability, higher bandwidth and improved load balancing with OVS-DPDK support on bonded NICs. This also enables operators to take advantage of existing assets. For example, you can utilize 10G NICs, when available, instead of investing in 40G NICs.
Improved performance for DPDK by pinning individual queues to cores with NUMA affinity.
The ability to run telco VNFs that require simultaneous connectivity to multiple networks through support for VLAN aware VMs.

One thing that we know for certain is that telcos and service providers can’t afford to ignore NFV.  As addicted to their phones as many people are now, the “unlimited” network capability that is expected with the upcoming 5G standard has the potential to make connectivity seem like it is the fourth essential ingredient for human survival (after water, air and shelter). And as complex as 5G will be, NFV is critical for it to become a reality. 5G requires a dynamic hierarchical architecture; between that and requirements for network slicing, and cloud-based radio access networks (C-RAN), virtualization of the networking infrastructure is essential.
Accordingly, Mirantis has a rich roadmap that focuses heavily on NFV for 5G enablement over the next 6 to 18 months, and we can’t wait to share it with you.
For further information about NFV or any of the topics mentioned here, please contact Mirantis.
Source: Mirantis

Poseidon Project teaches effective crop irrigation using IBM Cloud

According to Planet Herald, one of the top five environmental challenges of our time is lack of clean, fresh water. UNICEF has shared some startling worldwide facts:

1,400 children die every day from disease linked to poor water
768 million people use unsafe drinking water
2.5 billion people lack sanitation facilities

Many people take clean water for granted, but for others, it’s a luxury. Water should be a basic human right, not a commodity.
Farming wastes water
Excessive water use for agriculture is leaving rivers, lakes and underground water sources dry in many irrigated areas, according to the World Wildlife Fund. The agricultural sector uses 70 percent of the world’s accessible fresh water, which is three times more than industry and more than eight times more than municipalities use.
If only farmers could learn how to water their crops more efficiently.
Now they can. The Dutch Courage Foundation teamed with the IBM Benelux Center for Advanced Studies to create the Poseidon Project, an initiative to reduce agricultural water usage worldwide.
The Poseidon Project aims to raise awareness about the overuse of water in global agriculture and develop low-cost technology for farmers in developing countries to support them in consuming water in smart, efficient and effective ways.
The Poseidon Project uses soil moisture sensors to help farmers know when to irrigate their crops.

The internet of plants
The Poseidon Project works with school children in an eight-week curriculum to educate them about the inefficient use of water in global agriculture. It then develops affordable technologies for farmers in developing countries to reduce water usage. The foundation sends kits to participating schools where children plant mustard seeds in a miniature farm, connect a wireless sensor through Bluetooth to the IBM Cloud and measure plant health with an infrared camera.
Each plant uses a Raspberry Pi computer to monitor soil moisture, temperature and air pressure. Then, using the MQTT protocol, data is uploaded to the cloud. The Watson Internet of Things (IoT) Foundation is the message broker between the plants and the IBM Bluemix cloud-based application platform, which stores the data in an IBM Cloudant database. The plants send notifications to users’ mobile phones as well as Twitter. Messages include:
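To make the telemetry flow concrete, here is a hedged sketch of how a Raspberry Pi sensor might package a reading for MQTT. Watson IoT device events are published on topics of the form `iot-2/evt/<event_id>/fmt/<format>`; the event name ("status") and the payload fields below are illustrative, not the project's actual schema.

```python
import json

def build_event(moisture_pct, temperature_c, pressure_hpa, event_id="status"):
    """Build the MQTT topic and JSON payload for one sensor reading."""
    topic = f"iot-2/evt/{event_id}/fmt/json"
    # Watson IoT conventionally wraps device data in a "d" object.
    payload = json.dumps({"d": {
        "moisture": moisture_pct,
        "temperature": temperature_c,
        "pressure": pressure_hpa,
    }})
    return topic, payload

topic, payload = build_event(23.5, 21.0, 1013.2)
print(topic)  # iot-2/evt/status/fmt/json
# A real device would now publish via an MQTT client such as paho-mqtt:
# client.publish(topic, payload, qos=0)
```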

“I’m thirsty! #poseidon.”
“Please give me water now. It’s not going to rain tomorrow! #poseidon”
“I’m happy today! #poseidon”

They also connect with the Weather Channel API to know whether it is going to rain or not, so students know how much water to give their plants.
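The watering decision combines the current soil moisture with tomorrow's forecast. A minimal sketch of that logic might look like the following; the threshold value and the exact decision rules are illustrative assumptions, with the message texts taken from the list above.

```python
def pick_message(moisture_pct, rain_expected_tomorrow, dry_threshold=30.0):
    """Choose a plant notification from soil moisture and tomorrow's forecast."""
    if moisture_pct >= dry_threshold:
        return "I'm happy today! #poseidon"
    if rain_expected_tomorrow:
        # Dry, but rain is coming: flag thirst without demanding action.
        return "I'm thirsty! #poseidon"
    return "Please give me water now. It's not going to rain tomorrow! #poseidon"

print(pick_message(45.0, False))  # I'm happy today! #poseidon
print(pick_message(15.0, False))  # Please give me water now. It's not going to rain tomorrow! #poseidon
```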
There is an element of gamification with Twitter for the students: which plant has the most prominent voice? (Which can get the most followers, likes or retweets?) This helps to raise awareness about the Poseidon Project and the global water shortage.
The data that is generated about the mustard plants is analyzed and applied to agriculture.
IBMers join the Poseidon Project: plants at the IBM Amsterdam office
Reducing worldwide water consumption
Irrigation is essential to grow the crops needed for food and clothing, but rain can’t replenish the river basins of the world fast enough. For example, the Aral Sea was at one time among the largest lakes in the world, but it has been drying up over the last 50 years since the rivers that fed it were diverted for irrigation. Because of the increasing demand for agricultural products, most of the river basins on the planet are heading in the same direction. This contributes to extreme regional climate change: the area around the Aral Sea went from a relatively moderate climate to extremely cold winters (as low as negative 50 degrees Celsius).
It’s said that faith the size of a mustard seed can move mountains. How apt that the Poseidon Project has chosen tiny mustard seeds to be the seeds of hope. Today’s students will inherit our planet.
The Poseidon Project has begun testing the technology in fields in the Netherlands and Russia. It will roll out the equipment free of charge to farmers in Africa and Central Asia. It is ultimately expected to reduce worldwide water consumption by 30 percent.
To get involved with the Poseidon Project, contact info@dutchcourage.org and follow @PoseidonBM.
The post Poseidon Project teaches effective crop irrigation using IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Increase the business value of APIs

Application Programming Interfaces, or APIs, define the data contract between two pieces of code. APIs used to be exclusively the domain of developers. In this post, I’ll show you how APIs provide the freedom for systems to evolve independently and create new opportunities for business growth. While APIs can exist at all levels of an application, I’ll focus here on the APIs that enterprise systems expose and consume.
APIs decouple the client
A key advantage of building an API layer is decoupling client applications from the back-end systems they require. This allows developers to evolve applications and core systems independently. In fact, a well-designed API layer can allow a core system to be fully replaced with limited risk, as our company recently experienced with an insurance client.
We didn’t set out to replace this client’s system of record, which was built and maintained by a third party. As we partnered with the client to transform their digital experience on desktop, tablet and mobile, we determined that a Node.js layer between the core system and the mobile applications would facilitate their mediation, aggregation and data transformation needs. The API layer was also made available to the client’s business partners, who found it more consumable than direct access to their core system. Importantly, it also allowed the client to replace their core system with one from a different vendor.
Migrating systems can be a very difficult and expensive undertaking. Many companies will stay on systems that have become antiquated or inadequate to support the growing needs of their business. While the API layer itself needed to be updated to integrate with the new core system, the client’s mobile and web applications—as well as their business partners’ integrations—continued to operate with few or no changes.
Just as the API layer provides the freedom to enhance core systems, it also allows client applications to iterate as needed to meet the needs of the end user. Abstracting the core systems from the front-end helps businesses to take a user-centric approach.
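The decoupling described above boils down to a translation step: the API layer maps whatever the core system returns onto a stable public schema, so clients never see the legacy format. The sketch below is hypothetical; the field names on both sides are invented for illustration.

```python
def to_public_quote(core_response: dict) -> dict:
    """Map a legacy core-system payload onto the stable public API schema."""
    return {
        "quoteId": core_response["QT_NUM"],
        # Legacy system stores the annual premium in cents; expose dollars.
        "premium": round(core_response["ANN_PREM_CENTS"] / 100, 2),
        "currency": core_response.get("CCY", "USD"),
    }

# When the core system is replaced, only this mapping changes; every
# client of the public API keeps working untouched.
legacy = {"QT_NUM": "Q-1001", "ANN_PREM_CENTS": 123456, "CCY": "USD"}
print(to_public_quote(legacy))  # {'quoteId': 'Q-1001', 'premium': 1234.56, 'currency': 'USD'}
```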
APIs building on existing investments
When a company provides access to its existing product or business processes through APIs, it can open up paths for new business.
New partnerships can emerge when a business can more widely expose its expertise. For example, the aforementioned insurance company was able to increase the distribution of its homeowners product by partnering with an insurance company that focused primarily on auto insurance. By making a quote API available, the auto insurance company could include our client’s quotes for home insurance. They could then offer home and auto insurance as a bundle, which allowed the homeowners insurance company to reach a new set of potential customers.
APIs allow businesses to expose their services to new markets with the help of third-party developers. Take, for example, a brick-and-mortar photo printing company. Creating an internal API for various client applications (perhaps the website and in-store kiosks) to order photo prints in a uniform manner makes sense architecturally, but doesn’t necessarily drive new business through the door. However, by using a revenue-sharing model to encourage third-party developers to integrate their applications through an API, the company can dramatically reduce the friction for users to order photo prints from the apps they already use.
Single point of entry with APIs
Finally, building an API layer allows a single point of entry into disparate systems. Client applications can depend on an API layer with consistent patterns and conventions, even though the underlying systems are often built with different technologies, years apart and by different teams.
A common entry point allows a variety of client applications to behave consistently across digital touchpoints. When your client applications only need to concern themselves with the modern API instead of an assortment of legacy core systems, development teams can focus on the features that provide the most value to users and the business.
Most businesses have legacy systems they’d replace if they had the opportunity. But often cost, complexity and risk prevent needed changes from happening. Building an API layer as the single point of entry between these systems and client applications limits risk when updating or replacing these core systems. Large-scale changes to core systems likely won’t happen overnight, but an API can serve as a layer of protection and flexibility when your business is ready.
Have questions about how your company could benefit from an API-driven business strategy? Download the full “Executing Digital Transformation” study to learn more.
The post Increase the business value of APIs appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpsTools on RDO

OpsTools for RDO

CentOS SIG

In the CentOS community there are Special Interest Groups (SIGs) that focus on specific areas such as cloud, storage, virtualization or operational tools (OpsTools). These groups are created either to raise awareness of a subject or to tackle its development with focus. Among them, the Operational Tools (OpsTools) SIG focuses on:

Performance Monitoring
Availability Monitoring
Centralized Logging

OpenStack Operational Tools

While the OpsTools packages are created for the CentOS community, they are also applicable to and available for RDO. More information can be found on GitHub.

Centralized Logging

The centralized logging solution has the following components:

A Log Collection Agent (Fluentd)
A Log relay/transformer (Fluentd)
A Data store (Elasticsearch)
An API/Presentation Layer (Kibana)

The minimum hardware requirements are:

8GB of Memory
Single Socket Xeon Class CPU
500GB of Disk Space

Detailed installation instructions can be found here.
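As an illustration of how the Fluentd relay ties into Elasticsearch, a minimal configuration might look like the following. The hostnames and ports are placeholders, and the match stanza assumes the fluent-plugin-elasticsearch output plugin is installed; consult the instructions linked above for the actual setup.

```
<source>
  @type forward
  port 24224
</source>

<match **>
  @type elasticsearch
  host elasticsearch.example.com
  port 9200
  logstash_format true
</match>
```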

Availability Monitoring

The availability monitoring solution has the following components:

A Monitoring Agent (Sensu)
A Monitoring Relay/Proxy (RabbitMQ)
A Monitoring Controller/Server (Sensu)
An API/Presentation Layer (Uchiwa)

The minimum hardware requirements are:

4GB of Memory
Single Socket Xeon Class CPU
100GB of Disk Space

Detailed installation instructions can be found here.
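For a flavor of how availability checks are defined, a Sensu check is a small JSON document. The example below is a sketch: the command assumes the sensu-plugins-http check script, and the endpoint and subscriber names are placeholders.

```json
{
  "checks": {
    "check_keystone_api": {
      "command": "check-http.rb -u http://controller:5000/v3",
      "subscribers": ["openstack-control"],
      "interval": 60,
      "handlers": ["default"]
    }
  }
}
```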

Performance Monitoring

The performance monitoring solution has the following components:

A Collection Agent (collectd)
A Collection Aggregator/Relay (Graphite)
A Data Store (Whisper)
An API/Presentation Layer (Grafana)

The minimum hardware requirements are:

4GB of Memory
Single Socket Xeon Class CPU
500GB of Disk Space

Detailed installation instructions can be found here.
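To show how the collectd agent feeds Graphite, here is a minimal snippet using collectd's write_graphite plugin. The hostname is a placeholder; the instructions linked above cover the full configuration.

```
LoadPlugin cpu
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "graphite">
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
```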

Ansible playbooks for deploying OpsTools

Besides manually installing the OpsTools, there are Ansible roles and playbooks to automate the installation process; instructions can be found here.
Source: RDO