How IT operations teams can thrive in the age of OSS transformation

When everything at your business changes so quickly, how can IT operations teams be effective and responsive to maximize customer service?
We will try to answer this question when we talk about our approach to operations support systems (OSS) in an age of OSS transformation at TM Forum Live in Nice, France next week.
Analysts agree the next frontier of IT operations is to directly support the business in a fast-moving, agile world. Our approach to operations management starts with a deeper understanding of the environment. We are building operations solutions that are dynamic, cognitive, and hybrid-oriented to tell you what is happening as it happens—or even before it happens.
IBM is introducing dynamic service management to support lifecycle management of your operational applications and services. You’ll get the operational context you need to understand and act on emerging problems. Agile Service Manager starts with a scan of your total infrastructure and then interacts with the change agents in your environment, incrementally updating topology information as changes occur—whether or not you intended them. Knowing what the environment looks like right now, combined with knowing when important changes occurred, can radically speed problem remediation. It is the key to mastering automated lifecycle solutions.
The IBM Netcool family of solutions also remains critical to communications service providers across the globe. Recent IBM innovations in cognitive computing enable the solution to hit what I call the Three Es:

Effectively identify important problems
Allow teams to operate efficiently
Detect emerging problems before they impact customers

These analytics solutions process event and performance information, learn about your environment and identify emerging problems and opportunities for operations improvement.
All these solutions are as hybrid as the organizations using them. They manage applications and services distributed across on-premises equipment as well as private and public cloud-based solutions.
Come talk to us about how Netcool and the Agile Service Manager lay the groundwork for real-time OSS, speeding operations response and enabling fully automated orchestration. Learn more about why IBM cognitive, dynamic, and hybrid solutions are right for managing your business. We would love to discuss them with you. To request a meeting during the week at TM Forum, click here.
For more information on IBM's point of view, check out this white paper from Analysys Mason. Or take a look at the most recent Gartner OSS Magic Quadrant.
The post How IT operations teams can thrive in the age of OSS transformation appeared first on Cloud computing news.
Source: Thoughts on Cloud

Measuring flood resilience with IBM Bluemix

According to the United Nations Office for Disaster Risk Reduction, almost half of all losses due to natural disasters, both humanitarian and financial, are caused by floods. Yet 87 percent of funding available is allocated to post-disaster relief, recovery and cleanup, as stated in a jointly funded report from the Global Facility for Disaster Reduction and Recovery at the World Bank and the Overseas Development Institute.
Floods can literally wash away community development that has taken years to build.
The Z Zurich Foundation, funded by Zurich Insurance Company Ltd. and Zurich Life Insurance Company Ltd., initiated a global flood resilience program to advance knowledge, develop expertise and design strategies to help communities better manage flood risks, bringing together different organizations in a Flood Resilience Alliance.
The Alliance defines flood resilience as the community’s ability to pursue its social, ecological and economic development and growth objectives while managing disaster risk over time in a mutually reinforcing way.
Measuring flood resilience
To effectively direct interventions (and measure their impact), the Alliance needed to gather information from communities and the wider environment in which those communities were located. To make that happen, it needed to pull all the data together into a measurement tool in a centrally accessible location. However, no existing toolkit fit the requirements.
Because Zurich has a longstanding relationship with IBM, when the Alliance identified the scope of its needs for the flood resilience initiative, it chose to work with IBM Global Business Services to develop an IBM Bluemix Public solution.
The Bluemix solution comprises questionnaires that are administered through a mobile app or web-based interface and a secure platform for data analysis.
Through discussions, research and on-the-ground experience in communities, combined with Zurich risk expertise, the foundation developed more than just a measurement tool. It has created a holistic framework that fosters understanding of flood resilience and how it can be developed.
Focusing on pre-event resilience building
Research from the program suggests that there’s a 5:1 benefit to budgeting resources prior to a flood rather than after. This means that for every dollar spent before the event, $5 in costs can be saved after the event.
The Bluemix measurement framework and tools help the program to deliver tangible evidence that demonstrates communities should shift their priorities from post-event funding to pre-event funding. Doing so will have a significant impact on improving flood resilience.
Learn more about how the Zurich Foundation is measuring flood resilience.
The post Measuring flood resilience with IBM Bluemix appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenStack Summit – Mirantis Activities for May 11

Don’t forget to power up with Mirantis for the last day of OpenStack Summit:

Booth Activities

10:30am-11:00am
Meet the Neutron Expert: Kevin Benton, Sr. Software Engineer

1:00pm-1:30pm
Meet the Kubernetes Expert: Ihor Dvoretskyi, Kubernetes Community Engineer – T2

4:00pm-4:30pm
Book Giveaway*: Understanding OPNFV (*while supplies last)

Presentations

Thursday, 11:00am-11:40am
Level: Intermediate
Scheduler Wars: A New Hope
(Jay Pipes, Mirantis)

Thursday, 11:30am-11:40am
Level: Beginner
Saving one cloud at a time with tenant care
(Bryan Langston, Mirantis; Comcast)

Thursday, 3:10pm-3:50pm
Level: Advanced
Scheduler Wars: Revenge of the Split
(Jay Pipes, Mirantis)

Thursday, 5:00pm-5:40pm
Level: Intermediate
Terraforming OpenStack Landscape
(Mykyta Gubenko, Mirantis)

 The post OpenStack Summit – Mirantis Activities for May 11 appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Using OpenStack: Leveraging Managed Service Providers

Since 2011, when OpenStack was first released to the community, the following and momentum behind it have been amazing. In fact, it quickly became one of the fastest-growing open source projects in the history of open source. Now, with nearly 700 community sponsors, over 600 different modules, and over 50,000 lines of code contributed, OpenStack has become the platform of choice for much of today's private and public cloud infrastructure.
This kind of growth doesn't happen by chance. It happens because businesses and organizations alike have quickly experienced the *real* benefits OpenStack delivers, whether that's greater efficiency, faster time to market, automated infrastructure management, or simply saving money, to name a few.
However, as OpenStack technology and the cloud market mature, the ways vendors deliver OpenStack and the ways businesses choose to consume it have introduced many new methodologies and options. These options give customers the flexibility to determine the consumption method that best fits their unique business, letting them reap all the benefits of OpenStack with minimal disruption, all while adhering to their IT operational goals, policies, and staff capabilities.
There are many ways that users can consume OpenStack to benefit their IT business, whether it's built on premises or off. One option that has come from this maturity is a "managed" cloud, delivered by a managed service provider (MSP). This option allows customers to maintain a private cloud, either on premises or off, but leave the burden of deployment, configuration, and day-to-day management to a hired, experienced team of experts. While this does cost a monthly or annual subscription to retain those services, it relieves you of the complexity of doing it all yourself. Many businesses find that their internal IT teams are understaffed, lack the right skills, or are simply better off applying their resources elsewhere.
In this case, businesses might want to consider an OpenStack managed service provider to help move their business into the digital age and create modern cloud services to offer their internal end users or external customers.
At Red Hat, we believe OpenStack is a key component of digital transformation and of moving organizations to a modern cloud solution stack, and we've worked hard to establish Red Hat OpenStack Platform as an industry standard for private and public cloud infrastructure. As a result, we have hundreds of customers, including the likes of BBVA, Cambridge University, FICO, NASA's Jet Propulsion Laboratory, Paddy Power Betfair, Produban, Swisscom, UKCloud, and Verizon, to name a few. In addition, we've spent years building deep, engineering-level partnerships across our partner ecosystem to provide a robust, enterprise-level cloud that is capable of standing up to the rigors of production deployments.
In particular, we've been working to establish strong partnerships with managed service providers, including engineering and product-level integration, so that our customers get a consistently high level of quality regardless of how they choose to consume OpenStack. We also recognize that businesses around the globe operate at different levels and have their own preferences for specific, strategic technology partners. So rather than work with only one global provider, we wanted to stick to what Red Hat does best and provide choice to our large, global customers.
First, we started with Rackspace, one of the original creators of OpenStack. With Rackspace recognizing the quality and open source leadership Red Hat maintains, we knew we could make OpenStack's benefits more accessible to customers, and after years of collaboration it made sense to come together on an OpenStack managed service offering. We then worked more closely with Cisco to release Cisco Metacloud (formerly called Metapod) powered by Red Hat OpenStack Platform, as we know many companies rely on Cisco for their datacenter infrastructure and service needs. More recently, we announced a joint offering with IBM, Bluemix Private Cloud with Red Hat, which also includes Red Hat Ceph Storage to help customers meet their storage needs at scale.
And while some customers may choose a global service provider like the three just mentioned, we also recognize that some prefer smaller, regional service providers, whether to adhere to security policies or simply to support local businesses. To that end, we've established a large and growing ecosystem of regional managed service providers such as UKCloud (UK), Detacon (Saudi Arabia), Swisscom (Switzerland), Epbi (Netherlands), Blackmesh (North America public sector), NEC America (public sector), and more. These regional providers can help you establish a foothold in the digital age by moving to a scalable and more secure private cloud that meets the demands of your customers and supports the future growth of your business.
In addition to our expanding ecosystem of MSP partnerships, we’ve also empowered our existing customers with the flexibility of utilizing their existing Red Hat subscriptions with these certified managed service providers, should they choose to. Existing customers can utilize our Cloud Access program to help maintain business continuity with their current Red Hat Subscriptions.
Our goal is to help businesses like yours with their digital transformation journey to the cloud. Regardless of how you choose to consume the latest software technologies like Red Hat OpenStack Platform or Red Hat Ceph Storage, we work hard to ensure we’re always putting our customer needs first, providing long-term stability with minimal disruption to your business, and building everything on open standards and APIs to provide the flexibility and choice you need to meet the demands of your growing business. To learn more about Red Hat’s cloud technologies or find a certified managed service provider near you, reach out to us anytime. We look forward to helping you achieve your digital transformation goals!
Source: RedHat Stack

With Four New Partners, China Mobile Opens NFV Testlab, Partners with Mirantis to Promote NFV Development

Mirantis joins ARM, Cavium and Enea to test multiple NFV scenarios and services

OpenStack Summit, Boston – May 10, 2017 – China Mobile announced that it has signed a memorandum of understanding (MoU) with UMCloud (Mirantis in China), ARM, Cavium and Enea, the third batch of companies to partner with the China Mobile Open NFV Lab. These four partners will carry out a series of tests covering multiple typical NFV scenarios and services.

China Mobile Open NFV Lab was established in early 2015 to provide an international and open test environment for the industry. The first of its kind in Asia, the lab has been certified by OPNFV as part of its testing infrastructure. In July 2015, China Mobile signed an MoU with the first batch of 9 partners, including Huawei, Red Hat, and WindRiver. In January 2016, China Mobile signed an MoU with the second batch of 4 partners, including Ericsson and H3C. The lab now has 17 partners, ranging from IT providers to communications technology companies, including chip manufacturers, hardware vendors, NFV platform vendors, virtual network function (VNF) manufacturers and test instrument manufacturers.

“Since launching Mirantis Cloud Platform just 3 weeks ago, we have made significant announcements with Vodafone, Fujitsu, and now China Mobile,” said Boris Renski, Mirantis CMO and co-founder. “Companies are rapidly ‘cloudifying’ their businesses and Telcos are in the forefront of this trend. OpenStack is making it possible.”

The lab has made significant progress since its foundation: it has completed multi-vendor tests of vIMS, vEPC and other services, and it has promoted the integration and testing of the OPNFV open source platform alongside multiple commercial NFV platforms. The lab's test results also feed into the China Mobile NovoNet test network for large-scale testing and validation.

Mirantis Managed OpenStack departs from the traditional software-centric model that revolves around licensing and support subscriptions. Instead, Mirantis is pioneering an operations-centric approach, where open infrastructure is continuously delivered with operations service level agreements (SLAs) owned by either Mirantis or the customer. Software updates no longer happen once every 6-12 months; they are introduced in minor increments on a weekly basis and with no downtime.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

The post With Four New Partners, China Mobile Opens NFV Testlab, Partners with Mirantis to Promote NFV Development appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Advantages and complexities of integrating Hadoop with object stores

Object storage is the ultimate solution for storing unstructured data today.
It is low cost, secure, fault tolerant, and resilient. Object stores expose a RESTful API in which each object has a unique URL, enabling users to easily manipulate objects. Major cloud vendors like IBM, Amazon, and Microsoft all provide cloud-based object storage services.
Fortunately, Hadoop integrates easily with object stores, not just the Hadoop Distributed File System (HDFS). The Hadoop code base contains a variety of storage connectors that offer access to different object stores.
All the connectors share the file system interface and can easily be integrated into various MapReduce flows. These connectors have become the standard choice for other big data engines, such as Apache Spark, that interact with object stores.
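As an illustration of that shared interface, a connector-backed object store path can be used wherever an HDFS path would be. Here is a sketch using the S3A connector (the bucket name and the examples jar path are placeholders, and the S3A credentials are assumed to be configured in core-site.xml):

    # List a bucket and run a MapReduce example job directly against object store paths
    hadoop fs -ls s3a://mybucket/iot-archive/
    hadoop jar hadoop-mapreduce-examples.jar wordcount \
        s3a://mybucket/iot-archive/ s3a://mybucket/results/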

Data locality no longer exists with this approach, compared to HDFS, but the ability to scale compute and storage independently means greatly reduced operational costs. There is no need to copy data into an HDFS cluster; Hadoop can access object stores directly. Typical use cases employ object stores to store Internet of Things (IoT) or archive data, then use the Hadoop ecosystem to run analytic flows directly on the stored data.
Hadoop file system shell operations and why object storage is not a file system
While Hadoop file system shell operations are widely used with HDFS, most of them were not built to work with object stores, and object stores often don't behave as these operations expect. Treating object storage like a file system is a common mistake.
Directories in HDFS are the core component used to group files into different collections. For example, the following will recursively create three directories “a,” ”a/b” and “a/b/c.”

This enables an upload into “a/b/c” via:
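A sketch, with data.txt as a placeholder local file:

    hadoop fs -put data.txt a/b/c/data.txt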

Object stores, however, have different semantics. A data object's URI consists of the bucket and the object name, and an object name may contain delimiters. A bucket listing can then use both a delimiter and a prefix to retrieve only the relevant objects.
This has a certain analogy with listing directories in file systems. However, write operation patterns in object stores are very different compared to file systems.
For example, in an object store, a single RESTful PUT request will create an object “a/b/c/data.txt” in “mybucket” without having to create “a/b/c” in advance.
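A sketch of such a request against a generic S3-compatible endpoint (the endpoint is a placeholder, and authentication headers are omitted for brevity):

    # A single PUT creates the object; no intermediate "directories" exist or are needed
    curl -X PUT --data-binary @data.txt \
        "https://s3.example.com/mybucket/a/b/c/data.txt"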

This happens because object stores support hierarchical naming and operations without the need for directories.
The move command is another interesting example.
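With the Hadoop shell, a move looks something like this (illustrative paths):

    hadoop fs -mv a/b/c/data.txt a/b/c/data-final.txt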

The move command internally uses rename. In a file system, rename is an atomic operation. Normally, any new file is first written to a temporary file and, upon completion, renamed to its final name. This keeps the file system consistent and stable in the face of failures, and ensures that only complete files exist.
The rename operation is an integral part of any Hadoop write flow. On the other hand, object stores don’t provide an atomic rename. In fact, rename should be avoided in object storage altogether, since it consists of two separate operations: copy and delete.
Copy is usually mapped to a RESTful PUT request or RESTful COPY request and triggers internal data movements between storage nodes. The subsequent delete command maps to the RESTful DELETE request, but usually relies on the bucket listing operation to identify which data must be deleted. This makes a rename highly inefficient in object stores, and the lack of atomicity may leave data in a corrupted state.
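To make that cost concrete, here is roughly what a single rename amounts to against an S3-style API (a sketch; the endpoint and object names are placeholders, and authentication is omitted):

    # 1. Server-side copy of the object to its new name
    curl -X PUT -H "x-amz-copy-source: /mybucket/a/b/c/data.txt.tmp" \
        "https://s3.example.com/mybucket/a/b/c/data.txt"
    # 2. Delete the original object
    curl -X DELETE "https://s3.example.com/mybucket/a/b/c/data.txt.tmp"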
Hadoop file system shell operations are part of the Hadoop ecosystem, but many of them, such as creating directories and renaming, are better avoided with object stores. In fact, all write flows from the Hadoop shell should be avoided with object stores. Object stores provide their own command-line interfaces, which are preferable to the Hadoop shell for these operations.
In my next post, I will explain the actual costs of shell operations and some of the issues addressed by the Stocator project. Stocator offers superior performance compared to other connectors, and it’s being used as part of the IBM Data Science Experience.
To hear more, attend my joint talk with Trent Gray-Donald, “Hadoop and object stores: Can we do it better?” at the next Strata Data Conference, 23-25 May 2017, in London. I will also be presenting with Graham Mackintosh: “Very large data files, object stores, and deep learning – lessons learned while looking for signs of extra-terrestrial life” at Spark Summit, San Francisco, 5-7 June 2017.
The post Advantages and complexities of integrating Hadoop with object stores appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenStack Summit – Mirantis Activities for May 10

What's new in OpenStack? Hear from our resident PTLs at OpenStack Summit. Here's what we're up to on Day 3:

Booth Activities

10:30am-11:00am
Meet the Training Expert: Chad Miller, Senior Technical Instructor

10:30am-11:00am
Demo: MCP’s Service Orchestration: (More Than) Infrastructure-as-Code

4:00pm-4:30pm
Book Giveaway*: Understanding OPNFV (*while supplies last)
Didn’t make it to Boston? Download the OPNFV e-book for free.

Presentations

Wednesday, 9:50am-10:30am
Level: Intermediate
Project Update – Neutron
(Kevin Benton, Mirantis)

Wednesday, 11:00am-11:40am
Level: Intermediate
Project Update – Nova
(Jay Pipes, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Kuryr-Kubernetes: The seamless path to adding Pods to your datacenter networking
(Ilya Chukhnakov, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
OpenStack: pushing to 5000 nodes and beyond
(Dina Belova and Georgy Okrokvertskhov, Mirantis)

Wednesday, 4:30pm-5:10pm
Level: Intermediate
Project Update – Rally
(Andrey Kurilin, Mirantis)

 The post OpenStack Summit – Mirantis Activities for May 10 appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Deploying CloudForms at Scale

One of the challenges of deploying CloudForms to manage a large environment is knowing how to tune it: which knobs to turn and which dials to watch.
Red Hat's Systems Engineering team has just completed a document entitled “Deploying CloudForms at Scale”. It describes the architectural components that affect large-scale deployment, and details the monitoring, troubleshooting and scaling measures that can be taken to optimally tune each component.

The document is divided into three sections:
Part I – Architecture and Design

Architecture discusses the principal architectural components that influence scaling: appliances, server roles, workers and messages.
Regions and Zones discusses the considerations and options for region and zone design.
Database Sizing and Optimization presents some guidelines for sizing and optimizing the PostgreSQL database for larger-scale operations.

Part II – Component Scaling

Inventory Refresh discusses the mechanism of extracting and saving the inventory of objects – VMs, hosts or containers for example – from an external management system.
Capacity and Utilization explains how the three types of C&U worker interact to extract and process performance metrics from an external management system.
Automate describes the challenges of scaling Ruby-based automate workflows, and how to optimize automation methods for larger environments.
Provisioning focuses on virtual machine and instance provisioning, and the problems that sometimes need to be addressed when complex automation workflows interact with external enterprise tools.
Event Handling describes the three workers that combine to process events from external management systems, and how to scale them.
SmartState Analysis takes a look at some of the tuning options available to scale SmartState Analysis in larger environments.
Web User Interface discusses how to scale WebUI appliances behind load balancers.
Monitoring describes some of the in-built monitoring capabilities, and how to setup alerts to warn of problems such as worker restarts.

Part III – Putting it into Practice

Design Scenario takes the reader through a hypothetical design, scaling CloudForms Management Engine appliances in a region with several zones to manage a hybrid cloud.

The document is available here: https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_cloudforms_at_scale/
Source: CloudForms

Managing Secrets on OpenShift – Vault Integration

Credentials are environment-dependent configuration values that need to be kept secret and should be readable only by subjects with a need to know. In this article, I present an integration with Vault from HashiCorp as one approach to meeting strict secret management requirements. This orchestration builds on work previously done by Kelsey Hightower.
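As a minimal sketch of the kind of lookup involved (the Vault address, token variable, and secret path below are placeholders rather than details from the article):

    # Read a credential from Vault over its HTTP API, using a token the client has been issued
    export VAULT_ADDR="https://vault.example.com:8200"
    curl -s -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/myapp/database"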
Source: OpenShift

Building effective data governance with cloud

Travis Perkins, a UK-based builders’ merchant, has put considerable resources into improving its websites and building a mobile channel, because customers expect to be able to shop online with their mobile devices. Builders always have a smartphone with them; it’s the tool of choice when they’re on site. DIYers expect to shop online at any time.
Travis Perkins has more than 20 brands that serve the construction industry and the home improvement market. The company has more than 2,000 retail outlets in the United Kingdom and Ireland selling general building materials, tools, landscaping materials, and more.
To control some of its essential data processes, Travis Perkins uses IBM Business Process Manager (BPM) on Cloud and has instituted multiple high-level data governance processes. As a result, the data is available to help ensure that Travis Perkins customers get the products they expect, whether ready for pick-up at the branch or delivered to their site.

Travis Perkins is now poised to overcome challenges presented by older systems at some of the merchant branches that weren't originally set up to follow current validation standards for data entry. It will also be able to start breaking down data silos, a consequence of its growth through acquisition, which left multiple sets of information in different systems with no interconnectivity.
Better data governance means better customer service
The company recognized that, as it moved forward, it was important to be able to scale to provide performance solutions for key applications. IBM Cloud is a good way for the company to achieve that goal. One of its current projects, Highway to Cloud, will see Travis Perkins move as many key applications as possible into a cloud-based environment within the next three years. It has already moved some websites to the cloud, and most of its newer software applications (including the IBM BPM on Cloud data governance solution) are software-as-a-service offerings.
The company’s IT department wants to transform from an order taker to an order shaper, which means the department will work closely with the business to define requirements, helping it achieve what it wants to do, such as using data to detect patterns in buyer behavior and to be better equipped to market to customers.
To learn more, view the press release or case study.
Learn more about IBM Business Process Manager on Cloud.
The post Building effective data governance with cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud