Using New Relic to Monitor Applications on OpenShift

In this post, we show a way to configure New Relic to monitor an application running on OpenShift Container Platform. The repository includes a customized assemble script for Tomcat/JBoss EWS 8 that instruments an application with the New Relic Java agent. The logic for downloading and layering in the agent is baked into this assemble script.
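For illustration, a minimal sketch of the agent-layering step such an assemble script might perform; the directory, variable names, and download step here are assumptions, not the repository's actual script:

```shell
#!/bin/sh
# Hypothetical excerpt of a customized S2I assemble script.
AGENT_DIR="${AGENT_DIR:-/tmp/newrelic}"
mkdir -p "$AGENT_DIR"
# In the real script, the agent jar would be downloaded here, e.g.:
#   curl -sSfL -o "$AGENT_DIR/newrelic.jar" "$NEW_RELIC_DOWNLOAD_URL"
# The JVM is then told to load the agent at startup:
JAVA_OPTS="${JAVA_OPTS:-} -javaagent:${AGENT_DIR}/newrelic.jar"
echo "JAVA_OPTS=${JAVA_OPTS}"
```

Because the agent is wired in at build time, every pod started from the resulting image reports to New Relic without further configuration.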
Source: OpenShift

Cloud-based managed firewall protects children using IBM Analytics

Many parents wouldn’t dream of letting their young children use an internet-connected device without some type of filter. It’s important to protect young web searchers from arriving at undesirable websites. An innocent enough search term could be a double entendre, leading impressionable minds to things better left unseen.
One approach for parents is to install software that restricts content on devices or a home router. Typically, unless they're using a general rating scheme by age, parents need to know the sites they want to block ahead of time and set up their own block lists. This is time-consuming, and chances are they're going to miss something.
A firewall as a service for home use
ChildRouter from Cloud-Nanny offers an automated and intelligent way to filter web content with a firewall-as-a-service (FaaS) solution. Parents choose which categories of sites they will allow their kids to see, and Cloud-Nanny handles almost everything else. The solution decides whether to allow or block web requests without noticeable effect on the user's browsing experience. Using IBM dashDB, the processing check queries Cloud-Nanny's database and returns a decision in less than 40 microseconds.
ChildRouter uses machine learning algorithms running in IBM Analytics for Apache Spark together with AlchemyAPI to classify and categorize content in nearly real time. If the system is unsure about a site, it checks with the parents. Using that input, the model learns and gets better at classifying that type of site in the future.
How it works
The ChildRouter is a hardware appliance and a software appliance in one. This is one differentiator from other solutions, which reside in the browser. ChildRouter works independently of the operating system or browser.

Through a computer interface, parents can assign a device to a specific child. This means they can switch devices very easily within the family. For example, if your younger child wants to watch a movie on an adult’s iPad, parents can go to the ChildRouter interface on the iPad, set it under the younger child’s account and all the secure settings are applied on that device. Parents can do this with a PlayStation, Xbox, Wii or any other internet-connected device. It is much like the kind of managed firewall that a company would have, but more affordable.
ChildRouter users’ security policies follow them wherever they take their devices because the managed firewall is in the cloud.
The road ahead
ChildRouter is just the tip of the iceberg. Cloud-Nanny’s FaaS solution has applications outside the home because it can also block dangerous software such as malware, adware and viruses. Phishing attempts don’t work; the system recognizes the domain name is not correlated with the IP address of the website and doesn’t let it pass.
Cloud-Nanny envisions schools and public WiFi using ChildRouter. For example, coffee shops that offer free WiFi can guarantee that there will be no risk to the user. Even Internet of Things (IoT) devices can be monitored for unwanted behavior.
Cloud-Nanny developed ChildRouter and got the solution up and running in less than one year with IBM Bluemix. Find out more about how it came together.
Read about other IBM clients who are poised for success using the IBM Cloud as their foundation here.
Source: Thoughts on Cloud

Why hybrid cloud is not just a transitional environment

Let’s say you have a car you love. It runs great and you don’t have to spend much on repairs. There are really cool new touch-screen car stereos that connect to the internet, play movies, provide GPS and so on. Do you go buy a new car to get one of the new stereos? No. You get a new stereo that enhances the great car you have.
Hybrid cloud helps you in the same way. You can create amazing new capabilities that leverage the investments you have already made in your backend applications and the data you store. Leveraging cloud services with on-premises backends can add value even when there is no new cloud-native app. A common example is leveraging cloud analytics for new insight to on-premises data.
How do you figure out how cloud can drive the most value for your company? For one, you need advisors who have driven success for other businesses. If you look at this purely from a speeds-and-feeds, cost-saving view, you may miss the immediate value that hybrid cloud can provide.
Hybrid cloud = new capabilities
As I’ve said before, all companies are becoming software companies. Many companies are not sure where they should invest their resources and budget as they take on software opportunities. The value areas for businesses are the new interaction applications and services. Interaction capabilities are those that drive new kinds of exchanges with customers, partners and between applications.
A key aspect of driving this innovation is leveraging capabilities instead of building them. Cloud services are one of the fastest methods of driving value more quickly. So where are businesses creating impact?
Hybrid cloud strategic lessons to learn
Here are some hypothetical examples of both success and failure in embracing digital innovation.

Executives at a global financial services company knew they had to innovate to avoid losing more market share. They initially focused on mobile apps, but found them costly to build and too slow to get to market. They also found that they would frequently miss the mark with the initial app capabilities. They needed to figure out how to create innovative concepts and rapidly drive that innovation to their customers and partners.
One of the world’s largest retail companies was running an enormous innovation program. It included external hackathons, an internal center of excellence and developer community involvement. While this was generating a significant number of concepts, most of the ideas were not impacting the business.
A U.S. auto insurer found itself continually chasing its competitors’ digital capabilities. Its competition began creating partnerships and driving new models. The insurer began working to build its own partnerships but could not see how those partnerships would allow it to leapfrog what its competitors had already done.

There are multiple missteps in these approaches: not focusing on innovation that can drive clear business wins, investing without a strong concept of the desired outcomes, and failing to invest in the core capabilities that create innovation and get everything built.
Path to hybrid cloud success
To create stronger digital innovation strategies, the companies each followed these steps:

They engaged with service providers who understood how to deliver software products, not just projects.
They focused on an overall strategy that would enable them to evaluate, experiment, iterate and polish into products.
They focused on core technologies that would help them accelerate and allow them to leverage cloud services at scale as they grew their new capabilities.
Finally, they created partner and ecosystem objectives to drive innovation beyond their walls, recognizing that they could not do it all on their own.

Bridging from on-premises IT—where companies have valuable app capabilities and data—to the cloud is a turning point for most companies. Gartner refers to this technology core as the digital platform. If you don’t bridge to the cloud with production-quality integration, you can face significant failures that can harm business.
As a software products company, IBM focuses on enabling professional software production for our clients. Our hybrid cloud service Product Insights adds visibility and predictive management into existing on-premises and cloud software. This is a key enabler for creating the digital platform.
For more information on what Product Insights can deliver to your hybrid cloud, click here.
Source: Thoughts on Cloud

A tale of Tempest rpm with Installers

Tempest is a set of integration tests to run against an OpenStack cloud.
Delivering a robust, working OpenStack cloud is always challenging. To make sure what we deliver is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers such as puppet-openstack-integration, packstack, and tripleo-quickstart.
This is the story of how we integrated the RDO Tempest RPM package with these installers so it can be consumed by the various CIs rather than using raw upstream sources.

And the story begins here:

In RDO, we deliver Tempest as an RPM that anyone can consume to test their cloud. Until the Newton release, we maintained a fork of Tempest which contained the config_tempest.py script to auto-generate tempest.conf for your cloud, a set of helper scripts to run Tempest tests, and some backports for each release. From Ocata, we changed the source of the Tempest RPM from the forked Tempest to upstream Tempest, keeping the old source until Newton in RDO through rdoinfo. We are using the rdo-patches branch to maintain backported patches starting from the Ocata release.

With this change, we moved the config_tempest.py script from the forked Tempest repository to a separate project, python-tempestconf, so that it can be used with vanilla Tempest to generate the Tempest configuration automatically.

What have we done to make a happy integration between the Tempest RPM and the installers?

Currently, puppet-openstack-integration, packstack, and tripleo-quickstart make heavy use of RDO packages, so using the Tempest RPM with these installers is the best match.
Before starting the integration, we needed to prepare the ground: until the Newton release, all of these installers were using Tempest from source in their respective CIs.
We started the matchmaking of the Tempest RPM with the installers.
puppet-openstack-integration and packstack consume puppet modules, so in order to consume the Tempest RPM, we first needed to fix puppet-tempest.

puppet-tempest

It is a puppet module to install and configure Tempest, and the Tempest plugins for whichever OpenStack services are available, from source as well as from packages.
We fixed puppet-tempest to install the Tempest RPM from the package and create a Tempest workspace [https://review.openstack.org/#/c/425085/]. To use that feature through the puppet-tempest module, add install_from_source => 'false' and tempest_workspace => 'path to tempest workspace' to your tempest.pp, and it will do the job for you.
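Put together in a manifest, the settings described above might look like the following sketch (the class name and workspace path are assumptions beyond the two parameters named in the post):

```puppet
# Hypothetical tempest.pp fragment: install Tempest from the RDO
# package rather than from source, and set up a workspace for it.
class { '::tempest':
  install_from_source => 'false',
  tempest_workspace   => '/var/lib/tempest',
}
```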
Now we are using the same feature in puppet-openstack-integration and packstack.

puppet-openstack-integration

It is a collection of scripts and manifests for puppet module testing (which powers the openstack-puppet CI).
From the Ocata release, we added a TEMPEST_FROM_SOURCE flag in the run_tests.sh script.
Just change TEMPEST_FROM_SOURCE to false in run_tests.sh, and Tempest is then installed and configured from packages using puppet-tempest.
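The effect of the flag can be sketched as follows; the branch messages and package name here are illustrative, not the script's actual output:

```shell
#!/bin/sh
# Hypothetical sketch of the branch run_tests.sh takes on the flag.
TEMPEST_FROM_SOURCE="${TEMPEST_FROM_SOURCE:-false}"
if [ "$TEMPEST_FROM_SOURCE" = "true" ]; then
  TEMPEST_INSTALL="upstream git source"
else
  TEMPEST_INSTALL="RDO openstack-tempest package via puppet-tempest"
fi
echo "Tempest install method: $TEMPEST_INSTALL"
```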

packstack

It is a utility to install OpenStack on CentOS, Red Hat Enterprise Linux, or other derivatives in proof-of-concept (PoC) environments. Until Newton, Tempest was installed and run by packstack from the upstream source, with puppet-tempest doing the job behind the scenes. From Ocata, we replaced this with the Tempest RDO package. You can use this feature by running the following command:

$ sudo packstack --allinone --config-provision-tempest=y --run-tempest=y

It will perform a packstack all-in-one installation, then install and configure Tempest and run smoke tests on the deployed cloud.
We are using the same in RDO CI.

tripleo-quickstart

It is an Ansible-based project for setting up TripleO virtual environments.
It uses tripleo-quickstart-extras, where the validate-tempest role is used to install, configure, and run Tempest on a TripleO deployment after installation. We improved the validate-tempest role to use the Tempest RPM package for all releases supported upstream, keeping the old workflow while using the Ocata Tempest RPM, using ostestr to run Tempest tests for all releases, and using python-tempestconf to generate tempest.conf through this patch.

To see it in action, run the following commands:

$ wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
$ bash quickstart.sh --install-deps
$ bash quickstart.sh -R master --tags all $VIRTHOST

So the integration of the Tempest RPM with the installers is finally done, and it is happily consumed in the different CIs. This will help us test and deliver a more robust OpenStack cloud in RDO, as well as catch issues between Tempest and Tempest plugins early.

Thanks to apevec, jpena, amoralej, Haikel, dmsimard, dmellado, tosky, mkopec, arxcruz, sshnaidm, mwhahaha, EmilienM, and many more on the RDO channel for getting this work done over the last two and a half months. It was a great learning experience.
Source: RDO

ASP.NET on OpenShift: Getting started in ASP.NET

In parts 1 & 2 of this tutorial, I’ll be going over getting started quickly by using templates in Visual Studio Community 2015. This means that it’ll be for Windows in this part. However, I’ll go more in-depth with doing everything without templates in Visual Studio Code in a following tutorial, which will be applicable to Linux or Mac as well as Windows. If you’re not using Windows, you can still follow along in parts 1 & 2 to get a general idea of how to create a REST endpoint in .NET Core.
Source: OpenShift

8 Ways to be serverless and event-driven at InterConnect 2017

Based on the Apache OpenWhisk open source project, IBM Bluemix OpenWhisk is a serverless programming platform as a service with packaged access to more than 160 cognitive and other cloud services. OpenWhisk scalably executes code in runtime containers in response to configurable events or through direct invocation, without the need to manage infrastructure.
Besides being cost-effective at scale, OpenWhisk equally facilitates end-to-end mobile and Internet of Things (IoT) application development.
Check out these key sessions and labs.
Architecture and technical direction
Join OpenWhisk Lead Architect Michael Behrendt for an overview and current update.
Serverless, event-driven architectures and Bluemix OpenWhisk: Overview and IBM’s technical strategy
OpenWhisk Lead Developer Carlos Santana discusses the intersection of three key technologies:
Shaping the future of serverless APIs and microservices in IBM Bluemix
Featured client stories
International retail bank Santander uses IBM Bluemix OpenWhisk to optimize the repetitive daily task of receiving and processing check deposits. See the code behind an application that automatically processes each image added to an object storage service, invoking an external system to handle the transaction.
Serverless architectures in banking: OpenWhisk on IBM Bluemix at Santander
In creating an end to end mobile application DevOps pipeline with a single source repository, Wakefern Food Corporation uses OpenWhisk to broker JSON data between the mobile client and cloud services.
How to build homogeneously from one source repository to mobile and microservices targets
SiteSpirit, a Netherlands-based software developer, moved its data-intensive MediaSpirit tool onto OpenWhisk and added cloud data services on Bluemix to help customers implement advanced, flexibly programmable, cloud-based data analytics that optimize infrastructure utilization through auto-scaling.
MediaSpirit: A Bluemix and OpenWhisk love story
Labs
Get familiar with basic OpenWhisk programming structures: events, triggers/rules, and actions.
Working with IBM OpenWhisk in Bluemix
Use OpenWhisk programming structures to create a basic bot:
Serverless bots: Create efficient inexpensive event-driven bots with Node.js & OpenWhisk
Go a step further to use OpenWhisk and Watson to build an intelligent chatbot:
Build your first Cognitive Chat Bot using OpenWhisk
A version of this article originally appeared on the IBM Bluemix blog.
Source: Thoughts on Cloud

An Analyst Roundup on Red Hat and CloudForms

Over the past year, several analysts looked at Red Hat and its Cloud management Platform (CMP) solution, Red Hat CloudForms, to provide their point of view. The reviews were overwhelmingly positive, finding that Red Hat as a company was positioned for success and recognizing Red Hat CloudForms as a leading product that delivered substantial savings in both cost and efficiency. In this post, we provide a brief round-up of the various analysts’ reports.
Gartner Vendor Ratings
Gartner gave Red Hat an overall “Positive” rating in their 2016 Vendor Rating Report. This was the second year that Red Hat achieved a positive rating and it was based on Gartner’s assessment of Red Hat’s strategy, products and services, technology, marketing, and overall financial viability. Gartner found that Red Hat is well positioned as the most successful open-source software vendor, which should give IT buyers confidence in dealing with Red Hat.
Forrester Wave
Forrester named Red Hat as a leader in two reports looking at private cloud software suites and hybrid cloud management solutions. In “The Forrester Wave™: Private Cloud Software Suites, Q1 2016,” Forrester evaluated Red Hat’s Cloud Infrastructure product, including Red Hat CloudForms and Red Hat OpenStack Platform. Red Hat was cited as leading the evaluation of software suites with a powerful portal and top governance capabilities. The report evaluated cloud software providers along 40 criteria, with Red Hat receiving top marks for life-cycle automation, administrative portal usability and experience, permissions, compliance tracking, and capacity monitoring, all attributes tied closely to the functionality provided by Red Hat CloudForms.
In “The Forrester Wave™: Hybrid Cloud Management Solutions, Q1 2016,” vendors were assessed based on their current offerings, market presence, and strategy along 32 different criteria. Red Hat CloudForms was placed as a leader in this report as well, citing it as being among the “top choices for developer and DevOps teams concerned mainly with building applications that run across multiple clouds, with a strong preference for public cloud platforms.”
These reports demonstrate the industry-leading capabilities provided by Red Hat CloudForms, which, when combined with private or public cloud infrastructure, provides the flexibility and control required for digital transformation projects.
Forrester Total Economic Impact
Red Hat commissioned Forrester Consulting to conduct a Total Economic Impact (TEI) study for Red Hat CloudForms. The study examined one company’s IT operations in detail, both prior to and after deploying Red Hat CloudForms. Based on the results, Forrester computed that Red Hat CloudForms delivered almost 80% improvement in efficiency by unifying their service management functions. Forrester also found that the company realized a 97% Return on Investment and a 6.8 month payback period. While limited to only one company’s results, the Forrester TEI study provides a sample of the type of results that organizations can experience with Red Hat CloudForms and it lays out a blueprint for organizations to compute their own TEI results.
IDC Business Value
Red Hat also commissioned IDC to study the business impact of Red Hat CloudForms. In their report, IDC found that the time required to process IT service requests dropped by 89% and the staff time required to fulfill those requests dropped by 92%. This meant that development groups could deliver almost twice as many applications to market (93% more on average), resulting in an average $3.85 million per year in additional revenue. The bottom line on the study showed that organizations could see a ROI of 436% and a payback period of 8 months.
Conclusion
This round-up of analyst publications covering Red Hat and Red Hat CloudForms demonstrates the company’s strong position in the industry as a provider of open source solutions such as Red Hat CloudForms. These reports show that Red Hat CloudForms can provide greater efficiency, more flexible and agile infrastructure, and even potential top-line revenue growth. The quick payback period and dramatic ROI figures show that Red Hat CloudForms is a smart investment for any IT organization looking to move forward with digital transformation.
Source: CloudForms

How open communities can hurt, and help, interoperability

Portability is the key concept of interoperability. When systems are interoperable, we can move around code and processes between different infrastructure and platforms with minimal concern about the layers below. In the past, I’ve described this as a “black box” approach where users only care about the APIs and are blind to the implementation details. Ideally, APIs provide a perfect abstraction so that different implementations of the API are completely equal.
Sadly, even small implementation differences can break API interoperability.
That means that when users of open software install it, configuration choices for their environment or technology stack may cause the software to behave slightly differently and break interoperability. In another common case, the pace of innovation creates problems. New features being introduced can also change behaviors that make it difficult to interoperate between versions. While these issues may not impact the single user’s experience, they have profound impacts across the community.
Without interoperability, it’s difficult to build ecosystems and share best practices.
Ecosystem and shared practices are a significant part of the user value for large, complex open platforms like OpenStack, Cloud Foundry and Kubernetes. The ecosystem ensures that people build products on top of the platform that furthers the platforms’ relevance and utility. Shared practices help control the cost of operating the platforms by allowing the community to benefit from communal operational experience.
We can drive interoperability through architectural work that enforces consistent behaviors, or by adding APIs to discover useful variations. Communities need to be reasonably opinionated to reduce variation. When variation is required, such as with different SDN layers or container runtime engines, projects should maintain clear APIs to abstract the implementations.
There are also interoperability tasks within the work to maintain a project. This work takes the form of maintaining and applying compliance tests to running systems, such as the OpenStack DefCore/RefStack work championed by IBM and others. It also means enforcing parity between development, test and production environments. Interoperability breaks down quickly when developer and continuous integration environments are very different from production deployments.
But the primary driver for interoperability is users demanding it.
Users and operators can put significant pressure on project leaders and vendors to ensure that the platforms are interoperable. That means rewarding vendors who take time to do the type of work I described, in addition to adding features. It also means rewarding vendors who help drive operational improvements in a shared way. Those actions encourage shared best practices.
If you like these ideas, please subscribe to my blog, RobHirschfeld.com where I explore site reliability engineering, DevOps and open hybrid infrastructure challenges. And join me at my session at Interconnect 2017: Open cloud architecture: Think you can out-innovate the best of the rest?
Source: Thoughts on Cloud

OpenShift Commons Briefing #66: Microservices, .NET Core 1.1, and Kubernetes on OpenShift

In this briefing, Red Hat’s Todd Mancini, Senior Principal Product Manager, and Don Schenck, Developer Advocate for .NET on Linux, discussed some of the new application development highlights in Microsoft’s .NET Core 1.1:

• Over 1,300 new APIs since .NET Core 1.0.
• .NET Core 1.1 docker images from Red Hat’s container registry.
• Safe side-by-side installation with .NET Core 1.0.
• Performance improvements.
Source: OpenShift