Deploying a UPI environment for OpenShift 4.1 on VMs and bare metal

With the release of Red Hat OpenShift 4, the concept of User Provisioned Infrastructure (UPI) has emerged to encompass the environments where the infrastructure (compute, network and storage resources) that hosts the OpenShift Container Platform is deployed by the user. This allows for more creative deployments, while leaving the management of the infrastructure to the […]
The post Deploying a UPI environment for OpenShift 4.1 on VMs and bare metal appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Israeli startup builds crime reporting and city services app with Watson AI on IBM Cloud

The percentage of people reporting crimes on their own initiative is close to zero. Law enforcement officials are well aware that vital information may fail to reach them, often due to informants’ fear of exposure.
Repo Cyber Ltd. is a startup company in Israel offering a mobile app for anonymous reporting to law enforcement organizations. The app can identify 80 languages from around the world. It can extract insight from text, audio, pictures and video, transmit reports in those same formats, and determine which authority needs the information.
Developing a pilot solution to anonymously report crime and help cities streamline services
The Repo Cyber app uses the Repo AI system, which was developed during our participation in the IBM Global Entrepreneur Program. Repo AI runs on the IBM Cloud platform.
Our initial vision was to simplify police reporting, but once we started working, we realized that there was also demand for cognitive services and machine learning. Thus, the foundation of Repo AI includes IBM Watson along with other IBM technologies, such as IBM Identity and Access Management solutions.
With Watson, we taught Repo AI to recognize things like a dirty street, a full garbage can, children fighting, and much more.
During a pilot program in the city of Kiryat Yam, municipal employees were initially concerned that the system would replace them. They soon realized, though, that the app was changing the scale and type of work coming through. As it turned out, municipal employees got more reports from citizens with fewer calls to the Command & Control (C&C) Center. Because Repo AI can be integrated with smart city cameras, it can identify many situations proactively and route reports directly to the appropriate department, such as sanitation, electricity, healthcare or welfare.
Though the C&C Center in Kiryat Yam is staffed only with Hebrew speakers, it can now support Russian, Amharic, Romanian, Arabic, Yiddish and other languages through multicloud language services, providing wider citizen support.
Delivering significant citizen benefits with Repo AI technology
With the easy-to-use, easy-to-learn Repo Cyber app, any citizen or tourist can report a concern to authorities without exposing their identity. This protects individuals, their families and their property, making citizens more willing to share concerns and report crime. C&C operators also receive a situation alert almost instantaneously (within seconds), so they can engage quickly and stop threatening events before they escalate.
Pilot program reporting has increased by more than 30 percent now that foreign-language speakers have the option to report. C&C representatives have reduced the time to handle a complaint by 20 percent, while also increasing the efficiency of municipal employees by 20 percent. Law enforcement efficiency has increased by more than 35 percent.
Kiryat Yam municipal employees, who now recommend the Repo AI system to other municipalities, also use the Repo AI technology to stay informed while in the field. For example, they can use the app to see real-time city camera feeds and search for license plates or people.
When the pilot ended, Kiryat Yam adopted the Repo Cyber system. Aside from use in other large cities, we envision that corporate entities could use the system for anonymously reporting workplace bullying or sexual harassment. Additionally, perhaps a network of law enforcement agencies could more easily share information, such as a suspect, known terrorist, or runaway databases. By working with corporations and law enforcement agencies, we hope to make our world safer.
Read the case study for more details about the IBM solutions helping power the Repo Cyber app.
The post Israeli startup builds crime reporting and city services app with Watson AI on IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

OpenShift Commons Briefing: Quay v3 Release Update and Road Map

In this briefing, Dirk Herrmann, Red Hat's Quay Product Manager, walks through Quay v3.0's features and discusses the road map for future Quay releases, including a progress update on the open sourcing of Quay. Built for storing container images, Quay offers visibility over the images themselves, and can be integrated into your CI/CD pipelines and […]
The post OpenShift Commons Briefing: Quay v3 Release Update and Road Map appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Cloud-Native CI/CD with OpenShift Pipelines

With Red Hat OpenShift 4.1, we are proud to release the developer preview of OpenShift Pipelines to enable creation of cloud-native, Kubernetes-style continuous integration and continuous delivery (CI/CD) pipelines based on the Tekton project. Why OpenShift Pipelines? OpenShift has long provided an integrated CI/CD experience based on Jenkins, which is actively used by a large […]
The post Cloud-Native CI/CD with OpenShift Pipelines appeared first on Red Hat OpenShift Blog.
Source: OpenShift

C’mon! OpenStack ain’t that tough

The post C’mon! OpenStack ain’t that tough appeared first on Mirantis | Pure Play Open Cloud.
Since Rackspace and NASA launched the OpenStack cloud-software initiative in July 2010, there have been two releases per year, beginning with the Austin release in October 2010 and most recently with the Stein release in April 2019. As with any software deliverable in its infancy, OpenStack was difficult to install and administer, lacked some usability and functionality, and had more than its share of defects.
Nearly nine years (and 19 releases) later, OpenStack has matured; it has improved in all areas, making it one of the leading choices for customers implementing a private cloud.
But OpenStack is still viewed as difficult to install and administer, and as difficult to use for managing cloud resources. The goal of this blog is to show that "OpenStack ain't that tough," especially after you've taken a class and worked through the hands-on lab exercises.
Brief introduction to OpenStack
OpenStack is not a product. From the openstack.org website: "The OpenStack project is a global collaboration of developers and cloud computing technologists producing the open standard cloud computing platform for both public and private clouds. It's backed by a vibrant community of developers and some of the biggest names in the industry." For example, companies such as Mirantis, Red Hat, SUSE, AT&T, Rackspace, Cisco, NetApp and many more contribute to its development.
OpenStack is divided into many components, called projects, to provide IaaS (Infrastructure as a Service) cloud services. Each project provides a specialized service, with names such as Keystone (the Identity service), Nova (the Compute service), Glance (the Image service), Neutron (the Networking service), and so on.
OpenStack can be managed and operated from the Linux command line interface (CLI) or a web-based UI. The UI is provided by the Horizon component and is commonly called the Dashboard UI.
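The project-to-service mapping above can be sketched as a small lookup table. This is purely illustrative: the names come from the text and from OpenStack's own documentation (Cinder, the Block Storage service, is added here for completeness), and the `service_for` helper is a hypothetical convenience function, not part of any OpenStack API.

```python
# Quick-reference mapping of core OpenStack projects to the services they
# provide, as described above. Not an exhaustive list of OpenStack projects.
CORE_PROJECTS = {
    "Keystone": "Identity",
    "Nova": "Compute",
    "Glance": "Image",
    "Neutron": "Networking",
    "Cinder": "Block Storage",
    "Horizon": "Dashboard",
}

def service_for(project):
    """Return the service a project provides, or None if it's not in the table."""
    return CORE_PROJECTS.get(project)

print(service_for("Nova"))  # → Compute
```

In practice you would reach these services through the unified `openstack` CLI or the Horizon Dashboard, as the text notes, rather than through a table like this.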
OpenStack is in production at many organizations worldwide, such as Walmart, T-Mobile, Target, Progressive Insurance, eBay, Cathay Pacific, Overstock.com, SkyTV, GE Healthcare, DirecTV, American Airlines, Adobe Advertising Cloud, AT&T, Verizon, Banco Santander, Volkswagen AG, Ontario Institute for Cancer Research, PayPal, and many more.
Previous perceptions
As with many software projects, OpenStack has had a perception of being difficult to install, configure, and use. For example, here is a user quote from the April 2017 OpenStack user survey:

“Deployment is still a nightmare of complexity and riddled with failure unless you are covered in scars from previous deployments.”

Author’s comment: This is perhaps my favorite comment! It rings true for anyone who has been around OpenStack as long as I have: the only users who were successful with an OpenStack deployment were those who had been through it before (several times). BTW, I have the scars from previous deployments.

How to backup, clone and migrate Persistent Volume Claims on OpenShift

I recently implemented a complete backup solution for our Red Hat OpenShift clusters. I wanted to share the challenges we faced in putting together the OpenShift backups, restores, hardware migrations, and cluster-cloning features we needed to preserve users’ Persistent Volume Claims (PVCs). At the moment, these features are not implemented directly in Kubernetes, and it […]
The post How to backup, clone and migrate Persistent Volume Claims on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Accelerating enterprise cloud adoption with application modernization

Despite the significant benefits of cloud computing, such as cost savings, agility, elasticity and standardization, some companies are struggling with their cloud journey. Most have successfully adopted a few software-as-a-service (SaaS) solutions, built cloud-native digital capabilities or migrated simpler distributed applications, but few have made real progress in application modernization. In fact, 80 percent of mission-critical workloads and sensitive data are still running on on-premises business systems because of performance and regulatory requirements, Forbes writes.
Large heritage portfolios can prevent organizations from transforming, due to:

Long development and testing cycle times using traditional IT frameworks.
Extended timeframes to bring new features to market.
Increased risk profile due to skills attrition and scarcity of heritage skills.
Disproportionate effort and cost in maintaining the established applications and infrastructure — also known as “technical debt”.

Without addressing these challenges, organizations risk a higher total cost of ownership, increased complexity across their target cloud and on-premises footprint, significant performance and stability challenges, and security exposure.
Beginning the application modernization journey
Many organizations need help with a wide range of applications from different eras, whether they’re mainframes, large commercial off-the-shelf packages or custom-developed monoliths. These heritage applications are frequently incompatible with the cloud; they’re either not supported by the cloud or they’re unable to fully use its automation and horizontal scalability benefits.
Companies simply can’t succeed in their cloud adoption journeys without modernizing their portfolios. However, transforming established applications requires a multidimensional approach. Let’s look more closely at the key elements of that transformation.
Align application modernization with business strategy
Companies too often embark upon the application modernization journey with only technology in mind and ignore the business and cost dimensions. The outcomes don’t align with underlying business needs and don’t generate desirable returns on investment.
If you assess how important applications are to the company business drivers and competitive landscape, you can appropriately sequence the technical approach and application modernization journey to generate value quickly and cost-effectively — and potentially in a self-funding manner.
Consider application modernization patterns
Businesses need to consider a wide range of transformation patterns to deliver meaningful application modernization incrementally, based on the type of application and the desired benefits. A few potential modernization options include:
Expose application programming interfaces. When companies begin the modernization journey, it is often best to expose established functionality as APIs to allow new business capabilities to be developed quickly.
Strangle the monolith. Most monolith applications have several components, but not all portions change as frequently or are equally important. Using the Strangler Pattern, monolithic applications can be incrementally transformed by replacing a particular functionality with a new service. Once the new functionality is ready, the old component is strangled, and the new service is put into use. The service can follow modern software engineering best practices and exploit cloud benefits such as automation, autoscaling and blue-green deployments.
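The Strangler Pattern described above can be sketched as a routing facade that sends migrated endpoints to a new service and everything else to the legacy monolith. This is a minimal illustration, not a production design; the handler names and routes are hypothetical.

```python
# Minimal sketch of the Strangler Pattern: a facade routes each request
# either to the legacy monolith or to a new service that has taken over
# that piece of functionality.

def legacy_monolith(path):
    # Stand-in for the established application handling everything else.
    return f"legacy handled {path}"

def new_orders_service(path):
    # Stand-in for a newly built service replacing one slice of the monolith.
    return f"new service handled {path}"

# As functionality is rewritten, its routes move into this table; once every
# route has migrated, the monolith is fully "strangled" and can be retired.
MIGRATED_ROUTES = {"/orders": new_orders_service}

def facade(path):
    handler = MIGRATED_ROUTES.get(path, legacy_monolith)
    return handler(path)

print(facade("/orders"))   # → new service handled /orders
print(facade("/billing"))  # → legacy handled /billing
```

The key property is that each migration step is incremental and reversible: removing an entry from the routing table falls back to the monolith.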
Refactor and containerize. Wrapping an application — or parts of it — in a container is a good first step toward modernization, but many applications aren’t optimized for containers. Monitoring, load balancing and application state handling all work differently in containerized applications. You might still need to rewrite portions of the applications.
Replace with a new package. In some instances, it might make sense to replace the entire monolith or portions with newer-generation, born-in-the-cloud solutions.
Design an optimal target environment
The target environment must be designed appropriately. With containers and container management platforms such as Kubernetes, the benefits of cloud are no longer location-centric. Benefits can be realized on-premises without exiting data centers. However, critical choices must be made.

Cloud design. Would the business benefit from a hybrid design, which is more portable, or a cloud-native design, which entails greater cloud provider lock-in but is simpler to adopt?
Multicloud management. Inevitably, enterprises end up with multiple cloud landing zones (a combination of on-premises, public cloud, SaaS and PaaS topologies). To manage the complexity of the target state, businesses must consider adopting multicloud management solutions.
Solution design. Companies must also decide between a fully managed (e.g., managed platform as a service) or a do-it-yourself solution. The building blocks in the cloud world are fairly complex; a fully managed solution might not be a bad way to start.

Transform the operating model
Too many companies focus solely on the technology and not the transformation of their workforce and IT processes. Unlike traditional operating models, cloud adoption pushes the infrastructure-as-code paradigm with a deep focus on platform engineering, which requires companies to rework infrastructure organization and overhaul the processes related to provisioning and management of IT assets.
Modernizing an application portfolio is arguably the most critical phase of the cloud adoption journey. To get started, IBM recommends the following:

Build a business case for the application modernization strategy first.
Start small — perhaps by using a collaborative approach, such as the IBM Garage Method for Cloud, that focuses on co-creation — and declare success early.
Reduce the risk of the transformation journey by partnering with firms that understand the technology landscape and the underlying industry.
Share experiences with peer firms and learn from one another.

Learn more about how expert guidance, migration, modernization, cloud-native application development and managed services from IBM can help simplify and accelerate the journey to cloud.
The post Accelerating enterprise cloud adoption with application modernization appeared first on Cloud computing news.
Source: Thoughts on Cloud

BIOMIN helps farmers review mycotoxin risk with IBM and The Weather Company data

Mycotoxins, which are poisonous byproducts of naturally occurring mold fungi commonly growing on grain, are the single greatest natural contaminant threat in our food supply.
Perhaps the best-known mycotoxin is aflatoxin, which was first identified following the death of more than 100,000 young turkeys on poultry farms in England. Animal fatalities, however, are only the tip of the iceberg for the agriculture industry. What we much more commonly see is an effect on the general health and productivity of the animal.
And, while mycotoxins are a major threat to farm animals, they also pose a risk for humans when they’re carried over into animal products that humans consume, such as milk or eggs.
Weather, which creates a great deal of uncertainty in the agricultural industry from year to year, is the single most important factor that affects the type and level of mycotoxins that may be present in each year’s harvest.
BIOMIN has developed a tool to monitor and predict the risk of mycotoxins in corn and wheat based on weather. Our tool is designed to reduce the amount of guessing needed by the agriculture industry.
Predicting mycotoxin risk
BIOMIN, an animal nutrition company that develops and produces feed additives for mycotoxin risk management, uses data from The Weather Company, an IBM Business, delivered through the Watson Decision Platform for agriculture on the IBM Cloud, to look at how weather affects mycotoxin risk. In conjunction, our company technicians also evaluate the global occurrence of mycotoxins in feed using mycotoxin test data dating from 2004 to the present.
The company has an intricate series of models to predict how weather is affecting different mycotoxin risks in different regions. Some fungi favor warm, wet conditions; for others, such as those producing aflatoxin, the risk can actually increase when conditions are hot and dry.
The BIOMIN Mycotoxin Prediction Tool starts predicting mycotoxin risk in grain crops from around the time of flowering, which is the first critical period when moisture and temperature affect how a fungus can infect, grow in grain and produce mycotoxins.
Choosing The Weather Company
BIOMIN chose to work with The Weather Company because of its validated weather data, generated on a worldwide grid with a 15-day forecast of hourly information. BIOMIN’s prediction tool downloads hourly weather data for 61,000 points around the world.
It’s not good enough to know whether it’s going to be a wet day or not. BIOMIN wants to know how many hours of wetness there will be, because it makes a big difference for the fungus. Fungi can grow really fast.
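To illustrate why hourly data matters more than a daily wet/dry flag, here is a toy sketch (not BIOMIN's actual model) that counts consecutive "wet" hours from hourly relative-humidity readings. The 90 percent threshold is a hypothetical choice for illustration only.

```python
# Toy illustration: a fungus cares about sustained wetness, so we measure
# the longest unbroken run of hours at or above a humidity threshold,
# rather than just flagging the day as "wet" or "dry".

def longest_wet_spell(hourly_rh, threshold=90):
    """Return the longest run of consecutive hours at/above the RH threshold."""
    longest = current = 0
    for rh in hourly_rh:
        current = current + 1 if rh >= threshold else 0
        longest = max(longest, current)
    return longest

# Ten hourly relative-humidity readings (percent) for one day:
readings = [85, 92, 95, 93, 88, 91, 96, 97, 94, 80]
print(longest_wet_spell(readings))  # → 4
```

A daily average of these readings would hide the difference between one long wet spell and several short ones, which is exactly the distinction the text says matters for fungal growth.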
BIOMIN uses historical weather data going back to 2013 as well as the current weather forecast to model data worldwide. Altogether, BIOMIN churns through one terabyte of data a day.
The output of the BIOMIN tool is a worldwide “heat map” that provides an advanced estimation of mycotoxin risk.
Using data to warn farmers and help reduce risk
BIOMIN will use its heat map to warn farmers in certain regions that they may need to do more intensive or earlier testing, or plan for an earlier harvest.
For example, if a farmer tests and confirms that the mycotoxin levels look reasonably high, they could actually harvest their crop earlier. They won’t get as high a yield because the crop may not have finished maturing, but they can get a crop that has lower mycotoxin levels. They might accept that the yield of corn is going to be 20 percent less than what it would have been in tons per hectare, but the mycotoxin level is not going to be above a legal limit, over which they might not be able to sell or use the harvest to feed their animals.
Aside from realizing higher crop yields and more productive livestock, farmers are compliant with federal guidelines, if they exist.
BIOMIN benefits by being able to predict and meet high regional demand for its mycotoxin deactivation product.
In the future, BIOMIN intends to use the IBM Global High-Resolution Atmospheric Forecasting System (GRAF). The system will be able to predict something as small as a thunderstorm anywhere on Earth using crowdsourced data from millions of sources. For example, since there’s not a weather station at every location where weather data is needed, GRAF uses multiple algorithms combined with cellular data to determine the atmospheric pressure at whatever latitude and longitude is requested.
Learn more about how companies are using The Weather Company, an IBM Business, products and services across industries.
The post BIOMIN helps farmers review mycotoxin risk with IBM and The Weather Company data appeared first on Cloud computing news.
Source: Thoughts on Cloud

IBM Aspera powers remote editing for FOX Sports’ coverage of the 2019 FIFA Women’s World Cup

FOX Sports made history at last year’s FIFA Men’s World Cup in Russia by using live streaming technology for remote editing on a scale it had never achieved before. In previous years, broadcasters traditionally sent their entire editing staff and equipment overseas to the venue. By taking advantage of IBM Aspera live streaming technology, FOX Sports was able to begin producing highlights in near real time from its state-of-the-art production facility in Los Angeles, with only a small crew onsite in Russia, saving the broadcaster time and money while creating richer content for viewers. Ultimately, this event ended up being the largest production in FOX Sports’ 24-year history.
2019 FIFA Women’s World Cup fully edited from Los Angeles
Tasked with delivering a similar experience for the 2019 FIFA Women’s World Cup in France, FOX Sports wanted to go all in on remote editing, with 100 percent of the editing and post-production work taking place in Los Angeles. The projected number of concurrent streams to remote storage over the course of the tournament was expected to be more than double that of the FIFA Men’s World Cup. This created a challenge not only to build on last year’s performance, but also to continue delivering a spectacular viewing experience for audiences worldwide.

Achieving greater efficiencies while reducing production time
To help with this massive undertaking, FOX Sports again teamed up with IBM Aspera. The Aspera streaming technology, along with Telestream and Levels Beyond, is delivering new remote production capabilities and greater efficiencies, such as real-time direct-to-cloud archiving, more extensive and consolidated monitoring, and live streaming into Adobe video editing. Creative teams are able to quickly begin working on live-capture feeds in Los Angeles, delivered straight from Paris while the match is in progress. Transcoding, packaging, editing and other downstream workflows work off the streamed video. As a result, live camera feeds from matches were edited in Los Angeles within Adobe Premiere Pro less than 10 seconds behind the live action.
This capability significantly shortens highlight and feature story production cycles while increasing the quality and value of the produced content. Furthermore, within seconds of a match being completed, raw, high-resolution footage is fully archived in the cloud, saving the production team the time typically spent on the post-match archival process while also allowing it to fix issues with downstream assets in minutes instead of hours.
“IBM Aspera has built a legacy in sports broadcasting,” said Kevin Callahan, FOX Sports VP Field Operations. “Given our success with the Aspera live-streaming technology at the 2018 FIFA World Cup in Russia, we pushed even harder to move all editing to LA this year while adding exciting new capabilities like real time auto-archival to the cloud.”
Through the Round of 16, FOX Sports had achieved the following results using IBM Aspera:

Streamed 350 live edit feeds from 40 matches from Paris to LA, for a total of 118TB of content.
Transferred roughly 500TB of video content to cloud object storage for archival and editing.

FOX Sports plans to use the joint Aspera solution for other major sporting events in the future to deliver consistently great content for viewers.
A partial view of the high-bit-rate feeds being streamed from Paris to Los Angeles and Portland for live editing and archival in real time.
Enhancing the viewer experience with IBM Watson
Also at the World Cup this year, FOX Sports worked with IBM to uplevel the viewer experience with a new broadcast segment called Player Spotlight built with IBM Watson. The AI-backed tool helped generate stat analysis for match commentary using a natural language interface. Watson offers the power to reason, understand, categorize and learn what’s inside video footage, further enabling FOX Sports to enhance viewer engagement. Commentators can interact directly with the tool, which in turn surfaces highlights, data and analytics that the broadcasters can reference on-air.
For more information on IBM Aspera streaming solutions, visit http://ibm.biz/aspera-streaming.
The post IBM Aspera powers remote editing for FOX Sports’ coverage of the 2019 FIFA Women’s World Cup appeared first on Cloud computing news.
Source: Thoughts on Cloud

Simplify Migration from OpenShift 3 to 4

This is a guest post written by Appranix. Now that Red Hat OpenShift 4 has officially been released, it’s time to start thinking about migration from Red Hat OpenShift Container Platform 3 to OpenShift Container Platform 4. You can check out the details about the differences between OpenShift 3 and 4 here. One of the biggest differences […]
The post Simplify Migration from OpenShift 3 to 4 appeared first on Red Hat OpenShift Blog.
Source: OpenShift