Galaxy S20 series hands-on: Samsung aims for the top in the camera race

With the new Galaxy S20 series, Samsung is using its own high-resolution Isocell camera sensors for the first time; in the zoom department, the manufacturer is also chasing its Chinese competitors. Anyone who wants the best camera, however, will have to go for the very large and, above all, presumably expensive Ultra model. A hands-on by Tobias Költzsch, Peter Steinlechner and Martin Wolf (Samsung, Smartphone)
Source: Golem

Now updated: Our Data Engineering Learning Path

With the market for artificial intelligence and machine learning-powered solutions projected to grow to $1.2B by 2023, it’s important to consider business needs now and in the future. We’ve heard from our customers and have witnessed internally that the data engineering role has evolved and now requires a larger set of skills. In the past, data engineers worked with distributed systems and Java programming to use Hadoop MapReduce in the data center. Now, they need to leverage AI, machine learning, and business intelligence skills to efficiently manage and analyze data.

To address the new skills data engineers need, we updated our Data Engineering on Google Cloud learning path. We’ve added new course content to this learning path, such as introductions to Data Fusion and Cloud Composer. We also added more labs on advanced BigQuery, BigQuery ML, and Bigtable streaming to help you get more hands-on practice.

This learning path covers the primary responsibilities of data engineers and consists of five courses:

Google Cloud Big Data and Machine Learning Fundamentals – Start off by learning the important GCP big data and machine learning concepts and terminology.

Modernizing Data Lakes and Data Warehouses with Google Cloud – Understand the responsibilities of data engineers, the business need for effective data pipelines, and the benefits of data engineering in the cloud. This course also digs deeper into the use cases and available GCP solutions for data lakes and warehouses, the key components of data pipelines.

Building Batch Data Pipelines on Google Cloud – Discover which paradigm to use for different batch data as this course walks you through the main data pipeline paradigms: extract-load, extract-load-transform, and extract-transform-load. You’ll also learn about data transformation technologies, such as how to use BigQuery, execute Spark on Dataproc, build pipeline graphs in Data Fusion, and do serverless data processing with Dataflow.

Building Resilient Streaming Analytics Systems on Google Cloud – Learn how to build streaming data pipelines on Google Cloud, apply aggregations and transformations to streaming data using Dataflow, and store processed records in BigQuery or Bigtable for analysis, in order to get real-time metrics on business operations.

Smart Analytics, Machine Learning, and AI on Google Cloud – Extract more insights from your data by learning how to customize machine learning in data pipelines on Google Cloud. You will learn how to use AutoML when you need little to no customization, and how to use AI Platform Notebooks and BigQuery ML for more tailored machine learning capabilities. You will also learn how to productionize machine learning solutions using Kubeflow Pipelines.

Want to learn more? Join us for a special webinar, Data Engineering, Big Data, and Machine Learning 2.0, on Feb 21 at 9:00 AM PST with Lak Lakshmanan, Head of Google Cloud Data Analytics and AI Solutions. We will go over what this learning path has to offer, demonstrate hands-on labs, and answer any questions you have. Also, just for attending the webinar, we will give you special discounts on training. Register today!
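If you’d like a quick taste of the BigQuery ML material before the webinar, the snippet below sketches how a model can be trained with a single SQL statement from the bq command-line tool. It is only an illustration: mydataset, sessions, and the column names are placeholders, not part of the course labs.

$ bq query --use_legacy_sql=false '
CREATE OR REPLACE MODEL mydataset.sample_model
OPTIONS (model_type="logistic_reg", input_label_cols=["purchased"]) AS
SELECT purchased, device, country, pageviews
FROM mydataset.sessions'

Once the model is trained, ML.EVALUATE and ML.PREDICT can be run against it as ordinary queries.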
Source: Google Cloud Platform

Bring 20/20 vision to your pipelines with enhanced monitoring

Stream analytics is bringing data to life in a way that was previously unimaginable, unlocking new use cases, from connected medical devices in healthcare to predictive maintenance on the factory floor. But with new uses come new challenges that, if left unaddressed, can lead to unintended behaviors for end-user applications. Before the days of modern stream analytics, you could guarantee the reliability of your batch data processing by re-executing your data workflows. Plus, since batch processing latency was a lesser concern, ensuring that your data was delivered within your SLOs was a manageable task. Stream processing is a different beast, however. Stream analytics shrinks the time horizon between a user event and an application action, which means it is more important than ever to respond quickly to performance degradations in your data pipelines.

To that end, Dataflow, Google Cloud’s fully managed batch and stream data processing service, now includes new observability features that let you identify, diagnose, and remediate your pipelines faster than ever. With better observability, you can spend less time fixing problems and more time getting value out of your data.

Introducing Dataflow observability

With this launch, we are introducing new charts in the Dataflow monitoring UI and streamlined workflows with the Cloud Monitoring interface. You will find these charts in the new “Job metrics” tab located at the top of the screen when you navigate to the job details page within Dataflow.

In addition to the data freshness, system latency, and autoscaling graphs that have historically been part of the Dataflow monitoring experience, you’ll now also see throughput and CPU utilization charts. Throughput charts show how many elements (or bytes) are flowing through your pipeline. The time-series graph contains a line for each step of your pipeline, which can quickly illustrate which step(s) could be slowing down the overall processing of your job. Our new time selector tool allows you to drag your cursor over interesting points in the graph to zoom in for higher fidelity.

CPU utilization charts show the utilization of your workers over time. These charts can indicate whether you have allocated the appropriate number of cores for your workers, or whether you have selected the appropriate number of workers for your job (assuming you have disabled autoscaling). You can toggle between multiple views, including an all-worker view, a stats view, the four top-utilized machines, and the four least-utilized machines.

Developers can create alerts with just a few clicks by using the “Create alerting policy” link in the top right corner of the chart card. You can find job and worker logs in an expandable panel at the bottom of your screen, giving you all of the tools you need to debug stuck pipelines.

Dataflow observability in the real world

We’ve heard from customers about how useful this new feature has already been. “We are loving the new UI! In the last day we’ve already been using it to track throughput of our pipelines and diagnose issues,” said Vinay Mayar, senior software engineer at Expanse.

It’s been helpful for Ocado too. “The killer feature of the page is the ability to see the throughput for each processing step,” says Mateusz Juraszek, software engineer at Ocado Technology. “It’s great that all the statistics are gathered in one place on the JOB METRICS page. Displaying data freshness and system latency enables us to quickly and preemptively detect anything that might affect reliability, and then use other charts or logs to investigate and address what we discover.”

What’s next for pipeline observability

The general availability of these observability charts is our first step toward making Dataflow monitoring the best in class for data engineers. Over the coming months, we plan to add new features including memory and disk usage charts, I/O metrics such as response latencies and error rates for Pub/Sub calls, and visualizers that will significantly enhance the explainability of Dataflow jobs. By spending less time managing reliability and performance and more time extracting value from your data, you can lay the foundation for tomorrow’s cutting-edge streaming analytics applications.

Learn more about these new Dataflow features.
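If you also want to pull some of these signals from the command line, for example as part of a scripted health check, the gcloud CLI exposes Dataflow job state and metrics. This is a minimal sketch; the region and JOB_ID are placeholders for your own job:

$ gcloud dataflow jobs list --region=us-central1 --status=active
$ gcloud dataflow metrics list JOB_ID --region=us-central1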
Source: Google Cloud Platform

Easily upgrade Windows Server 2008 R2 while migrating to Google Cloud

Cloud migration projects almost always involve components that span multiple applications, technologies, and platforms. This means they also usually require a comprehensive strategy including multiple products and solutions to get migrations over the finish line. There’s no magical ‘one size fits all’ approach, much as we wish there were.

Of all the applications you’ve got running and might be looking to migrate, at least a few are probably running on Windows Server 2008 R2—in fact, as many as 60% of systems globally are estimated to still be running this version of Windows Server. Now, unless you’ve just returned from a nice three-year vacation, you probably know that Windows Server 2008 R2 reached End of Support (EOS) last month. This is likely to put some of you in a bind, and might accelerate your schedule to try and migrate or replatform these systems.

Given that time is of the essence and that you probably don’t have unlimited resources to dedicate to this transition, we’re happy to announce a new feature for Migrate for Compute Engine that lets you simultaneously migrate and replatform your Windows Server 2008 R2 systems to Windows Server 2012. Everything you had running on the original system will persist, but when the migration is done it’ll be running the new OS, Windows Server 2012. You can do this with your physical and virtual servers from on-prem, and also with VMs currently running in AWS or Azure. It’s a fast way to accomplish your upgrade and migration goals in one seamless motion, and better than the other options you might be facing.

Speaking of other options, let’s review some of the alternatives. One option is to perform a manual, in-place upgrade on any of your Windows Server 2008 R2 VMs (current guidance and installation media). You could also manually create another VM that’s running a supported version of the OS (such as Windows Server 2012 or 2016) and then manually migrate your apps and data. The challenge with both of these options, though, is that they require time and staff. That’s why we’re so excited about this new Migrate for Compute Engine feature, which lets you avoid these time and staff challenges completely by simply replatforming to Windows Server 2012 while you migrate.

Technically, there’s one more option for solving your Windows Server 2008 R2 problem: paying a huge sum to Microsoft to buy another few years of security updates. But that’s costly and just ‘kicks the can down the road,’ as they say. That’s why we recommend taking a more proactive approach, which also happens to be faster, easier, and less costly.

Whichever route you ultimately choose, it’s always a best practice to make sure your systems are running current, supported versions of the OS. We’re confident that Migrate for Compute Engine will help you accomplish this goal more smoothly than traditional approaches to replatforming. And when it comes to migration, where you likely have a lot of things to figure out, our mission at Google Cloud is to provide migration pathways that are low in risk and toil, but high in success! To learn more, read up on Migrate for Compute Engine or check out our data center migration solutions page.
Source: Google Cloud Platform

Microsoft Connected Vehicle Platform: trends and investment areas

This post was co-authored by the extended Azure Mobility Team.

The past year has been eventful for a lot of reasons. At Microsoft, we’ve expanded our partnerships, including with Volkswagen, LG Electronics, Faurecia, TomTom, and more, and taken the wraps off new thinking, such as at CES, where we recently demonstrated our approach to in-vehicle compute and software architecture.

Looking ahead, areas that were once only nominally related now come into sharper focus as the supporting technologies are deployed and the various industry verticals mature. The welcoming of a new year is a good time to pause and take in what is happening in our industry and in related ones, with the aim of developing a view of where it’s all heading.

In this blog, we will talk about the trends that we see in connected vehicles and smart cities and describe how we see ourselves fitting in and contributing.

Trends

Mobility as a Service (MaaS)

MaaS (sometimes referred to as Transportation as a Service, or TaaS) is about getting people to goods and services and getting those goods and services to people. Ride-hailing and ride-sharing come to mind, but so do many other forms of MaaS offerings, such as air taxis, autonomous drone fleets, and last-mile delivery services. We believe that completing a single trip—of a person or goods—will soon require a combination of passenger-owned vehicles, ride-sharing, ride-hailing, autonomous taxis, and bicycle- and scooter-sharing services transporting people on land, on sea, and in the air (what we refer to as “multi-modal routing”). Service offerings that link these different modes of transportation will be key to making this natural for users.

With Ford, we are exploring how quantum algorithms can help reduce urban traffic congestion and develop a more balanced routing system. We’ve also built strong partnerships with TomTom for traffic-based routing and with AccuWeather for current and forecast weather reports, increasing awareness of weather events along the route. In 2020, we will be integrating these routing methods together and making them available as part of the Azure Maps service and API. Because mobility spans experiences throughout the day and across modes of transportation, from finding pickup locations and planning trips from home and work to running errands along the way, Azure Maps ties the mobility journey together with cloud APIs and iOS and Android SDKs that deliver in-app mobility and mapping experiences. Coupled with the connected vehicle architecture’s federated user authentication, integration with the Microsoft Graph, and secure provisioning of vehicles, digital assistants can support mobility end to end. The same technologies can be used for moving goods and retail delivery systems.
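To make that a bit more concrete from a developer’s point of view, here is a minimal sketch of an Azure Maps Route Directions request; the coordinates are illustrative and AZURE_MAPS_KEY stands in for your own subscription key:

$ curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&travelMode=car&subscription-key=$AZURE_MAPS_KEY&query=47.6062,-122.3321:47.6205,-122.3493"

The JSON response includes route legs and estimated travel times, which is exactly the kind of signal that traffic and weather data can then refine.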

The pressure to become profitable will force changes and consolidation among the MaaS providers and will keep their focus on approaches to reducing costs such as through autonomous driving. Incumbent original equipment manufacturers (OEMs) are expanding their businesses to include elements of car-sharing to continue evolving their businesses as private car ownership is likely to decline over time.

Connecting vehicles to the cloud

We refer holistically to these various signals that can inform vehicle routing (traffic, weather, available modalities, municipal infrastructure, and more) as “navigation intelligence.” Taking advantage of this navigation intelligence will require connected vehicles to become more sophisticated than just logging telematics to the cloud.

The reporting of basic telematics (car-to-cloud) is barely table-stakes; over-the-air updates (OTA, or cloud-to-car) will become key to delivering a market-competitive vehicle, as will command-and-control (more cloud-to-car, via phone apps). Forward-thinking car manufacturers deserve a lot of credit here for showing what’s possible and for creating in consumers the expectation that the appearance of new features in the car after it is purchased isn’t just cool, but normal.

Future steps include the integration of in-vehicle infotainment (IVI) with voice assistants that blend the in- and out-of-vehicle experiences, updating AI models for in-market vehicles for automated driving levels one through five, and of course pre-processing the telemetry at the edge in order to better enable reinforcement learning in the cloud as well as just generally improving services.

Delivering value from the cloud to vehicles and phones

As vehicles become more richly connected and deliver experiences that overlap with what we’ve come to expect from our phones, an emerging question is, what is the right way to make these work together? Projecting to the IVI system of the vehicle is one approach, but most agree that vehicles should have a great experience without a phone present.

Separately, phones are a great proxy for “a vehicle” in some contexts, such as bicycle sharing, providing speed, location, and various other probe data, as well as providing connectivity (and subsidizing the associated costs) for low-powered electronics on the vehicle.

This is probably a good time to mention 5G. The opportunity 5G brings will have a ripple effect across industries. It will be a critical foundation for the continued rise of smart devices, machines, and things. They can speak, listen, see, feel, and act using sensitive sensor technology as well as data analytics and machine learning algorithms without requiring “always on” connectivity. This is what we call the intelligent edge. Our strategy is to enable 5G at the edge through cloud partnerships, with a focus on security and developer experience.

Optimizations through a system-of-systems approach

Connecting things to the cloud, getting data into the cloud, and then bringing the insights gained through cloud-enabled analytics back to the things is how optimizations in one area can be brought to bear in another area. This is the essence of digital transformation. Vehicles gathering high-resolution imagery for improving HD maps can also inform municipalities about maintenance issues. Accident information coupled with vehicle telemetry data can inform better PHYD (pay how you drive) insurance plans as well as the deployment of first responder infrastructure to reduce incident response time.

As the vehicle fleet electrifies, the demand for charging stations will grow. The way in-car routing works for an electric car is based only on knowledge of existing charging stations along the route—regardless of the current or predicted wait-times at those stations. But what if that route could also be informed by historical use patterns and live use data of individual charging stations in order to avoid arriving and having three cars ahead of you? Suddenly, your 20-minute charge time is actually a 60-minute stop, and an alternate route would have made more sense, even if, on paper, it’s more miles driven.
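To put illustrative numbers on it: if the direct route is 90 minutes of driving plus that 60-minute stop (20 minutes of charging, 40 minutes of waiting), the trip takes 150 minutes door to door; an alternate route with 110 minutes of driving and an uncongested 20-minute charge totals 130 minutes, so the "longer" route is actually faster once live charger utilization is taken into account.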

Realizing these kinds of scenarios means tying together knowledge about the electrical grid, traffic patterns, vehicle types, and incident data. The opportunities here for brokering the relationships among these systems are immense, as are the challenges to do so in a way that encourages the interconnection and sharing while maintaining privacy, compliance, and security.

Laws, policies, and ethics

The past several years of data breaches and election interference are evidence of the continuously evolving nature of the security threats we face. That kind of environment requires platforms that continuously invest in security as a fundamental cost of doing business.

Laws, regulatory compliance, and ethics must figure into the design and implementation of our technologies to as great a degree as goals like performance and scalability do. Smart city initiatives, where having visibility into the movement of people, goods, and vehicles is key to doing the kinds of optimizations that increase the quality of life in these cities, will confront these issues head-on.

Routing today is informed by traffic conditions but is still fairly “selfish”: routing for “me” rather than for “we.” Cities would like a hand in shaping traffic, especially if they can factor in deeper insights such as the types of vehicles on the road (sending freight one way versus passenger traffic another way), whether there is an upcoming sporting event or road closure, weather, and so on.

Doing this in a way that is cognizant of local infrastructure and the environment is what smart cities initiatives are all about.

For these reasons, we have joined the Open Mobility Foundation. We are also involved with Stanford’s Digital Cities Program, the Smart Transportation Council, the Alliance to Save Energy by the 50×50 Transportation Initiative, and the World Business Council for Sustainable Development.

With the Microsoft Connected Vehicle Platform (MCVP) and an ecosystem of partners across the industry, Microsoft offers a consistent horizontal platform on top of which customer-facing solutions can be built. MCVP helps mobility companies accelerate the delivery of digital services across vehicle provisioning, two-way network connectivity, and continuous over-the-air updates of containerized functionality. MCVP provides support for command-and-control, hot/warm/cold paths for telematics, and extension hooks for customer and third-party differentiation. Because it is built on Azure, MCVP also includes the hyperscale, global availability, and regulatory compliance that come as part of Azure. OEMs and fleet operators leverage MCVP as a way to “move up the stack” and focus on their customers rather than spend resources on non-differentiating infrastructure.

Innovation in the automotive industry

At Microsoft, and within the Azure IoT organization specifically, we have a front-row seat to the transformative work being done in many different industries, using sensors to gather data and develop insights that inform better decision-making. We are excited to see these industries trending toward converging, mutually beneficial paths. Our colleague Sanjay Ravi shares his thoughts from an automotive industry perspective in this great article.

Turning our attention to our customer and partner ecosystem, the traction we’ve gotten across the industry has been overwhelming:

The Volkswagen Automotive Cloud will be one of the largest dedicated clouds of its kind in the automotive industry and will provide all future digital services and mobility offerings across its entire fleet. More than 5 million new Volkswagen brand vehicles are to be fully connected on Microsoft’s Azure cloud and edge platform each year. The Automotive Cloud will subsequently be rolled out across all Group brands and models.

Cerence is working with us to integrate Cerence Drive products with MCVP. This new integration is part of Cerence’s ongoing commitment to delivering a superior user experience in the car through interoperability across voice-powered platforms and operating systems. Automakers developing their connected vehicle solutions on MCVP can now benefit from Cerence’s industry-leading conversational AI, in turn delivering a seamless, connected, voice-powered experience to their drivers.

Ericsson, whose Connected Vehicle Cloud connects more than 4 million vehicles across 180 countries, is integrating their Connected Vehicle Cloud with Microsoft’s Connected Vehicle Platform to accelerate the delivery of safe, comfortable, and personalized connected driving experiences with our cloud, AI, and IoT technologies.

LG Electronics is working with Microsoft on its automotive infotainment systems, building management systems, and other business-to-business collaborations. LG will leverage Microsoft Azure cloud and AI services to accelerate the digital transformation of LG’s B2B business growth engines, as well as Automotive Intelligent Edge, the in-vehicle runtime environment provided as part of MCVP.

Global technology company ZF Friedrichshafen is transforming into a provider of software-driven mobility solutions, leveraging Azure cloud services and developer tools to promote faster development and validation of connected vehicle functions on a global scale.

Faurecia is collaborating with Microsoft to develop services that improve comfort, wellness, and infotainment as well as bring digital continuity from home or the office to the car. At CES, Faurecia demonstrated how its cockpit integration will enable Microsoft Teams video conferencing. Using Microsoft Connected Vehicle Platform, Faurecia also showcased its vision of playing games on the go, using Microsoft’s new Project xCloud streaming game preview.

Bell has revealed AerOS, a digital mobility platform that will give operators a 360° view into their aircraft fleet. By leveraging technologies like artificial intelligence and IoT, AerOS provides powerful capabilities like fleet master scheduling and real-time aircraft monitoring, enhancing Bell’s Mobility-as-a-Service (MaaS) experience. Bell chose Microsoft Azure as the technology platform to manage fleet information, observe aircraft health, and manage the throughput of goods, products, predictive data, and maintenance.

Luxoft is expanding its collaboration with Microsoft to accelerate the delivery of connected vehicle solutions and mobility experiences. By leveraging MCVP, Luxoft will enable and accelerate the delivery of vehicle-centric solutions and services that will allow automakers to deliver unique features such as advanced vehicle diagnostics, remote access and repair, and preventive maintenance. Collecting real usage data will also support vehicle engineering to improve manufacturing quality.

We are incredibly excited to be a part of the connected vehicle space. With MCVP, our ecosystem partners, and our partnerships with leading automotive players, both vehicle OEMs and automotive technology suppliers, we believe we have a uniquely capable offering that enables, at global scale, the next wave of innovation in the automotive industry as well as in related verticals such as smart cities, smart infrastructure, insurance, transportation, and beyond.
Source: Azure

Building RHEL based containers on Azure Red Hat OpenShift

Red Hat Summit 2020 is fast approaching. If you missed last year’s event, you also missed Microsoft CEO Satya Nadella and then-Red Hat CEO Jim Whitehurst announcing Red Hat and Microsoft’s first joint offering: Azure Red Hat OpenShift (ARO).
Azure Red Hat OpenShift (ARO) is a fully managed service of Red Hat OpenShift on Azure, jointly engineered, operated and supported by Microsoft and Red Hat. 
Did you know that it is possible for both new and existing Red Hat customers to build Red Hat Enterprise Linux (RHEL) based container images on Azure Red Hat OpenShift?
In this blog I will demonstrate how to perform the following on Azure Red Hat OpenShift:

Build a RHEL based container with a Dockerfile using your existing Red Hat subscription; and
Build a freely redistributable RHEL based container with a Dockerfile using the Red Hat Universal Base Image (UBI). 

Both of these methods will work on the current Azure Red Hat OpenShift offering, the next iteration of which will be based on OpenShift 4. 
Provisioning an Azure Red Hat OpenShift cluster
Let’s start with provisioning an Azure Red Hat OpenShift cluster. There are some prerequisites to complete: an existing Azure subscription is required, and users need to be created in Azure Active Directory. Follow the documentation to set the required environment variables, then use the Azure CLI to create a resource group and provision the cluster.
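The environment variables referenced in the next command might be set, for example, like this (all values are placeholders for your own cluster name, location, Azure AD application, and customer admin group):

$ export CLUSTER_NAME=aro-cluster
$ export LOCATION=eastus
$ export APPID=<aad-client-app-id>
$ export SECRET=<aad-client-app-secret>
$ export TENANT=<aad-tenant-id>
$ export GROUPID=<customer-admin-group-id>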
$ az openshift create --resource-group $CLUSTER_NAME --name $CLUSTER_NAME -l $LOCATION --aad-client-app-id $APPID --aad-client-app-secret $SECRET --aad-tenant-id $TENANT --customer-admin-group-id $GROUPID
After about 10 to 15 minutes, the deployment process should complete and the public URL for your fully managed Azure Red Hat OpenShift cluster is displayed. Log in to the console with your Active Directory credentials and copy the login command by clicking on your username and selecting "Copy login command." This string will be used to log in to the cluster from the command line.
Using an existing Red Hat subscription
For this section I highly recommend using an existing RHEL machine that holds a valid subscription; this will make creating the OpenShift prerequisites required for the Dockerfile build much easier. The OpenShift command-line tool ‘oc’ must also be installed on this machine. If you don’t have an existing subscription, skip ahead to the section titled "Using the Universal Base Image (UBI)".
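Before continuing, it's worth confirming that the entitlement certificates and subscription manager configuration used in the steps below actually exist on that machine, for example:

$ ls /etc/pki/entitlement/*.pem
$ ls /etc/rhsm/rhsm.conf /etc/rhsm/ca/redhat-uep.pem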
Log in to the ARO cluster using the copied login command. It will look similar to the one below.
$ oc login https://osa{ID}.{REGION}.cloudapp.azure.com --token={ARO TOKEN}
Create a new OpenShift project
$ oc new-project rhel-build
If you do not have one already, create a registry service account to ensure that you can pull a RHEL image from registry.redhat.io using your credentials. In a browser, go to catalog.redhat.com, log in, and select "Service Accounts" and then "New Service Account". Download the generated OpenShift secret, then create the secret in your OpenShift project.
$ oc create -f {SECRET_FILE}.yaml -n rhel-build
Create a secret that contains the entitlements
$ oc create secret generic etc-pki-entitlement --from-file /etc/pki/entitlement/{ID}.pem --from-file /etc/pki/entitlement/{ID}-key.pem -n rhel-build
Create a configmap that contains the subscription manager configuration.
$ oc create configmap rhsm-conf --from-file /etc/rhsm/rhsm.conf -n rhel-build
Create a configmap for the certificate authority.
$ oc create configmap rhsm-ca --from-file /etc/rhsm/ca/redhat-uep.pem -n rhel-build
Create a build configuration in the project.
$ oc new-build https://github.com/grantomation/rhel-build.git --context-dir sub-build --name rhel-build -n rhel-build
$ oc get buildconfig rhel-build -n rhel-build
NAME         TYPE     FROM   LATEST
rhel-build   Docker   Git    1
List the secrets in the project
$ oc get secrets -n rhel-build
NAME                    TYPE                             DATA   AGE
{SERVICE PULL SECRET}   kubernetes.io/dockerconfigjson   1      2m
Set the registry pull credentials as a secret on the buildConfig
$ oc set build-secret --pull bc/rhel-build {SECRET CREATED BY REGISTRY SERVICE ACCOUNT FILE}
Patch the build configuration
$ oc patch buildconfig rhel-build -p '{"spec":{"source":{"configMaps":[{"configMap":{"name":"rhsm-conf"},"destinationDir":"rhsm-conf"},{"configMap":{"name":"rhsm-ca"},"destinationDir":"rhsm-ca"}],"secrets":[{"destinationDir":"etc-pki-entitlement","secret":{"name":"etc-pki-entitlement"}}]}}}' -n rhel-build
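The destinationDir values above mean the entitlement and subscription manager files are made available in the build context, where the Dockerfile is expected to copy them into place before installing packages from subscribed repositories. The Dockerfile in the referenced repository isn't reproduced here, but an entitled build typically follows a pattern along these lines (a sketch only; the base image, repository, and package are illustrative):

FROM registry.redhat.io/rhel7
# Copy the entitlement certificates, subscription manager config, and CA
# from the build context (the destinationDir paths patched above)
COPY ./etc-pki-entitlement /etc/pki/entitlement
COPY ./rhsm-conf /etc/rhsm
COPY ./rhsm-ca /etc/rhsm/ca
# Use the copied entitlement instead of the host entitlement, install what
# the application needs, then remove the entitlement from the final image
RUN rm /etc/rhsm-host && \
    yum repolist --disablerepo=* && \
    subscription-manager repos --enable rhel-7-server-rpms && \
    yum -y install httpd && \
    yum clean all && \
    rm -rf /etc/pki/entitlement /etc/rhsm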
Start the Dockerfile build on OpenShift.
$ oc start-build rhel-build --follow -n rhel-build
Following a successful build, the new image is pushed to the internal OpenShift registry and an image stream is created in the project. To confirm that the image build worked correctly, the imagestream can be used to create an OpenShift application.
$ oc new-app rhel-build -n rhel-build
Create an edge route, which will use the DigiCert certificate included with ARO.
$ oc create route edge --port 8080 --service rhel-build -n rhel-build
Curl the route to the application
$ curl https://$(oc get route rhel-build -o go-template='{{.spec.host}}')
Azure Red Hat OpenShift
Using the Universal Base Image (UBI)
Red Hat UBI provides complementary runtime languages and packages that are freely redistributable. If you’re new to the UBI, you can check out Scott McCarty’s excellent blog and demo as a primer. Using the UBI as a base for your next containerised application is a great way to build and deploy on Azure Red Hat OpenShift. The following steps demonstrate how to use UBI based on RHEL 8. 
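The Dockerfile used in the referenced repository isn't reproduced here, but to give a feel for what a UBI based Dockerfile can look like, here is a minimal, hypothetical sketch; the installed package and the static page are placeholders, and it serves on port 8080 to match the route created later:

FROM registry.access.redhat.com/ubi8/ubi
# Install a runtime from the freely redistributable UBI repositories
RUN yum -y install python36 && yum clean all
WORKDIR /opt/app
RUN echo 'Hello from a UBI based container' > index.html
EXPOSE 8080
# Run as a non-root user, as OpenShift expects
USER 1001
CMD ["python3", "-m", "http.server", "8080"]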
Create a new OpenShift project.
$ oc new-project ubi-build
Create a build configuration in the project.
$ oc new-build https://github.com/grantomation/rhel-build.git --context-dir ubi-build --name ubi-build -n ubi-build
Follow the container build.
$ oc logs -f build/ubi-build-1
To confirm that the image build worked correctly, the generated imagestream can be used to create an OpenShift application.
$ oc new-app ubi-build -n ubi-build
Create an edge route, which will use the DigiCert certificate included with ARO.
$ oc create route edge --port 8080 --service ubi-build -n ubi-build
Curl the route to the application.
$ curl https://$(oc get route ubi-build -o go-template='{{.spec.host}}')
And with that done, you’ve got an OpenShift cluster up and running in Azure, running RHEL based containers.
 
Source: OpenShift