Mobile operating system: Apple releases iOS 13.4 and iPadOS 13.4
Apple's mobile operating systems now offer trackpad support (in part), iCloud Drive folder sharing, and can serve as a replacement for car keys. (iOS 13, Apple)
Source: Golem
No cinema, no gym, no theater, no bars, nothing at all. The coronavirus has brought the country (and the world) to a standstill, leaving many people stuck at home: time for some series streaming. A review by Peter Osteried (Streaming, Video-Community)
Source: Golem
Carmaker Ford is working with General Electric and 3M to support the production of ventilators and masks. (Ford, Technology)
Source: Golem
Take OKD 4, the Community Distribution of Kubernetes that powers Red Hat OpenShift, for a test drive on your Home Lab.
Craig Robinson at East Carolina University has created an excellent blog explaining how to install OKD 4.4 in your home lab!
What is OKD?
OKD is the upstream community-supported version of the Red Hat OpenShift Container Platform (OCP). OpenShift expands vanilla Kubernetes into an application platform designed for enterprise use at scale. Starting with the release of OpenShift 4, the default operating system is Red Hat CoreOS, which provides an immutable infrastructure and automated updates. OKD's default operating system is Fedora CoreOS, which, like OKD, is the upstream version of Red Hat CoreOS.
Instructions for Deploying OKD 4 Beta on your Home Lab
For those of you who have a home lab, check out this step-by-step guide to building an OKD 4.4 cluster at home.
Experience is an excellent way to learn new technologies. Used hardware for a home lab that could run an OKD cluster is relatively inexpensive these days ($250–$350), especially when compared to a cloud-hosted solution costing over $250 per month.
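To put that comparison in rough numbers, a one-time hardware purchase pays for itself within the first couple of months of an equivalent cloud bill. A quick sketch, using only the approximate figures quoted above:

```python
# Rough break-even estimate: used home-lab hardware vs. a cloud-hosted cluster.
# Figures are the approximate ones quoted above, not exact prices.
import math

hardware_cost = 350.0         # upper end of the used-hardware estimate, in USD
cloud_cost_per_month = 250.0  # approximate monthly cost of a cloud-hosted cluster

# Months of cloud hosting after which the one-time hardware cost is cheaper
break_even_months = math.ceil(hardware_cost / cloud_cost_per_month)
print(break_even_months)  # 2
```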
The purpose of this step-by-step guide is to help you successfully build an OKD 4.4 cluster at home that you can take for a test drive. VMware is the example hypervisor used in this guide, but you could use Hyper-V, libvirt, VirtualBox, bare metal, or other platforms.
This guide assumes you have a virtualization platform, basic knowledge of Linux, and the ability to Google.
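As a taste of what the guide walks through, an OKD install starts from an install-config.yaml. The sketch below assembles a minimal one for user-provisioned infrastructure in Python; all field values (domain, cluster name, pull secret, SSH key) are illustrative placeholders, not values taken from the guide:

```python
# Sketch of a minimal install-config.yaml for an OKD cluster on
# user-provisioned infrastructure. All values are illustrative placeholders;
# a real install needs your own domain, pull secret, and SSH public key.
from string import Template

TEMPLATE = Template("""\
apiVersion: v1
baseDomain: $base_domain
metadata:
  name: $cluster_name
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
platform:
  none: {}
pullSecret: '$pull_secret'
sshKey: '$ssh_key'
""")

config = TEMPLATE.substitute(
    base_domain="lab.example.com",  # placeholder domain
    cluster_name="okd4",            # placeholder cluster name
    pull_secret='{"auths":{}}',     # placeholder; real installs need a pull secret
    ssh_key="ssh-ed25519 AAAA...",  # placeholder public key
)
print(config)
```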
Check out the step-by-step guide here on Medium.com
Once you’ve gained some experience with OpenShift by using the open source upstream combination of OKD and FCOS (Fedora CoreOS) to build your own cluster on your home lab, be sure to share your feedback and any issues with the OKD-WG on this beta release of OKD in the OKD GitHub repo: https://github.com/openshift/okd
Additional Resources:
To report issues, use the OKD GitHub repo: https://github.com/openshift/okd
For support, check out the #openshift-users channel on the Kubernetes Slack.
The OKD Working Group meets bi-weekly to discuss development and next steps. Meeting schedule and location are tracked in the openshift/community repo.
Google group for okd-wg: https://groups.google.com/forum/#!forum/okd-wg
This should get you up and going. Good luck on your journey with OpenShift!
The post Guide to Installing an OKD 4.4 Cluster on your Home Lab appeared first on Red Hat OpenShift Blog.
Source: OpenShift
In this briefing, IBM Cloud’s Chris Rosen discusses the logistics of bringing OpenShift to IBM Cloud and walks us through how to make the most of this new offering from IBM Cloud.
Red Hat OpenShift is now available on IBM Cloud as a fully managed OpenShift service that leverages the enterprise scale and security of IBM Cloud, so you can focus on developing and managing your applications. It’s directly integrated into the same Kubernetes service that maintains 25 billion on-demand forecasts daily at The Weather Company.
Chris Rosen walks us through how to:
Enjoy dashboards with a native OpenShift experience, and push-button integrations with high-value IBM and Red Hat middleware and advanced services.
Rely on continuous availability with multizone clusters across six regions globally.
Move workloads and data more securely with Bring Your Own Key; Level 4 FIPS; and built-in industry compliance including PCI, HIPAA, GDPR, SOC1 and SOC2.
Start fast and small using one-click provisioning and metered billing, with no long-term commitment.
Slides here: Red Hat OpenShift on IBM Cloud – Webinar – 2020-03-18
Additional Resources:
Red Hat OpenShift on IBM Cloud: https://www.ibm.com/ca-en/cloud/openshift
Documentation: https://cloud.ibm.com/docs/openshift?topic=openshift-service-arch
Get Started Tutorials: https://www.ibm.com/cloud/openshift/get-started
To stay abreast of all the latest releases and events, please join the OpenShift Commons and join our mailing lists & Slack channel.
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
The post OpenShift Commons Briefing: Bringing OpenShift to IBM Cloud with Chris Rosen (IBM) appeared first on Red Hat OpenShift Blog.
Source: OpenShift
The world of supercomputing is evolving. Work once limited to on-premises high-performance computing (HPC) clusters and traditional HPC scenarios is now being performed at the edge, on-premises, in the cloud, and everywhere in between. Whether it’s a manufacturer running advanced simulations, an energy company optimizing drilling through real-time well monitoring, an architecture firm providing professional virtual graphics workstations to employees who need to work remotely, or a financial services company using AI to navigate market risk, Microsoft’s collaboration with NVIDIA makes access to NVIDIA graphics processing unit (GPU) platforms easier than ever.
These modern needs require advanced solutions that were traditionally limited to a few organizations because they were hard to scale and took a long time to deliver. Today, Microsoft Azure delivers HPC capabilities, a comprehensive AI platform, and the Azure Stack family of hybrid and edge offerings that directly address these challenges.
This year during GTC Digital, we’re spotlighting some of the most transformational applications powered by NVIDIA GPU acceleration that highlight our commitment to edge, on-prem, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class GPUs and Quadro technology, the world’s preeminent visual computing platform. With access to graphics workstations on Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device. See our NV-Series virtual machines (VMs) for Windows and Linux.
Artificial intelligence
We’re sharing the release of the updated execution provider in ONNX Runtime with integration for NVIDIA TensorRT 7. With this update, ONNX Runtime can execute Open Neural Network Exchange (ONNX) models on NVIDIA GPUs in the Azure cloud and at the edge using Azure Stack Edge, taking advantage of new features in TensorRT 7 such as dynamic shape, mixed-precision optimizations, and INT8 execution.
Dynamic shape support enables users to run variable batch sizes, which ONNX Runtime uses to process recurrent neural network (RNN) and BERT (Bidirectional Encoder Representations from Transformers) models. Mixed precision and INT8 execution speed up execution on the GPU, which enables ONNX Runtime to better balance performance across CPU and GPU. Originally released in March 2019, TensorRT with ONNX Runtime delivers better inferencing performance on the same hardware when compared to generic GPU acceleration.
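Conceptually, ONNX Runtime tries execution providers in priority order and falls back to the CPU when an accelerator is unavailable. A pure-Python sketch of that fallback idea (illustrative only; the provider names match ONNX Runtime's, but this is not the real onnxruntime API):

```python
# Illustrative sketch of execution-provider fallback, in the spirit of how
# ONNX Runtime tries TensorRT first, then CUDA, then CPU. Not the real API.
PREFERRED = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def pick_provider(available):
    """Return the first preferred provider that is actually available."""
    for provider in PREFERRED:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")

# On a machine with a TensorRT-capable GPU, TensorRT wins:
print(pick_provider({"TensorrtExecutionProvider", "CUDAExecutionProvider",
                     "CPUExecutionProvider"}))  # TensorrtExecutionProvider
# On a CPU-only machine, it falls back:
print(pick_provider({"CPUExecutionProvider"}))  # CPUExecutionProvider
```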
Additionally, the Azure Machine Learning service now supports RAPIDS, a high-performance GPU acceleration framework for data science built on the NVIDIA CUDA platform. Azure developers can use RAPIDS in the same way they currently use other machine learning frameworks, and in conjunction with Pandas, Scikit-learn, PyTorch, and TensorFlow. These two developments represent major milestones towards a truly open and interoperable ecosystem for AI. We’re working to ensure these platform additions will simplify and enrich those developer experiences.
Edge
Microsoft provides various solutions in the Intelligent Edge portfolio to empower customers to make sure that machine learning not only happens in the cloud but also at the edge. The solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.
Whether you are capturing sensor data and inferencing at the edge, or performing end-to-end processing with model training in Azure and leveraging the trained models at the edge for enhanced inferencing, Microsoft can support your needs however and wherever you need them.
Supercomputing scale
Time-to-decision is incredibly important with a global economy that is constantly on the move. With the accelerated pace of change, companies are looking for new ways to gather vast amounts of data, train models, and perform real-time inferencing in the cloud and at the edge. The Azure HPC portfolio consists of purpose-built computing, networking, storage, and application services to help you seamlessly connect your data and processing needs with infrastructure options optimized for various workload characteristics.
Azure Stack Hub preview announced
Microsoft, in collaboration with NVIDIA, is announcing that Azure Stack Hub with Azure NC-Series Virtual Machine (VM) support is now in preview. Azure NC-Series VMs are GPU-enabled Azure Virtual Machines available at the edge. GPU support in Azure Stack Hub unlocks a variety of new solution opportunities. With our Azure Stack Hub hardware partners, customers can choose the appropriate GPU for their workloads to enable artificial intelligence (AI) training, inference, and visualization scenarios.
Azure Stack Hub brings together the full capabilities of the cloud to effectively deploy and manage workloads that otherwise could not be brought into a single solution. We are offering two NVIDIA GPU models during the preview period: the NVIDIA V100 Tensor Core GPU and the NVIDIA T4 Tensor Core GPU. These physical GPUs align with the following Azure N-Series VM types:
NCv3 (NVIDIA V100 Tensor Core GPU): These enable deep learning, inference, and visualization scenarios. See Standard_NC6s_v3 for a similar configuration.
TBD (NVIDIA T4 Tensor Core GPU): This new VM size (available only on Azure Stack Hub) enables light learning, inference, and visualization scenarios.
Hewlett Packard Enterprise is supporting the Microsoft GPU preview program as part of its HPE ProLiant for Microsoft Azure Stack Hub solution. “The HPE ProLiant for Microsoft Azure Stack Hub solution with the HPE ProLiant DL380 server nodes are GPU-enabled to support the maximum CPU, RAM, and all-flash storage configurations for GPU workloads,” said Mark Evans, WW product manager, HPE ProLiant for Microsoft Azure Stack Hub, at HPE. “We look forward to this collaboration that will help customers explore new workload options enabled by GPU capabilities.”
As the leading cloud infrastructure provider1, Dell Technologies helps organizations remove cloud complexity and extend a consistent operating model across clouds. Working closely with Microsoft, the Dell EMC Integrated System for Azure Stack Hub will support additional GPU configurations, which include NVIDIA V100 Tensor Core GPUs, in a 2U form factor. This will provide customers increased performance density and workload flexibility for the growing predictive analytics and AI/ML markets. These new configurations also come with automated lifecycle management capabilities and exceptional support.
To participate in the Azure Stack Hub GPU preview, please send us an email today.
Azure Stack Edge preview
We also announced the expansion of our Microsoft Azure Stack Edge preview with the NVIDIA T4 Tensor Core GPU. Azure Stack Edge is a cloud-managed appliance that provides local processing for fast analysis and insights into your data. With the addition of an NVIDIA GPU, you’re able to build in the cloud, then run at the edge. For more information about this exciting release, please see the detailed blog.
GTC Digital
Microsoft session recordings will be available on the GTC Digital site starting March 26. You can find a list of the Microsoft digital sessions along with corresponding links in the Microsoft Tech Community blog here.
1 IDC WW Quarterly Cloud IT Infrastructure Tracker, Q3 2019, January 2020, Vendor Revenue
Source: Azure
We’re expanding the Microsoft Azure Stack Edge with NVIDIA T4 Tensor Core GPU preview during the GPU Technology Conference (GTC Digital). Azure Stack Edge is a cloud-managed appliance that brings Azure’s compute, storage, and machine learning capabilities to the edge for fast local analysis and insights. With the included NVIDIA GPU, you can bring hardware acceleration to a diverse set of machine learning (ML) workloads.
What’s new with Azure Stack Edge
At Mobile World Congress in November 2019, we announced a preview of the NVIDIA GPU version of Azure Stack Edge and we’ve seen incredible interest in the months that followed. Customers in industries including retail, manufacturing, and public safety are using Azure Stack Edge to bring Azure capabilities into the physical world and unlock scenarios such as the real-time processing of video powered by Azure Machine Learning.
These past few months, we’ve taken our customers' feedback to make key improvements and are excited to make our preview available to even more customers today.
If you’re not already familiar with Azure Stack Edge, here are a few of the benefits:
Azure Machine Learning: Build and train your model in the cloud, then deploy it to the edge for FPGA or GPU-accelerated inferencing.
Edge Compute: Run IoT, AI, and business applications in containers at your location. Use these to interact with your local systems, or to pre-process your data before it transfers to Azure.
Cloud Storage Gateway: Automatically transfer data between the local appliance and your Azure Storage account. Azure Stack Edge caches the hottest data locally and speaks file and object protocols to your on-prem applications.
Azure-managed appliance: Easily order and manage Azure Stack Edge from the Azure Portal. No initial capex fees; pay as you go, just like any other Azure service.
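The Cloud Storage Gateway's "hottest data stays local" behavior is, at its core, a cache with recency-based eviction. A toy sketch of that idea (purely illustrative; not Azure Stack Edge code, and the file names are made up):

```python
# Toy recency-based cache, illustrating the "hottest data stays local" idea
# behind a cloud storage gateway. Purely illustrative, not Azure code.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # key -> blob, most recently used last

    def get(self, key):
        """Reading a blob marks it hot (moves it to the recent end)."""
        if key not in self._items:
            return None  # cache miss: the gateway would fetch from Azure Storage
        self._items.move_to_end(key)
        return self._items[key]

    def put(self, key, blob):
        """Writing evicts the least recently used blob once full."""
        self._items[key] = blob
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the coldest entry

cache = EdgeCache(capacity=2)
cache.put("sensor-1.csv", b"...")
cache.put("sensor-2.csv", b"...")
cache.get("sensor-1.csv")          # touch: sensor-1 is now hottest
cache.put("sensor-3.csv", b"...")  # evicts sensor-2, the coldest
print(sorted(cache._items))  # ['sensor-1.csv', 'sensor-3.csv']
```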
Enabling our partners to bring you world-class business applications
Equally important to bringing you a great device is enabling our partners to bring you innovative applications to meet your business needs. We’d love to share some of the continued investment we’re making with partners to bring their exciting developments to you.
As self-checkouts grow in prevalence, Malong Technologies is innovating in AI applications for loss prevention.
“For our customers in the retail industry, artificial intelligence innovation is happening at the edge,” said Matt Scott, co-founder and chief executive officer, Malong Technologies. “Along with our state-of-the-art solutions, our customers need hardware that is powerful, reliable, and custom-tailored for the cloud. Microsoft’s Azure Stack Edge fits the bill perfectly. We’re proud to be a Microsoft Gold Certified Partner, working with Microsoft to help our retail customers succeed.”
Increasing your manufacturing organization’s quality inspection accuracy is key to Mariner’s Spyglass Visual Inspection application.
“Mariner has standardized on Microsoft’s Azure Stack Edge for our Spyglass Visual Inspection and Spyglass Connected Factory products. These solutions are mission critical to our manufacturing customers. Azure Stack Edge provides the performance, stability and availability they require.” – Phil Morris, CEO, Mariner
Building computer vision solutions to improve performance and safety in manufacturing and other industries is a key area of innovation for XXII.
“XXII is thrilled to be a Microsoft partner, and we are working together to provide our clients with real-time video analysis software at the edge with the Azure Stack Edge box. With this solution, Azure allows us to harvest the full potential of NVIDIA GPUs directly at the edge and provide our clients in retail, industry, and smart cities with smart video analysis that is easily deployable, scalable, and manageable with Azure Stack Edge.” – Souheil Hanoune, Chief Scientific Officer, XXII
More to come with Azure Stack Edge
There are even more exciting developments with Azure Stack Edge coming. We’re putting the final touches on much-awaited new compute and AI capabilities including virtual machines, Kubernetes clusters, and multi-node support. Along with these new features announced at Ignite 2019, Data Box Edge was renamed Azure Stack Edge to align with the Azure Stack portfolio.
Our Rugged series for sites with harsh or remote environments is also coming this year, including the battery-powered form-factor that can be carried in a backpack. The versatility of these Azure Stack Edge form-factors and cloud-managed capabilities brings cloud intelligence and compute to retail stores, factory floors, hospitals, field operations, disaster zones, and rescue operations.
Get started with the Azure Stack Edge with NVIDIA GPU preview
Thank you for continuing to partner with us as we bring new capabilities to Azure Stack Edge. We’re looking forward to hearing from you.
To get started with the preview, please email us and we’ll follow up to learn more about your scenarios.
Learn more about Azure Stack Edge.
Learn more about Azure’s Hybrid Strategy
Read about more updates from Azure during NVIDIA’s GTC.
Source: Azure
In this briefing, Guillaume Moutier, Senior Principal Technical Evangelist at Red Hat, gives an overview of building automated and scalable data pipelines in the cloud, leveraging Ceph notifications, Kafka, and Knative Eventing and Serving.
With the accelerating need for data agility around the globe, it’s important that the right data be in the right place at the right time. Failure to meet these demands can even result in regulatory non-compliance, as data retention policies change around the globe on an almost daily basis. Guillaume gives an introduction to what it means to build automated and scalable data pipelines in OpenShift.
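As a rough illustration of the pattern Guillaume describes, an object-store notification arrives as a JSON event and is routed to a processing step. The event shape below is simplified and made up for illustration, not the exact Ceph bucket-notification format:

```python
# Simplified sketch of an event-driven data pipeline step: a bucket
# notification arrives as JSON and is routed by object suffix.
# The event shape is made up for illustration, not the exact Ceph format.
import json

def handle_event(raw_event, handlers):
    """Dispatch a bucket-notification event to a handler by file suffix."""
    event = json.loads(raw_event)
    key = event["object"]["key"]
    for suffix, handler in handlers.items():
        if key.endswith(suffix):
            return handler(event["bucket"], key)
    return f"ignored {key}"

# Illustrative handlers: CSVs get parsed, images get sent to inference.
handlers = {
    ".csv": lambda bucket, key: f"parse {bucket}/{key}",
    ".jpg": lambda bucket, key: f"run inference on {bucket}/{key}",
}

raw = json.dumps({"bucket": "ingest", "object": {"key": "readings/2020-03.csv"}})
print(handle_event(raw, handlers))  # parse ingest/readings/2020-03.csv
```

In a real deployment, Kafka would carry these events and a Knative service would scale the handler on demand; the dispatch logic itself stays this simple.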
Slides from the Briefing: Automate-and-scale-your-data-pipelines-the-Cloud-Native-Way
Additional Resources:
Red Hat Container Storage 4
AMQ Streams and Kafka on OpenShift
Knative
To stay abreast of all the latest releases and events, please join the OpenShift Commons and join our mailing lists & Slack channel.
The post OpenShift Commons Briefing: Automate and Scale Your Data Pipelines the Cloud Native Way with Guillaume Moutier (Red Hat) appeared first on Red Hat OpenShift Blog.
Source: OpenShift
Out of concern about network capacity, the PlayStation Network is being throttled in Europe; multiplayer is apparently not affected. (PSN, Sony)
Source: Golem
Hardly anyone can still travel to other countries. Because of the coronavirus, the booking platform Airbnb is generating almost no revenue anymore. (Airbnb, Bundesregierung)
Source: Golem