Mobile operating systems: iOS 14 and iPadOS 14 are available to try out
Apple has released the public beta versions of iOS 14 and iPadOS 14 for download. Both already work very well. (iOS, Apple)
Source: Golem
How can flying become more environmentally friendly? Rolf Henke, the DLR executive board member for aeronautics research, answers this pressing question. An interview by Daniela Becker (Aviation, Interview)
Source: Golem
Imec and Globalfoundries are using the 22FDX semiconductor process for an efficient IoT edge design. (Globalfoundries, AI)
Source: Golem
As promised by Google, Flutter is now coming to desktops as well. For Ubuntu, Canonical is relying on the Snap package format. (Software development, Ubuntu)
Source: Golem
300 days after its market launch, Android 10 has established itself better than its predecessor versions. (Android 10, Google)
Source: Golem
Editor’s note: This is the first in a multi-part series to help you get the most out of your Compute Engine VMs.

Have you ever wondered whether you’re using the best possible cloud compute resource for your workloads? In this post, we discuss the different Compute Engine machine families in detail and provide guidance on what factors to consider when choosing your Compute Engine machine family. Whether you’re new to cloud computing, or just getting started on Google Cloud, these recommendations can help you optimize your Compute Engine usage.

For organizations that want to run virtual machines (VMs) in Google Cloud, Compute Engine offers multiple machine families to choose from, each suited for specific workloads and applications. Within every machine family there is a set of machine types that offer a prescribed combination of processor and memory configuration.

General purpose – These machines balance price and performance and are suitable for most workloads, including databases, development and testing environments, web applications, and mobile gaming.

Compute-optimized – These machines provide the highest performance per core on Compute Engine and are optimized for compute-intensive workloads, such as high performance computing (HPC), game servers, and latency-sensitive API serving.

Memory-optimized – These machines offer the highest memory configurations across our VM families, with up to 12 TB for a single instance. They are well suited for memory-intensive workloads such as large in-memory databases like SAP HANA and in-memory data analytics workloads.

Accelerator-optimized – These machines are based on the NVIDIA Ampere A100 Tensor Core GPU. With up to 16 GPUs in a single VM, these machines are suitable for demanding workloads like CUDA-enabled machine learning (ML) training and inference, and HPC.

General purpose family

These machines provide a good balance of price and performance, and are suitable for a wide variety of common workloads.
You can choose from four general purpose machine types:

E2 offers the lowest total cost of ownership (TCO) on Google Cloud, with up to 31% savings compared to the first-generation N1. E2 VMs run on a variety of CPU platforms (across Intel and AMD), and offer up to 32 vCPUs and 128 GB of memory per node. E2 machine types also leverage dynamic resource management, which offers many economic benefits for workloads that prioritize cost savings.

N2 introduced the 2nd Generation Intel Xeon Scalable Processors (Cascade Lake) to Compute Engine’s general purpose family. Compared with first-generation N1 machines, N2s offer a greater than 20% price-performance improvement for many workloads and support up to 25% more memory per vCPU.

N2D VMs are built on the latest 2nd Gen AMD EPYC (Rome) CPUs, and support the highest core count and memory of any general-purpose Compute Engine VM. N2D VMs are designed to provide you with the same features as N2 VMs, including local SSD, custom machine types, and transparent maintenance through live migration.

N1s are first-generation general purpose VMs and offer up to 96 vCPUs and 624 GB of memory. For most use cases we recommend choosing one of the second-generation general purpose machine types above. For GPU workloads, N1 supports a variety of NVIDIA GPUs (see this table for details on specific GPUs supported in each zone).

For flexibility, general purpose machines come as predefined (with a preset number of vCPUs and memory), or can be configured as custom machine types. Custom machine types allow you to independently configure CPU and memory to find the right balance for your application, so you only pay for what you need.

Let’s take a closer look at the general purpose machine family:

E2 machine types

E2 VMs utilize dynamic resource management technologies developed for Google’s own services that make better use of hardware resources, driving down costs and passing the savings on to you.
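The custom machine types mentioned above follow a predictable naming scheme in the Compute Engine API, of the form family-custom-vCPUs-memoryMB (for example, n2-custom-4-5120). As a rough sketch, the helper below builds such a name; the validation limits shown are illustrative assumptions, so check the Compute Engine documentation for the exact rules per family:

```python
def custom_machine_type(family: str, vcpus: int, memory_mb: int) -> str:
    """Build a Compute Engine custom machine type name.

    The API names custom shapes '<family>-custom-<vCPUs>-<memoryMB>',
    e.g. 'n2-custom-4-5120'. The checks below are illustrative
    assumptions; consult the Compute Engine docs for exact limits.
    """
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    if vcpus > 1 and vcpus % 2 != 0:
        raise ValueError("vCPU count must be 1 or an even number")
    return f"{family}-custom-{vcpus}-{memory_mb}"

# e.g. custom_machine_type("n2", 4, 5120) -> "n2-custom-4-5120"
```

A name built this way can then be passed wherever a machine type is expected, such as the machine type flag of an instance-creation command.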
If you have workloads such as web serving, small-to-medium databases, and application development and testing environments that run well on N1 but don’t require large instance sizes, GPUs, or local SSD, consider moving them to E2.

Whether comparing on-demand usage TCO or leveraging committed use discounts, E2 VMs offer up to 31% improvement in price-performance, as illustrated below, across a range of benchmarks. E2 pricing already includes sustained use discounts, and E2s are also eligible for committed use discounts, bringing additional savings of up to 55% for three-year commitments.

Disclaimer: Results are based on Google Cloud’s internal benchmarking, using comparably sized VMs (16 vCPUs) for all instance types.

N2 machine types

N2 machines run at 2.8 GHz base frequency and 3.4 GHz sustained all-core turbo, and offer up to 80 vCPUs and 640 GB of memory. This makes them a great fit for many general purpose workloads that can benefit from increased per-core performance, including web and application servers, enterprise applications, gaming servers, content and collaboration systems, and most databases.

Whether you are running a business-critical database or an interactive web application, N2 VMs offer you the ability to get ~30% higher performance from your VMs and shorten many of your computing processes, as illustrated through a wide variety of benchmarks. Additionally, with double the FLOPS per clock cycle compared to previous-generation Intel Advanced Vector Extensions 2 (Intel AVX2), Intel AVX-512 boosts performance and throughput for the most demanding computational tasks.

N2 instances perform 2.82x faster than N1 instances on AI inference of a Wide & Deep model using Intel-optimized TensorFlow, leveraging new Deep Learning (DL) Boost instructions in 2nd Generation Xeon Scalable Processors.
The new DL Boost instructions extend the Intel AVX-512 instruction set to do with a single instruction what took three instructions on previous-generation processors.

Disclaimer: Results are based on Google Cloud’s internal benchmarking, using comparably sized VMs (16 vCPUs) for all instance types.

N2D machine types

N2D VMs provide performance improvements for data management workloads that leverage AMD’s higher memory bandwidth and higher per-system throughput (available with larger VM choices). With up to 224 vCPUs, they are the largest general purpose VMs on Compute Engine, and they offer savings of up to 13% over comparable N-series instances.

N2D machine types are suitable for web applications, databases, and video streaming. N2D VMs can also offer a performance improvement for many high-performance computing workloads that benefit from higher memory bandwidth.

The benchmark below illustrates a 20-30% performance increase across many workload types, with up to 2.5x improvements for benchmarks that benefit from N2D’s improved memory bandwidth, like STREAM, making them a great fit for memory bandwidth-hungry applications.

Disclaimer: Results are based on Google Cloud’s internal benchmarking, using comparably sized VMs (16 vCPUs) for all instance types.

N2 and N2D VMs offer up to 20% sustained use discounts and are also eligible for committed use discounts, bringing additional savings of up to 55% for three-year commitments.

Compute-optimized (C2) family

Compute-optimized machines focus on the highest performance per core and the most consistent performance, to support the needs of real-time applications.
Based on 2nd Generation Intel Xeon Scalable Processors (Cascade Lake), and offering up to 3.8 GHz sustained all-core turbo, these VMs are optimized for compute-intensive workloads such as HPC, gaming (AAA game servers), and high-performance web serving.

Compute-optimized machines deliver a greater than 40% performance improvement compared to the previous-generation N1, and offer higher per-thread performance and isolation for latency-sensitive workloads. Compute-optimized VMs come in shapes ranging from 4 to 60 vCPUs, and offer up to 240 GB of memory. You can choose to attach up to 3 TB of local storage to these VMs for applications that require higher storage performance.

As illustrated below, compute-optimized VMs demonstrate up to 40% performance improvements for most interactive applications, whether you are optimizing for the number of queries per second or the throughput of your map routing algorithms. For many HPC applications, benchmarks such as OpenFOAM indicate that you can see up to a 4x reduction in your average runtime.

Disclaimer: Results are based on Google Cloud’s internal benchmarking, using comparably sized VMs (16 vCPUs) for all instance types.

C2 VMs offer up to 20% sustained use discounts and are also eligible for committed use discounts, bringing additional savings of up to 60% for three-year commitments.

Memory-optimized (M1, M2) family

Memory-optimized machine types offer the highest memory configurations in our VM portfolio. Ranging in size from 1 TB to 12 TB of memory, with up to 416 vCPUs, these VMs offer the most compute and memory resources of any Compute Engine VM. They are well suited for large in-memory databases such as SAP HANA, as well as in-memory data analytics workloads.
M1 VMs offer up to 4 TB of memory, while M2 VMs support up to 12 TB of memory. M1 and M2 VM types also offer the lowest cost per GB of memory on Compute Engine, making them a great choice for workloads that pair higher memory configurations with low compute resource requirements. For workloads such as Microsoft SQL Server and similar databases, these VMs allow you to provision only the compute resources you need while leveraging larger memory configurations.

With the addition of 6 TB and 12 TB VMs to Compute Engine’s memory-optimized machine types (M2), SAP customers can now run their largest SAP HANA databases on Google Cloud. These VMs are the largest SAP-certified VMs available from a public cloud provider. Not only do M2 machine types accommodate the most demanding and business-critical database applications, they also support your favorite Google Cloud features. For these business-critical databases, uptime is critical for business continuity. With live migration, you can keep your systems up and running even in the face of infrastructure maintenance, upgrades, security patches, and more. And Google Cloud’s flexible committed use discounts let you migrate your growing database from a 1 TB-4 TB instance to the new 6 TB VM while leveraging your current memory-optimized commitments.

M1 and M2 VMs offer up to 30% sustained use discounts and are also eligible for committed use discounts, bringing additional savings of more than 60% for three-year commitments.

Accelerator-optimized (A2) family

The accelerator-optimized family is the latest addition to the Compute Engine portfolio. A2s are currently available via our alpha program, with public availability expected later this year. The A2 is based on the latest NVIDIA Ampere A100 GPU and was designed to meet today’s most demanding applications, such as machine learning and HPC.
A2 VMs were the first NVIDIA Ampere A100 Tensor Core GPU-based offering on a public cloud. Each A100 GPU offers up to 20x the compute performance of the previous-generation GPU and comes with 40 GB of high-performance HBM2 GPU memory. A2 machines use NVIDIA’s HGX system to offer high-speed NVLink GPU-to-GPU bandwidth of up to 600 GB/s. They come with up to 96 Intel Cascade Lake vCPUs, optional local SSD for workloads requiring faster data feeds into the GPUs, and up to 100 Gbps of networking. A2 VMs also provide full vNUMA transparency into the architecture of the underlying GPU server platforms, enabling advanced performance tuning.

For the most demanding compute workloads, the A2 family includes the a2-megagpu-16g machine type, which comes with 16 A100 GPUs, offering a total of 640 GB of GPU memory and providing up to 10 petaflops of FP16 or 20 petaops of int8 CUDA compute power in a single VM when using the new sparsity feature.

Getting the best out of your compute

Choosing the right VM family is the first step in driving efficiency for your workloads. In the coming weeks, we’ll share other helpful information, including an overview of our intelligent compute offerings, OS troubleshooting and optimization, licensing, and data protection, to help you optimize your Compute Engine resources. In addition, be sure to read our recent post on how to save on Compute Engine. To learn more about Compute Engine, visit our documentation pages.
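The family guidance above can be distilled into a deliberately simplified decision rule. This is only a sketch; the thresholds and return labels are illustrative, not official Compute Engine limits:

```python
def suggest_family(needs_gpu: bool, memory_gb: int, vcpus: int,
                   compute_intensive: bool) -> str:
    """Map rough workload requirements to a Compute Engine family.

    A toy decision rule distilled from the guidance in this post;
    thresholds are illustrative assumptions, not official limits.
    """
    if needs_gpu:
        return "A2 (accelerator-optimized)"   # or N1 with attached GPUs
    if memory_gb > 640:                        # beyond general purpose N2
        return "M1/M2 (memory-optimized)"
    if compute_intensive and vcpus <= 60:      # C2 tops out at 60 vCPUs
        return "C2 (compute-optimized)"
    return "E2/N2/N2D (general purpose)"
```

For example, a 1 TB in-memory database maps to the memory-optimized family, while a cost-sensitive web server lands in general purpose.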
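The sustained and committed use discounts quoted throughout this post are straightforward percentage reductions off the on-demand rate. A minimal sketch of the arithmetic, using a made-up hourly price:

```python
def committed_use_cost(on_demand_hourly: float, discount: float,
                       hours: float = 730.0) -> float:
    """Approximate monthly cost under a committed use discount.

    'discount' is the fractional reduction off the on-demand rate
    (e.g. 0.55 for an up-to-55% three-year commitment); 730 is the
    average number of hours in a month. Real billing has additional
    rules, so treat this as illustrative arithmetic only.
    """
    return on_demand_hourly * (1.0 - discount) * hours

# A hypothetical $0.10/hour VM under a 55% commitment costs about
# $32.85 per month instead of $73.00 on demand.
```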
Source: Google Cloud Platform
If you have a Google Cloud environment, you’ve probably spent some time with the gcloud command-line tool, the primary command-line tool for creating and managing Google Cloud resources. But with over 2,000 commands, it can be a little overwhelming to get started with its multitude of flags, filters, and formats. Fear not: the gcloud command-line tool cheat sheet is now here to help guide the way! It’s a handy tool to stow in your proverbial knapsack (or actual back pocket) as you start out with the Cloud SDK, helping you recognize command patterns and find useful gcloud commands to get you on your way. The gcloud command-line tool cheat sheet is available as a one-page sheet, an online resource, and, quite fittingly, a command itself: gcloud cheat-sheet.

We’ve organized the gcloud command-line tool cheat sheet around common command invocations (like creating a Compute Engine virtual machine instance), essential workflows (such as authorization and setting properties for your configuration), and core tool capabilities (like filtering and sorting output). This list of useful commands, all neatly packed into a single double-sided page, is ready to be downloaded and printed. As a bonus, the cheat sheet also includes a quick rundown of how gcloud commands are structured, enabling you to easily discover commands beyond the confines of this pithy list.

Whether you’re new to the gcloud command-line tool and need a good starting point, or are a seasoned user and need a map to situate yourself, the gcloud command-line tool cheat sheet is a nifty companion as you traverse the expansive landscape of Google Cloud. You can access the cheat sheet online, or download the printable PDF. Or, if you’ve already got the latest version of the Cloud SDK installed, give the cheat sheet a whirl right now with gcloud cheat-sheet. We hope you find it to be a useful resource!
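One of the capabilities the cheat sheet covers is formatting output. gcloud can emit machine-readable JSON (for example, gcloud compute instances list --format=json), which is easy to post-process in a script. A small sketch, using a made-up, trimmed example of that output shape:

```python
import json

# A fabricated, trimmed example of what
#   gcloud compute instances list --format=json
# might return; real records carry many more fields.
sample = '[{"name": "web-1", "status": "RUNNING"}, ' \
         '{"name": "db-1", "status": "TERMINATED"}]'

def running_instances(raw_json: str) -> list:
    """Return the names of instances whose status is RUNNING."""
    return [i["name"] for i in json.loads(raw_json)
            if i["status"] == "RUNNING"]

# running_instances(sample) -> ["web-1"]
```

In practice you would pipe the command’s stdout into such a function, or use gcloud’s own --filter flag to do the selection server-side.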
Source: Google Cloud Platform
Every week throughout Google Cloud Next ‘20 OnAir, we’re focusing on a different theme to help you grow your cloud skills through a series of guided hands-on labs, talks by Google Cloud’s technical experts, and competitions.

Guided hands-on labs

If you’re new to Google Cloud, or brushing up on the basics, join us for Cloud Study Jam every Wednesday, during which Google Cloud experts will review relevant cloud training and certification resources, lead you through hands-on labs, and answer your questions live. The sessions will be hosted in Americas and Asia Pacific-friendly times.

By participating in the labs featured in our Cloud Study Jam sessions, you’ll also be working towards earning your first skill badge on Qwiklabs. Digital skill badges allow you to demonstrate your growing Google Cloud-recognized skillset and share your progress with your network. You can earn the badges by completing a series of hands-on labs, leading up to a final assessment challenge lab, to test your skills.

Here’s a taste of what to expect from the Cloud Study Jam sessions:

Infrastructure sessions

On July 29, our Cloud Study Jam events will be all about infrastructure. Hands-on labs will focus on cloud environment provisioning, introducing cloud monitoring best practices, configuring networking, and more. In these value-packed sessions, you’ll also learn how to best prepare for Google Cloud certifications such as the Associate Cloud Engineer and the highest-paying IT certification for the past two years, Professional Cloud Architect.

Application modernization sessions

Explore how to modernize your applications using Kubernetes on August 26. Participate in hands-on labs that demonstrate how Google Kubernetes Engine (GKE) can be used to perform workload orchestration and effortlessly run continuous delivery pipelines. You’ll also have a chance to learn about the Google Cloud Professional Cloud DevOps Engineer certification.
AI sessions

Dive into Google Cloud AI on September 2 and learn how to address real-world challenges at scale. See how AI can enable businesses to continue interacting with their customer base through the use of virtual agents in hands-on labs. Understand why certification matters and how you can take the next steps on this path. You’ll also have the opportunity to get your machine learning questions answered by Lak Lakshmanan, Head of Data Analytics and AI Solutions at Google Cloud.

Talks with Google Cloud’s technical experts

Every Friday starting on July 24, you can participate in Google Cloud Talks by DevRel. Ask Google Cloud Developer Relations team members your questions on Google Cloud solutions, including machine learning, AI, serverless, app modernization, and more. The team will also provide a summary of each week’s topic and deliver technical talks to supplement the week’s programming. Talks by DevRel will be hosted in Americas and Asia Pacific-friendly times. We’ll also have details soon on the sessions running in Japan.

Competitions

Join our weekly Cloud Hero game to take your skills to the next level. Each game will have a collection of labs relevant to that week’s theme. You can pick and choose which hands-on labs to do, or try them all. Play with other attendees and compete to see yourself on the leaderboard. The weekly game link and its access code will be released on Tuesdays at 9 am PDT here.

Ready to get started? Register for our Cloud Study Jam sessions. You can also find our full schedule of training opportunities on the Learning Hub.
Source: Google Cloud Platform
Commercial real estate developers, building owners, facilities management companies, and tenants have a huge opportunity to address the unique business challenges their industry faces by applying the Internet of Things (IoT) to buildings. For example, by leveraging data from IoT sensors and building management systems, companies can gain insights that enable them to save energy, reduce operational expenses, increase occupant comfort, and optimize space.
However, the COVID-19 crisis has presented a new set of challenges for developers, owners, and management companies. Even so, new forecasts show the smart building market growing between 7.3 percent and 11.6 percent annually, to overall market revenues of between $65.2 billion and $82.7 billion USD in 2025.1
Smart buildings also help companies meet regulations for tracking and reducing greenhouse gas emissions.
Let’s look at how Bosch Building Technologies, Bentley Systems, Schneider Electric, and ICONICS use Azure IoT to deliver the benefits of smart buildings.
Decreasing energy requirements
The American Council for an Energy-Efficient Economy estimates that implementing smart building technology in an existing building can result in energy savings of 30–50 percent.2 For example, companies can combine data from occupancy sensors with data from HVAC and lighting systems to lower room temperatures and turn lights off in unoccupied rooms.
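The rule described above, turning lights off and lowering the temperature in unoccupied rooms, can be sketched in a few lines. This is a toy illustration with made-up setpoint values; real building management systems add schedules, hysteresis, and safety limits:

```python
def setpoints(occupied: bool, comfort_c: float = 21.0,
              setback_c: float = 17.0):
    """Pick a lighting state and heating setpoint from occupancy.

    'comfort_c' and 'setback_c' are illustrative temperatures, not
    recommendations; they are simply the two modes the rule above
    switches between based on occupancy sensor data.
    """
    lights_on = occupied
    target_c = comfort_c if occupied else setback_c
    return lights_on, target_c

# setpoints(False) -> (False, 17.0): lights off, lowered temperature
```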
Bosch Building Technologies developed an in-house Energy Platform to analyze energy consumption and pursue ongoing energy efficiency. Based on Microsoft Azure, the Energy Platform monitors and analyzes energy consumption in real time. Bosch customers use the Energy Platform to connect to IoT-enabled devices and then link to existing meters, sensors, and machines. Customers can make informed decisions to improve energy and resource efficiency.
Bosch offers the solution to customers and uses it internally at more than 100 manufacturing plants worldwide. At one of their larger plants, Bosch saves up to €1.2 million (approximately $1.3 million USD) a year.
Bosch also created a Building Intelligence as a Service program to provide new IoT-based services for customers. Bosch adopted Azure Digital Twins as part of their Connected Building Services offering. By leveraging Azure Digital Twins, the company can query data from entire rooms or spaces, rather than from disparate sensors, to build complete digital models of the physical building environment.
By using Azure Digital Twins, Bosch gains more precise data for a wide range of building technology systems. With this level of precision, it’s easier for customers to fully understand data points, consumption results, context, and how they relate to the physical environment to quickly gain insights on energy usage to inform their business decisions.
Human factor design of new buildings can help decrease energy requirements.
Creating a connected workplace
At Microsoft’s Frasers Tower in Singapore, Bentley Systems and Schneider Electric implemented sensors and telemetry to create a connected workplace. They used a mix of 179 Bluetooth beacons in meeting rooms and 900 sensors for lighting, air quality, and temperature. The platform generates nearly 2,100 data points that are stored and analyzed in Azure. Using the data, Microsoft optimizes various aspects of the spaces, making them more comfortable for employees, while reducing energy consumption in a sustainable and economical manner.
Additionally, Bentley Systems built a digital twin of Frasers Tower on its Bentley iTwin platform, using Azure Digital Twins, Azure IoT Hub, and Azure Time Series Insights. The iTwin platform uses both historical and real-time data from IoT sensors to create an exact digital replica of the physical building. The building management team uses the information to dynamically allocate space, increase utilization, reduce costs, improve competitiveness, and enhance collaboration and productivity.
Sensors generate data that is stored and analyzed to decrease energy use.
Monitoring occupancy and reducing costs
ICONICS smart building software has run on Microsoft Azure since 2015. The software is an integration hub for building management systems that control heating, ventilation, and lighting and collect and centralize each system’s sensor data. ICONICS relies on Azure Digital Twins to boost solution scalability and rapidly deliver innovative capabilities to customers, such as viewing space occupancy and spatial analytics.
Microsoft uses the ICONICS smart building software to collect sensor data in office buildings in the Puget Sound area of Washington State. The ICONICS solution aggregates the data over multiple buildings to give facility managers visibility into building health and applies big data analytics to provide insights that drive decisions in order to deliver energy savings. In fact, the Microsoft Energy Smart Buildings program, leveraging ICONICS software, has saved Microsoft 20 percent off its energy bills.
Next steps
Smart buildings provide insights that enable real estate developers, commercial building owners, facilities managers, and tenants to save energy, reduce operational expenses, increase occupant comfort, and meet regulatory and sustainability goals.
To learn more about best practices for planning smart building projects, download the white paper, Smart buildings: From design to reality, co-written by Microsoft and L&T Technology Services.
Also visit Azure IoT to find the right IoT approach for your solutions.
1 Impact of COVID-19 on the Global IoT in Smart Commercial Buildings Market to 2025 – ResearchAndMarkets.com.
2 Smart Buildings: Using Smart Technology to Save Energy in Existing Buildings.
Source: Azure
Just about six years ago to the day, Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years, and the container ecosystem has become complex. New managed container services have arrived, bringing their own runtime environments, CLIs, and configuration languages. This complexity serves the needs of operations teams who require fine-grained control, but carries a high price for developers.
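That "A talks to B, B talks to C, and C is a database" layout still looks much the same in Compose today. A minimal, hypothetical compose file (the service and image names are made up for illustration):

```yaml
version: "3.8"
services:
  a:                              # front end, talks to b
    image: example/frontend:latest
    ports:
      - "8080:80"
    depends_on:
      - b
  b:                              # API layer, talks to c
    image: example/api:latest
    depends_on:
      - c
  c:                              # the database
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
```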
One thing that has remained constant over this time is that developers love the simplicity of Docker and Compose. This led us to ask: why should developers have to choose between simple and powerful? Today, I am excited to finally be able to talk about the result of what we have been working on for over a year: giving developers power and simplicity from the desktop to the cloud using Compose. Docker is expanding our strategic partnership with Amazon and integrating the Docker experience you already know and love with Amazon Elastic Container Service (ECS) with AWS Fargate. Deploying from Docker straight to AWS has never been easier.
Today this functionality is being made available as a beta UX that uses docker ecs to drive commands. Later this year, when the functionality becomes generally available, it will become part of our new Docker Contexts and will allow you to simply run docker run and docker compose.
To learn more about what we are building together with Amazon, read Carmen Puccio’s post over at the Amazon Container blog. After that, register for the Amazon Cloud Container Conference and come see Carmen’s and my session at 3:45 PM Pacific.
We are extremely excited for you to try out the public beta starting right now. To get started, sign up for a Docker ID (or use your existing one) and download the latest version of Docker Desktop Edge, 2.3.3.0, which includes the new experience. You can also head straight over to the GitHub repository, which includes the demo from the conference session so you can follow along. We are excited for you to try it out, report issues, and let us know what other features you would like to see on the Roadmap!
The post From Docker Straight to AWS appeared first on Docker Blog.
Source: https://blog.docker.com/feed/