Electric car: Across the Alps in the Chinese Nio

What does a digital in-car assistant do when it only understands Chinese? It signals goodwill toward the Western occupant by cheerfully shaking rattles along to the music. Despite the communication difficulties, driving the Nio electric car across the Alps is fun. A hands-on test by Dirk Kunde (Electric car, Technology)
Source: Golem

Azure Marketplace new offers – Volume 42

We continue to expand the Azure Marketplace ecosystem. For this volume, 86 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications

360°VR Museum: The 360°VR Museum is a virtual exhibition platform that allows users to view HD 360-degree re-creations of local and international exhibitions where visitors can move around freely using a mouse or touch screen input. This application is available only in Korean.

Apache Airflow Helm Chart: Apache Airflow is a tool to express and execute workflows as directed acyclic graphs. It includes utilities to schedule tasks, monitor task progress, and handle task dependencies.
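
As a rough illustration of the DAG model described above, a minimal Airflow workflow definition might look like this sketch (the task names and schedule are invented, and exact import paths vary across Airflow versions):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.x import path

# A workflow is expressed as a directed acyclic graph (DAG) of tasks.
dag = DAG(
    dag_id="example_pipeline",          # hypothetical pipeline name
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",         # the scheduler triggers one run per day
)

extract = BashOperator(task_id="extract", bash_command="echo extract", dag=dag)
load = BashOperator(task_id="load", bash_command="echo load", dag=dag)

# Declare the dependency: "load" runs only after "extract" succeeds.
extract >> load
```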

Apache Superset (Ubuntu): Websoft9 Superset stack is a preconfigured, ready-to-run image for running the Apache Superset data exploration and visualization web application on Azure.

Ataccama ONE: Data Quality Management: Employ smart, automated metadata discovery algorithms to know the state of your data quality; empower data users to make smarter, more informed decisions; and prevent costly mistakes with Ataccama ONE.

Avid Media Composer Azure Test Drive: Experience editing in the cloud with Avid Media Composer on Azure. This Test Drive includes one NV12 virtual machine with Avid Media Composer 2018.12, sample media, and Teradici Cloud Access software installed.

CallidusCloud Workflow: CallidusCloud Workflow includes everything you need to organize, automate, execute, and analyze business processes to connect people, data, and daily activities.

CentOS 6.10: This secure, cost-efficient, and quick to deploy distribution of Linux is based on CentOS and provided by Northbridge Secure. Enjoy the power of Microsoft Azure from any device in a matter of hours.

Citrix ADC 13.0: Providing operational consistency and a smooth user experience, Citrix ADC is an enterprise-grade application delivery controller that delivers your applications quickly, reliably, and securely – with deployment and pricing flexibility to meet your unique needs.

Citynet: Citynet is a monthly subscription-based SaaS application that enables cities to upload their unstructured city council data to Azure, where it's automatically and semantically indexed and made available for natural language querying.

cleverEAI by Sunato: cleverEAI monitors all BizTalk integration processes. View your workflows in real time, analyze, and reprocess failed instances immediately. This VM image contains a complete BizTalk environment, configured automatically by the cleverEAI installation package.

CMFlex: The CMFlex SaaS solution can be operated in different browsers and devices via the web, allowing you to manage your business from anywhere with accurate, real-time information. This application is available only in Portuguese.

Compliant FileVision: Compliant FileVision is a policy management solution that empowers you to implement consistent, efficient, and sustainable processes for managing the lifecycle of corporate policies and standards, incidents, service improvement requests, and procedures.

Data Protector: Micro Focus Data Protector is an enterprise-grade backup and DR solution for large, complex, heterogeneous IT environments. Built on a scalable architecture that combines security and analytics, it enables users to meet continuity needs reliably and cost-effectively.

FileMage Gateway: FileMage Gateway is a secure cloud file transfer solution that seamlessly connects legacy SFTP, FTPS, and FTP protocols to Azure Blob Storage.

Forms Connect: Forms Connect enables you to digitize paper processes by capturing images and data and storing them in Office 365. This solution is ideal for HR and finance teams looking to solve the challenges of capturing information from the field and moving it to Azure.

Global Product Authentication Service: Global Product Authentication Service is an innovative cloud‐based brand protection, track-and-trace, and consumer engagement service that drives business value by addressing challenges organizations face when operating in global markets.

Graylog (Ubuntu): Websoft9 Graylog stack is a preconfigured, ready-to-run image for running log systems on Azure. Graylog captures, stores, and enables real-time analysis of terabytes of machine data.

HealthCloud: The HealthCloud platform enables organizations and partners to easily develop highly interoperable solutions across the healthcare value chain. Its API-driven methodology delivers consolidated health data and patient-centric records from a wide range of sources.

Hibun Information Leak Prevention Solution: Protect confidential data from various information leaks, including theft, loss of device, insider fraud, and information theft by targeted cyberattacks.

Hyperlex: Hyperlex is a Software-as-a-Service solution for contract management and analysis with AI that identifies legal documents and their important information for retrieval, saving your organization considerable time and resources. This application is available only in French.

Hysolate for Privileged User Devices: Privileged Access Workstations (PAWs) provide a dedicated operating system for sensitive tasks that is protected from attacks and threat vectors. Hysolate makes PAWs practical to adopt at scale without degrading productivity.

Hystax Backup and Disaster Recovery to Azure: Hystax Backup and Disaster Recovery to Azure delivers consistent replication, storage-agnostic snapshots, and orchestration functionality with enterprise-grade recovery point objective and recovery time objective.

Imago.ai Intelligent Chatbot: Intelligent Chatbot on Microsoft Azure includes an interactive chatbot interface allowing clients to plug into any digital media as well as a dashboard that combines user behaviors and history to provide business insights.

InnGage Citypoints: InnoWave’s InnGage Citypoints gamification application on Microsoft Azure recognizes and rewards citizens who adopt good citizenship practices.

Intellicus BI Server V18.1 (25 Users – Linux): Intellicus BI Server on Microsoft Azure is an end-to-end self-service business intelligence platform that offers advanced reporting and analytics capabilities, a semantic layer, and integrated ETL capabilities.

Intellicus BI Server V18.1 (50 Users – Linux): Intellicus BI Server on Microsoft Azure is an end-to-end self-service business intelligence platform that offers advanced reporting and analytics capabilities, a semantic layer, and integrated ETL capabilities.

Intellicus BI Server V18.1 (100 Users – Linux): Intellicus BI Server on Microsoft Azure is an end-to-end self-service business intelligence platform that offers advanced reporting and analytics capabilities, a semantic layer, and integrated ETL capabilities.

Jamcracker CSB Service Provider Version 7.0.3: This solution automates order management, provisioning, and billing and can be easily integrated to support enterprise ITSM, billing, ERP, and identity systems including Active Directory and Active Directory Federation Services.

Jenkins (Ubuntu): Jenkins is an automation server with a broad plugin ecosystem for supporting practically every tool as a part of the delivery pipeline. Websoft9 Jenkins stack is a preconfigured, ready-to-run image for running Jenkins on Azure.

Jenkins on Windows Server 2016: Jenkins is a leading open source CI/CD server that enables the automation of building, testing, and shipping software projects. Jenkins on Windows Server 2016 includes all plugins needed to deploy any service to Azure.

KNIME Server Small: KNIME Server, KNIME's flagship collaboration product, offers shared repositories, advanced access management, flexible execution, web enablement, and commercial support. Share data, nodes, metanodes, and workflows throughout your company.

Knowage Community Edition (Ubuntu): Websoft9 Knowage is a preconfigured, ready-to-run image for deploying Knowage on Azure. Knowage Community Edition includes all analytical capabilities and guarantees a full end user experience.

Lustre on Azure: Lustre on Azure is a scalable, parallel file system built for high performance computing (HPC). It is ideally suited for dynamic, pay-as-you-go applications from rapid simulation and prototyping to peak HPC workloads.

Machine Translation: Tilde Machine Translation offers custom systems to fit each client's needs, delivering human-like translations that help save time and money, facilitate processes, and maximize sales.

NGINX Plus Enterprise Edition: NGINX Plus brings enterprise-ready features such as application load balancing, monitoring, and advanced management to your Azure application stack, helping you deliver applications with the performance, security, and scale of Azure.

Odoo Community Edition (Ubuntu): Websoft9 Odoo stack is a preconfigured, ready-to-run image for Odoo on Azure. The Odoo suite of web-based, open source business apps includes CRM, website builder, e-commerce, warehouse management, project management, and more.

OMNIA Low-code Platform: Model your applications using a business language based on economic theory, greatly reducing your product's development cycles from conception to deployment.

Omnia Retail: Omnia is a leading SaaS solution for integrated dynamic pricing and online marketing automation. It helps retailers regain control, save time, and drive profitable growth.

OXID eShop e-commerce platform: ESYON's OXID SaaS solution on Azure offers powerful, modern shop software with many out-of-the-box functions for B2B, B2C, and internationalization.

Package Be Cloud RGDP Azure – PIA: Designed to facilitate your compliance process, Be Cloud's tool can be adapted to your specific needs or to your business sector. This application is available only in French.

POINTR – Customer & Marketing Analytics: POINTR is a customer and marketing analytics application built using Microsoft Azure and Power BI. It delivers customer intelligence and actionable insights from personalized marketing campaigns via an intuitive interface.

Population Health Management Solution: BroadReach creates simple solutions to complex health challenges. By combining expert consulting and Vantage-powered technologies, BroadReach gives clients the innovative edge to transform health outcomes.

Portability: Onecub is a personal data portability tool for the GDPR right to portability (article 20), providing companies with an all-in-one service to offer controlled, innovative portability to their clients.

Postgres Pro Enterprise Database 11: Postgres Pro Standard Database comes with SQL and NoSQL support. Postgres Pro Enterprise Database builds on Postgres Pro Standard Database, adding features for working with large databases and processing high volumes of transactions.

Power BI voor Exact Online: Power BI for Exact Online is a powerful business analysis application configured and optimized for Exact Online's business administration and accounting environment. This application is available only in the Netherlands.

Power BI voor Twinfield: Power BI for Twinfield is a powerful business analysis application configured and optimized for Twinfield's business administration and accounting environment. This application is available only in the Netherlands.

Realtime Sales Radar: Track developments and sales figures of your online platforms in real time with the help of this HMS consulting service and data collection in the Azure cloud.

ReportServer on Ubuntu: Websoft9 offers a preconfigured and ready-to-run image for ReportServer, a modern and versatile open source business intelligence (OSBI) platform, on Azure.

SentryOne Test: SentryOne Test (formerly LegiTest) is a comprehensive, automated data testing framework that allows you to test all your data-centric applications in an easy-to-use platform.

Service Management Automation X: Micro Focus SMAX is an application suite for service and asset management, built from the ground up to include machine learning and analytics.

Snyk Cloud Security Platform: This Snyk solution lets developers securely use open source software while accelerating migration to Azure of micro-services and containerized and serverless workloads.

Social Intranet Analytics – with Netmind Core: Get a detailed overview of the use, acceptance, multilocation collaboration, and interactions on your social intranet with Netmind Core from Mindlab.

sospes: Sospes allows staff to report workplace incidents (injuries, property damage, environmental hazards, security threats) and generates management and regulatory reports.

StoreHippo: StoreHippo is a SaaS e-commerce platform used by customers across more than 15 countries and 35 business verticals. StoreHippo offers scalability and flexibility for next-gen businesses.

SyAudit for Medical Record Audits: This solution from SyTrue scans medical records and highlights key data by record type to let auditors quickly validate findings through a modernized workflow.

Tidal Migrations – Premium Insights for Database: Analyze your databases and uncover roadblocks to Azure cloud migration with this add-on to your Tidal Migrations subscription.

Trac – Issue Tracking System (Ubuntu): This stack from Websoft9 is a preconfigured image for Trac on Azure. Trac is an enhanced wiki and issue tracking system for software development projects.

Unsupervised Anomaly Detection Module: This IoT Edge Module (with Python) from BRFRame automatically categorizes dataset anomalies, eliminating manual work that can take time and lead to inaccuracies.

Video Inteligencia para Seguridad y Prevención: This video analytics solution acts as the brain of a security system, enabling decision-making in real time. This application is available only in Spanish.

VM Explorer: Micro Focus VM Explorer is an easy-to-use and reliable backup solution, offering fast VM and granular restore, replication, and verification of VMware vSphere and Microsoft Hyper-V environments.

winsafe: Winsafe from Nextronic is an IoT dashboard platform that can locate static or mobile end-devices positioned in outdoor or indoor areas without a dedicated infrastructure.

Consulting Services

Advanced DevOps Automation with CI/CD: 10-Day Imp.: Leveraging InCycle’s Azure DevOps Accelerators, InCycle cloud architects will ensure customers realize modern CI/CD pipelines, IT governance, and minimum time to production.

App Modernization Implementation – 3-Week Imp.: Based on InCycle’s proprietary Modern App Factory approach and Accelerators, InCycle’s Azure architects will analyze your environment, co-define your goals, and develop a cloud adoption strategy and roadmap.

Application Portfolio Assessment – Briefing: 1-day: HCL Technologies' free one-day briefing ensures customers understand HCL’s Cloud Assessment Framework and how it applies a proven assessment methodology for migration to Microsoft Azure.

Azure AI & Bots: 2-Hr Assessment: This Neudesic assessment will provide a recommendation on how Microsoft Azure can be used to meet a key business need with an AI-powered bot using Neudesic's agile, repeatable approach to accelerate delivery time and value.

Azure Architecture Assessment – 2-day workshop: In this assessment, Cloud Valley's cloud architects gather functional and operational requirements, see how they align with your current business goals, and propose a technological solution.

Azure back-up and DR workshop – 2.5 days: Acora's team will review your on-premises or cloud environment to provide a recommended approach for migrating to Azure Backup and Azure Site Recovery.

Azure Cloud Readiness: 2-Week Assessment: Emtec evaluates business processes and technology infrastructure to assess current investments and identify potential areas that are ripe for successful cloud migration and adoption.

Azure Management Services: 10-Wk Implementation: Catapult Systems' Azure Management Services allow users to continuously optimize their cloud environment. In this assessment, Catapult helps you pick the option that best fits the objectives for your cloud environment.

Azure Migration – 2 Day Assessment: This Third I assessment is driven by an in-depth review of your existing solution architecture to help identify a suitable modern data warehouse to match your solution needs.

Azure Migration: 2.5 day Workshop: Acora will review your technical capability and readiness for a migration to Azure and provide recommendations on the cost, resources, and time needed to move with minimal downtime.

Citrix Workspace on Azure: 5 Day Proof of Concept: Get a custom proof of concept of Citrix Cloud Workspace integrated with Microsoft Azure, along with design and cost estimates to enable your organization to move forward with the solution.

Cloud Foundation Assessment: 6 Wk Assessment: This Anglepoint assessment will help you optimize your environment before you migrate to the cloud to ensure the most cost-effective solution that maximizes throughput and availability.

Cloud Journey Assessment – 4 Weeks: In this four-week assessment, Dedalus will determine the Azure dependencies for each of your applications to prioritize which applications and systems are the best candidates for migration.

Cloud Migration – 8 week implementation: CloudOps will help develop an Azure migration and cloud-native strategy that meets your current workload and security requirements while enabling you to scale to the future needs of your business.

Cloud Optimized WAN Engagement: 4-day Assessment: Equinix will help develop a customized WAN strategy focusing on improved latency, performance, security, and flexibility while providing clear insights into your expected return on investment or total cost of ownership.

DataDebut Cloud Analytics: 5-Day Proof of Concept: This POC engagement helps boost your understanding of cloud concepts and offerings so that you can identify potential future value of enhancing your data platform, define your path to cloud-native data analytics, and more.

DataGuide Cloud Analytics Intro: 1-Day Assessment: Intended for solution architects, project and program managers, and key stakeholders, this free engagement gives your organization an overview of Azure data analytics and how they can greatly enhance your data estate.

DataVision Project Discovery: 5-Day Assessment: This Azure data project discovery provides in-depth analysis, design, and planning, enabling you to employ future-proof architectures, identify the best approach for the rationalization of existing data assets, and more.

Equinix Cloud Exchange: 2-day Implementation: Equinix's Cloud Enablement services make it easy to complete the setup and configuration necessary to activate your connection to the Azure cloud.

ExpressRoute Connectivity Strategy: 3-day Workshop: This Equinix workshop empowers customers to implement an Azure ExpressRoute connectivity strategy tailored for their specific needs and is a fast-track path to optimized Azure consumption.

Onboarding Services – USA: 4 weeks implementation: Anunta's Onboarding Services on Azure ensure end-to-end management of virtual desktop workload transition to the cloud, including implementation, Active Directory configuration, image creation, app configuration, and more.

Palo Alto Test Drive on Azure: 1/2 Day Workshop: See how easy it is to securely extend your corporate datacenter to Azure using Palo Alto Networks Next Generation VM-Series firewalls with security features to protect applications and data from threats.

Predica Azure Migration 5-Day Proof of Concept: In this cloud migration proof of concept, Predica will guide you through the process of workload migration, ensuring you get the most from your Microsoft Azure implementation.

Security & Compliance Assessment – 4 Wk Assessment: This Logicworks offering will help you assess your Azure environment against compliance frameworks and receive automated reporting, vulnerability scanning, and a remediation roadmap to help you improve security.

Spyglass/Azure Security – 10 Wk. Implementation: Catapult's Spyglass service jump-starts your cloud security by deploying Microsoft's security tools and leveraging security experts, best practices, and centralized security dashboards.

Source: Azure

Why you Have to Fail Fearlessly to Succeed: The Citizens Bank Story of Innovation with Docker

We had the chance recently to sit down with the Citizens Bank mortgage division and ask them how they’ve incorporated innovation into a regulated and traditional business that is still very much paper-based.
The most important lesson they’ve learned: you have to be willing to “fail fearlessly,” but to do that, you also have to minimize the consequences and cost of failure so you can constantly try new ideas. With Docker Enterprise, the team has been able to take ideas from concept to production in as little as a day.
Here’s what they told us. You can also catch the highlights in a two-minute video.

On focus: 
Matt Rider, CIO Mortgage Division: Our focus is changing the mortgage technology experience at the front end with the borrower and on the back end for the loan officers and the processors. How do we bring those two together? How do we reduce the aggravation that comes with obtaining a mortgage?
On founding an “innovation team” . . .
Matt: When I came here I recognized that we were never going to achieve our vision if we kept doing things the same way. We wanted to reduce the aggravation that comes with obtaining a mortgage. But you can’t change when you’re supporting what’s in front of you, dealing with production issues and how do we keep the lights on. You need a separate entity that’s going to look forward, has the funding they need, and most importantly is not afraid to fail. The innovation team was key to that.
Sharon Frazier, SVP Innovation: The innovation team was formed because we knew we wanted to disrupt ourselves. We knew we wanted to start greenfield. We created a cross-functional team because we were basically building a product to deliver to ourselves. In essence, we were acting like a startup.
On the importance of failing. . . 
Matt: You have to fail. You have to be able to take on new risks and try new ways of doing things, so how the organization and leadership reacts matters. You have to empower teams to fail and ask what we can do differently next time. 
Don Bauer, Senior DevOps Manager: Docker has allowed us to fail fearlessly. We can test new things easily and quickly and if they work, awesome. But if they don’t, we didn’t spend weeks or months on it. We might have spent a couple of hours or days.
On Docker . . .
Matt: We didn’t have a platform for innovation. And that’s when Docker came on our radar. We did our due diligence and our research and we realized that was the pivotal piece that was going to set us free.
Sharon: Docker is the building block for our new platform. It allows our developers to be self-sufficient. When they want to create a new service or new component within the application, they can self-serve through the delivery platform.
Don: The Docker platform has made it really easy for us to tackle every part of the pipeline all the way from our development environments through production. It has really helped us with being able to tackle new problems and challenges every day.
On results…
Mike Noe, Senior DevOps Engineer: In November 2016 we started our innovation team and had about a dozen containers and maybe three or four services running. Since then we’ve grown to over 3,000 containers across our entire platform and over 1,000 services. That includes our test and staging clusters as well as our ops and production cluster.
Don: The way we’ve developed applications has changed drastically with Docker. We’re no longer building big monoliths and trying to cram everything into one package that we’re going to have a hard time maintaining. We’re moving to single flow and building smaller, single-purpose services. We couldn’t do that without Docker and we couldn’t manage those services without Docker.
Matt: Docker has definitely helped us innovate. It has definitely helped us to accelerate ideas that we’ve had and move from idea to operate in a matter of hours in some instances. So Docker has given us a lot of capabilities there that will distinguish us in the mortgage industry.


To learn more:

Read about Citizens Bank and Docker Enterprise
Watch the Citizens Bank DockerCon presentation
Start a free trial of Docker Enterprise

Source: https://blog.docker.com/feed/

PyTorch on Azure: Full support for PyTorch 1.2

Congratulations to the PyTorch community on the release of PyTorch 1.2! Last fall, as part of our dedication to open source AI, we made PyTorch one of the primary, fully supported training frameworks on Azure. PyTorch is supported across many of our AI platform services and our developers participate in the PyTorch community, contributing key improvements to the code base. Today we would like to share the many ways you can use PyTorch 1.2 on Azure and highlight some of the contributions we’ve made to help customers take their PyTorch models from training to production.

PyTorch 1.2 on Azure

Getting started with PyTorch on Azure is easy and a great way to train and deploy your PyTorch models. We’ve integrated PyTorch 1.2 in the following Azure services so you can utilize the latest features:

Azure Machine Learning service – Azure Machine Learning streamlines the building, training, and deployment of machine learning models. Azure Machine Learning’s Python SDK has a dedicated PyTorch estimator that makes it easy to run PyTorch training scripts on any compute target you choose, whether it’s your local machine, a single virtual machine (VM) in Azure, or a GPU cluster in Azure. Learn how to train PyTorch deep learning models at scale with Azure Machine Learning; a rough usage sketch of the estimator follows this list.
Azure Notebooks – Azure Notebooks provides a free, cloud-hosted Jupyter notebook server with PyTorch 1.2 pre-installed. To learn more, check out the PyTorch tutorials and examples.
Data Science Virtual Machine – Data Science Virtual Machines are pre-configured with popular data science and deep learning tools, including PyTorch 1.2. You can choose a variety of machine types to host your Data Science Virtual Machine, including those with GPUs. To learn more, refer to the Data Science Virtual Machine documentation.
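
The sketch below shows roughly how the PyTorch estimator mentioned above can be used. The workspace config, compute target name, and training script are placeholders, and the estimator API has changed across SDK releases, so treat this as an assumption rather than a definitive recipe:

```python
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch  # PyTorch estimator from the Azure ML Python SDK

# Assumes a config.json for an existing workspace and a pre-created GPU cluster
# named "gpu-cluster"; both names are hypothetical.
ws = Workspace.from_config()

estimator = PyTorch(
    source_directory="./src",     # folder containing the training code
    entry_script="train.py",      # hypothetical training script
    compute_target="gpu-cluster",
    use_gpu=True,
    framework_version="1.2",
)

run = Experiment(ws, "pytorch-train").submit(estimator)
run.wait_for_completion(show_output=True)
```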

From PyTorch to production

PyTorch is a popular open-source deep learning framework for creating and training models. It is built to use the power of GPUs for faster training and is deeply integrated with Python, making it easy to get started. However, deploying trained models to production has historically been a pain point for customers. For production environments, using Python for the core computations may not be suitable due to performance and multi-threading requirements. To address this challenge, we collaborated with the PyTorch community to make it easier to use PyTorch trained models in production.
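
For context, a minimal eager-mode training step in PyTorch looks roughly like the following sketch; the toy model, random data, and hyperparameters are purely illustrative:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy regression model and random data, just to show the training pattern.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10, device=device)
y = torch.randn(64, 1, device=device)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # autograd computes gradients
    optimizer.step()   # gradient descent update
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```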

PyTorch’s JIT compiler transitions models from eager mode to graph mode using tracing, TorchScript, or a mix of both. We then recommend using PyTorch’s built-in support for ONNX export. ONNX stands for Open Neural Network Exchange and is an open standard format for representing machine learning models. ONNX models can be run with ONNX Runtime, an open source, cross-platform, and highly optimized inference engine for production-scale machine learning workloads. Written in C++, it runs on Linux, Windows, and Mac. Its small binary size makes it suitable for a range of target devices and environments, and it is accelerated on CPU, GPU, and VPU thanks to Intel and NVIDIA, who have integrated their accelerators with ONNX Runtime.
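
Putting those pieces together, a minimal sketch of the path from a trained PyTorch model to ONNX Runtime inference could look like this (the toy model and tensor names are placeholders; it assumes the torch and onnxruntime packages are installed):

```python
import numpy as np
import torch
import onnxruntime

# Any trained eager-mode model would do; a toy module stands in for it here.
model = torch.nn.Linear(10, 1).eval()
dummy_input = torch.randn(1, 10)

# Tracing captures a graph-mode (TorchScript) version of the model...
traced = torch.jit.trace(model, dummy_input)

# ...and the model can be exported to the ONNX format.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Load the exported file and run inference with ONNX Runtime (CPU by default).
session = onnxruntime.InferenceSession("model.onnx")
outputs = session.run(None, {"input": dummy_input.numpy().astype(np.float32)})
print(outputs[0])
```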

In PyTorch 1.2, we contributed enhanced ONNX export capabilities (illustrated in the sketch after this list):

Support for a wider range of PyTorch models, including object detection and segmentation models such as Mask R-CNN, Faster R-CNN, and SSD
Support for models that work on variable length inputs
Export models that can run on various versions of ONNX inference engines
Optimization of models with constant folding
End-to-end tutorial showing export of a PyTorch model to ONNX and running inference in ONNX Runtime
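
As a rough illustration of those export options (variable-length inputs via dynamic axes, target opset selection, and constant folding), an export call might look like this sketch; the model and axis names are invented:

```python
import torch

model = torch.nn.Linear(10, 1).eval()   # placeholder for a real trained model
dummy_input = torch.randn(1, 10)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    # Mark the batch dimension as dynamic so variable-length inputs work.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    # Target a specific ONNX opset so the file runs on a matching inference engine version.
    opset_version=10,
    # Fold constant subexpressions at export time to optimize the graph.
    do_constant_folding=True,
)
```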

You can deploy your own PyTorch models to various production environments with ONNX Runtime. Learn more at the links below:

Deploy to the cloud
Deploy to Windows apps
Deploy to Linux IoT ARM device

Next steps

We are very excited to see PyTorch continue to evolve and improve. We are proud of our support for and contributions to the PyTorch community. PyTorch 1.2 is now available on Azure—start your free trial today.

We look forward to hearing from you as you use PyTorch on Azure.
Source: Azure

Music to their ears: microservices on GKE, Preemptible VMs improved Musiio’s efficiency by 7000%

Editor’s note: Advanced AI startup Musiio, the first ever VC-funded music tech company in Singapore, needed more robust infrastructure for the data pipeline it uses to ingest and analyze new music. Moving to Google Kubernetes Engine gave them the reliability they needed; rearchitecting their application as a series of microservices running on Preemptible VMs gave them new levels of efficiency and helped to control their costs. Read on to hear how they did it.

At Musiio we’ve built an AI that ‘listens’ to music tracks to recognize thousands of characteristics and features from them. This allows us to create highly accurate tags, allow users to search based on musical features, and automatically create personalized playlists. We do this by indexing, classifying and ultimately making searchable new music as it gets created—to the tune of about 40,000 tracks each day for one major streaming provider.

But for this technology to work at scale, we first need to efficiently scan tens of millions of digital audio files, which represent terabytes upon terabytes of data. In Musiio’s early days, we built a container-based pipeline in the cloud orchestrated by Kubernetes, organized around a few relatively heavy services. This approach had multiple issues, including low throughput, poor reliability and high costs. Nor could we run our containers with a high node-CPU utilization for an extended period of time; the nodes would fail or time out and become unresponsive. That made it almost impossible to diagnose the problem or resume the task, so we’d have to restart the scans.

Figure 1: Our initial platform architecture.

As a part of reengineering our architecture, we decided to experiment with Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP). We quickly discovered some important advantages that allowed us to improve performance and better manage our costs:

GKE reliability: We were very impressed by GKE’s reliability, as we were able to run the nodes at >90% CPU load for hours without any issues. On our previous provider, the nodes could not take a high CPU load and would often become unreachable.

Preemptible VMs and GPUs: GKE supports both Preemptible VMs and GPUs on preemptible instances. Preemptible VMs only last up to 24 hours but in exchange are up to 80% cheaper than regular compute instances; attached GPUs are also discounted. They can be reclaimed by GCP at any time during these 24 hours (along with any attached GPUs). However, reclaimed VMs do not disappear without warning. GCP sends a signal 30 seconds in advance, so your code has time to react.

We wanted to take advantage of GKE’s improved performance and reliability, plus lower costs with preemptible resources. To do so, though, we needed to implement some simple changes to our architecture.

Building a microservices-based pipeline

To start, we redesigned our architecture to use lightweight microservices, and to follow one of the most important principles of software engineering: keep it simple. Our goal was that no single step in our pipeline would take more than 15 seconds, and that we could automatically resume any job wherever it left off.
To achieve this we mainly relied on three GCP services:

Google Cloud Pub/Sub to manage the task queue,
Google Cloud Storage to store the temporary intermediate results, taking advantage of its object lifecycle management to do automatic cleanup, and
GKE with preemptible nodes to run the code.

Specifically, the new processing pipeline now consists of the following steps:

1. New tasks are added through an exposed API endpoint by the clients.
2. The task is published to Cloud Pub/Sub and attached data is passed to a Cloud Storage bucket.
3. The services pull new tasks from the queue and report success status.
4. The final output is stored in a database and all intermediate data is discarded.

Figure 2: Our new improved architecture.

While there are more components in our new architecture, they are all much less complex. Communication is done through a queue where each step of the pipeline reports its success status. Each sub-step takes less than 10 seconds and can easily and quickly resume from the previous state with no data loss.

How do Preemptible VMs fit in this picture?

Using preemptible resources might seem like an odd choice for a mission-critical service, but because of our microservices design, we were able to use Preemptible VMs and GPUs without losing data or having to write elaborate retry code. Using Cloud Pub/Sub (see step 2 above) allows us to store the state of the job in the queue itself. If a service is notified that a node has been preempted, it finishes the current task (which, by design, is always shorter than the 30-second notification time), and simply stops pulling new tasks. Individual services don’t have to do anything else to manage potential interruptions. When the node is available again, services begin pulling tasks from the queue again, starting where they left off.

This new design means that preemptible nodes can be added, taken away, or exchanged for regular nodes without causing any noticeable interruption.

GKE’s Cluster Autoscaler also works very well with preemptible instances. By combining the autoscaling features (which automatically replace nodes that have been reclaimed) with node labels, we were able to achieve an architecture with >99.9% availability that runs primarily on preemptible nodes.

Finally…

We did all this over the course of a month—one week for design, and three weeks for the implementation. Was it worth all this effort? Yes! With these changes, we increased our throughput from 100,000 to 7 million tracks per week—and at the same cost as before! This is a 7000% increase (!) in efficiency, and was a crucial step in making our business profitable.

Our goal as a company is to be able to transform the way the music industry handles data and volume and make it efficient. With nearly 15 million songs being added to the global pool each year, access and accessibility are the new trend. Thanks to our new microservices architecture and the speed and reliability of Google Cloud, we are on our way to making this a reality.

Learn more about GKE on the Google Cloud Platform website.
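
As a rough sketch of the preemption-aware worker pattern described above: the project and subscription names, the placeholder processing step, and the assumption that a preemption notice reaches the pod as SIGTERM (e.g. via a node termination handler) are all illustrative, not Musiio's actual code, and the Pub/Sub client API varies slightly between library versions:

```python
import signal
import time

from google.cloud import pubsub_v1

PROJECT = "my-project"            # hypothetical project ID
SUBSCRIPTION = "scan-tasks-sub"   # hypothetical subscription name

shutting_down = False

def handle_termination(signum, frame):
    # When the node is preempted, a termination signal can be forwarded to the
    # pod. Finish the current short task, then stop pulling new ones.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_termination)

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def process(data: bytes) -> None:
    time.sleep(1)  # placeholder for one short (<15 s) pipeline step

while not shutting_down:
    response = subscriber.pull(request={"subscription": sub_path, "max_messages": 1})
    for received in response.received_messages:
        process(received.message.data)
        # Ack only after the step succeeds; an unacked task is redelivered
        # if the node disappears mid-flight.
        subscriber.acknowledge(
            request={"subscription": sub_path, "ack_ids": [received.ack_id]}
        )
```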
Source: Google Cloud Platform

Email client: Mozilla releases Thunderbird 68.0

Thunderbird 68.0 should especially please admins: the software is said to be easier to set up in large environments and supports multiple language packs. It is also now possible to mark the emails of all accounts as read. However, some add-ons no longer work for the time being. (Thunderbird, Email)
Source: Golem