Introducing Amazon Elastic Inference

Amazon Elastic Inference lets you attach just the right amount of GPU acceleration to any Amazon EC2 or Amazon SageMaker instance, reducing the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon.
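As a rough illustration (not part of the announcement), here is a minimal boto3 sketch of attaching an accelerator when launching an EC2 instance; the AMI ID is a hypothetical placeholder, and the additional setup Elastic Inference requires (a VPC endpoint, security groups) is omitted:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance with an Elastic Inference accelerator attached.
# The AMI ID below is a hypothetical placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # e.g. a Deep Learning AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],
)
print(response["Instances"][0]["InstanceId"])
```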
Source: aws.amazon.com

Introducing AWS Cloud Map

AWS Cloud Map is a service discovery service for all your cloud resources. With Cloud Map, you can define custom names for your application resources, and the locations of these dynamically changing resources are kept up to date. This increases the availability of your application, because your web service always discovers the most up-to-date locations of its resources.
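As an illustrative sketch only (the namespace and service names below are hypothetical), a client could resolve the current locations of a resource through the service discovery API via boto3:

```python
import boto3

sd = boto3.client("servicediscovery", region_name="us-east-1")

# Look up the healthy instances currently registered under a custom name.
result = sd.discover_instances(
    NamespaceName="example.internal",  # hypothetical namespace
    ServiceName="payments",            # hypothetical service
    HealthStatus="HEALTHY",
)
for instance in result["Instances"]:
    print(instance["InstanceId"], instance.get("Attributes", {}))
```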
Source: aws.amazon.com

The AWS Serverless Application Model now also supports nested applications using the AWS Serverless Application Repository

You can now compose and deploy new serverless architectures from nested applications, supported by the AWS Serverless Application Model (SAM), using the AWS Serverless Application Repository. Nested applications are loosely coupled components of a serverless architecture.
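To make the idea concrete, here is a hedged sketch of a parent SAM template, written in its JSON form as a Python dict, that nests an application published in the AWS Serverless Application Repository via the AWS::Serverless::Application resource type. The application ARN and version are hypothetical placeholders:

```python
import json

# A parent SAM template (JSON form) nesting a Serverless Application
# Repository app. ApplicationId and SemanticVersion are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "SharedComponent": {
            "Type": "AWS::Serverless::Application",
            "Properties": {
                "Location": {
                    "ApplicationId": (
                        "arn:aws:serverlessrepo:us-east-1:"
                        "123456789012:applications/example-app"
                    ),
                    "SemanticVersion": "1.0.0",
                }
            },
        }
    },
}

print(json.dumps(template, indent=2))
```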
Source: aws.amazon.com

New: Amazon Managed Streaming for Kafka (Amazon MSK) in public preview

Today we announced Amazon Managed Streaming for Kafka (Amazon MSK) in public preview. Amazon MSK is a fully managed, highly available, and secure service that makes it easy for developers and DevOps managers to run applications on Apache Kafka in the AWS Cloud without needing expertise in managing Apache Kafka infrastructure. Amazon MSK operates highly available Apache Kafka clusters, provides security features out of the box, is fully compatible with open-source versions of Apache Kafka (so existing applications can be migrated without code changes), and offers AWS integrations that accelerate application development.
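Because MSK runs stock Apache Kafka, any standard client should work unchanged. As a hedged sketch (the cluster ARN and topic name are hypothetical, and the cluster is assumed to already exist), a producer could look up the broker string via boto3 and publish with kafka-python:

```python
import boto3
from kafka import KafkaProducer  # kafka-python

msk = boto3.client("kafka", region_name="us-east-1")

# Fetch the broker connection string for an existing cluster.
brokers = msk.get_bootstrap_brokers(
    ClusterArn="arn:aws:kafka:us-east-1:123456789012:cluster/example/abcd1234"
)["BootstrapBrokerString"]

# MSK is wire-compatible with Apache Kafka, so a standard client works.
producer = KafkaProducer(bootstrap_servers=brokers.split(","))
producer.send("example-topic", b"hello from MSK")
producer.flush()
```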
Source: aws.amazon.com

Application Load Balancer can now invoke Lambda functions to serve HTTP(S) requests

Application Load Balancers now support invoking Lambda functions to serve HTTP(S) requests. This lets users access serverless applications from any HTTP client, including web browsers. With Application Load Balancer's support for content-based routing rules, you can also route requests to different Lambda functions based on the request content. Previously, you could only use EC2 instances, containers, and on-premises servers as targets for Application Load Balancers, and you needed other proxy solutions to invoke Lambda functions over HTTP(S). You can now use an Application Load Balancer as a common HTTP endpoint to simplify operations and monitoring for applications that use both servers and serverless computing.
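For illustration, here is a minimal sketch of a Python Lambda handler serving HTTP through an ALB; the load balancer passes the HTTP request as the event and expects a response object with statusCode, headers, and body:

```python
# Minimal handler for a Lambda function registered as an ALB target.
def lambda_handler(event, context):
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from Lambda via ALB, you requested {path}",
    }
```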
You can register Lambda functions as targets for an Application Load Balancer using the Elastic Load Balancing console, the AWS SDK, or the AWS Command Line Interface (CLI). You can also configure an Application Load Balancer as a trigger for Lambda functions in just a few clicks from the AWS Lambda console.
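As a hedged boto3 sketch of the SDK route (all ARNs are hypothetical placeholders), registration takes three steps: create a target group of type lambda, grant Elastic Load Balancing permission to invoke the function, then register the function as the target:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
lam = boto3.client("lambda", region_name="us-east-1")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:alb-handler"

# 1. Create a target group whose target type is "lambda".
tg = elbv2.create_target_group(Name="lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 2. Allow Elastic Load Balancing to invoke the function.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

# 3. Register the function with the target group.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": function_arn}],
)
```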
The usual charges for AWS Lambda and Application Load Balancer apply. For more information, see the Application Load Balancer pricing page.
Lambda invocation support via Application Load Balancer is available for existing and new Application Load Balancers in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions.
Learn more in the demo, the blog post, and the Application Load Balancer documentation.
Source: aws.amazon.com

Native Python support on Azure App Service on Linux: new public preview!

We’re excited to officially announce the public preview of the built-in Python images for Azure App Service on Linux, a much-requested feature from our customers. Developers can get started today deploying Python web apps to the cloud, in a fully managed environment running on top of the Linux operating system.

This new preview runtime adds to the growing list of stacks supported by Azure App Service on Linux, which also includes Node.js, .NET Core, PHP, Java SE, Tomcat, and Ruby. With the choice of Python 3.7, 3.6, and soon 2.7, developers can get started quickly and deploy Python applications to the cloud, including Django and Flask apps, and leverage the full suite of Azure App Service on Linux features. This includes support for deployments via “git push” and the ability to deploy and debug live applications using Visual Studio Code (our free and open-source editor for macOS, Linux, and Windows).

When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt file. Additionally, it detects common Flask and Django application structures, hosts them using gunicorn, and includes the necessary modules for connecting to Azure Database for PostgreSQL.
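As a minimal sketch of the kind of app the platform detects (the file names follow common convention and are not mandated by the announcement), a requirements.txt listing Flask plus a single-file application is enough:

```python
# app.py -- a minimal Flask app; App Service on Linux installs the
# dependencies from requirements.txt and serves the app via gunicorn.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Python on Azure App Service on Linux!"

if __name__ == "__main__":
    app.run()
```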

While the underlying infrastructure of Azure App Service on Linux has been generally available (GA) for over a year, we are currently releasing the Python runtime in public preview, with GA expected in a few months. In addition to using the built-in images, Python developers can deploy their applications using a custom Docker container on Web Apps for Containers.

Learn more about Python on Azure and Visual Studio Code

Carlton Gibson, Django Software Foundation fellow and core maintainer of the Django project, recently joined our developer advocate Nina Zakharenko for a video series on using Python and Django with Visual Studio Code, Azure, and Azure DevOps.

The full walkthrough is available on the Microsoft + Open Source blog.

Next steps

Try out Python on App Service on Linux using the Azure CLI.
Get started using Visual Studio Code.

Let us know your feedback!
Source: Azure

Announcing Cloud DNS forwarding: Unifying hybrid cloud naming

A key part of a successful hybrid cloud strategy is making sure your resources can find each other via DNS, whether they are in the cloud or on-prem. Rather than create separate islands of DNS namespaces, we’ve added new forwarding capability to Cloud DNS, our managed DNS service, letting you easily link your cloud and on-prem environments, so you can use the same DNS service for all your workloads and resources.

Built with Cloud DNS’s new network policy capability, DNS forwarding allows you to create bi-directional forwarding zones between your on-prem name servers and Google Cloud Platform’s internal name servers. Currently in beta, DNS forwarding provides the following features and benefits:

- Outbound forwarding lets your GCP resources use your existing authoritative DNS servers on-prem, including BIND, Active Directory, etc.
- Inbound forwarding allows on-prem (or other cloud) resources to resolve names via Cloud DNS.
- Intelligent Google caching improves the performance of your queries; cached queries do not travel over your connectivity links.
- DNS forwarding is a fully managed service, with no need to run additional software or your own compute and support resources.

In a nutshell, DNS forwarding provides a first-class GCP managed service that connects your cloud and on-prem DNS environments, providing unified naming for your workloads and resources. Further, you can use DNS forwarding for inbound traffic, outbound traffic, or both, to support existing or future network architecture needs.

DNS is a critical component of tying hybrid cloud architectures together. DNS forwarding, in combination with GCP connectivity solutions such as Cloud Interconnect and Cloud VPN, creates a seamless and secure network environment between your GCP cloud and on-prem data centers. To learn more, check out the DNS forwarding documentation and get started using DNS forwarding today.
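The console and gcloud are the usual interfaces, but as an illustration only, here is a hedged sketch using the Google API Python client against the Cloud DNS beta API to create an outbound forwarding zone. The project, network, zone name, and target IP are hypothetical, and the field names should be checked against the current API reference:

```python
from googleapiclient.discovery import build  # google-api-python-client

# Hedged sketch: a private managed zone that forwards queries for
# corp.example.com. to an on-prem name server.
dns = build("dns", "v1beta2")

zone_body = {
    "name": "onprem-forwarding",
    "dnsName": "corp.example.com.",
    "description": "Forward on-prem names to existing DNS servers",
    "visibility": "private",
    "privateVisibilityConfig": {
        "networks": [
            {"networkUrl": "projects/my-project/global/networks/my-vpc"}
        ]
    },
    "forwardingConfig": {
        "targetNameServers": [{"ipv4Address": "10.0.0.53"}]
    },
}

request = dns.managedZones().create(project="my-project", body=zone_body)
print(request.execute())
```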
Source: Google Cloud Platform

Accelerate your app delivery with Kubernetes and Istio on GKE

It’s no wonder so many organizations have moved all or part of their IT to the cloud; it offers a range of powerful benefits. However, making the jump is often easier said than done. Many organizations have a significant on-premises IT footprint, aren’t quite cloud-ready, or are constrained by regulations or by the lack of a consistent security and operating model across on-premises environments and the cloud.

We are dedicated to helping you modernize your existing on-premises IT and move to the cloud at a pace that works for you. To do that, we are leading the charge on a number of open-source technologies for containers and microservices-based architectures. Let’s take a look at some of these and how they can help your organization prepare for a successful journey to the cloud.

Toward an open cloud stack

At Google Cloud Next ’18, we announced Cloud Services Platform, a fully managed solution based on Google open-source technologies. With Cloud Services Platform, you have the tools to transform your IT operations and build applications for today and the future, using containerized infrastructure and a microservices-based application architecture.

Cloud Services Platform combines Kubernetes for container orchestration with Istio, the service management platform, helping you implement infrastructure, security, and operations best practices. The goal is to bring you increased velocity and reliability, as well as to help manage governance at the scale you need. Today, we are taking another step towards this vision with Istio on GKE.

Think services first with Istio

We truly believe that Istio will play a key role in helping you make the most of your microservices. One way Istio does this is by providing improved visibility and security, making it easier to work with containerized workloads. With Istio on GKE, we are the first major cloud provider to offer direct integration with a Kubernetes service and simplified lifecycle management for your containers.

Istio is a service mesh that lets you manage and visualize your applications as services, rather than as individual infrastructure components. It collects logs, traces, and telemetry, which you can use to set and enforce policies on your services. Istio also lets you add security by encrypting network traffic, all while layering transparently onto any existing distributed application; you don’t need to embed any client libraries in your code.

Istio securely authenticates and connects your services to one another. By transparently adding mTLS to your service communication, it ensures all information is encrypted in transit. Istio provides a service identity for each service, allowing you to create service-level policies that are enforced for each individual application transaction, while providing non-replayable identity protection.

Out of the gate, you can also benefit from Istio’s visibility features thanks to its integration with Stackdriver, GCP’s native monitoring and logging suite. This integration sends service metrics, logs, and traces to Stackdriver, letting you monitor your golden signals (traffic, error rates, and latencies) for every service running in GKE.

Istio 1.0 was a key step toward helping you manage your services in a hybrid world, where multiple workloads run in different environments: clouds and on-premises, in containerized microservices or monolithic virtual machines.
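As a concrete illustration of such service-level policies (a hedged sketch, not an excerpt from the product documentation; the service and namespace names are hypothetical), mutual TLS for traffic to one service can be expressed as an Istio DestinationRule, created here with the Kubernetes Python client:

```python
from kubernetes import client, config

# Hedged sketch: enable Istio mutual TLS for traffic to one service by
# creating a DestinationRule custom resource (Istio networking v1alpha3).
config.load_kube_config()

destination_rule = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "DestinationRule",
    "metadata": {"name": "payments-mtls", "namespace": "default"},
    "spec": {
        "host": "payments.default.svc.cluster.local",
        "trafficPolicy": {"tls": {"mode": "ISTIO_MUTUAL"}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="destinationrules",
    body=destination_rule,
)
```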
With Istio on GKE, you get granular visibility, security, and resilience for your containerized applications, with a dead-simple add-on that works out of the box with all your existing applications.

Using Istio on GKE

The service-level view and security that Istio delivers are especially important for distributed applications deployed as containerized microservices, and Istio on GKE lets you deploy Istio to your Kubernetes clusters with the click of a button.

Istio on GKE works with both new and existing container deployments. It lets you incrementally roll out features, such as Istio security, bringing the benefits of Istio to your existing deployments. It also simplifies Istio lifecycle management by automatically upgrading your Istio deployments when newer versions become available.

Today’s beta availability of Istio on GKE is just the latest of many advancements we have made to make GKE the ideal choice for enterprises. Try Istio on GKE today by visiting the Google Cloud Platform console. To learn more, please visit cloud.google.com/istio or the Istio on GKE documentation.

Enhancing GKE networking

Earlier this year we announced many new networking features for GKE, including VPC-native clusters, Shared VPC, container-native load balancing, and container-native network services for applications running on GKE and self-managed Kubernetes in Google Cloud.

- With VPC-native clusters, GKE natively supports many VPC features, such as scale enhancements, IP management, security checks, and hybrid connectivity.
- Shared VPC lets you delegate administrative responsibilities to cluster admins while ensuring your critical network resources are managed by network admins.
- Container-native load balancing lets you program load balancers with containers directly as endpoints, for more optimal load balancing.
- Network services let you use Cloud Armor, Cloud CDN, and Identity-Aware Proxy natively with your container workloads.

We also announced new features to help simplify the configuration of containerized deployments, with backend and frontend config enhancements. These improvements make everything easier, from identity and access management for network resources to better controls for CDN, Cloud Armor, and load balancing, for easier application delivery.

Improving GKE security

GCP helps you secure your container environment at each stage of the build-and-deploy lifecycle with software supply chain and runtime security tools. These include integrations with tools from multiple security partners, all on top of Google’s security-focused infrastructure and security best practices. New features like node auto-upgrade and private clusters increase the security options available to GKE users. You can read more about new security features in GKE in “Exploring Container Security: This year it’s about security.”

Delivering Kubernetes apps via GCP Marketplace

Enterprises usually work with a number of partners within their IT environments, whether in the cloud or on-premises. Six months ago, we introduced Kubernetes applications delivered through GCP Marketplace. Kubernetes apps offer more than just a container image; they are production-ready solutions that are integrated with GKE for simple click-to-deploy launches. Once deployed to GKE, Kubernetes apps are managed as full applications, simplifying resource management. You can also deploy Kubernetes apps to non-GKE Kubernetes clusters, whether they’re on-premises or in the cloud, for quick deployment that’s billed alongside other GCP spend.

With Kubernetes, your cloud, your way

If you use containers and Kubernetes, you already know how they can optimize infrastructure resources, reduce operational overhead, and improve application portability. But by standardizing on Kubernetes, you’ve also laid the foundation for improved service management and security, as well as simplified application procurement and deployment, across clouds and on-prem. Stay tuned in the coming months for more about Kubernetes, microservices, and Cloud Services Platform.
Source: Google Cloud Platform