Starting this summer: Satellite app brings eSIM as an in-app purchase
Initially only abroad, and later in Germany as well: the Satellite app has mastered the eSIM, for both Android and iOS. But not yet for all smartphones. (Sipgate, Telekom)
Quelle: Golem
Cadillac plans to unveil its first purely electric vehicle in April 2020. It will be an SUV. (General Motors, Technology)
Quelle: Golem
Panasonic has developed an electronic camera viewfinder that lets color-blind people see the same way as people with normal vision. (Digital camera, Panasonic)
Quelle: Golem
South Korea is a digital powerhouse—a manufacturing giant with an emphasis on robotics and AI, a massive gaming market, and a leader in smartphone penetration. To better help our customers deliver digital services closer to this engaged market, we're happy to announce that our new Google Cloud Platform (GCP) region in Seoul is officially open for business.

Designed to support Korean customers, the Seoul region is our first GCP region in South Korea and eighth in Asia Pacific. With this region, Google Cloud now offers 21 regions and 64 zones across 16 countries worldwide.

A cloud made for Korea

The launch of our new Seoul region (asia-northeast3) brings lower-latency access to data and applications for both local and global companies doing business in South Korea. The new Seoul region comprises three zones from the start, enabling Google Cloud customers and partners to run high-availability workloads and store their data locally.

The Seoul region launches with our standard set of services, including Compute Engine, Google Kubernetes Engine, Bigtable, Spanner, and BigQuery. Hybrid cloud customers can seamlessly integrate new and existing deployments with help from our regional partner ecosystem, and via multiple Dedicated Interconnect locations. Visit our cloud locations page for a complete list of services available in the Seoul region.

What customers and partners are saying

The presence of the new Seoul region lets new and existing customers in South Korea leverage advanced Google Cloud technologies to drive innovation.

"Google Cloud's flexibility and extensibility help us provide various services more reliably and economically. And now with Seoul as a region, we can have an even bigger impact." – Soobaek Jang, VP, AI Server Development, Samsung Electronics

"As South Korea's largest gaming company, we're partnering with Google Cloud for game development, infrastructure management, and to infuse our operations with business intelligence. Google Cloud's region in Seoul reinforces its commitment to the region and we welcome the opportunities this initiative offers our business." – Chang-Whan Sul, CTO, Netmarble

"SK Telecom uses Google Cloud to establish a data pipeline to support data processing and modeling, and unlock the potential of AI and machine learning. We look forward to exploring the opportunities presented by the new region in Seoul." – Yoo-sun Jeong, Head of AI Product Engineering, SK Telecom

"We're providing unmanned consultation service in 53 contact centers worldwide using Dialogflow on Google Cloud, reducing consultation time per case and increasing customer satisfaction. We look forward to working with Google Cloud further with the opening of the Seoul region." – Hyekyung Pak, Principal, CS Information Strategy Team, LG Electronics

"We use Google Cloud AI and analytics to reduce dropout rates and increase advertising exposure for our flagship mobile puzzle game Anipang. The Google Cloud region in Seoul will provide new opportunities for developers in Korea to create and monetize games while supporting rapid growth in users." – Changmyoung Lee, CTO, SundayToz

"At Bespin Global, we hold 100+ Google Cloud certifications in Korea and China and are the first Asia-Pacific-headquartered partner to achieve the status of Premier Partner and Managed Services Provider with Google Cloud. This means the number of customers successfully introduced to Google Cloud through Bespin Global and the number of future customers that want to be introduced to Google Cloud have both increased. We expect even more activity with the launch of the Google Cloud Seoul region." – Hanjoo Lee, Co-founder and CEO, Bespin Global

What's next

2020 will be a tremendous year for Google Cloud as we continue to expand our global infrastructure. Visit our Seoul region page for more details about the region, and our cloud locations page for updates on the availability of additional services and regions.
Then, stay tuned as we launch more zones and regions throughout the year, including locations in Salt Lake City, Las Vegas, and Jakarta.
Quelle: Google Cloud Platform
AWS Firewall Manager now supports AWS CloudFormation, so customers can manage all Firewall Manager policy types and resources with CloudFormation stack templates. AWS Firewall Manager is a security management service that enables central configuration and management of firewall rules for your accounts and applications in AWS Organizations. With Firewall Manager, you can manage AWS WAF, AWS Shield Advanced, or VPC security groups across your entire AWS Organizations hierarchy. Firewall Manager ensures that all security rules are enforced consistently, even as new accounts or applications are created.
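As a rough illustration of what this looks like, here is a minimal template sketch declaring a Firewall Manager policy as a CloudFormation resource. The policy name and resource type are made up for the example; the exact properties required for each policy type are specified in the AWS::FMS::Policy resource reference.

```yaml
# Sketch only: an org-wide Shield Advanced policy managed via CloudFormation.
# PolicyName and ResourceType are illustrative values.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OrgShieldPolicy:
    Type: AWS::FMS::Policy
    Properties:
      PolicyName: org-wide-shield-advanced
      RemediationEnabled: true
      ExcludeResourceTags: false
      ResourceType: AWS::ElasticLoadBalancingV2::LoadBalancer
      SecurityServicePolicyData:
        Type: SHIELD_ADVANCED
```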
Quelle: aws.amazon.com
Amazon Rekognition is a deep-learning-based image and video analysis service that detects objects, people, text, and scenes, and facilitates content moderation by detecting unsafe content. Starting today, you can detect text in videos and receive the detection confidence, the bounding box position, and the timestamp for each text detection. In addition, text detection in both images and videos now offers convenient options to filter words by regions of interest (ROIs), word bounding box size, and word confidence score.
Quelle: aws.amazon.com
We launched Anthos to provide customers with a platform to deliver and manage applications across all types of environments and infrastructure—most commonly, hybrid and multi-cloud environments—leveraging containers and Kubernetes.

To date, we have seen an extremely enthusiastic response from customers who want to run key workloads on Anthos. Our partners are enabling customers to deliver solutions that leverage Anthos in new and exciting ways. This includes storage, which is a key consideration as organizations look to manage their data across hybrid or multi-cloud deployments in containerized environments.

Today, we're excited to announce a new qualification for partner storage solutions: Anthos Ready Storage. This qualification recognizes partner solutions that have met a core set of requirements to run optimally with Anthos running on-premises, and helps organizations select storage solutions that are deployed with Anthos.

The first set of partners to achieve the Anthos Ready Storage qualification for Anthos on-premises are Dell EMC, HP Enterprise, NetApp, Portworx, Pure Storage, and Robin.io. All Anthos Ready Storage partners have met multiple criteria, including:

Demonstrated core Kubernetes functionality, including dynamic provisioning of volumes via open and portable Kubernetes-native storage APIs.
A proven ability to automatically manage storage across cluster scale-up and scale-down scenarios.
A simplified deployment experience following Kubernetes practices.

"Speed is the new scale in the world upset by digital transformation; the complex reality is that data and resources live anywhere and everywhere," said Anthony Lye, senior vice president and general manager, Cloud Data Services at NetApp. "We're excited to expand our support for customers on Anthos in the hybrid multicloud as a part of the Anthos Ready Storage initiative. Together, Google Cloud's Anthos and NetApp Trident and Kubernetes-ready storage offer a proven solution that helps customers manage their data on public cloud, on premises and hybrid cloud environments."

"Speed to market is a key differentiator as companies develop next generation, cloud-native applications. The emergence of Kubernetes is driven by that need for agility," said Jay Snyder, SVP Global Alliances, Dell Technologies. "We're pleased to participate in this program, as Dell EMC PowerMax and VxFlex are ideal infrastructure options when paired with Google Cloud Anthos to deploy Kubernetes in multi-cloud environments."

"Businesses are moving rapidly to modernize their applications using container based architectures," said Omer Asad, VP and GM Primary Storage & Data Services at HP Enterprise. "We're excited to expand our work with Google Cloud to qualify our fully-managed, container-based storage solutions such as HPE Nimble Storage for the Anthos platform."

We're committed to meeting customers where they are, and providing them with the ability to run key workloads and applications in the environment best suited for their business. To learn more about the Anthos Ready Storage program, please visit here. To get started with Anthos, contact us.
Quelle: Google Cloud Platform
This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches, and numerous Pluralsight video training courses – including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.
Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.
There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes before you get too far. In this post, I’ll try and keep it simple. I’ll focus on three scenarios that Istio enables, and all you need to follow along is Docker Desktop.
Setup
Docker Desktop gives you a full Kubernetes environment on your laptop. Just install the Mac or Windows version – be sure to switch to Linux containers if you’re using Windows – then open the settings from the Docker whale icon, and select Enable Kubernetes in the Kubernetes section. You’ll also need to increase the amount of memory Docker can use, because Istio and the demo app use a fair bit – in the Resources section increase the memory slider to at least 6GB.
Now grab the sample code for this blog post, which is in my GitHub repo:
git clone https://github.com/sixeyed/istio-samples.git
cd istio-samples
The repo has a set of Kubernetes manifests that will deploy Istio and the demo app, which is a simple bookstore website (this is the Istio team’s demo app, but I use it in different ways so be sure to use my repo to follow along). Deploy everything using the Kubernetes control tool kubectl, which is installed as part of Docker Desktop:
kubectl apply -f ./setup/
You’ll see dozens of lines of output as Kubernetes creates all the Istio components along with the demo app – which will all be running in Docker containers. It will take a few minutes for all the images to download from Docker Hub, and you can check the status using kubectl:
# Istio – will have “1/1” in the “READY” column when fully running:
kubectl get deploy -n istio-system
# demo app – will have “2/2” in the “READY” column when fully running:
kubectl get pods
When all the bits are ready, browse to http://localhost/productpage and you’ll see this very simple demo app:
And you’re good to go. If you’re happy working with Kubernetes YAML files you can look at the deployment spec for the demo app, and you’ll see it’s all standard Kubernetes resources – services, service accounts and deployments. Istio is managing the communication for the app, but we haven’t deployed any Istio configurations, so it isn’t doing much yet.
The demo application is a distributed app. The homepage runs in one container and it consumes data from REST APIs running in other containers. The book details and book reviews you see on the page are fetched from other containers. Istio is managing the network traffic between those components, and it’s also managing the external traffic which comes into Kubernetes and on to the homepage.
We’ll use this demo app to explore the main features of Istio: traffic management, security and observability.
Managing Traffic – Canary Deployments with Istio
The homepage is kinda boring, so let’s liven it up with a new release. We want to do a staged release so we can check out how the update gets received, and Istio supports both blue-green and canary deployments. Canary deployments are generally more useful and that’s what we’ll use. We’ll have two versions of the home page running, and Istio will send a proportion of the traffic to version 1 and the remainder to version 2:
We’re using Istio for service discovery and routing here: all incoming traffic comes into Istio and we’re going to set rules for how it forwards that traffic to the product page component. We do that by deploying a VirtualService, which is a custom Istio resource. That contains this routing rule for HTTP traffic:
gateways:
- bookinfo-gateway
http:
- route:
  - destination:
      host: productpage
      subset: v1
      port:
        number: 9080
    weight: 70
  - destination:
      host: productpage
      subset: v2
      port:
        number: 9080
    weight: 30
There are a few moving pieces here:
The gateway is the Istio component which receives external traffic. The bookinfo-gateway object is configured to listen to all HTTP traffic, but gateways can be restricted to specific ports and host names.
The destination is the actual target where traffic will be routed (which can be different from the requested domain name). In this case, there are two subsets: v1, which will receive 70% of traffic, and v2, which receives 30%.
Those subsets are defined in a DestinationRule object, which uses Kubernetes labels to identify pods within a service. In this case the v1 subset finds pods with the label version=v1, and the v2 subset finds pods with the label version=v2.
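For context, the matching DestinationRule is roughly this shape – a sketch using the standard Istio subset syntax, where the labels assume the demo app's pods are tagged version=v1 and version=v2 as described above:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage    # the Kubernetes service name
  subsets:
  - name: v1           # referenced by the VirtualService route above
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```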
Sounds complicated, but all it’s really doing is defining the rules to shift traffic between different pods. Those definitions come in Kubernetes manifest YAML files, which you deploy in the same way as your applications. So we can do our canary deployment of version 2 with a single command – this creates the new v2 pod, together with the Istio routing rules:
# deploy:
kubectl apply -f ./canary-deployment
# check the deployment – it’s good when all pods show “2/2” in “READY”:
kubectl get pods
Now if you refresh the bookstore demo app a few times, you’ll see that most of the responses are the same boring v1 page, but a lucky few times you’ll see the v2 page which is the result of much user experience testing:
As the positive feedback rolls in you can increase the traffic to v2 just by altering the weightings in the VirtualService definition and redeploying. Both versions of your app are running throughout the canary stage, so when you shift traffic you’re sending it to components that are already up and ready to handle traffic, so there won’t be additional latency from new pods starting up.
Canary deployments are just one aspect of traffic management which Istio makes simple. You can do much more, including adding fault tolerance with retries and circuit breakers, all with Istio components and without any changes to your apps.
Securing Traffic – Authentication and Authorization with mTLS
Istio handles all the network traffic between your components transparently, without the components themselves knowing that it’s interfering. It does this by running all the application container traffic through a network proxy, which applies Istio’s rules. We’ve seen how you can use that for traffic management, and it works for security too.
If you need encryption in transit between app components, and you want to enforce access rules so only certain consumers can call services, Istio can do that for you too. You can keep your application code and config simple, use basic unauthenticated HTTP and then apply security at the network level.
Authentication and authorization are security features of Istio which are much easier to use than they are to explain. Here’s the diagram of how the pieces fit together:
Here the product page component on the left is consuming a REST API from the reviews component on the right. Those components run in Kubernetes pods, and you can see each pod has one Docker container for the application and a second Docker container running the Istio proxy, which handles the network traffic for the app.
This setup uses mutual-TLS for encrypting the HTTP traffic and authenticating and authorizing the caller:
The authentication Policy object applied to the service requires mutual TLS, which means the service proxy listens on port 443 for HTTPS traffic, even though the service itself is only configured to listen on port 80 for HTTP traffic.
The AuthorizationPolicy object applied to the service specifies which other components are allowed access. In this case, everything is denied access, except the product page component which is allowed HTTP GET access.
The DestinationRule object is configured for mutual TLS, which means the proxy for the product page component will upgrade HTTP calls to HTTPS, so when the app calls the reviews component it will be a mutual-TLS conversation.
Mutual-TLS means the client presents a certificate to identify itself, as well as the service presenting a certificate for encryption (only the server cert is standard HTTPS behavior). Istio can generate and manage all those certs, which removes a huge burden from normal mTLS deployments.
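To make that more concrete, here is a sketch of what the authorization and mTLS pieces can look like. The resource names and the service-account principal path are illustrative – the actual manifests are the ones in the ./service-authorization folder:

```yaml
# Sketch: allow only GET calls from the product page's service account.
# Names are illustrative; see ./service-authorization/ for the real manifests.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-allow-productpage
spec:
  selector:
    matchLabels:
      app: reviews
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]
---
# Sketch: upgrade outbound calls to the reviews service to mutual TLS.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-mtls
spec:
  host: reviews
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```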
There’s a lot to take in there, but the deployment and management of all that is super simple: it’s just the same kubectl process:
kubectl apply -f ./service-authorization/
Istio uses the Kubernetes Service Account for identification, and you’ll see when you try the app that nothing’s changed, it all works as before. The difference is that no other components running in the cluster can access the reviews component now, the API is locked down so only the product page can consume it.
You can verify that by connecting to another container – the details component is running in the same cluster. Try to consume the reviews API from the details container:
docker container exec -it $(docker container ls --filter name=k8s_details --format '{{ .ID }}') sh
curl http://reviews:9080/1
You’ll see an error – RBAC: access denied, which is Istio enforcing the authorization policy. This is powerful stuff, especially having Istio manage the certs for you. It generates certs with a short lifespan, so even if they do get compromised they’re not usable for long. All this without complicating your app code or dealing with self-signed certs.
Observability – Visualising the Service Mesh with Kiali
All network traffic runs through Istio, which means it can monitor and record all the communication. Istio uses a pluggable architecture for storing telemetry, which has support for standard systems like Prometheus and Elasticsearch.
Collecting and storing telemetry for every network call can be expensive, so this is all configurable. The deployment of Istio we’re using is the demo configuration, which has telemetry configured so we can try it out. Telemetry data is sent from the service proxies to the Istio component called Mixer, which can send it out to different back-end stores, in this case, Prometheus:
(This diagram is a simplification – Prometheus actually pulls the data from Istio, and you can use a single Prometheus instance to collect metrics from Istio and your applications).
The data in Prometheus includes response codes and durations, and Istio comes with a bunch of Grafana dashboards you can use to drill down into the metrics. And it also has support for a great tool called Kiali, which gives you a very useful visualization of all your services and the network traffic between them.
Kiali is already running in the demo deployment, but it’s not published by default. You can gain access by deploying a Gateway and a VirtualService:
kubectl apply -f ./visualization-kiali/
Now refresh the app a few times at http://localhost/productpage and then check out the service mesh visualization in Kiali at http://localhost:15029. Log in with the username admin and password admin, then browse to the Graph view and you’ll see the live traffic for the bookstore app:
I’ve turned on “requests percentage” for the labels here, and I can see the traffic split between my product page versions is 67% to 34%, which is pretty close to my 70-30 weighting (the more traffic you have, the closer you’ll get to the specified weightings).
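That convergence is easy to demonstrate outside the cluster. This little script (plain bash, nothing Istio-specific) simulates 1,000 weighted routing decisions using the same 70/30 split as the VirtualService and prints the observed counts – with only a handful of refreshes the split can look lopsided, but at this volume it lands close to the configured weighting:

```shell
#!/usr/bin/env bash
# Simulate 1000 requests routed with a 70/30 weighting, the same split
# configured in the VirtualService, and count where each request lands.
v1=0
v2=0
for i in $(seq 1 1000); do
  if [ $((RANDOM % 100)) -lt 70 ]; then
    v1=$((v1 + 1))
  else
    v2=$((v2 + 1))
  fi
done
echo "v1: $v1  v2: $v2"
```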
Kiali is just one of the observability tools Istio supports. The demo deployment also runs Grafana with multiple dashboards and Jaeger for distributed tracing – which is a very powerful tool for diagnosing issues with latency in distributed applications. All the data to power those visualizations is collected automatically by Istio.
Wrap-Up
A service mesh makes the communication layer for your application into a separate entity, which you can control centrally and independently from the app itself. Istio is the most fully-featured service mesh available now, although there is also Linkerd (which tends to have better baseline performance), and the Service Mesh Interface project (which aims to standardise mesh features).
Using a service mesh comes with a cost – there are runtime costs for hosting additional compute for the proxies and organizational costs for getting teams skilled in Istio. But the scenarios it enables will outweigh the cost for a lot of people, and you can very quickly test if Istio is for you, using it with your own apps in Docker Desktop.
The post Getting Started with Istio Using Docker Desktop appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/
This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking
We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:
ICSA Labs Corporate Firewall Certification.
Forced tunneling support now in preview.
IP Groups now in preview.
Customer configured SNAT private IP address ranges now generally available.
High ports restriction relaxation now generally available.
Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.
ICSA Labs Corporate Firewall Certification
ICSA Labs is a leading vendor in third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.
Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see information here. For more information, see the ICSA Labs Firewall Certification program page.
Figure one – Azure Firewall now ICSA Labs certified.
Forced tunneling support now in preview
Forced tunneling lets you redirect all internet bound traffic from Azure Firewall to your on-premises firewall or a nearby Network Virtual Appliance (NVA) for additional inspection. By default, forced tunneling isn't allowed on Azure Firewall to ensure all its outbound Azure dependencies are met.
To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.
Within this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on this subnet. For more information, see the Azure Firewall forced tunneling documentation.
Figure two – Creating a firewall with forced tunneling enabled.
IP Groups now in preview
IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP group a name and create one by entering IP addresses or uploading a file. IP Groups ease your management experience and reduce the time spent managing IP addresses by letting you reuse them in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.
Figure three – Azure Firewall application rules utilize an IP group.
Customer configured SNAT private IP address ranges
Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks, or opts to force tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall to not SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.
Figure four – Azure Firewall with custom private IP address ranges.
High ports restriction relaxation now generally available
Since its initial preview release, Azure Firewall had a limitation that prevented network and application rules from including source or destination ports above 64,000. This default behavior blocked RPC based scenarios and specifically Active Directory synchronization. With this new update, customers can use any port in the 1-65535 range in network and application rules.
Next steps
For more information on everything we covered above please see the following blogs, documentation, and videos.
Azure Firewall documentation.
Azure Firewall July 2019 blog: What’s new in Azure Firewall.
Azure Firewall Manager documentation.
Azure Firewall Manager blog: Azure Firewall Manager now supports virtual networks.
Azure Firewall central management partners:
AlgoSec CloudFlow.
Barracuda Cloud Security Guardian, now generally available in the Azure Marketplace.
Tufin SecureCloud.
Quelle: Azure
iOS users can now use Amazon API Gateway, AWS CloudTrail, AWS Identity and Access Management, AWS Lambda, and Amazon Simple Queue Service in the AWS Console mobile application. In addition, Amazon CloudWatch now supports logs.
Quelle: aws.amazon.com