Tropico 6 review: We build a banana republic

The military is threatening to strike, the neighboring countries are threatening war, and we are out of rum: these are the problems we face as a (not overly evil) dictator in Tropico 6, which is worth more than a passing look for demanding city-building fans. A review by Peter Steinlechner (Tropico, game review)
Source: Golem

Feature Friday: DockerCon speakers on Kubernetes, Service Mesh and More

By Jim Armstrong, March 29, 2019

DockerCon brings industry leaders and experts of the container world to one event where they share their knowledge, experience and guidance. This year is no different. For the next few weeks, we’re going to highlight a few of our amazing speakers and the talks they will be leading.
In this first highlight, we have a few of our own Docker speakers that are covering storage and networking topics, including everything from container-level networking on up to full cross-infrastructure and cross-orchestrator networking.
Persisting State for Windows Workloads in Kubernetes
More on their session here.

Anusha Ragunathan
Docker Software Engineer

Deep Debroy
Docker Software Engineer

What is your breakout about?
We’ll be talking about persistent storage options for Windows workloads on Kubernetes. While a lot of options exist for Linux workloads, we will look at dynamic provisioning scenarios for Windows workloads.
Why should people go to your session?
Persistence in Windows containers is very limited. Our talk aims to tackle this hard problem and provide practical solutions. The audience will learn about ways to achieve persistent storage in their Windows container workloads and they will also hear about future direction.
What is your favorite DockerCon moment?
Deep: The DockerCon party in Austin.
What are you looking forward to the most?
Anusha: I’m looking forward to the Docker women’s summit and attending Black Belt sessions.

Just What Is A “Service Mesh”, And If I Get One Will It Make Everything OK?
More on Elton’s session here.

Elton Stoneman 
Docker Developer Advocate

What is your breakout about? 
I’m talking about service meshes – Linkerd and Istio in particular. It’s a technical session so there are lots of demos, but it’s grounded in the practical question – do you really need a service mesh, and is it worth the cost? You’ll learn what a service mesh can do, and how it helps to cut a lot of infrastructure concerns from your code.
What are you most excited about at DockerCon?
I can’t tell you, it’s a secret… But I’m involved in one of our big announcements and it’s going to be a real “this changes everything” moment.
What is your all time favorite DockerCon moment?
In Barcelona I presented one of the keynote demos with my Docker buddy Lee. We were on the big stage for about 7 minutes, and rehearsing for that took all weekend. Lots of work but great fun and we had a ton of positive feedback.

Microservices Enabled API Server – Routing Across Any Infrastructure
More on their session here.

Brett Inman
Docker Engineering Manager

Alex Hokanson
Docker Infrastructure Engineer

What is your breakout about?
Our session is about how we do service discovery, load balancing, and rate limiting for high-traffic public-facing services like Docker Hub. At Docker we have developed a solution that allows routing web traffic across different workloads and environments. It doesn’t matter if the application is natively running on Ubuntu, via “docker container run”, in Docker Swarm, or in Kubernetes–our solution will get traffic to your service efficiently and even handle containers coming and going!
Why should people go to your session?
Our routing layer is the single most important piece of our infrastructure at Docker and moving that layer from host-based, native applications to Kubernetes was no small feat. Your routing layer shouldn’t slow developers down–see how we give our internal customers even more choice and flexibility!
What’s your favorite DockerCon moment?
Brett I: My favorite moment was meeting a large group of devops people in a Hallway Track and realizing we were all solving the same problems individually, how inefficient that was, and how powerful community and open source can be.
What are you most excited about for DCSF 19?
Alex: Meeting people and learning about how they operationalize Docker.

Thank you all so much and see you at DockerCon!


Call to Action

Register for DockerCon 2019, April 29 – May 2 in San Francisco.
Sign up and attend these additional events, running in conjunction with DockerCon:

Women@DockerCon Summit, Monday, April 29th
Open Source Summit, Thursday, May 2nd
Official Docker Training and Certification
Workshops

The post Feature Friday: DockerCon speakers on Kubernetes, Service Mesh and More appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Schema validation with Event Hubs

Event Hubs is a fully managed, real-time data ingestion Azure service. It integrates seamlessly with other Azure services, and it allows Apache Kafka clients and applications to talk to Event Hubs without any code changes.

Apache Avro is a binary serialization format. It relies on schemas (defined in JSON) that specify which fields are present and what their types are. Because Event Hubs treats the payload as opaque bytes, you can produce and consume Avro messages to and from Event Hubs.
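Libraries such as avro or fastavro implement this serialization and checking for real; purely as an illustrative, stdlib-only sketch of the core idea (a schema defined in JSON, records checked against its field names and types), consider:

```python
import json

# An Avro-style schema: a record's fields and their types, defined in JSON.
SCHEMA = json.loads("""
{
  "type": "record",
  "name": "Reading",
  "fields": [
    {"name": "device_id", "type": "string"},
    {"name": "temperature", "type": "double"}
  ]
}
""")

# Map a subset of Avro primitive type names to Python types (for illustration).
AVRO_TYPES = {"string": str, "double": float, "int": int, "boolean": bool}

def conforms(record, schema):
    """Check that a dict has exactly the schema's fields, with matching types."""
    fields = {f["name"]: AVRO_TYPES[f["type"]] for f in schema["fields"]}
    if set(record) != set(fields):
        return False
    return all(isinstance(record[name], t) for name, t in fields.items())

print(conforms({"device_id": "d1", "temperature": 21.5}, SCHEMA))  # True
print(conforms({"device_id": "d1", "temperature": "hot"}, SCHEMA))  # False
```

A real Avro library additionally handles the binary encoding, nested records, unions, and schema evolution; this sketch only shows why a JSON schema is enough to reject malformed records.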

Event Hubs' focus is on the data pipeline; it doesn't validate the schema of the Avro events.

If the producers and consumers are expected to drift out of sync on event schemas, there needs to be a "source of truth" that tracks the schemas, for both producers and consumers.

Confluent has a product for this: Schema Registry, which is part of Confluent's open source offering.

Schema Registry can store schemas, list schemas, list all the versions of a given schema, retrieve a certain version of a schema, get the latest version of a schema, and it can do schema validation. It has a UI and you can manage schemas via its REST APIs as well.
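The registry semantics described above (register a schema under a subject, list its versions, fetch a specific or the latest version) can be sketched in memory. This is a simplification for illustration only, not the real REST API, whose endpoints and compatibility checks are richer:

```python
class MiniSchemaRegistry:
    """In-memory sketch of Schema Registry semantics: versioned schemas per subject."""

    def __init__(self):
        self._subjects = {}  # subject -> list of schema strings (index = version - 1)

    def register(self, subject, schema):
        versions = self._subjects.setdefault(subject, [])
        if schema in versions:                 # re-registering returns the existing version
            return versions.index(schema) + 1
        versions.append(schema)
        return len(versions)                   # versions are numbered from 1

    def versions(self, subject):
        return list(range(1, len(self._subjects.get(subject, [])) + 1))

    def get(self, subject, version=None):
        """Fetch a specific version, or the latest when no version is given."""
        versions = self._subjects[subject]
        return versions[-1] if version is None else versions[version - 1]

reg = MiniSchemaRegistry()
v1 = reg.register("readings-value", '{"type": "string"}')
v2 = reg.register("readings-value", '{"type": "double"}')
print(v1, v2)                          # 1 2
print(reg.versions("readings-value"))  # [1, 2]
print(reg.get("readings-value"))       # latest: {"type": "double"}
```

The real Schema Registry exposes the same operations over REST (and in its UI), and additionally enforces compatibility rules between versions.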

What are my options on Azure for the Schema Registry?

You can install and manage your own Apache Kafka cluster (IaaS).
You can install Confluent Enterprise from the Azure Marketplace.
You can use HDInsight to launch a Kafka cluster with the Schema Registry.
I've put together an ARM template for this. Please see the GitHub repo for the HDInsight Kafka cluster with Confluent's Schema Registry.
Currently, Event Hubs stores only the data, i.e. the events; schema metadata does not get stored. For schema metadata storage, you can install a small Kafka cluster on Azure along with the Schema Registry.
Please see the following GitHub post on how to configure the Schema Registry to work with Event Hubs.
In a future release, Event Hubs will be able to store schema metadata along with the events. At that point, a Schema Registry on a single VM will suffice, with no need for a small Kafka cluster.

Other than the Schema Registry, are there any alternative ways of doing schema validation for the events?

Yes, we can utilize the Capture feature of Event Hubs for schema validation.

While messages are being captured to Azure Blob storage or an Azure Data Lake store, we can trigger an Azure Function via a capture event. The function can then validate the schema of the received message using the Avro tools/libraries.
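As a rough illustration of the trigger side: the Event Grid notification for a created capture file carries the URL of the captured Avro blob, which the function can extract before downloading and validating the file. The payload below is illustrative; treat the exact field names as assumptions rather than the authoritative event schema:

```python
import json

# A trimmed example of an Event Grid capture-file-created notification
# (illustrative payload; real events carry more fields).
raw_event = json.dumps({
    "eventType": "Microsoft.EventHub.CaptureFileCreated",
    "data": {
        "fileUrl": "https://example.blob.core.windows.net/capture/hub/0/file.avro",
        "partitionId": "0",
        "eventCount": 42,
    },
})

def capture_blob_url(event_json):
    """Return the captured blob's URL if this is a capture event, else None."""
    event = json.loads(event_json)
    if event.get("eventType") != "Microsoft.EventHub.CaptureFileCreated":
        return None
    return event["data"]["fileUrl"]

url = capture_blob_url(raw_event)
print(url)  # the .avro blob the function would download and validate
```

In a deployed Azure Function, the binding would hand you this payload already parsed; the validation step itself would then read the blob with an Avro library, as described above.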

Please see the following for capturing events through Azure Event Hubs into Azure Blob Storage or Azure Data Lake Storage and/or see how to use Event Grid and Azure Functions for migrating Event Hubs data into a data warehouse.

We can also write Spark jobs that consume the events from Event Hubs and validate the Avro messages with custom schema-validation Spark code, using the org.apache.avro.* and kafka.serializer.* Java packages, for example. Please look at this tutorial on how to stream data into Azure Databricks using Event Hubs.

Conclusion

Microsoft Azure is a comprehensive cloud computing service that gives you both the control of IaaS and the higher-level services of PaaS.

After assessing the project, if schema validation is required, one can use the Event Hubs PaaS service with a single Schema Registry VM instance, or leverage the Event Hubs Capture feature for schema validation.
Source: Azure

What’s new in Kubernetes 1.14? Webinar Q&A

The post What’s new in Kubernetes 1.14? Webinar Q&A appeared first on Mirantis | Pure Play Open Cloud.
Last week we took a look at some of the new features and other changes in Kubernetes 1.14 ahead of Monday’s release.  You can view the entire webinar here, but we also wanted to take a few minutes to answer some questions that came up during the webinar.
What is the status of RunAsGroup?
RunAsGroup, which enables you to specify the group ID with which a Pod's processes run, is now beta and enabled by default, in both PodSpec and PodSecurityPolicy. (Not all container runtimes support this capability, however.)
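For illustration, here is where that field sits in a pod manifest, built as a Python dict purely to show the placement (a minimal sketch, not a complete manifest):

```python
import json

# Minimal pod manifest fragment showing where runAsGroup goes (sketch only).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo"},
    "spec": {
        "securityContext": {
            "runAsUser": 1000,
            "runAsGroup": 3000,  # the beta field discussed above
        },
        "containers": [{"name": "app", "image": "busybox"}],
    },
}

print(json.dumps(pod["spec"]["securityContext"]))
```

The same securityContext block can also be set per container, and PodSecurityPolicy can constrain the allowed group ID ranges.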
If you’re using OpenStack, can you limit the number of Cinder volumes?
Yes, as part of the OpenStack provider, you can limit Cinder volumes, just as you can limit the number of volumes of other types that users can create.
Can you use Kustomize by itself? What about for non-K8s YAML?
You don’t have to have Kubernetes installed to use Kustomize, but if you try to create non-Kubernetes YAML, you’ll get an error.
What will Ingress be replaced [with]?
Ingress itself isn’t being replaced; it’s just been moved to the networking group.
In order to run k8s on OpenStack what are the pros / cons compared to Rancher and Magnum?
This is probably a topic to which we could devote an entire blog article, but briefly: while Rancher is solely a container management system, running Kubernetes on OpenStack means you’ve got both environments available in case you need to run one or more VMs. The same thing applies to Magnum, since it is itself an OpenStack component. However, running Kubernetes via Magnum means you’re tied to the version of Kubernetes supported by your version of Magnum.
Can you resize a PersistentVolume without restarting your pods to pick up the changes?
It doesn’t appear so. It’s not so much a matter of the pods not picking up the change as it is that the change can’t happen until the pod is terminated.
Concerning kubeadm: we must have a load balancer in front of the masters, as the docs say, mustn’t we?
The community is currently simplifying the process of using load balancers with kubeadm, but for the moment, yes, you need to go ahead and set them up. For example, you can use HAProxy as your load balancer. You can find more information here: https://kubernetes.io/docs/setup/independent/high-availability/.
Source: Mirantis

Increasing trust in your cloud: security sessions at Next ‘19

Google Cloud Next ‘19 promises to be chock-full of learning opportunities across all areas of cloud technology. If security will be your focus for Next ‘19, we’ve got you covered. This year’s security spotlight session is your best bet to get a big-picture look at how security works across Google Cloud, and here are a few other can’t-miss talks from the more than 30 security sessions to check out while you’re at Next ‘19.

1. Google Cloud: Data Protection and Regulatory Compliance
These two topics are the goal of many security practices. This session will cover the latest trends in data protection and regulatory compliance and tools to address these needs. You’ll learn how Google Cloud handles these particular security challenges. This related session will offer specifics on risk management and compliance for healthcare companies in particular.

2. Comprehensive Protection of PII in GCP
No matter your industry, you need to treat personally identifiable information (PII) with care. In this session, Scotiabank’s platform VP will share the company’s cloud-native approach to protecting PII in Google Cloud Platform (GCP). The session will cover their considerations around access and bank application reidentification.

3. Shared Responsibility: What This Means for You as a CISO
The shared responsibility model is foundational to understanding cloud security, and you’ll learn more in this session about how security responsibility gets divided between customers and providers.

4. Who Protects What? Shared Security in GKE
If you’re running Google Kubernetes Engine (GKE) or thinking about it, this session can help you further understand the shared responsibility model in a containerized world, and specifically what that means for GKE security. The session will cover how Google secures GKE clusters and offers tips on how you can further harden your GKE workloads.

5. Enhance Your Security Posture with Cloud Security Command Center
Take a look under the hood of Cloud Security Command Center to see how it delivers centralized visibility into your GCP assets. It helps you prevent, detect, and respond to threats, and works with partner solutions.

6. Detecting Threats in Logs at Cloud Scale
Learn about how you can take advantage of Google’s threat intelligence capabilities. In this session, you’ll hear about recent threats and see how GCP services can help detect them.

7. Identifying and Protecting Sensitive Data in the Cloud: The Latest Innovations in Cloud DLP
We know that data privacy in today’s world is absolutely essential, and a key part of any company’s security practices. In this session, you’ll hear about the latest capabilities in Google Cloud’s Data Loss Prevention product and get some tips on protecting your sensitive data.

8. How Airbnb Secured Access to Their Cloud With Context-Aware Access
This session will show how Google’s BeyondCorp security model works in practice, and how Airbnb uses it to protect its apps. BeyondCorp uses identity and context instead of the corporate network as a perimeter, resulting in stronger security, broader access, and better user experiences.

9. Minimizing Insider Risk From Cloud Provider Administrators
Insider risk is a common concern when moving to a cloud provider. You have to be sure you can trust what your provider is doing with your data and that there are proper administrative controls in place. In this session, you’ll see how Access Transparency and Access Approval work in GCP, and how to set these controls up to get more visibility and control in the cloud.

10. Applying Machine Learning and Analytics to Security in GCP
See how machine learning and data analytics help underpin Google Cloud’s security efforts. These modern tools help navigate the security complexities of cloud environments by pulling information together to make good decisions. The result is cloud security that saves time and reduces risk.

11. Preventing Data Exfiltration on GCP
A robust cloud security strategy includes minimizing opportunities for data exfiltration. Check this session out to get the details on connecting securely to GCP services, and how to configure a deployment that isolates and protects your resources. You’ll also see how networking and security products work together for strong security.

12. Twitter’s GCP Architecture for Its Petabyte-Scale Data Storage in Cloud Storage and User Identity Management
Security needs to scale as a cloud deployment grows. This session with Twitter shows a real-life example of organizing, managing and securing petabytes of data with a hybrid cloud model. You’ll get a look at user-management tooling that manages account access and hear some of Twitter’s security lessons learned.

13. A Use-Case Based, Demo-Led Approach to G Suite Security
Take a look at some security use cases that can be solved in G Suite. You’ll see new products and view demos based around real issues, such as what to do when detecting phishing, finding a device is compromised, or seeing who’s sharing outside of your organization.

14. Protections From Bleeding Edge Phishing, Malware Attacks
Phishing, malware, scams and impersonation attacks are getting more sophisticated. Here’s a look at how G Suite can protect you and your users from these attacks, and ways to configure G Suite for real-time link scanning and anomalous file type blocking.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there!
Source: Google Cloud Platform

Enabling precision medicine with integrated genomic and clinical data

Precision medicine tailors a patient's medical treatment by factoring in their genetic makeup and clinical data. The key to applying this methodology is integrating clinical data with an individual’s genomic data for the most complete longitudinal healthcare record to power the most precise and effective treatment.

Problem: data in silos, detached from the point of care

Currently, clinical information resides in silos (electronic healthcare records, radiological information systems, laboratory information systems, and picture archiving and communication systems), with little to no integration or interoperability between them. Furthermore, there is not just one genome for a patient, but multiple “omes” including the genome, proteome, transcriptome, epigenome, microbiome, and beyond. The lack of availability of a complete, integrated longitudinal patient record incorporating multiomics to power precision medicine has several detrimental effects. First and foremost, it results in less effective medicine and suboptimal patient outcomes. It can also delay diagnoses where data required to support a clinical decision is not readily available. Working with an incomplete medical record can increase the risk of errors. Last but not least, this can exacerbate the lack of coordination across multidisciplinary care teams, resulting in suboptimal patient care and increased healthcare costs. For precision medicine, this presents a significant challenge around how to integrate clinical data systems and clinical genomic data. The cumulative result is the reduced feasibility of providing precision medicine at the point of care.

The solution: seamless connection of clinical data with genomic data

Kanteron Systems Platform is a patient-centric, workflow-aware, precision medicine solution. The solution integrates many key types of healthcare data for a complete patient longitudinal record to power precision medicine including medical imaging, digital pathology, clinical genomics, and pharmacogenomic data.

The figure below shows key data layers of the Kanteron Platform:

Benefits

The solution provides several key benefits to help fulfill the potential of precision medicine. First, it provides a clinical content management system across the full range of data types comprising the patient record. With the cost of full genomic sequencing now dipping below the $1,000 USD mark, a tsunami of genomic data is expected. Each genome record can take up to 150 GB or more of storage. The Kanteron Platform provides support for managing this massive growth in genomic data and paves the way for genomic sequencing at scale. Through the integration of data and support for multiomics, this solution can also be used to enable pharmacogenomics, in turn helping to increase medication efficacy and reduce adverse events. Artificial intelligence and machine learning are most powerful when applied to the full patient record, across the range of data types comprising this record. Through integration of key data types, the Kanteron Platform enables healthcare organizations to realize the full potential of artificial intelligence to improve patient outcomes and reduce healthcare costs.

Azure services that make a difference

Azure offers Kanteron’s customers a level of flexibility, scalability, security, and compliance that is not possible with on-premises installations. Azure is also available across 54 regions and 140 countries worldwide, and just expanded into South Africa, enabling healthcare organizations to deploy where required and satisfy any applicable data sovereignty requirements. Azure supports a vast range of compliance requirements as seen in the Compliance offerings. We now have 91 certifications and attestations. Key Azure services used to support the Kanteron Platform include both Azure Storage, and Virtual Machines.

Recommended next steps

Explore how the Kanteron Systems Platform can power your precision medicine practice to the next level through integration of genomic and clinical data, and support for advanced artificial intelligence.
Source: Azure

The service mesh era: Istio’s role in the future of hybrid cloud

Welcome back to our blog post series on Service Mesh and Istio. In our previous posts, we talked about what the Istio service mesh is, and why it matters. Then, we dove into demos on how to bring Istio into production, from safe application rollouts and security, to SRE monitoring best practices. Today, leading up to Google Cloud NEXT ‘19, we’re talking all about using Istio across environments, and how Istio can help you unlock the power of hybrid cloud.

Why hybrid?

Hybrid cloud can take on many forms. Typically, hybrid cloud refers to operating across public cloud and private (on-premises) cloud, and multi-cloud means operating across multiple public cloud platforms. Adopting a hybrid- or multi-cloud architecture could provide a ton of benefits for your organization. For instance, using multiple cloud providers helps you avoid vendor lock-in, and allows you to choose the best cloud services for your goals. Using both cloud and on-premises environments allows you to simultaneously enjoy the benefits of the cloud (flexibility, scalability, reduced costs) and on-prem (security, lower latency, hardware re-use). And if you’re looking to move to the cloud for the first time, adopting a hybrid setup lets you do so at your own pace, in the way that works best for your business.

Based on our experience at Google, and what we hear from our customers, we believe that adopting a hybrid service mesh is key to simplifying application management, security, and reliability across cloud and on-prem environments—no matter if your applications run in containers, or in virtual machines. Let’s talk about how to use Istio to bring that hybrid service mesh into reality.

Hybrid Istio: a mesh across environments

One key feature of Istio is that it provides a services abstraction for your workloads (Pods, Jobs, VM-based applications). When you move to a hybrid topology, this services abstraction becomes even more crucial, because now you have not just one, but many environments to worry about.

When you adopt Istio, you get all the management benefits for your microservices on one Kubernetes cluster—visibility, granular traffic policies, unified telemetry, and security. But when you adopt Istio across multiple environments, you are effectively giving your applications new superpowers. Because Istio is not just a services abstraction on Kubernetes. Istio is also a way to standardize networking across your environments. It’s a way to centralize API management and decouple JWT validation from your code. It’s a fast-track to a secure, zero-trust network across cloud providers.

So how does all this magic happen? Hybrid Istio refers to a set of sidecar Istio proxies (Envoys) that sit next to all your services across your environments—every VM, every container—and know how to talk to each other across boundaries. These Envoy sidecars might be managed by one central Istio control plane, or by multiple control planes running in each environment. Let’s dive into some examples.

Multicluster Istio, one control plane

One way to enable hybrid Istio is by configuring a remote Kubernetes cluster that “calls home” to a centrally-running Istio control plane. This setup is useful if you have multiple GKE clusters in the same GCP project, but Kubernetes pods in both clusters need to talk to each other. Use cases for this include: production and test clusters through which you canary new features, standby clusters ready to handle failover, or redundant clusters across zones or regions.

This demo spins up two GKE clusters in the same project, but across two different zones (us-central and us-east). We install the Istio control plane on one cluster, and Istio’s remote components (including the sidecar proxy injector) on the other cluster. From there, we can deploy a sample application spanning both Kubernetes clusters.

The exciting thing about this single control plane approach is that we didn’t have to change anything about how our microservices talk to each other. For instance, the Frontend can still call CartService with a local Kubernetes DNS name (cartservice:port). This DNS resolution works because GKE pods in the same GCP project belong to the same virtual network, thus allowing direct pod-to-pod communication across clusters.

Multicluster Istio, two control planes

Now that we have seen a basic multi-cluster Istio example, let’s take it a step further with another demo. Say you’re running applications on-prem and in the cloud, or across cloud platforms. For Istio to span these different environments, pods inside both clusters must be able to cross network boundaries.

This demo uses two Istio control planes—one per cluster—to form a single, two-headed logical service mesh. Rather than having the sidecar proxies talk directly to each other, traffic moves across clusters using Istio’s Ingress Gateways. An Istio Gateway is just another Envoy proxy, but it’s specifically dedicated to traffic in and out of a single-cluster Istio mesh.

For this setup to work across a network partition, each Istio control plane has a special domain name server (DNS) configuration. In this dual-control-plane topology, Istio installs a secondary DNS server (CoreDNS) which resolves domain names for services outside of the local cluster. For those outside services, traffic moves between the Istio Ingress Gateways, then onwards to the relevant service.

In the demo for this topology, we show how this installation works, then how to configure the microservices running across both clusters to talk to each other. We do this through the Istio ServiceEntry resource. For instance, we deploy a service entry for the Frontend (cluster 2) into cluster 1. In this way, cluster 1 knows about services running in cluster 2.

Unlike the first demo, this dual control-plane Istio setup does not require a flat network between clusters. This means you can have overlapping GKE pod CIDRs between your clusters. All that this setup requires is that the Istio Gateways are exposed to the Internet. In this way, the services inside each cluster can stay safe in their own respective environments.

Adding a virtual machine to the Istio mesh

Many organizations use virtual machines (VMs) to run their applications, instead of (or in addition to) containers. If you’re using VMs, you can still enjoy the benefits of an Istio mesh. This demo shows you how to integrate a Google Compute Engine instance with Istio running on GKE. We deploy the same application as before. But this time, one service (ProductCatalog) runs on an external VM, outside of the Kubernetes cluster.

This GCE VM runs a minimal set of Istio components to be able to communicate with the central Istio control plane. We then deploy an Istio ServiceEntry object to the GKE cluster, which logically adds the external ProductCatalog service to the mesh.

This Istio configuration model is useful because now, all the other microservices can reference ProductCatalog as if it were running internal to the Kubernetes cluster. From here, you could even add Istio policies and rules for ProductCatalog as if it were running in Kubernetes; for instance, you could enable mutual TLS for all inbound traffic to the VM.

Note that while this demo uses a Google Cloud VM for demo purposes, you could run this same example on bare metal, or with an on-prem VM. In this way, you can bring Istio’s modern, cloud-native principles to virtual machines running anywhere.

Building the hybrid future

We hope that one or more of these hybrid Istio demos resonates with the way your organization runs applications today. But we also understand that adopting a service mesh like Istio means taking on complexity and installation overhead, in addition to any complexity associated with moving to microservices and Kubernetes. In that case, adopting a hybrid service mesh is even more complex, because you’re dealing with different environments, each with their own technical specifications.

Here at Google Cloud, we are dedicated to helping you simplify your day-to-day cloud operations with a consistent, modern, cross-platform setup. It’s why we created Istio on GKE, which provides a one-click install of Istio on Google Kubernetes Engine (GKE). And it’s the driving force behind our work on our Cloud Services Platform (CSP). CSP is a product to help your organization move to (and across) the cloud—at your own pace, and in the way that works best for you. CSP relies on an open cloud stack—Kubernetes and Istio—to emphasize portability. We are excited to make CSP a reality this year.

Thank you for joining us in the service mesh series so far. Stay tuned for the Keynotes and Hybrid Cloud track at Google Cloud NEXT in April. After NEXT, we’ll continue the series with a few advanced posts on Istio operations.
Source: Google Cloud Platform