New whitepaper: Designing and deploying a data security strategy with Google Cloud

William Gibson said it best: “The future is already here—it’s just not evenly distributed.” The cloud has arrived, yet data security in the cloud is too often a novel problem for our customers. Well-worn paths to security are lacking, and we often see customers struggling to adapt their data security posture to this new reality. There is an understanding that data security is critical, but a lack of well-understood principles to drive an effective data security program. Thus, we are excited to share a view of how to deploy a modern and effective data security program.

Today, we are releasing a new white paper, “Designing and deploying a data security strategy with Google Cloud,” that accomplishes exactly that. It was written jointly by Andrew Lance of Sidechain (see the Sidechain blog post about this paper) and Dr. Anton Chuvakin, with a fair amount of help from other Googlers, of course.

Before we share some of our favorite quotes from the paper, let me spend a few more minutes explaining the vision behind it. Specifically, we wanted to explore both the question of starting a data security program in a cloud-native way and that of adjusting your existing data security program when you start utilizing cloud computing.

Imagine you are a traditional company migrating to the cloud. You have some data security capabilities, and most likely an existing data security program as part of your overall security program. Perhaps you are deploying tools like DLP, encryption, data classification and possibly others. Suddenly, or perhaps not so suddenly, you’re migrating some of your data processing and some of your data to the cloud. What to do? Do my controls still work? Are my practices current? Am I looking at the right threats? How do I marry my cloud migration effort and my data security effort?
Our paper seeks to address this scenario by giving you advice on the strategy, complete with Google Cloud examples.

On the other hand, perhaps you are a company that was born in the cloud. In this case, you may not have an existing data security effort. However, if you plan to process sensitive or regulated data in the cloud, you need to create one. What does a cloud-native data security program look like? Which of the lessons learned by others on-premises can I ignore? What are some of the cloud-native ways of securing the data?

As a quick final comment, the paper does not address the inclusion of privacy requirements. That is a worthwhile and valuable goal, just not one we touched on in this paper.

Here are some of our favorite quotes from the paper:

“Simply applying a data security strategy designed for on-premise workloads isn’t adequate [for the cloud]. It lacks the ability to address cloud-specific requirements and doesn’t take advantage of the great amount of [cloud] security services and capabilities.”

A solid cloud data security strategy should rely on three pillars: “Identity / Access Boundaries / Visibility” (the last item covers the spectrum of assessment, detection, investigation and other monitoring and observability needs).

Useful questions to ponder include: “How does my data security strategy need to change to accommodate a shift to the cloud? What new security challenges for data protection do I need to be aware of in the cloud? What does my cloud provider offer that could streamline or replace my on-premise controls?”

“You will invariably need to confront data security requirements in your journey to the cloud, and performing a ‘lift and shift’ for your data security program won’t work to address the unique opportunities and challenges the cloud offers.”

“As your organization moves its infrastructure and operations to the cloud, shift your data protection strategies to cloud-native thinking.”

At Google Cloud, we strive to accelerate our customers’ digital transformations. As our customers leverage the cloud for business transformation, adapting data security programs to this new environment is essential. Enjoy the paper!
Source: Google Cloud Platform

Take the first step toward SRE with Cloud Operations Sandbox

At Google Cloud, we strive to bring Site Reliability Engineering (SRE) culture to our customers not only through training on organizational best practices, but also with the tools you need to run successful cloud services. Part and parcel of that is comprehensive observability tooling—logging, monitoring, tracing, profiling and debugging—which can help you troubleshoot production issues faster, increase release velocity and improve service reliability.

We often hear that implementing observability is hard, especially for complex distributed applications whose components are written in different programming languages, deployed in a variety of environments and subject to different operational costs, among many other factors. As a result, when migrating and modernizing workloads onto Google Cloud, observability is often an afterthought. Nevertheless, being able to debug a system and gain insight into its behavior is important for running reliable production systems. Customers want to learn how to instrument services for observability and implement SRE best practices using the tools Google Cloud has to offer, but without risking production environments. With Cloud Operations Sandbox, you can learn in practice how to kickstart your observability journey and answer the question, “Will it work for my use case?”

Cloud Operations Sandbox is an open-source tool that helps you learn SRE practices from Google and apply them to cloud services using Google Cloud’s operations suite (formerly Stackdriver).
Cloud Operations Sandbox has everything you need to get started in one click:

- Demo service – an application built with a microservices architecture on a modern, cloud-native stack (a modified fork of the Online Boutique microservices demo app)
- One-click deployment – an automated script that deploys and configures the service on Google Cloud, including a Service Monitoring configuration, tracing with OpenTelemetry, Cloud Profiling, Logging, Error Reporting, Debugging and more
- Load generator – a component that produces synthetic traffic against the demo service
- SRE recipes – pre-built tasks that manufacture intentional errors in the demo app so you can use Cloud Operations tools to find the root cause of problems like you would in production
- An interactive walkthrough to get started with Cloud Operations

Getting started

Launching the Cloud Operations Sandbox is as easy as can be. Simply go to the Cloud Operations Sandbox site and click the “Open in Google Cloud Shell” button. This creates a new Google Cloud project. Within that project, a Terraform script creates a Google Kubernetes Engine (GKE) cluster and deploys a sample application to it. The microservices that make up the demo app are pre-instrumented with logging, monitoring, tracing, debugging and profiling as appropriate for each microservice’s language runtime. As such, sending traffic to the demo app generates telemetry that is useful for diagnosing the cloud service’s operation.
To generate production-like traffic against the demo app, an automated script deploys a synthetic load generator in a different geo-location than the demo app. The script also creates 11 custom dashboards (one for each microservice) to illustrate the four golden signals of monitoring as described in Google’s SRE book, and it automatically configures uptime checks, service monitoring (SLOs and SLIs), log-based metrics, alerting policies and more. At the end of the provisioning script you’ll get a few URLs for the newly created project.

You can follow the user guide to learn about the entire Cloud Operations suite of tools, including tracking microservice interactions in Cloud Trace (thanks to the OpenTelemetry instrumentation of the demo app), and see how to apply the learnings to your own scenario. Finally, to remove the Sandbox once you’re finished using it, you can run the cleanup command described in the user guide.

Next steps

Following SRE principles is a proven method for running highly reliable applications in the cloud. We hope that the Cloud Operations Sandbox gives you the understanding and confidence you need to jumpstart your SRE practice. To get started, explore the project repo and follow along in the user guide.
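To make the four golden signals concrete, here is a minimal, standard-library-only Python sketch (not part of the Sandbox or its dashboards) that derives three of them—traffic, errors and latency—from the kind of synthetic request records a load generator produces; the fourth signal, saturation, would need resource metrics:

```python
import random

# Synthetic request records: (latency_ms, http_status), standing in for
# the telemetry a load generator would produce against a demo service.
random.seed(42)
requests = [(random.uniform(20, 400), random.choice([200] * 19 + [500]))
            for _ in range(1000)]

latencies = sorted(lat for lat, _ in requests)

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted list."""
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

# Three of the four golden signals:
traffic = len(requests)                                        # requests in window
error_rate = sum(1 for _, status in requests if status >= 500) / traffic
p50, p99 = percentile(latencies, 50), percentile(latencies, 99)

print(f"traffic={traffic} error_rate={error_rate:.1%} "
      f"p50={p50:.0f}ms p99={p99:.0f}ms")
```

In a real deployment these aggregations are computed for you by the monitoring backend; the sketch only illustrates what the dashboards are summarizing.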
Source: Google Cloud Platform

Docker Captain Take 5 – Elton Stoneman

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today we’re interviewing Elton Stoneman, who has been a Docker Captain since 2016. He is a container consultant and trainer based in Gloucestershire, United Kingdom.

How/when did you first discover Docker?

I was consulting as an API Architect, building out the backend services for a new Android device. My role was all about .NET services running in Azure, but we worked as a single team – and the people working on the operating system were using Docker to simplify their build tools. 

I started looking into their setup and I was just stunned at how you could run complex software with a single Docker command – and have it run the same way on any machine. That was way back in 2014; Docker was version 0.7, I think, and I created my Docker Hub account in August of that year. Then I started blogging and speaking about Docker, and I’ve never looked back.

What is your favorite Docker command?

docker manifest inspect [image]

Multi-architecture images are hugely powerful. I work with a lot of platforms now but my heart is still in .NET. The latest .NET runtime works on Windows and Linux on Intel and Arm CPUs, and I love how you can target your apps for different infrastructures, using the same source code and a single Dockerfile.
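The “same source code and a single Dockerfile” idea can be sketched roughly as follows; the project name is illustrative, and this relies on the official .NET base images being multi-arch manifests so the matching OS/CPU variant is selected automatically at build time:

```dockerfile
# One Dockerfile for every target platform. The base images below are
# multi-arch manifest lists, so Docker pulls the right variant
# (Linux/Windows, Intel/Arm) for whatever machine runs the build.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /out

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Running `docker manifest inspect mcr.microsoft.com/dotnet/aspnet:5.0` shows the per-OS/architecture entries sitting behind that single tag, which is what makes this work.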

What is your top tip you think other people don’t know for working with Docker?

Specifically for people who work on hybrid apps like me, with some parts on Windows and some on Linux: when you switch from Linux to Windows containers in Docker Desktop, your containers keep running. If you want to run a hybrid app on your dev machine, you can do it by publishing ports from the containers on the different operating systems and having them communicate over the network, via the host.docker.internal address.
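A rough transcript of what that tip looks like in practice; the image names, ports and environment variables here are all illustrative, not a specific app:

```shell
# 1. In Linux container mode, start the Linux half and publish its port:
docker run -d -p 6379:6379 --name cache redis

# 2. Switch Docker Desktop to Windows containers -- the Linux container
#    keeps running -- then start the Windows half, which reaches the
#    Linux service through the host:
docker run -d -p 8080:80 \
  -e CACHE_HOST=host.docker.internal -e CACHE_PORT=6379 \
  my-windows-web-app
```

Each container talks to the other through ports published on the host rather than over a shared container network, since the two halves are managed by different container runtimes.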

What’s the coolest Docker demo you have done/seen?

I worked at Docker for a few years, and I was lucky enough to present at DockerCon during a couple of keynote sessions. My favorite was the Day 1 Keynote from 2019, where I did a demo with Harish talking about migrating old apps to Docker. We had a ton of fun writing and rehearsing that. A lot of people thought the DockerCon demos were done by actors, but they were all Docker staff, working overtime.

What have you worked on in the past 6 months that you’re particularly proud of?

I’m a freelance consultant and trainer now, helping organizations on their container journeys, and I also create a lot of content to help practitioners learn the technologies and approaches. 

In the last six months I launched my weekly YouTube series Elton’s Container Show, finished writing my new book Learn Kubernetes in a Month of Lunches, published my 26th Pluralsight course Preparing Docker Apps for Production and my first Udemy course Docker for .NET Apps. It’s been busy…

What do you anticipate will be Docker’s biggest announcement this year?

I’d love to see the Docker Compose spec expanding to cover bits of the application which aren’t necessarily going to run in containers. It would be great to have a modelling language where I can express the architectural intent without going into the details of the technology. So my spec says I need a MySQL database, and when I run it on a local machine I get a MySQL container with a default password. But then deploy the exact same spec to the cloud and I’d get a managed MySQL service with a password generated and securely stored in a secret service. 
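For contrast with that wished-for abstraction, today’s Compose file pins the local implementation rather than the intent; a minimal sketch (service and variable names illustrative):

```yaml
# Today: the Compose file says "run this MySQL container with this
# password" rather than "I need a MySQL database". The hoped-for spec
# extension would let a cloud deployment swap this whole block for a
# managed MySQL service with a generated, securely stored password.
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # fine for local dev only
    ports:
      - "3306:3306"
```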

What do you think is going to be Docker’s biggest challenge in 2021?

Maybe it will be working out which new features and products are really must-haves for customers. The product and engineering teams at Docker are first-rate, but it’s hard to pick the next desirable feature when the product is ubiquitous across a very disparate IT industry. If you watch that DockerCon demo from 2019, you’ll see I showed a bunch of features we were working on – Docker Assemble, Docker Pipeline, Docker Application Convertor – and I don’t think any of those exist now. They addressed real problems in CI/CD and app migration, but they weren’t a big enough deal for enough customers for Docker to continue investing in them.

What are some personal goals for the next year with respect to the Docker community?

A lot of my focus is on helping people to skill up and learn how containers are used in the real world – but I want to keep the entry barrier low. When my Docker book came out in 2020 I did a YouTube series where I walked through a chapter in each episode. That helped people see how to use Docker in action, to ask questions and to learn without having to buy the book. I’ll be doing the same in 2021 when my Kubernetes book launches.

I’m also aware that learning materials can be pretty expensive for people, so one of my goals is to put out more Udemy courses where the content is great but the course is affordable. My plan is to get courses out to cover all the major areas – Docker and Kubernetes, observability, continuous delivery, security and service mesh architectures. Anytime I publish something I’ll promote it on Twitter, so be sure to follow @EltonStoneman to be the first to know.

And I’m always happy to speak at meetups, especially now that we’ll be virtual for a good while longer. If you need a speaker at an event, just ask.

What talk would you most love to see at DockerCon 2021?

I’ve presented at every DockerCon since 2017 so obviously I’d love to be there again in 2021. But if I can’t choose myself, it’d be one of my fellow Docker Captains talking about a project they’ve helped on. The real-world stuff is always super interesting for me.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I really like how application modelling is starting to become abstracted from the technology that actually runs the app. The Docker Compose specification is really promising here: you can define your app in a fairly abstract way and deploy it to a single Docker machine, or to a Docker Swarm cluster, or to a managed container service like Azure Container Instances, or to a Kubernetes cluster running in the cloud or the datacenter.

There’s a balance between keeping the application model abstract and being able to make use of all the features your target platform provides. I think enough people are interested in that problem that we could see some nice advances there. Removing the operational load of creating and managing clusters will make containers even more attractive.

Rapid fire questions…

What new skill have you mastered during the pandemic?

Touch typing.

Salty, sour or sweet?

Mixed. But – is this a popcorn question? I tried cheddar cheese popcorn in the Docker office in San Francisco one time and it was revolting.

Dogs, cats, neither, both?

Cats, but my family are exerting a lot of dog pressure.

Beach or mountains?

Mountains – preferably running through them.

Your most often used emoji?

The smiley face.
The post Docker Captain Take 5 – Elton Stoneman appeared first on Docker Blog.

Amazon ECS now supports VPC endpoint policies

Amazon Elastic Container Service (ECS) now also gives you the ability to attach IAM resource policies to VPC endpoints. This lets you control access to your ECS resources from VPC endpoints and thereby meet compliance and regulatory requirements.
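As an illustration, a VPC endpoint policy is a JSON resource policy attached to the endpoint itself; this sketch, which restricts the endpoint to read-only ECS actions, is a hypothetical example rather than a recommended policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEcsReadOnlyThroughThisEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["ecs:List*", "ecs:Describe*"],
      "Resource": "*"
    }
  ]
}
```

Requests to ECS that traverse the endpoint are evaluated against this policy in addition to the caller's own IAM permissions.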

Amazon CloudWatch Application Insights supports Oracle database monitoring

Companies running Oracle databases can now easily set up monitoring, alerts and dashboards for their EC2 and RDS Oracle instances on AWS with CloudWatch Application Insights. CloudWatch Application Insights is an easy-to-use way to get monitoring and improved observability for enterprise applications running on AWS resources. The new feature automatically sets up the metrics, telemetry and logs needed to monitor the health of Oracle databases running on AWS.

Announcing three new digital courses for Amazon S3

We are pleased to introduce three free digital courses that teach you how to configure, optimize, secure and audit your Amazon S3 implementation. These intermediate-level courses are designed for cloud architects, storage architects, developers and operations engineers, and include reading modules, demonstrations, quizzes and optional self-paced labs. The self-paced labs cost up to 15 USD per lab (these costs are not included in the free digital training).

AWS SDK for Go version 2 is now generally available

Today we are announcing that the AWS SDK for Go, version 2 (v2), is generally available. This release features a modular architecture that lets customers model service dependencies in their applications and independently control service client updates using Go modules. Significant improvements in CPU and memory usage free up more resources for compute- and memory-intensive application tasks.

PCI DSS compliance for AWS Wavelength

PCI-eligible AWS services deployed in AWS Wavelength can now store, process or transmit cardholder data (CHD) or sensitive authentication data (SAD), including for merchants, processors, acquirers, issuers and service providers. PCI DSS (Payment Card Industry Data Security Standard) is a proprietary information security standard administered by the PCI Security Standards Council. Many Wavelength use cases, such as interactive live video streams, AR/VR and real-time gaming, require in-app purchases. Starting today, you can use AWS Wavelength to build, deploy and run applications that store and use sensitive payment card data in compliance with PCI DSS.