Feature Friday: DockerCon speakers sound off on Kubernetes, Service Mesh and More

DockerCon brings industry leaders and experts of the container world to one event where they share their knowledge, experience and guidance. This year is no different. For the next few weeks, we’re going to highlight a few of our amazing speakers and the talks they will be leading.
In this first highlight, we have a few of our own Docker speakers who are covering storage and networking topics, including everything from container-level networking up to full cross-infrastructure and cross-orchestrator networking.
 
Persisting State for Windows Workloads in Kubernetes
More on their session here.

Anusha Ragunathan
Docker Software Engineer

Deep Debroy
Docker Software Engineer

What is your breakout about?
We’ll be talking about persistent storage options for Windows workloads on Kubernetes. While a lot of options exist for Linux workloads, we will look at dynamic provisioning scenarios for Windows workloads.
Why should people go to your session?
Persistence in Windows containers is very limited. Our talk aims to tackle this hard problem and provide practical solutions. The audience will learn about ways to achieve persistent storage in their Windows container workloads and they will also hear about future direction.
What is your favorite DockerCon moment?
Deep: The DockerCon party in Austin.
What are you looking forward to the most?
Anusha: I’m looking forward to the Docker women’s summit and attending Black Belt sessions.

Just What Is A “Service Mesh”, And If I Get One Will It Make Everything OK?
More on Elton’s session here.

Elton Stoneman 
Docker Developer Advocate

What is your breakout about? 
I’m talking about service meshes – Linkerd and Istio in particular. It’s a technical session so there are lots of demos, but it’s grounded in the practical question – do you really need a service mesh, and is it worth the cost? You’ll learn what a service mesh can do, and how it helps to cut a lot of infrastructure concerns from your code.
What are you most excited about at DockerCon?
I can’t tell you, it’s a secret… But I’m involved in one of our big announcements and it’s going to be a real “this changes everything” moment.
What is your all time favorite DockerCon moment?
In Barcelona, I presented one of the keynote demos with my Docker buddy Lee. We were on the big stage for about 7 minutes, and rehearsing for that took all weekend. Lots of work but great fun and we had a ton of positive feedback.

Microservices-enabled API Server – Routing Across Any Infrastructure
More on their session here.

Brett Inman
Docker Engineering Manager

Alex Hokanson
Docker Infrastructure Engineer

What is your breakout about?
Our session is about how we do service discovery, load balancing, and rate limiting for high-traffic public-facing services like Docker Hub. At Docker we have developed a solution that allows routing web traffic across different workloads and environments. It doesn’t matter if the application is running natively on Ubuntu, via “docker container run”, in Docker Swarm, or in Kubernetes – our solution will get traffic to your service efficiently and even handle containers coming and going!
Why should people go to your session?
Our routing layer is the single most important piece of our infrastructure at Docker and moving that layer from host-based, native applications to Kubernetes was no small feat. Your routing layer shouldn’t slow developers down–see how we give our internal customers even more choice and flexibility!
What’s your favorite DockerCon moment?
Brett I: My favorite moment was meeting a large group of devops people in a Hallway Track and realizing we were all solving the same problems individually, how inefficient that was, and how powerful community and open source can be.
What are you most excited about for DCSF 19?
Alex: Meeting people and learning about how they operationalize Docker.

Thank you all so much and see you at DockerCon!


For more information:

Register for DockerCon 2019, April 29 – May 2 in San Francisco.
Sign up and attend these additional events, running in conjunction with DockerCon:

Women@DockerCon Summit, Monday, April 29th
Open Source Summit, Thursday, May 2nd
Official Docker Training and Certification
Docker Workshops

Source: https://blog.docker.com/feed/

Top 5 Ways to Enhance Your DockerCon Experience

DockerCon 2019 is coming soon to San Francisco, and we’ve significantly improved your DockerCon experience based on your feedback. If you haven’t reserved your spot, head over to register today.

After each conference, our team goes through all of your feedback and brainstorms adjustments big and small to make sure DockerCon remains a special experience for you. To everyone that filled out the event survey – thank you! We know it can seem tedious but we appreciate the feedback.
With that in mind, we wanted to share some of the new changes you’ll see in San Francisco:

Role-based Content: This year you can find dedicated tracks on how to use Docker for Developers and for IT Infrastructure & Operations teams. These tracks are led by Docker Captains, customers and Docker Solution Architects sharing their experiences and best practices in building and running apps in containers. And, to make the experience even better – we heard you – we have a new mobile app for attendees.
One Global DockerCon: The community has grown all around the world. To help reach more people with container education, starting with DockerCon San Francisco 2019, we are adjusting our events calendar to feature one DockerCon conference a year and a series of one-day summit events around the world.
No Registration Required: First Come, First Served Talks: At the heart of experimentation is trying new things and adjusting as we learn. Breakout sessions are returning to a first come, first served basis – you don’t need to register to attend. However, you can still add them to your schedule in the agenda builder.
Opportunities to Engage with Open Source: We are excited to spend time in our open source roots. This year, DockerCon features a dedicated track and Open Source Summit to provide a forum to learn about the latest innovation and the container native projects driving our industry. There will be group discussions and opportunities to collaborate with contributors and maintainers and exchange ideas with one another.
Empower Communities: Last year we held our first ever Women’s event featuring a panel discussion and happy hour. After receiving overwhelmingly positive feedback, we have expanded to a half-day Women@DockerCon Summit with multiple panels, a workshop, and a small registration fee that will be donated to Black Girls Code.

Join us April 29 – May 2 at the Moscone Center in San Francisco, CA.


Source: https://blog.docker.com/feed/

Azure Search – New Storage Optimized service tiers available in preview

Azure Search is an AI-powered cloud search service for modern mobile and web app development. Azure Search is the only cloud search service with built-in artificial intelligence (AI) capabilities that enrich all types of information to easily identify and explore relevant content at scale. It uses the same integrated Microsoft natural language stack as Bing and Office, plus prebuilt AI APIs across vision, language, and speech. With Azure Search, you spend more time innovating on your websites and applications, and less time maintaining a complex search solution.

Today we are announcing the preview of two new service tiers for Storage Optimized workloads in Azure Search. These L-Series tiers offer significantly more storage at a reduced cost per terabyte compared to the Standard tiers, making them ideal for solutions with a large amount of index data and lower query volume throughout the day, such as internal applications searching over large file repositories, archival scenarios where you have business data going back many years, or e-discovery applications.

Searching over all your content

From finding a product on a retail site to looking up an account within a business application, search services power a wide range of solutions with differing needs. While some scenarios like product catalogs need to search over a relatively small amount of information (100 MB to 1 GB) quickly, for others it’s a priority to search over large amounts of information in order to properly research, perform business processes, and make decisions. With information growing at the rate of 2.5 quintillion bytes of new data per day, this is becoming a much more common, and costly, scenario, especially for businesses.

What’s new with the L-series tier

The new L-Series service tiers support the same programmatic API, command-line interfaces, and portal experience as the Basic and Standard tiers of Azure Search. Internally, Azure Search provisions compute and storage resources for you based on how you’ve scaled your service. Compared to the S-Series, each L-Series search unit has significantly more storage I/O bandwidth and memory, allowing each unit’s corresponding compute resources to address more data. The L-Series is designed to support much larger indexes overall (up to 24 TB total on a fully scaled out L2) for applications.

 

|  | Standard S1 | Standard S2 | Standard S3 | Storage Optimized L1 | Storage Optimized L2 |
| --- | --- | --- | --- | --- | --- |
| Storage | 25 GB/partition (max 300 GB documents per service) | 100 GB/partition (max 1.2 TB documents per service) | 200 GB/partition (max 2.4 TB documents per service) | 1 TB/partition (max 12 TB documents per service) | 2 TB/partition (max 24 TB documents per service) |
| Max indexes per service | 50 | 200 | 200, or 1,000/partition in high density mode | 10 | 10 |
| Scale out limits | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas); up to 12 replicas in high density mode | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas) |

Please refer to the Azure Search pricing page for the latest pricing details.

Customer success and common scenarios

We have been working closely with Capax Global LLC, a Hitachi Group Company, to create a service tier that works for one of their customers. Capax Global combines well-established patterns and practices with emerging technologies while leveraging a wide range of industry and commercial software development experience. In our discussions with them, we found that a storage optimized tier would be a good fit for their application since it offers the same search functionality at a significantly lower price than the standard tier.

“The new Azure Search Storage Optimized SKU provides a cost-effective solution for customers with a tremendous amount of content. With it, we’re now able to enrich the custom solutions we build for our customers with a cloud hosted document-based search that meets the search demands of millions of documents while continuing to lead with Azure. This new SKU has further strengthened the array of services we have to utilize to help our customers solve their business problems through technology.”

– Mitch Prince, VP Cloud Productivity + Enablement at Capax Global LLC, A Hitachi Group Company

The Storage Optimized service tiers are also a great fit for applications that incorporate the new cognitive search capabilities in Azure Search, where you can leverage AI-powered components to analyze and annotate large volumes of content, such as PDFs, office documents, and rows of structured data. These data stores can result in many terabytes of indexable data, which becomes very costly to store in a query latency-optimized service tier like the S3. Cognitive search combined with the L-Series tiers of Azure Search provide a full-text query solution capable of storing terabytes of data and returning results in seconds.

Regional availability

For the initial public preview, the Storage Optimized service tiers will be available in the following regions:

West US 2
South Central US
North Central US
West Europe
UK South
Australia East

We’ll be adding additional regions over the coming weeks. If your preferred region is not supported, please reach out to us directly at azuresearch_contact@microsoft.com to let us know.

Getting started

For more information on these new Azure Search tiers and pricing, please visit our documentation, pricing page, or go to the Azure portal to create your own Search service.
Source: Azure

5 need-to-know networking sessions at Next ‘19

Google Cloud Next ‘19 has everything you need to navigate all the networking products, services, and innovations GCP has to offer. With almost 20 networking sessions at Google Cloud Next this year, we have something for you, whether you’re just starting to move data to Google Cloud or you’re looking to modernize your traffic management using the latest advancements in networking. Here are five sessions that you definitely shouldn’t miss.

1. A Year in GCP Networking
Ferris Bueller said it best: “[Networking] moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” This session provides a 360-degree view of the advancements we have made in networking over the past year, across the 20+ networking products in our portfolio, along the pillars of connect, secure, optimize, scale, and modernize your network. But we don’t just talk about these advancements in theory: U.S. retailer Target will share how they are using some of the latest networking products and services, right now, to advance their business objectives. Learn more.

2. The High-Performance Network
Google’s network backbone has thousands of miles of fiber optic cable, uses advanced software-defined networking, and provides edge caching services to deliver fast, consistent, and scalable performance. Get an inside look at this premium global network—built around the world and under the sea—and see how Google’s software innovations are designed to make the internet faster. Learn more.

3. Think Big, Think Global
If you’re a global organization, or you want to be one, Google’s global Virtual Private Cloud (VPC) offers the flexibility to scale and control how workloads connect regionally and globally. Learn the advantages of multi-region deployments and check out tips and tricks to keep your VPC secure, how to extend it to on-prem, how to deploy highly available services, and much more. Learn more.

4. Traffic Director and Envoy-Based L7 ILB for Production-Grade Service Mesh and Istio
Service mesh is one of the most important networking paradigms to emerge for delivering multi-cloud applications and (micro)services. Istio is a leading open-source service mesh built using open proxies like Envoy. Be one of the first people to get a close look at Traffic Director, our new GCP-managed service that provides configuration and traffic control for service mesh. Also get a preview of L7 internal load balancing, which is essentially fully managed Traffic Director and Envoy proxies under the hood but looks like a traditional load balancer, making it easier to bring the benefits of service mesh to brownfield environments. Learn more.

5. Open Systems: Key to Unlocking Multi-Cloud and New Business With Lyft, Juniper, Google
Hear directly from leaders at Juniper, Google, and Lyft as they unpack what “open” means to them and how open source, open interfaces, and open systems are paving the path to seamless multi-cloud services and new business models. You will also hear in-depth about several open-source projects, including Kubernetes, gRPC, Envoy, Traffic Director, and Tungsten Fabric (Open Contrail), and get a chance to ask questions about bringing these technologies to your own environments. Learn more.

While these five sessions are certainly highlights, it doesn’t end there. From network security, visibility, and monitoring to partner and third-party services discussions, Google Cloud Next ‘19 has the information you need to help you get the most from your network. Be sure to check out the session list here, and register here.
Source: Google Cloud Platform

Taking charge of your data: Understanding re-identification risk and quasi-identifiers with Cloud DLP

Preventing the exposure of personally identifiable information (PII) is a big concern for organizations—and not so easy to do. Google’s Cloud Data Loss Prevention (DLP) can help, with a variety of techniques to identify and hide PII, exposed via an intuitive and flexible platform.

In previous “Taking charge of your data” posts, we talked about how to use Cloud DLP to gain visibility into your data and how to protect sensitive data with de-identification, obfuscation, and minimization techniques. In this post, we’re going to talk about another kind of risk: re-identification, and how to measure and reduce it.

A recent Google Research paper defines re-identification risk as “the potential that some supposedly anonymous or pseudonymous data sets could be de-anonymized to recover the identities of users.” In other words, data that can be connected to an individual can expose information about them, and this can make the data more sensitive. For example, the number 54,392 alone isn’t particularly sensitive. However, if you learned this was someone’s salary alongside other details about them (e.g., their gender, zip code, alma mater), the risk of associating that data with them goes up.

Thinking about re-identification risks
There are various factors that can increase or decrease re-identification risk, and these factors can shift over time as data changes. In this blog post, we present a way to reason about these risks using a systematic and measurable approach.

Let’s say you want to share data with an analytics team and you want to ensure a lower risk of re-identification; there are two main types of identifiers to consider:

Direct identifiers – These are identifiers that directly link to and identify an individual. For example, a phone number, email address, or social security number usually qualifies as a direct identifier since it is typically associated with a single individual.
Quasi-identifiers – These are identifiers that do not uniquely identify an individual in most cases but can in some instances or when combined with other quasi-identifiers. For example, a value like someone’s job title may not identify most users in a population, since many people might share that title. But some values like “CEO” or “Vice President” may only be present for a small group or a single individual.

When assessing re-identification risk, you want to consider how to address both direct and quasi-identifiers. For direct identifiers, you can consider options like redaction or replacement with a pseudonym or token. To identify risk in quasi-identifiers, one approach is to measure the statistical distribution to find any unique values. For example, take the data point “age 27”. How many people in your dataset are age 27? If there are very few people of age 27 in your data set, there’s a higher potential risk of re-identification, whereas if there are a lot of people aged 27, the risk is reduced.

Understanding k-anonymity
K-anonymity is a property that indicates how many individuals share the same value or set of values. Continuing with the example above, imagine you have 1M rows of data including a column of ages, and in that 1M rows only one person has age=27. In that case, the “age” column has a k value of 1. If there are at least 10 people for every age, then you have a k value of 10. You can measure this property across a single column, like age, or across multiple columns, like age+zip code.
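To make this concrete, here is a small, self-contained Java sketch (hypothetical data and class names, not Cloud DLP or its API) that computes the smallest group size, the “k” value, for an age+zip quasi-identifier pair, and shows how generalizing ages into five-year buckets raises it:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Toy illustration only (this is not Cloud DLP): measure the smallest group
// size ("k") for the quasi-identifier pair (age, zip), before and after
// generalizing exact ages into five-year buckets.
public class KAnonymitySketch {

    record Person(int age, String zip) {}

    // Generalize an exact age such as 27 into a range label such as "26-30".
    static String ageBucket(int age) {
        int low = ((age - 1) / 5) * 5 + 1;   // 27 -> 26, 30 -> 26, 31 -> 31
        return low + "-" + (low + 4);
    }

    // Smallest group size over all distinct quasi-identifier values.
    static long minGroupSize(List<Person> people, Function<Person, String> quasiId) {
        Map<String, Long> counts = people.stream()
                .collect(Collectors.groupingBy(quasiId, Collectors.counting()));
        return Collections.min(counts.values());
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person(27, "94043"), new Person(29, "94043"), new Person(28, "94043"),
                new Person(34, "10001"), new Person(33, "10001"), new Person(31, "10001"));

        // Exact (age, zip): the group (27, 94043) appears once, so k = 1.
        System.out.println("k for (age, zip):        "
                + minGroupSize(people, p -> p.age() + "|" + p.zip()));

        // Generalized (age bucket, zip): groups merge, so k rises to 3 here.
        System.out.println("k for (age bucket, zip): "
                + minGroupSize(people, p -> ageBucket(p.age()) + "|" + p.zip()));
    }
}
```

Cloud DLP’s risk analysis (and the Cloud Data Studio visualization mentioned below) does this kind of measurement for you at scale; the sketch just illustrates the idea of k values and generalization on a handful of rows.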
If there is only one person age 27 in zip code 94043, then that group (27, 94043) has a k value of 1.

Understanding the lowest k value for a set of columns is important, but you also want to know the distribution of those k values. That is, does 10% of your data have a low k value, or does 90% of your data have a low k value? In other words, can you simply drop the rows that have low k values, or do you need to fix them another way? A technique called generalization can be helpful here by allowing you to retain more rows at the cost of revealing less information per row; for example, “bucketing” ages into five-year spans would replace age=27 with age=”26-30”, allowing you to retain utility in the data while making it less distinguishing.

Understanding how much of your data is below a certain k threshold, and whether you drop the data or “generalize” the data, are all forms of weighing the re-identification risk against the data loss and utility value in the data. In this trade-off you are asking questions like:

What k threshold is acceptable for this use case?
Am I okay to drop the percentage of data that is below that threshold?
Does generalization allow me to retain more data value compared to dropping rows?

Let’s walk through one more example
Imagine you have a database that contains users’ age and zip code, and you want to ensure that no combination of age + zip is identifying below a certain threshold (like k=10). You can use Cloud DLP to measure this distribution and use Cloud Data Studio to visualize it (how-to guide here). Below is what this looks like on our sample dataset.

The chart shows the percentage of rows (blue) and unique values (red) that correlate to a k-value. In the example above, we see that 100% of the data maps to fewer than 10 people. To fix this, without dropping 100% of rows, we applied generalization to convert ages to age ranges. After the transform, only 3.9% of the rows and 21.15% of the unique values fall below the k=10 threshold. As a result, we reduced the re-identifiability while preserving much of the data utility, dropping only 3.9% of rows.

All hands on deck to prevent data loss
Of course, k-anonymity is just one way to assess quasi-identifiers and your risk of re-identification. Cloud DLP, for example, lets you assess other properties like l-diversity, k-map, and delta-presence. To learn more, check out this resource.

In addition, we plan to present a research paper on Estimating Reidentifiability and Joinability of Large Data at Scale at the IEEE conference in May, covering techniques for doing this kind of analysis at incredibly large scale. We also explore how these techniques can be used to understand additional use cases around join-ability and data flow. These techniques are very useful for data owners who want to take a risk-based approach towards anonymization, while gaining insights into their data. Hope to see you there!
Source: Google Cloud Platform

Accelerate Java application development on GCP with Micronaut

Editor’s note: Want to develop microservices in Java? Today we hear from Object Computing, Inc. (OCI), a Google Cloud partner that is also the driving force behind the Micronaut JVM framework. Here, OCI senior software engineer Sergio del Amo talks about how to use Micronaut on GCP to build serverless applications, and walks you through an example.

Traditional application architectures are being replaced by new patterns and technologies. Organizations are discovering great benefits to breaking so-called monolithic applications into smaller, service-oriented applications that work together in a distributed system. The new architectural patterns introduced by this shift call for the interaction of numerous, scope-limited, independent applications: microservices.

To support microservices, modern applications are built on cloud computing technologies, such as those provided by Google Cloud. Rather than managing the health of servers and data centers, organizations can deploy their applications to platforms where the details of servers are abstracted away, and services can be scaled, redeployed, and monitored using sophisticated tooling and automation.

In a cloud-native world, optimizing how a Java program’s logic is interpreted and run on cloud servers via annotations and other compilation details takes on new importance. Additionally, serverless computing adds incentive for applications to be lightweight and responsive and to consume minimal memory. Today’s JVM frameworks need to ease not just development, as they have done over the past decade, but also operations.

Enter Micronaut. Last year, a team of developers at OCI released this open source JVM framework that was designed to simplify developing and deploying microservices and serverless applications. Micronaut comes with built-in support for GCP services and hosting. In addition to out-of-the-box auto-configurations, job scheduling, and myriad security options, Micronaut provides a suite of built-in cloud-native features, including:

Service discovery. Service discovery means that applications are able to find each other (and make themselves findable) on a central registry, eliminating the need to look up URLs or hardcode server addresses in configuration. Micronaut builds service-discovery support directly into the @Client annotation, meaning that performing service discovery is as simple as supplying the correct configuration and then using the “service ID” of the desired service.

Load balancing. When multiple instances of the same service are registered, Micronaut provides a form of “round-robin” load balancing, cycling requests through the available instances to ensure that no one instance is overwhelmed or underutilized. This is a form of client-side load balancing, where each instance either accepts a request or passes it along to the next instance of the service, spreading the load across available instances automatically.

Retry mechanism and circuit breakers. When interacting with other services in a distributed system, it’s inevitable that at some point things won’t work out as planned—perhaps a service goes down temporarily or drops a request. Micronaut offers a number of tools to gracefully handle these mishaps. Retry provides the ability to invoke failed operations.
Circuit breakers protect the system from repetitive failures.

As a result of this natively cloud-native construction, you can use Micronaut in scenarios that would not be feasible with a traditional Model-View-Controller framework on the JVM, including low-memory microservices, Android applications, serverless functions, IoT deployments, and CLI applications. Micronaut also provides a reactive HTTP server and client based on Netty, an asynchronous networking framework that offers high performance and a reactive, event-driven programming model.

Sample app: the Google Cloud Translation API
To see how easy it is to integrate a Micronaut application with Google Cloud services, review this tutorial for building a sample application that consumes the Google Cloud Translation API.

Step 1: Install Micronaut
You can build Micronaut from source on GitHub or download it as a binary and install it on your shell path. However, the recommended way to install Micronaut is via SDKMAN!. If you do not have SDKMAN! installed already, you can do so in any Unix-based shell with the install command from the SDKMAN! site. You can then install Micronaut itself with the corresponding sdk install command (use sdk list micronaut to view available versions; at the time of this writing, the latest is 1.0.3), and confirm the installation by running mn -v.

Step 2: Create the project
The mn command serves as Micronaut’s CLI, and you can use it to create your new Micronaut project. For this exercise, we will create a stock Java application, but you can also choose Groovy or Kotlin as your preferred language by supplying the -lang flag (-lang groovy or -lang kotlin). The mn command also accepts a features flag, where you can specify features that add support for various libraries and configurations in your project; you can view available features by running mn profile-info service. We’re going to use the spock feature to add support for the Spock testing framework to our Java project, and run the create-app command to generate the project.

Note that we can supply a default package prefix (example.micronaut) to the project name (translator). If we did not do so, the project name would be used as the default package. This package will contain the Application class and any classes generated using the CLI commands (as we will do shortly). By default, the create-app command generates a Gradle build. If you prefer Maven as your build tool, you can say so using the -build flag (e.g., -build maven). This exercise uses the default Gradle project. At this point, you can run the application using the Gradle run task.

TIP: If you would like to run your Micronaut project using an IDE, be sure that your IDE supports Java annotation processors and that this support is enabled for your project. In IntelliJ IDEA, the relevant setting can be found under Preferences -> Build, Execution, Deployment -> Compiler -> Annotation Processors -> Enabled.

Step 3: Create a simple interface
Create a Java interface to define the translation contract. If you want to translate “Hello World” to Spanish, you can invoke any available implementation of the interface with translationService.translate("Hello World", "en", "es"). We also create a POJO to encapsulate the translation result.

Step 4: Expose an endpoint
Similar to other MVC frameworks such as Grails or Spring Boot, you can expose an endpoint by creating a controller. The endpoint, which we will declare in a moment, consumes a JSON payload that encapsulates the translation request.
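The code listings from the original post did not survive this feed’s extraction. As a rough, hedged sketch only (class and file names come from the surrounding text; field and method shapes are assumed), the pieces described in Steps 3 and 4 might look something like this, with each class in its own file as the path comments indicate:

```java
// src/main/java/example/micronaut/TranslationService.java
package example.micronaut;

// Step 3: the translation contract. Implementations translate `text`
// from the `source` language into the `target` language.
public interface TranslationService {
    Translation translate(String text, String source, String target);
}

// src/main/java/example/micronaut/Translation.java
package example.micronaut;

// Step 3: a POJO that encapsulates the translation result.
public class Translation {
    private String translatedText;

    public Translation() {
    }

    public Translation(String translatedText) {
        this.translatedText = translatedText;
    }

    public String getTranslatedText() {
        return translatedText;
    }

    public void setTranslatedText(String translatedText) {
        this.translatedText = translatedText;
    }
}

// src/main/java/example/micronaut/TranslationCommand.java
package example.micronaut;

import javax.validation.constraints.NotBlank;

// Step 4: a POJO that maps the incoming JSON payload; @NotBlank marks
// text, source, and target as required, as the post describes.
public class TranslationCommand {

    @NotBlank
    private String text;

    @NotBlank
    private String source;

    @NotBlank
    private String target;

    public String getText() { return text; }
    public void setText(String text) { this.text = text; }

    public String getSource() { return source; }
    public void setSource(String source) { this.source = source; }

    public String getTarget() { return target; }
    public void setTarget(String target) { this.target = target; }
}
```

With these in place, the controller described next can accept a TranslationCommand in the request body and return a Translation.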
We can map such a JSON payload with a POJO (src/main/java/example/micronaut/TranslationCommand.java). Please note that this class uses the annotation @javax.validation.constraints.NotBlank to declare text, source, and target as required. Micronaut’s validation is built on the standard framework, JSR 380, also known as Bean Validation 2.0. Hibernate Validator is a reference implementation of the validation API, and because you need an implementation of the validation API on the classpath, add it as a dependency in build.gradle.

Next, create a controller (src/main/java/example/micronaut/TranslationController.java). There are several things worth mentioning about the controller:

The controller exposes a /translate endpoint, which can be invoked with a POST request.
The value of the @Post and @Controller annotations is an RFC 6570 URI template.
Via constructor injection, Micronaut supplies a collaborator: TranslationService.
Micronaut controllers consume and produce JSON by default.
@Body indicates that the method argument is bound from the HTTP body.
To validate the incoming request, you need to annotate your controller with @Validated and the binding POJO with @Valid.

In addition to constructor injection, Micronaut supports field injection, JavaBean property injection, and method parameter injection.

Integrate with the Google Cloud Translation API
Now you want to add a dependency on the Google Cloud Translate library. Micronaut implements the JSR 330 specification for Java dependency injection, which provides a set of semantic annotations under the javax.inject package (such as @Inject and @Singleton) to express relationships between classes within the DI container. Create a singleton implementation of TranslationService that uses the Google Cloud Translation API. A few things to mention about that implementation:

The @Singleton annotation declares the class as a singleton.
A method annotated with @PostConstruct will be invoked once the object is constructed and fully injected.

Test the app
Thanks to Micronaut’s fast startup time, it is easy to write functional tests that verify the behavior of the whole application. A few things to note about such a test:

It’s easy to run the application from a test with the EmbeddedServer interface.
You can easily create an HTTP client bean to consume the embedded server.
The Micronaut HTTP client makes it easy to parse JSON into Java objects.
Creating HTTP requests is easy thanks to Micronaut’s fluid API.
We verify that the server responds with 400 (Bad Request) when validation of the incoming JSON payload fails.

Deploy to Google Cloud
There are multiple ways to deploy a Micronaut application to Google Cloud. You may choose to containerize your app or deploy it as a fat JAR. Check out these tutorials to learn more:

Deploy a Micronaut application to Google Cloud App Engine
Deploy a Micronaut application containerized with Jib to Google Kubernetes Engine

Micronaut performance
In addition to its cloud-native features, Micronaut also represents a significant step forward in microservice frameworks for the JVM by supporting common Java framework features such as dependency injection (DI) and aspect-oriented programming (AOP) without compromising startup time, performance, or memory consumption. Micronaut features a custom-built DI and AOP model that does not use reflection.
Instead, an abstraction over the Java annotation processor tool (APT) API and the Groovy abstract syntax tree (AST) lets developers build efficient applications without giving up features they know and love. By moving the work of the DI container to the compilation phase, there is no longer a link between the size of the codebase and the time needed to start an application or the memory required to store reflection metadata. As a result, Micronaut applications written in Java typically start within a second. This approach has opened doors to a variety of framework features that are more easily achieved with AOT compilation and that are unique to Micronaut.

Conclusion
Cloud-native development is here to stay, and Micronaut was built with this landscape in mind. Like the cloud-native architecture that motivated its creation, Micronaut’s flexibility and modularity allow developers to create systems that even its designers could not have foreseen.

To learn more about using Micronaut for your cloud-based projects, check out the Micronaut user guide. Learn how to use Micronaut in concert with Google Cloud Platform services, such as Cloud SQL, Kubernetes, and Google’s Instance Metadata Server, in our upcoming webinar. There’s also a small but growing selection of step-by-step tutorials, including guides for all three of Micronaut’s supported languages: Java, Groovy, and Kotlin.

Finally, the Micronaut community channel on Gitter is an excellent place to meet other developers who are already building applications with the framework and to interact directly with the core development team.
Source: Google Cloud Platform

Good vibes only—don’t miss these Cloud Next ‘19 sessions on inclusivity, sustainability

At Google Cloud, we’re excited to join with our customers and build a world that works for everyone. Technology and innovation can help businesses grow sustainably, create richer, more interactive learning experiences, power economies where more people have an opportunity to thrive, and advance inclusion for all. We’ve picked a few of the sessions at Next 2019 that focus on building a cleaner, accessible and more inclusive future for generations to come. These are the ideas that give us good vibes about building with Google Cloud, so be sure not to miss them!

1. Prioritizing diversity and inclusion
Workforce diversity starts with recruiting and hiring diversity. In Inclusive by Design: Engage and Recruit Diverse Talent with AI, you’ll hear how companies like Cox are reaching a larger and more diverse talent pool and making racial, ethnic and gender diversity a key driver of innovation and growth. It takes a more diverse workforce for companies to truly build for all customers. It also makes business sense. In The Business Case for Product Inclusion, you’ll hear from a panel of Google leaders and Google Cloud customers about demonstrating the business value of inclusive products. And in the Chief Diversity Officer Panel: Building Dynamic Inclusive Cultures, you’ll hear from a panel of Chief Diversity Officers about how they are advancing vibrant, inclusive cultures across their organizations.

Another amazing panel during Next ‘19 will share the stories of female technical leaders across Google. In Women of Cloud: How to Grow our Clout 2.0, senior women will discuss their past year in both the field and their careers, and answer your questions about career development and company culture. They’ll likely touch on allyship, or advocating for groups who have been historically excluded from the tech industry. If you want to learn more about allyship, and practice it, Allyship: The Fundamentals will run on both Wednesday and Thursday. This session will give you the chance to practice identity-based leadership by examining your position in social struggles and putting yourself in another’s shoes.

2. Building a more sustainable future
If you’re curious about your environmental impact as a cloud user, join us for Building Sustainability Into Our Infrastructure, Your Goals and New Products. We’ll share what it took to build a cloud with sustainability built-in, and National Geographic will share how they incorporate a corporate focus on the earth into an IT one. SunPower will also join for an exciting announcement as part of their journey toward making home solar accessible to all.

Making renewable energy like solar a primary energy source for the globe is a big challenge, and so are the challenges faced in global ocean exploitation. The oceans are big—140 million square miles big, or about 70% of the earth’s surface. But less than 5% has been explored. That presents a problem for sustainable fishing management, particularly with dark vessels that do not have any associated location data and could be fishing illegally. In Making Planet-Scale GIS Possible with Google Earth Engine and BigQuery GIS, Google Cloud customer Global Fishing Watch will share how they use Google Earth Engine to automatically extract vessel locations from massive amounts of radar imagery, then use BigQuery GIS to elucidate the dark vessels.

Overfishing is just one example of the natural resource challenges we face. In 2018, the global demand for resources was 1.7 times what the earth can support in one year.
Google Cloud and SAP came together to help address this challenge by hosting a sustainability contest for social entrepreneurs. In Circular Economy 2030: Cloud Computing for a Sustainable Revolution, you can learn more about how cloud computing can be mobilized for a sustainable future with responsible consumption and production, and hear the anticipated announcement of the five finalists of Circular Economy 2030.

3. Nonprofit organizations making a positive impact
Global nonprofit organizations are tackling big challenges with Google technology. One area with a promising future is using data analytics and other new technologies to solve some of the world’s greatest challenges, such as unemployment or sustainable development. In Data for Good: Driving Social and Environmental Impact with Big Data Solutions, you’ll hear about how Google Cloud is working to empower nonprofits around the world, and how we’ve collaborated with organizations like the Global Partnership for Sustainable Development Data (GPSDD) to mobilize data for sustainable development across our Data Solutions for Change, Visualize 2030, and Circular Economy 2030 initiatives.

In Empowering Global Nonprofits to Drive Impact with G Suite, we’ll talk about how nonprofits are embracing technology to improve how they collaborate, engage with their community, and fundraise for their cause. You’ll hear from two Bay Area organizations making a positive impact on the lives of local youth.

4. Using technology to help the visually impaired
As part of a strategic initiative by the Library of Congress to support users who are visually impaired, one Google Cloud customer is building an app to make reading books more easily available. In Making Books Accessible to the Visually Impaired, SpringML will share how users can now search for and play an audiobook from a Google Home device. Hear about the development process SpringML went through to make almost 1 TB of audio content available via their application.

In addition to partnering with our customers on applications like the one SpringML is building, we’re working on improving accessibility with Google Cloud products. In Empowering Entrepreneurs and Employees With Disabilities Using G Suite, the Blind Institute of Technology will share how they used G Suite to establish workflows that are effective and efficient for their employees, some of whom happen to be visually impaired. There’s more on this in the G Suite and Chrome Accessibility Features for an Inclusive Organization session, which will go into depth on the built-in accessibility features of G Suite and Chromebooks.

And finally, a session focused on those who will be building this new world in a few years. Did you know that more than half of school-aged children in the U.S. use Google in their classrooms? Join For Parents and Guardians: How Your Child Uses Google in Class to learn about the tools that are transforming learning outcomes, curriculum, and opportunities for children across the nation.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there.
Source: Google Cloud Platform

Exploring container security: the shared responsibility model in GKE

Editor’s note: This post is part of our blog post series on container security at Google.

Security in the cloud is a shared responsibility between the cloud provider and the customer. Google Cloud is committed to doing its part to protect the underlying infrastructure, like encryption at rest by default, and to providing capabilities you can use to protect your workloads, like access controls in Cloud Identity and Access Management (IAM). As newer infrastructure models emerge, though, it’s not always easy to figure out what you’re responsible for versus what’s the responsibility of the provider. In this blog post, we aim to clarify, for Google Kubernetes Engine (GKE), what we do and don’t do—and where to look for resources to lock down the rest.

Google Cloud’s shared responsibility model
The shared responsibility model depends on the workload—the more we manage, the more we can protect. This starts from the bottom of the stack and moves upwards, from the infrastructure as a service (IaaS) layer, where only the hardware, storage, and network are the provider’s responsibility, up to software as a service (SaaS), where almost everything except the content and its access is up to the provider. (For a deep dive, check out the Google Infrastructure Security Design Overview whitepaper.) Platform as a service (PaaS) layers like GKE fall somewhere in the middle, hence the ambiguity that arises.

For GKE, at a high level, we are responsible for protecting:

The underlying infrastructure, including hardware, firmware, kernel, OS, storage, network, and more. This includes encrypting data at rest by default, encrypting data in transit, using custom-designed hardware, laying private network cables, protecting data centers from physical access, and following secure software development practices.
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
The Kubernetes distribution. GKE provides the latest upstream versions of Kubernetes, and supports several minor versions. Providing updates to these, including patches, is our responsibility.
The control plane. In GKE, we manage the control plane, which includes the master VMs, the API server and other components running on those VMs, and the etcd database. This includes upgrades and patching, scaling, and repairs, all backed by an SLO.
Google Cloud integrations, for IAM, Cloud Audit Logging, Stackdriver, Cloud Key Management Service, Cloud Security Command Center, and so on. These bring the controls available for IaaS workloads across Google Cloud to GKE as well.

Conversely, you are responsible for protecting:

The nodes that run your workloads, including VM images and their configurations. This includes keeping your nodes updated, as well as leveraging Compute Engine features and other Google Cloud products to help protect your nodes. Note that we already manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading.
The workloads themselves, including your application code, dockerfiles, container images, data, RBAC/IAM policy, and the containers and pods that you are running.
This means leveraging GKE features and other Google Cloud products to help protect your containers.

Hardening the control plane is Google’s responsibility
Google is responsible for making the control plane more secure. The control plane is the component of Kubernetes that manages how Kubernetes communicates with the cluster and applies the user’s desired state; it includes the master VM, API server, scheduler, controller manager, cluster CA, root-of-trust key material, IAM authenticator and authorizer, audit logging configuration, etcd, and various other controllers. All of your control plane components run on Compute Engine instances that we own and operate. These instances are single-tenant, meaning each instance runs the control plane and its components for only one customer. (You can learn more about GKE control plane security here.)

We make changes to the control plane to further harden these components on an ongoing basis—as attacks occur in the wild, when vulnerabilities are announced, or when new patches are available. For example, we updated clusters to use RBAC rather than ABAC by default, and locked down and eventually disabled the Kubernetes dashboard.

How we respond to vulnerabilities depends on which component the vulnerability is found in:

The kernel or an operating system: We apply the patch to affected components, including obtaining and applying the patch to the host images for Kubernetes, COS, and Ubuntu. We automatically upgrade the master VMs, but you are responsible for upgrading nodes. Spectre/Meltdown and L1TF are examples of such vulnerabilities.
Kubernetes: With Googlers on the Kubernetes Product Security Team, we often help develop and test patches for Kubernetes vulnerabilities when they are discovered. Since GKE is an official distribution, we receive the patch as part of the Private Distributors’ List. We’re responsible for rolling out these changes to the master VMs, but you are responsible for upgrading your nodes. Take a look at these security bulletins for the latest examples of such vulnerabilities: CVE-2017-1002101, CVE-2017-1002102, and CVE-2018-1002105.
A component used in Kubernetes Engine’s default configuration, like the Calico components for Network Policy, or etcd: We don’t control the open-source projects used in GKE; however, we select open-source projects that have demonstrated robust security practices and that take security seriously. For these projects, we may receive a patch from upstream Kubernetes, a partner, or the distributor list of another open-source project. We are responsible for rolling out these changes and/or notifying you if action is required. TTA-2018-001 is an example of such a vulnerability that we patched automatically.
GKE: If a vulnerability is discovered in GKE itself, for example through our Vulnerability Reward Program, we are responsible for developing and applying the fix.

In all of these cases, we make patches available as part of general GKE releases (patch releases and bug fixes) as soon as possible given the level of risk, embargo time, and any other contextual factors.

We do most of the hard work to protect nodes, but it’s your responsibility to upgrade and reap the benefits
Your worker nodes in Kubernetes Engine consist of a few different surfaces that need to be protected, including the node OS, the container runtime, Kubernetes components like the kubelet and kube-proxy, and Google system containers for monitoring and logging.
We’re responsible for developing and releasing patches for these components, but you are responsible for upgrading your system to apply these patches.

Kubernetes components like kube-proxy and kube-dns, and Google-specific add-ons that provide logging, monitoring, and other services, run in separate containers. We’re responsible for these containers’ control plane compatibility, scalability, upgrade testing, and security configurations. If these need to be patched, it’s your responsibility to upgrade to apply the patches.

To ease patch deployment, you can use node auto-upgrade. Node auto-upgrade applies updates to nodes on a regular basis, including updates to the operating system and Kubernetes components from the latest stable version. This includes security patches. Notably, if a patch contains a critical fix and can be rolled out before the public vulnerability announcement without breaking embargo, your GKE environment will be upgraded before the vulnerability is even announced.

Protecting workloads is still your responsibility
What we’ve been talking about so far is the underlying infrastructure that runs your workload—but you of course still have the workload itself. Application security and other protections for your workload are your responsibility.

You’re also responsible for the Kubernetes configurations that pertain to your workloads. This includes setting up a NetworkPolicy to restrict pod-to-pod traffic and using a PodSecurityPolicy to restrict pod capabilities. For an up-to-date list of the best practices we recommend to protect your clusters, including node configurations, see Hardening your cluster’s security.

If there is a vulnerability in your container image or application, however, it is fully your responsibility to patch it. There are tools you can use to help:

Google managed base images, which are regularly patched by Google for known vulnerabilities.
Container Registry vulnerability scanning, to analyze your container images and packages for potential known vulnerabilities.
Cloud Security Scanner (alpha), to help you detect common application vulnerabilities.

Incident response in GKE
So what if you’ve done your part, we’ve done ours, and your cluster is still attacked? Damn! Don’t panic.

Google Cloud takes the security of our infrastructure—including where user workloads run—very seriously, and we have documented processes for incident response. Our security team’s job is to protect Google Cloud from potential attacks and to protect the components outlined above. For the pieces you’re responsible for, if you’re looking to further protect yourself from potential container-specific attacks, Google Cloud already has a range of container security partners integrated with the Cloud Security Command Center.

If you are responding to an incident, you can leverage Stackdriver Incident Response & Management (alpha) to help you reduce your time to incident mitigation, refer to sample queries for Kubernetes audit logs, and check out the Cloud Forensics 101 talk from Next ‘18 to learn more about conducting forensics.

What’s the tl;dr of GKE security?
For GKE, we’re responsible for protecting the control plane, which includes your master VM, etcd, and controllers; you’re responsible for protecting your worker nodes, including deploying patches to the OS, runtime, and Kubernetes components, and of course securing your own workload.
An easy way to do your part is to:

use node auto-upgrade,
protect your workload from common image and application vulnerabilities, and
follow the Google Kubernetes Engine hardening guide.

If you follow those three steps, together we can build GKE environments that are resilient to attacks and vulnerabilities, and that deliver great uptime and performance.
Source: Google Cloud Platform