Introducing Patterns: Prebuilt Blocks for Beautifully Designed Websites

The WordPress Editor is a powerful tool that can help bring your design ideas to life, but one of the best parts is that you don’t have to start from scratch. Building sophisticated designs can be as easy as picking Patterns from our growing library and snapping them together to create beautiful-looking posts and pages. As of today, we’re offering over 100 individual Patterns — with more being added all the time!

If you’ve never used Patterns before, we’ve got an introduction to help you get started and to highlight some new features.

The best way to introduce Patterns is to use them. Here’s how you can add them to a post or a page:

1. Head to the WordPress Editor and click the + icon to add a new block.
2. Click on the Patterns tab.
3. Click on the Pattern you’d like to see in your document, and it’ll be inserted at the location of your cursor.

Here’s a quick demo that shows how to add an image gallery. 

If you’re familiar with the Block Editor, the process will look similar. Once you’ve inserted a Pattern into a post or a page, you’ll be able to see how you can customize and edit the Pattern by clicking on different areas. The image below reveals the editing options that appear with our example. 

Each Pattern is a collection of different blocks carefully put together to help you produce great-looking blog posts and pages in the Editor. In the example above, it’s a collection of Image, Paragraph, Spacer, and Columns Blocks, all pre-arranged into a simple but elegant Pattern for displaying images. Using Patterns in the Editor is kind of like having a WordPress web designer right there with you, building up a design element by element.

The idea is that, once you’ve inserted a Pattern, you can start customizing it to make it yours.

For even more customization options with Patterns, try combining them with the updated fonts.

Over 100 Patterns to Choose From

This is where the number of Patterns gets exciting. Think of it like having over 100 templates you can add to your posts and pages. You can browse by category to see all the available Pattern options.

Looking at a few together might be helpful. Here are some of my recent favorites.

They’re favorites not just because they look great, but because these Patterns use so many different Blocks to produce a unique and useful design. Take the center Registration Form Pattern, for example. It combines a Heading Block, Paragraph Blocks, the Form Block, and the Columns Block into one Pattern that, together, can make up an entire page.

More Patterns are on the Way

We’re just getting started creating new Patterns for you. What type of Pattern would make it easier to create Posts and Pages on your site? More are on the way and we’d love to hear your ideas and feedback so we can make your publishing and site-building experience even better.

And if you have anything to share that you’ve made with a Pattern or with the Editor, let us know! We’d love to see and hear how you’re using Patterns.
Source: RedHat Stack

Expert Advice: How to Improve Remote Education Collaboration

As we’re witnessing with schools and learning communities around the world, education is shifting dramatically. With the right set of tools, your class, team, or group can learn to communicate and collaborate more efficiently online. Since our company was founded over fifteen years ago, the people behind the scenes have worked from home — or from anywhere they choose in the world — and have learned a lot along the way.

A tool we call P2 has been indispensable to us, and to a growing number of educators. Want to learn our tips and tricks? Join us for a free webinar on Thursday, November 5, so you and your team can learn to make the most of this tool for remote collaboration. You can also sign up for the free beta version of P2 that is now available.

Date: Thursday, November 5, 2020
Time: 10:00 am PT | 12:00 pm CT | 1:00 pm ET | 18:00 UTC
Registration link:
Who’s invited: Anyone looking to improve internal team collaboration or build a public forum with P2 is welcome, but this webinar is specially designed for educators and teachers.

Register for the webinar today! We look forward to seeing you.
Source: RedHat Stack

All treats, no tricks with product recommendation reference patterns

In all things technology, change is the only constant. This year alone has brought more uncertainty than ever before, and the IT shadows have felt full of perils. With the onset of the pandemic, the way consumers shop has shifted faster than anyone could have predicted. The move from brick-and-mortar stores to online shopping was already happening, but it has accelerated significantly this year. Shoppers have quickly transitioned to online purchasing, resulting in increased traffic and varying fulfillment needs. Shopper expectations have evolved as well: according to Catalyst and Kantar research, 66% of online purchasers choose a retailer based on convenience, while only 47% choose one based on price/value.

So the pressure is on for retailers to become digital and make sure shoppers are happy. But there’s no reason to be spooked. Done right, predictive analytics lets you serve customers better through an understanding of their purchasing behavior and patterns. Deep, data-driven insights are important to ensuring customer demand and preferences are accurately met. To make it easier to treat (not trick) your customers to better recommendations, we recently introduced Smart Analytics reference patterns: technical reference guides with sample code for common analytics use cases on Google Cloud, including predicting customer lifetime value, propensity to purchase, product recommendation systems, and more. We heard from many customers that you needed an easy way to put your analytics tools into practice, and that these are some common use cases.

Understanding product recommendation systems

Product recommendation systems are an important tool for understanding customer behavior. They’re designed to generate and provide suggestions for items or content a specific user would like to purchase or engage with.

A recommendation system creates an advanced set of complex connections between products and users, and compares and ranks these connections in order to recommend products or services as customers browse your website, for example. A well-developed recommendation system will help you improve your shoppers’ experience on a website and result in better customer acquisition and retention. These systems can significantly boost sales, revenue, click-through rates, conversions, and other important metrics, because personalizing a user’s experience creates a positive effect that translates to customer satisfaction, loyalty, and even brand affinity. Instead of building from scratch and reinventing the wheel every time, you can take advantage of these reference patterns to quickly start serving customers. It’s important to emphasize that recommender systems are not new, and you can build your own in-house or with any cloud provider. Google Cloud’s unique ability to handle massive amounts of structured and unstructured data, combined with our advanced capabilities in machine learning and artificial intelligence, provides a powerful set of products and solutions for retailers to leverage across their business.

Using reference patterns for real-world cases

In this reference pattern, you will learn step by step how to build a recommendation system by using BigQuery ML (a.k.a. BigQu-eerie ML).
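To make the pattern’s core step concrete, here is a sketch of training a BigQuery ML matrix factorization model for product recommendations. This is not the reference pattern’s actual code: the project, dataset, table, and column names below are made up, and because running the query requires Google Cloud credentials, the example only constructs the SQL statement.

```python
def build_training_sql(project: str, dataset: str) -> str:
    """Build a CREATE MODEL statement for a BigQuery ML matrix
    factorization recommender (illustrative names throughout)."""
    return f"""
CREATE OR REPLACE MODEL `{project}.{dataset}.purchase_recommender`
OPTIONS(
  model_type = 'matrix_factorization',
  feedback_type = 'implicit',      -- derive ratings from behavior, e.g. session time
  user_col = 'visitor_id',
  item_col = 'product_id',
  rating_col = 'session_duration'
) AS
SELECT visitor_id, product_id, session_duration
FROM `{project}.{dataset}.web_sessions`
"""

sql = build_training_sql("my-project", "retail")
# Submitting the query would use the BigQuery client library, e.g.:
#   from google.cloud import bigquery
#   bigquery.Client().query(sql).result()
```

Once such a model is trained, BigQuery ML can produce per-user recommendations that downstream systems serve as customers browse the site.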

Modernize your Java apps with Spring Boot and Spring Cloud GCP

It’s an exciting time to be a Java developer: there are new Java language features being released every six months, new JVM languages like Kotlin, and the shift from traditional monolithic applications to microservices architectures with modern frameworks like Spring Boot. And with Spring Cloud GCP, we’re making it easy for enterprises to modernize existing applications and build cloud-native applications on Google Cloud. First released two years ago, Spring Cloud GCP allows Spring Boot applications to easily utilize over a dozen Google Cloud services with idiomatic Spring Boot APIs. This means you don’t need to learn a Google Cloud-specific client library, but can still realize the benefits of the managed services:

If you have an existing Spring Boot application, you can easily migrate to Google Cloud services with little to no code changes.
If you’re writing a new Spring Boot application, you can leverage Google Cloud services with the framework APIs you already know.

Major League Baseball recently started their journey to the cloud with Google Cloud. In addition to modernizing their infrastructure with GKE and Anthos, they are also modernizing with a microservices architecture. Spring Boot is already the standard Java framework within the organization, and Spring Cloud GCP allowed MLB to adopt Google Cloud quickly with existing Spring Boot knowledge.

“We use the Spring Cloud GCP to help manage our service account credentials and access to Google Cloud services.” – Joseph Davey, Principal Software Engineer at MLB

Similarly, bol.com, an online retailer, was able to develop their Spring Boot applications on GCP more easily with Spring Cloud GCP.

“[bol.com] heavily builds on top of Spring Boot, but we only have a limited capacity to build our own modules on top of Spring Boot to integrate our Spring Boot applications with GCP. Spring Cloud GCP has taken that burden from us and makes it a lot easier to provide the integration to Google Cloud Platform.” – Maurice Zeijen, Software Engineer at bol.com

Developer productivity, with little to no custom code

With Spring Cloud GCP, you can develop a new app, or migrate an existing app, to adopt a fully managed database, create event-driven applications, add distributed tracing and centralized logging, and retrieve secrets, all with little to no custom code or custom infrastructure to maintain. Let’s look at some of the integrations that Spring Cloud GCP brings to the table.

Data

For a regular RDBMS, like PostgreSQL, MySQL, or MS SQL, you can use Cloud SQL and continue to use Hibernate with Spring Data, connecting to Cloud SQL simply by updating the JDBC configuration. But what about Google Cloud databases like Firestore, Datastore, and the globally distributed RDBMS Cloud Spanner? Spring Cloud GCP implements all the data abstractions needed so you can continue to use Spring Data, and its data repositories, without having to rewrite your business logic. For example, you can start using Datastore, a fully managed NoSQL database, just as you would any other database that Spring Data supports.

You can annotate a POJO class with Spring Cloud GCP annotations, similar to how you would annotate Hibernate/JPA classes. Then, rather than implementing your own data access objects, you can extend a Spring Data Repository interface to get full CRUD operations, as well as custom query methods. Spring Data and Spring Cloud GCP automatically implement the CRUD operations and generate the query for you.
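The annotated entity and repository described above might look something like the following sketch. This assumes the Spring Cloud GCP Datastore starter is on the classpath; the class and field names are illustrative, and package names vary by release (org.springframework.cloud.gcp.* in the 1.x line).

```java
// Sketch only: requires the Spring Cloud GCP Datastore starter.
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.cloud.gcp.data.datastore.repository.DatastoreRepository;
import org.springframework.data.annotation.Id;

import java.util.List;

@Entity(name = "books")          // maps to the "books" kind in Datastore
public class Book {
    @Id
    Long id;                      // Datastore key
    String title;
    String author;
}

// Spring Data generates the CRUD operations and the query
// behind findByAuthor at runtime; no DAO code is needed.
interface BookRepository extends DatastoreRepository<Book, Long> {
    List<Book> findByAuthor(String author);
}
```
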
Best of all, you can use built-in Spring Data features like auditing and capturing data change events. You can find full samples for Spring Data for Datastore, Firestore, and Spanner on GitHub.

Messaging

For asynchronous message processing and event-driven architectures, rather than manually provisioning and maintaining complicated distributed messaging systems, you can simply use Pub/Sub. By using higher-level abstractions like Spring Integration or Spring Cloud Stream, you can switch from an on-prem messaging system to Pub/Sub with just a few configuration changes. For example, by using Spring Integration, you can define a generic business interface that can publish a message, and then configure it to send messages to Pub/Sub. You can consume messages in the same way: with Spring Cloud Stream and the standard Java 8 streaming interfaces, you can receive messages from Pub/Sub by simply configuring the application. You can find full samples with Spring Integration and Spring Cloud Stream on GitHub.

Observability

If a user request is processed by multiple microservices and you would like to visualize the whole call stack across them, you can add distributed tracing to your services. On Google Cloud, you can store all the traces in Cloud Trace, so you don’t need to manage your own tracing servers and storage. Simply add the Spring Cloud GCP Trace starter to your dependencies, and all the necessary distributed tracing context (e.g., trace ID, span ID) is captured, propagated, and reported to Cloud Trace. That’s it: no custom code required. All the instrumentation and trace capabilities use Spring Cloud Sleuth. Spring Cloud GCP supports all of Spring Cloud Sleuth’s features, so distributed tracing is automatically integrated with Spring MVC, WebFlux, RestTemplate, Spring Integration, and more.

Cloud Trace generates a distributed trace graph. But notice the “Show Logs” checkbox. This Trace/Log correlation feature can associate log messages with each trace, so you can see the logs associated with a request and isolate issues. You can use the Spring Cloud GCP Logging starter and its predefined logging configuration to automatically produce log entries with the trace correlation data. You can find full samples with Logging and Trace on GitHub.

Secrets

Your microservice may also need access to secrets, such as database passwords or other credentials. Traditionally, credentials might be stored in a secret store like HashiCorp Vault. While you can continue to use Vault on Google Cloud, Google Cloud also provides the Secret Manager service for this purpose. Simply add the Spring Cloud GCP Secret Manager starter, and you can start referring to secret values from standard Spring properties using a special property syntax. You can find a full sample with Secret Manager on GitHub.

More in the works, in open source

Spring Cloud GCP closely follows the Spring Boot and Spring Cloud release trains. Currently, Spring Cloud GCP 1.2.5 works with Spring Boot 2.3 and the Spring Cloud Hoxton release train. Spring Cloud GCP 2.0 is on its way, and it will support Spring Boot 2.4 and the Spring Cloud Ilford release train. In addition to the core Spring Boot and Spring Cloud integrations, the team has been busy developing new components to meet developers’ needs:

Cloud Monitoring support with Micrometer
Spring Cloud Function’s GCP Adapter for Cloud Functions Java 11
Cloud Spanner R2DBC driver and Cloud SQL R2DBC connectors, to enable scalable and fully reactive services
Experimental GraalVM support for our client libraries, so you can compile your Java code into native binaries to significantly reduce startup times and memory footprint

Developer success is important to us. We’d love to hear your feedback, feature requests, and issues on GitHub, so we can understand your needs and prioritize our development work.
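As a sketch of the Secret Manager property syntax mentioned in the Secrets section above: the sm:// prefix is provided by the Secret Manager starter, but the property and secret names here are made up, and the exact prefix syntax varies by release.

```properties
# Sketch: resolve a Secret Manager secret into a standard Spring property.
# "db-password" is an illustrative secret name.
spring.datasource.password=${sm://db-password}
```
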
Try it out!

Want to see everything in action? Check out the Developer Hands-on Keynote from Google Cloud Next ‘20: On Air, where Daniel Zou shows how to leverage Spring Boot and Spring Cloud GCP when modernizing your application with Anthos, Service Mesh, and more. You can also easily try Spring Cloud GCP with many samples, or take the guided Spring Boot on GCP course on Qwiklabs or Coursera. Last but not least, you can find out about detailed features and configurations in the reference documentation.

Related article: Announcing Spring Cloud GCP 1.1: deepening ties to Pivotal’s Spring Framework
Source: Google Cloud Platform

Cloud Storage object lifecycle management gets new controls

Managing your cloud storage costs and reducing the risk of overspending is critical in today’s changing business environments. Today, we’re excited to announce the immediate availability of two new Object Lifecycle Management (OLM) rules designed to help protect your data and lower the total cost of ownership (TCO) of Google Cloud Storage. You can now transition objects between storage classes, or delete them entirely, based on when versioned objects became noncurrent (out of date), or based on a custom timestamp you set on your objects. The end result: more fine-grained controls to reduce TCO and improve storage efficiency.

Delete objects based on archive time

Many customers who leverage OLM protect their data against accidental deletion with Object Versioning. However, without the ability to automatically delete versioned objects based on their age, the storage capacity and monthly charges associated with old versions of objects can grow quickly. With the noncurrent time condition, you can filter based on archive time and apply any of the lifecycle actions that are already supported, including delete and change storage class. In other words, you can now set a lifecycle condition to delete an object that is no longer useful to you, reducing your overall TCO. One sample rule deletes all the noncurrent object versions that became noncurrent more than 30 days ago; a similar rule downgrades all the noncurrent object versions in Coldline that became noncurrent before January 31, 1980 to Archive.

Set custom timestamps

The second new Cloud Storage feature is the ability to set a custom timestamp in the metadata field and use it in a lifecycle management condition. Before this launch, the only timestamp that could be used for OLM was the one given to an object when it was written to the Cloud Storage bucket. However, this object creation timestamp may not actually be the date that you care most about. For example, you may have migrated data to Cloud Storage from another environment and want to preserve the original creation dates from before the transfer. In order to set lifecycle rules based on dates that make more sense to you and your business case, you can now set a specific date and time on objects and apply lifecycle rules based on it. All existing actions, including delete and change storage class, are supported. If you’re running applications such as backup and disaster recovery, content serving, or a data lake, you can benefit from this feature by preserving the original creation date of an object when ingesting data into Cloud Storage. This feature delivers fine-grained OLM controls, resulting in cost savings and efficiency improvements, because you can set your own timestamps directly on the assets themselves. One sample rule deletes all objects in a bucket that are more than two years older than their specified custom timestamp; another downgrades all objects in Coldline with a custom timestamp older than May 27, 2019 to Archive.

The ability to use age or custom dates with Cloud Storage object lifecycle management is now generally available. To get started or for more information, visit the Cloud Storage Lifecycle Documentation page or navigate to the Google Cloud Console.

Related article: Put your archive data on ice with new storage offering. The new storage class called Archive, our coldest Cloud Storage offering yet, is now available for data backup and storage.
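For illustration, a lifecycle configuration combining the two new conditions described above might look like this. The field names follow the Cloud Storage lifecycle JSON format, but the ages, dates, and storage classes are only examples:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"daysSinceNoncurrentTime": 30}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
        "condition": {
          "customTimeBefore": "2019-05-27",
          "matchesStorageClass": ["COLDLINE"]
        }
      }
    ]
  }
}
```

A configuration like this could be applied with `gsutil lifecycle set lifecycle.json gs://your-bucket` (the bucket name is a placeholder).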
Source: Google Cloud Platform

Cloud Acceleration Program: More reasons for SAP customers to migrate to Google Cloud

The arrival of COVID-19 caused massive disruption for companies around the globe and made digital transformation a more urgent priority. That’s why it’s so important that enterprises running their businesses on SAP have the agility, uptime and advanced analytics that Google Cloud can offer. But given the drain on financial and human resources that the pandemic has caused, many organizations are worried about the risks of migrating to the cloud and have considered hitting the brakes on their cloud migrations, just when they should be pressing the gas pedal. Last year we launched the Cloud Acceleration Program (CAP), which has significantly helped SAP customers speed their transitions to the cloud. This first-of-its-kind program empowers customers with solutions from both Google Cloud and our partners to simplify their cloud migrations. Google Cloud is also providing CAP participants with upfront financial incentives to defray infrastructure costs for SAP cloud migrations and help customers ensure that duplicate costs are not incurred during migration. Here’s what customers are saying about the program:“We had plans to migrate to the cloud, but COVID brought into sharp focus the need to accelerate our SAP migration to the cloud. With help from Google Cloud and their Cloud Acceleration Program, we were able to get the skills and the funding to accelerate this effort dramatically. With our new strategic relationship with Google Cloud, we feel significantly better positioned for the future to take advantage of the elastic, scalable computing capabilities and vast amounts of innovation that [are] constantly being developed.”  —Maneesh Gidwani, CIO of FIFCO, a global food, beverage, retail and hospitality organization“The ability to leverage Managecore through the Cloud Acceleration Program dramatically reduced the risk and costs of our SAP migration to Google Cloud. 
The CAP program enabled Pegasystems to offset the upfront migration expense and significantly expedite our go-live process. With the help of Managecore we were able to focus on running our ERP business operations in the Cloud, rather than the technical elements of the project.”  —David Vidoni, Vice President of IT, Pegasystems, a CRM and BPM software company

Cloud Acceleration Program Partners step up for SAP customers

Google Cloud’s strong ecosystem of partners is stepping up to the plate more than ever to help customers de-risk their SAP migrations to the cloud. By completing their migrations faster and with minimal cost, customers are now shifting their conversations from concerns about infrastructure and deployment to higher-value topics such as optimizing costs and driving business value with analytics and machine learning tools.

“As one of the early partner participants in the Cloud Acceleration Program, we have been able to apply these significant resources to help multiple Enterprise customers in their SAP cloud migration engagements. CAP allows HCL to efficiently get tools and resources to the customer to ease their migration risk concerns & costs. Our customers are now engaging to drive strategic conversations on how they can leverage the SAP platform on the Google Cloud to drive new insights, improve business KPIs and create new business models with capabilities such as Google Cloud analytics and machine learning tools.” —Sanjay Singh, Senior VP & Global Head, HCL Google Ecosystem Unit

”At NIMBL, we’ve seen both a great deal of interest in Google Cloud by our SAP customers as well as the significant results being realized by those who have deployed. A common concern for many other customers still on this journey, however, continues to be the overall disruption that a cloud migration may cause.
Our migration expertise, combined with the industry-best tools and resources that the Cloud Acceleration Program (CAP) offers, helps provide customers with a clear and confident path to the cloud. As a CAP partner, Google Cloud continues to set us up for success with the resources and support we need to deliver these critical customer deployments.” —Sergio Cipolla, Managing Partner, NIMBL Techedge Group

Google Cloud is a great place to run SAP

As pressures to transform increase for SAP enterprises, customers are looking to modernize on a smarter cloud. Google Cloud continues to be a great place to run SAP. In these unprecedented times, our Cloud Acceleration Program gets customers one step closer by reducing the complexities of migration and of technical and financial risk management. Contact us to learn more about Google Cloud for SAP.
Source: Google Cloud Platform

Comparing containerization methods: Buildpacks, Jib, and Dockerfile

As developers we work on source code, but production systems don’t run source; they need a runnable thing. Starting many years ago, most enterprises were using Java EE (aka J2EE), and the runnable “thing” we would deploy to production was a “.jar”, “.war”, or “.ear” file. Those files consisted of the compiled Java classes and would run inside of a “container” running on the JVM. As long as your class files were compatible with the JVM and container, the app would just work.

That all worked great until people started building non-JVM stuff: Ruby, Python, NodeJS, Go, etc. Now we needed another way to package up apps so they could be run on production systems. To do this we needed some kind of virtualization layer that would allow anything to be run. Heroku was one of the first to tackle this, using a Linux virtualization system called “lxc” – short for Linux Containers. Running a “container” on lxc was half of the puzzle, because a “container” still needed to be created from source code, so Heroku invented what they called “Buildpacks” to create a standard way to convert source into a container.

A bit later a Heroku competitor named dotCloud was trying to tackle similar problems and went a different route, which ultimately led to Docker, a standard way to create and run containers across platforms including Windows, Mac, Linux, Kubernetes, and Google Cloud Run. Ultimately the container specification behind Docker became a standard under the Open Container Initiative (OCI), and the virtualization layer switched from lxc to runc (also an OCI project).

The traditional way to build a Docker container is built into the docker tool and uses a sequence of special instructions, usually in a file named Dockerfile, to compile the source code and assemble the “layers” of a container image.

Yeah, this is confusing, because we have all sorts of different “containers” and ways to run stuff in those containers. And there are also many ways to create the things that run in containers.
The bit of history is important because it helps us categorize all of this into three parts:

Container Builders – turn source code into a Container Image
Container Images – archive files containing a “runnable” application
Containers – run Container Images

With Java EE those three categories map to technologies like:

Container Builders == Ant or Maven
Container Images == .jar, .war, or .ear
Containers == JBoss, WebSphere, WebLogic

With Docker / OCI those three categories map to technologies like:

Container Builders == Dockerfile, Buildpacks, or Jib
Container Images == .tar files, usually not dealt with directly but through a “container registry”
Containers == Docker, Kubernetes, Cloud Run

Java Sample Application

Let’s explore the Container Builder options further on a little Java server application. If you want to follow along, clone my comparing-docker-methods project:

git clone comparing-docker-methods

In that project you’ll see a basic Java web server in src/main/java/com/google/ that just responds with “hello, world” on a GET request to /. Here is the source:

This project uses Maven with a minimal pom.xml build config file for compiling and running the Java server. If you want to run this locally, make sure you have Java 8 installed and, from the project root directory, run:

./mvnw compile exec:java

You can test the server by visiting: http://localhost:8080

Container Builder: Buildpacks

We have an application that we can run locally, so let’s get back to those Container Builders. Earlier you learned that Heroku invented Buildpacks to create a standard, polyglot way to go from source to a Container Image. When Docker / OCI Containers started gaining popularity, Heroku and Pivotal worked together to make their Buildpacks work with Docker / OCI Containers. That work is now a sandbox Cloud Native Computing Foundation project. To use Buildpacks you will need to install Docker and the pack tool.
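The server source is elided in this copy of the post, but a “hello, world” server of the kind described above could be as simple as the following JDK-only sketch (the class name and wiring are my own, not necessarily the project’s actual code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class WebApp {
    // The response body returned for GET /
    static String body() {
        return "hello, world";
    }

    public static void main(String[] args) throws IOException {
        // Listen on port 8080 and answer every request with the body above.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] bytes = body().getBytes();
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}
```
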
Now from the command line, tell Buildpacks to take your source and turn it into a Container Image:

pack build – comparing-docker-methods:buildpacks

Magic! You didn’t have to do anything, and the Buildpacks knew how to turn that Java application into a Container Image. It even works on Go, NodeJS, Python, and .Net apps out-of-the-box. So what just happened? Buildpacks inspect your source and try to identify it as something they know how to build. In the case of our sample application, it noticed the pom.xml file and decided it knows how to build Maven-based applications. The –builder flag told it where to get the Buildpacks from; in this case, the Container Image coordinates of Google Cloud’s Buildpacks. Alternatively you could use the Heroku or Paketo Buildpacks. The parameter comparing-docker-methods:buildpacks is the Container Image coordinates for where to store the output. In this case it stores on the local docker daemon. You can now run that Container Image locally with docker:

docker run -it -ePORT=8080 -p8080:8080 comparing-docker-methods:buildpacks

Of course you can also run that Container Image anywhere that runs Docker / OCI Containers, like Kubernetes and Cloud Run.

Buildpacks are nice because in many cases they just work, and you don’t have to do anything special to turn your source into something runnable. But the resulting Container Images created from Buildpacks can be a bit bulky. Let’s use a tool called dive to examine what is in the created container image:

dive comparing-docker-methods:buildpacks

Here you can see the Container Image has 11 layers and a total image size of 319MB. With dive you can explore each layer and see what was changed. In this Container Image the first 6 layers are the base operating system, layer 7 is the JVM, and layer 8 is our compiled application. Layering enables great caching, so if only layer 8 changes, then layers 1 through 7 do not need to be re-downloaded.
One downside of Buildpacks is that (at least for now) all of the dependencies and compiled application code are stored in a single layer. It would be better to have separate layers for the dependencies and the compiled application. To recap, Buildpacks are the easy option that “just works” right out-of-the-box, but the Container Images are a bit large and not optimally layered.

Container Builder: Jib

The open source Jib project is a Java library for creating Container Images, with Maven and Gradle plugins. To use it on a Maven project (like the one from above), just add a build plugin to the pom.xml file. Now a Container Image can be created and stored in the local docker daemon by running:

./mvnw compile jib:dockerBuild -Dimage=comparing-docker-methods:jib

Using dive we will see that the Container Image for this application is now only 127MB, thanks to slimmer operating system and JVM layers. Also, on a Spring Boot application we can see how Jib layers the dependencies, resources, and compiled application for better caching. In this example the 18MB layer contains the runtime dependencies and the final layer contains the compiled application. Unlike with Buildpacks, the original source code is not included in the Container Image. Jib also has a great feature where you can use it without docker being installed, as long as you store the Container Image on an external Container Registry (like DockerHub or the Google Cloud Container Registry). Jib is a great option with Maven and Gradle builds for Container Images that use the JVM.

Container Builder: Dockerfile

The traditional way to create Container Images is built into the docker tool and uses a sequence of instructions defined in a file usually named Dockerfile. In the Dockerfile for the sample Java application, the first four instructions start with the AdoptOpenJDK 8 Container Image and build the source to a Jar file.
The final Container Image is created from the AdoptOpenJDK 8 JRE Container Image and includes the created Jar file. You can run docker to create the Container Image using the Dockerfile instructions:

docker build -t comparing-docker-methods:dockerfile .

Using dive we can see a pretty slim Container Image at 209MB. With a Dockerfile we have full control over the layering and base images. For example, we could use the Distroless Java base image to trim down the Container Image even further. This method of creating Container Images provides a lot of flexibility, but we do have to write and maintain the instructions.

With this flexibility we can do some cool stuff. For example, we can use GraalVM to create a “native image” of our application. This is an ahead-of-time compiled binary which can reduce startup time, reduce memory usage, and alleviate the need for a JVM in the Container Image. We can go even further and create a statically linked native image which includes everything needed to run, so that even an operating system is not needed in the Container Image. In the Dockerfile that does this, you will see there is a bit of setup needed to support static native images. After that setup, the Jar is compiled like before with Maven, and then the native-image tool creates the binary from the Jar. The FROM scratch instruction means the final container image starts out empty, and the statically linked binary created by native-image is then copied into that empty container.

Like before, you can use docker to build the Container Image:

docker build -t comparing-docker-methods:graalvm .

Using dive we can see the final Container Image is only 11MB! And it starts up super fast because we don’t need the JVM, OS, etc. Of course GraalVM is not always a great option, as there are some challenges like dealing with reflection and debugging.
You can read more about this in my blog, GraalVM Native Image Tips & Tricks. This example does capture the flexibility of the Dockerfile method and the ability to do anything you need. It is a great escape hatch when you need one.

Which Method Should You Choose?

The easiest, polyglot method: Buildpacks
Great layering for JVM apps: Jib
The escape hatch for when those methods don’t fit: Dockerfile

Check out my comparing-docker-methods project to explore these methods as well as the mentioned Spring Boot + Jib example.

Related article: Announcing Google Cloud buildpacks—container images made easy. Google Cloud buildpacks make it much easier and faster to build applications on top of containers.
Source: Google Cloud Platform