Web service provider: Cloudflare suffered temporary outages

On July 2, 2019, Cloudflare experienced outages that affected services hosted there. Network traffic temporarily dropped by 82 percent. The cause was not an attack, however, but a misconfiguration of defense mechanisms that are actually meant to protect websites. (Cloudflare, enterprise software)
Source: Golem

On our own behalf: IT professionals and the board come together

When the exchange between IT and company leadership fails, everyone loses. Golem.de offers a remedy with a special workshop concept: in theory and practice, IT professionals learn to bring their expertise to the top, while C-levels in turn learn to make the best use of that expertise. (Golem.de, Internet)
Source: Golem

Intro Guide to Dockerfile Best Practices

There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equally. Efficiency is critical, and this blog series will cover five areas for Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.
Important note: the tips below follow the journey of ever-improving Dockerfiles for an example Java project based on Maven. The last Dockerfile is thus the recommended Dockerfile, while all intermediate ones are there only to illustrate specific best practices.
Incremental build time
In a development cycle, when building a Docker image, making code changes, then rebuilding, it is important to leverage caching. Caching helps to avoid running build steps again when they don't need to run.
Tip #1: Order matters for caching

The order of the build steps (Dockerfile instructions) matters: when a step's cache is invalidated by changed files or by modified lines in the Dockerfile, the cache of all subsequent steps is invalidated as well. Order your steps from least to most frequently changing to optimize caching.
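The contrast can be sketched as follows; the base image, package name, and jar path are illustrative assumptions, not prescriptions:

```dockerfile
# Anti-pattern: the frequently changing application files come first,
# so any code change also invalidates the cached package install below.
#   FROM debian:stable
#   COPY target/app.jar /app
#   RUN apt-get update && apt-get -y install openjdk-8-jdk

# Better: rarely changing steps first, frequently changing files last.
FROM debian:stable
RUN apt-get update && apt-get -y install openjdk-8-jdk
COPY target/app.jar /app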
Tip #2: More specific COPY to limit cache busts

Only copy what’s needed. If possible, avoid “COPY .”. When copying files into your image, be very specific about what you want to copy, because any change to the copied files will break the cache. In our example, only the pre-built jar application is needed inside the image, so copy only that; unrelated file changes will then not affect the cache.
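A minimal sketch of the difference, assuming the Maven default artifact path target/app.jar:

```dockerfile
FROM debian:stable
RUN apt-get update && apt-get -y install openjdk-8-jdk
# COPY . /app            # busts the cache on ANY file change in the project
COPY target/app.jar /app # only the artifact the image actually needs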
Tip #3: Identify cacheable units such as apt-get update & install

Each RUN instruction can be seen as a cacheable unit of execution. Too many of them can be unnecessary, while chaining all commands into one RUN instruction can bust the cache easily, hurting the development cycle. When installing packages from package managers, you always want to update the index and install packages in the same RUN: they form together one cacheable unit. Otherwise you risk installing outdated packages.
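For apt, that one cacheable unit looks like this (the package name is an illustrative assumption):

```dockerfile
# Anti-pattern: a stale cached index from an earlier build can be reused,
# installing outdated packages.
#   RUN apt-get update
#   RUN apt-get -y install openjdk-8-jdk

# Index update and install chained into a single cacheable unit:
RUN apt-get update \
    && apt-get -y install openjdk-8-jdk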
Reduce Image size
Image size can be important because smaller images equal faster deployments and a smaller attack surface.
Tip #4: Remove unnecessary dependencies

Remove unnecessary dependencies and do not install debugging tools. If needed, debugging tools can always be installed later. Certain package managers, such as apt, automatically install packages that are recommended by the user-specified package, unnecessarily increasing the footprint. Apt has the --no-install-recommends flag, which ensures that dependencies that are not actually needed are not installed. If they are needed, add them explicitly.
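Applied to the install step above (package name again assumed):

```dockerfile
RUN apt-get update \
    && apt-get -y install --no-install-recommends openjdk-8-jdk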
Tip #5: Remove package manager cache

Package managers maintain their own cache which may end up in the image. One way to deal with it is to remove the cache in the same RUN instruction that installed packages. Removing it in another RUN instruction would not reduce the image size.
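For apt, the cache lives under /var/lib/apt/lists, so the same RUN instruction can remove it; a cleanup in a later RUN would only add a layer without shrinking the earlier one:

```dockerfile
RUN apt-get update \
    && apt-get -y install --no-install-recommends openjdk-8-jdk \
    && rm -rf /var/lib/apt/lists/*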
There are further ways to reduce image size such as multi-stage builds which will be covered at the end of this blog post. The next set of best practices will look at how we can optimize for maintainability, security, and repeatability of the Dockerfile.
Maintainability
Tip #6: Use official images when possible

Official images can save a lot of time spent on maintenance because all the installation steps are done and best practices are applied. If you have multiple projects, they can share those layers because they use exactly the same base image.
Tip #7: Use more specific tags

Do not use the latest tag. It has the convenience of always being available for official images on Docker Hub but there can be breaking changes over time. Depending on how far apart in time you rebuild the Dockerfile without cache, you may have failing builds.
Instead, use more specific tags for your base images. In this case, we’re using openjdk. There are a lot more tags available so check out the Docker Hub documentation for that image which lists all the existing variants.
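A sketch of the difference; the exact tag to pin (here openjdk:8) depends on which variant your project targets:

```dockerfile
# FROM openjdk      # implicitly :latest — may introduce breaking changes
FROM openjdk:8      # pinned major version, reproducible across rebuilds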
Tip #8: Look for minimal flavors

Some of those tags have minimal flavors, which means they are even smaller images. The slim variant is based on a stripped-down Debian, while the alpine variant is based on the even smaller Alpine Linux distribution image. A notable difference is that Debian still uses GNU libc while Alpine uses musl libc, which, although much smaller, may in some cases cause compatibility issues. In the case of openjdk, the jre flavor contains only the Java runtime, not the SDK; this also drastically reduces the image size.
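Combining both ideas for our Java example; the tag and jar path are assumptions for illustration:

```dockerfile
FROM openjdk:8-jre-alpine        # minimal JRE-only flavor on Alpine
COPY target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]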
Reproducibility
So far the Dockerfiles above have assumed that your jar artifact was built on the host. This is not ideal because you lose the benefits of the consistent environment provided by containers. For instance if your Java application depends on specific libraries it may introduce unwelcome inconsistencies depending on which computer the application is built.
Tip #9: Build from source in a consistent environment
The source code is the source of truth from which you want to build a Docker image. The Dockerfile is simply the blueprint.

You should start by identifying all that’s needed to build your application. Our simple Java application requires Maven and the JDK, so let’s base our Dockerfile on a specific minimal official maven image from Docker Hub that includes the JDK. If you needed to install more dependencies, you could do so in a RUN step.
The pom.xml and src folders are copied in, as they are needed for the final RUN step that produces the app.jar application with mvn package. (The -e flag shows errors and -B runs Maven in non-interactive, aka “batch”, mode.)
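Put together, that gives a Dockerfile along these lines; the maven image tag is an assumed example of a specific minimal tag:

```dockerfile
FROM maven:3-jdk-8-alpine   # Maven + JDK in one consistent build environment
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -e -B package       # builds target/app.jar inside the container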
We solved the inconsistent environment problem, but introduced another one: every time the code is changed, all the dependencies described in pom.xml are fetched. Hence the next tip.
Tip #10: Fetch dependencies in a separate step

By again thinking in terms of cacheable units of execution, we can decide that fetching dependencies is a separate cacheable unit that only needs to depend on changes to pom.xml and not the source code. The RUN step between the two COPY steps tells Maven to only fetch the dependencies.
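As a sketch, with dependency resolution split out before the source is copied; the dependency:resolve goal is an assumption here (dependency:go-offline is a common alternative):

```dockerfile
FROM maven:3-jdk-8-alpine
WORKDIR /app
COPY pom.xml .
RUN mvn -e -B dependency:resolve   # cached unless pom.xml changes
COPY src ./src
RUN mvn -e -B package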
There is one more problem that got introduced by building in consistent environments: our image is way bigger than before because it includes all the build-time dependencies that are not needed at runtime.
Tip #11: Use multi-stage builds to remove build dependencies (recommended Dockerfile)

Multi-stage builds are recognizable by the multiple FROM statements. Each FROM starts a new stage. They can be named with the AS keyword which we use to name our first stage “builder” to be referenced later. It will include all our build dependencies in a consistent environment.
The second stage is our final stage, which produces the final image. It includes only what is strictly necessary at runtime, in this case a minimal JRE (Java Runtime) based on Alpine. The intermediate builder stage will be cached but not present in the final image. To get build artifacts into the final image, use COPY --from=STAGE_NAME; in this case, STAGE_NAME is builder.
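Combining everything so far into a multi-stage sketch; the image tags and jar path are illustrative assumptions:

```dockerfile
# Stage 1: build with all build-time dependencies in a consistent environment
FROM maven:3-jdk-8-alpine AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn -e -B dependency:resolve
COPY src ./src
RUN mvn -e -B package

# Stage 2: minimal runtime image; the builder stage is cached but not shipped
FROM openjdk:8-jre-alpine
COPY --from=builder /app/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]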

Multi-stage builds are the go-to solution for removing build-time dependencies.
We went from building bloated images inconsistently to building minimal images in a consistent environment while being cache-friendly. In the next blog post, we will dive more into other uses of multi-stage builds.


Additional Resources:  

DockerCon Talk Recording: Dockerfile Best Practices
DockerCon Talk Slides: Dockerfile Best Practices
Docker Docs: Dockerfile References

The post Intro Guide to Dockerfile Best Practices appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Solving the productivity crisis: Digital process automation for deep deployments

The way we do work today isn’t working for many employees and employers. Employers have fewer qualified people to perform complex tasks, while employees get bogged down with low-value tasks. What slowly emerges is a productivity crisis. How can companies solve it?
Automation is one of the go-to solutions, but automation fixes are still segmented, focusing on improving isolated parts of a process versus the whole process as part of an outcome-focused automation strategy. This is where deep deployment of automation technologies comes in.
What is digital process automation for deep deployments?
Deep deployments target the seamless orchestration of entire processes, traditionally the domain of business process management (BPM), at a cross-enterprise level, while solving long-tail challenges with simple business applications in order to improve productivity and end-to-end customer and employee experiences.
To see how deep deployments of automation could impact productivity, take a typical loan officer. Currently, she spends most of her time on low-value tasks like inputting data from documents and generating reports. If her workload is augmented by automation, she can focus on higher-value work like building relationships with clients or finding new business opportunities.

To achieve the type of business outcome that helps our loan officer, deep deployments require a broader set of automation technologies beyond traditional workflow or BPM. To go deeper, it’s helpful to understand such concepts as decision automation, content management, intelligent capture, robotic process automation (RPA) and the effects of AI and machine learning. For this reason, pragmatic enterprises often seek out automation platforms that provide a full spectrum of capabilities in order to address the many opportunities and challenges driving productivity, now and in the future.
What’s new in deep deployments?

AI and machine learning. These technologies are emerging as a natural next step for improving productivity by optimizing workstream automation. As operational data becomes more accessible to organizations, AI and machine learning algorithms can find automation patterns for offloading low-value tasks to technology, freeing employees to focus on high-value work, and patterns for assisting them in higher-value, expert work. For example, AI-enabled digital agents can extract and classify unstructured data in documents and make certain insights or recommendations available to employees.
Low code builder tools. For automation to be successful at scale, users should also be able to solve productivity problems with minimal IT involvement. Low code builder tools put the ability to create automated business applications in the hands of the business users, allowing IT to focus on creating reusable automation patterns that support it. As an example, the loan officer from the above example may have a business app that is created to automatically approve loans below certain criteria. If there is a sudden change in risk appetite, she can quickly and easily adjust criteria without the need for IT involvement.

How are deep deployment solutions evaluated?
Forrester Research recently evaluated 10 vendors on their digital process automation for deep deployments (DPA-deep) options to help application development and delivery professionals select the right provider for their needs. In its report, The Forrester Wave: Software for Digital Process Automation for Deep Deployments, Q2 2019, IBM was named a leader.
According to the Forrester report, “IBM has consolidated its content management, decision management, and process automation offerings under a single executive and engineering team with a unified go-to-market execution. At the same time, it has done some of the most pragmatic integration of IBM’s Watson AI capabilities to drive very process-specific business value. The result is a highly integrated solution well-tuned for handling deep processes.” Additionally, “IBM has a strategy to extend its process and case management platform to enable more of the generalized low-code development that more extensive DPA often requires.”
IBM Business Automation Workflow, which was evaluated by Forrester in the report, is part of the IBM Automation Platform for Digital Business. Our platform enables clients to automate workflows and decisions while deriving insight from the content within those business processes with speed and at scale. IBM clients have created and are running more than 50,000 applications on this platform as they seek to improve productivity and customer experiences.
Register to get the full Forrester report comparing software for digital process automation for deep deployments. 
The post Solving the productivity crisis: Digital process automation for deep deployments appeared first on Cloud computing news.
Source: Thoughts on Cloud