Hot Off the Press: New WordPress.com Themes for August 2023

The WordPress.com team is always working on new design ideas to bring your website to life. Check out the latest themes in our library, featuring beautiful new options with bloggers, diarists, and creators in mind.

All WordPress.com Themes

Tenaz

Tenaz (“tenacious” in Spanish) is a classic magazine theme with a rich, dense homepage perfect for professional bloggers or small media networks. Customizable elements are somewhat limited for this theme, but the black and white color palette is versatile and timeless.

Click here to view a demo of this theme.

Tomoni

Takashi, the designer for this theme, was inspired by a visit to a design museum in the Roppongi district of Tokyo. Among the beautiful designs displayed there, the wall labels, written in both Japanese and English, also caught his attention. Takashi then started thinking about other examples of when dual language displays are needed, from road signs to posters in large immigrant communities. This theme, tomoni (“together” in Japanese), came to life from that idea.

Tomoni uses Noto Sans and Noto Sans Japanese, which renders both languages in a visually harmonious way. There are eight color variations to suit a wide array of aesthetic tastes.

Click here to view a demo of this theme.

Entry

Entry is a uniquely styled block theme designed specifically for journaling. It features a blocky grid layout for posts, with every element contained in squared or rectangular shapes. This is a design that makes a statement.

Click here to view a demo of this theme.

Trellick

Trellick is a minimalist, raw blog theme that strips away the polished veneer of the modern “sameness” of web design — this theme shows the untamed essence of the digital realm. Embracing the architectural concept of Brutalism, Trellick showcases a bold, unapologetic aesthetic.

This design features three columns. The left (header) and the right (sidebar) columns are sticky, and only the main content in the middle column scrolls. The small, square-shaped featured image contributes a distinctive look. Trellick is available in four different color schemes.

Click here to view a demo of this theme.

Covr

Covr is a beautifully crafted theme that boasts a clean and modern design, specifically created to showcase images in an immersive and captivating way. Its full-width home template elegantly displays your portfolio, photography, or personal blog posts, providing an enjoyable browsing experience for your audience. Whether you’re a photographer, artist, or blogger, Covr is the perfect choice to present your work and make an impact.

Click here to view a demo of this theme.

To install any of the above themes, click the name of the theme you like, which brings you right to the installation page. Then click the “Activate this design” button. You can also click “Open live demo,” which brings up a clickable, scrollable version of the theme for you to preview.

Premium themes are available to use at no extra charge for customers on the Premium plan or above. Partner themes are third-party products that can be purchased for $79/year each.

You can explore all of our themes by navigating to the “Themes” page, which is found under “Appearance” in the left-side menu of your WordPress.com dashboard. Or you can click below:

All WordPress.com Themes

Source: RedHat Stack

Docker Desktop 4.22: Resource Saver, Compose ‘include’, and Enhanced RBAC Functionality

Docker Desktop 4.22 is now available and includes a new Resource Saver feature that massively reduces idle memory and CPU utilization to ensure efficient use of your machine’s resources. The new Docker Compose include element makes it easier to modularize complex applications by splitting Compose projects into self-contained sub-Compose files. Role-based access control (RBAC) has also been enhanced with an Editor role that allows admins to delegate repository management tasks.

Resource Saver 

In 4.22 we have added a new Resource Saver feature for Mac and Windows (WSL has CPU optimizations only at this stage). When Resource Saver detects that Docker Desktop has been idle, with no active containers, for 30 seconds, it automatically and massively reduces Docker Desktop’s memory and CPU footprint, freeing up resources on your machine for other tasks. When a container needs resources, they’re quickly allocated on demand.

To see this feature in action, start Docker Desktop and leave it idle for 30 seconds with no containers running. A leaf will appear over the whale icon in your Docker Desktop menu and the sidebar of the Docker Desktop dashboard, indicating that Resource Saver mode is activated.

Figure 1: The Docker Desktop menu and the macOS menu bar show Resource Saver mode running.

An earlier Docker Desktop release introduced the CPU optimizations behind Resource Saver, which, at the time of writing, are already saving up to a staggering 38,500 CPU hours every single day across all Docker Desktop users.

Split complex Compose projects into multiple subprojects with ‘include’

If you’re working with complex applications, use the new include section in your Compose file to split your project into manageable subprojects. Compared to merging files with CLI flags or using extends to share common attributes of a single service from another file, include loads external Compose files as self-contained building blocks, making it easier to collaborate on services across teams and share common dependency configurations within your organization.

For more on how you can try out this feature, read “Improve Docker Compose Modularity with `include`” or refer to the documentation.

Figure 2: A compose.yaml file that is using the new ‘include’ section to define subprojects.
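As a rough sketch of the pattern (file and service names here are illustrative, not taken from the figure), a top-level compose.yaml using include might look like this:

```yaml
# compose.yaml — top-level project
include:
  - infra/compose.database.yaml    # self-contained subproject: database stack
  - team-b/compose.backend.yaml    # subproject maintained by another team

services:
  frontend:
    image: example/frontend:latest
    depends_on:
      - backend   # service defined in team-b/compose.backend.yaml
```

Each included file is a complete Compose project of its own, so teams can develop and test their subprojects independently while the top-level file composes them together.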

Editor role available for organizations

With the addition of the Editor role, admins can provision users to manage repositories without full administrator privileges. Users assigned to the Editor role can:

Create public and private repositories

Pull, push, view, edit, and delete a repository

Update repository description

Assign team permissions to repos

Update scanning settings

Delete tags

Add webhooks

Change repo visibility settings

For further details on roles and permissions, refer to the documentation. 

Organization owners can assign the Editor role to a member of their organization in either Docker Hub or Docker Admin.

Figure 3: The Editor role functionality in Docker Hub.

Conclusion

Upgrade now to explore what’s new in the 4.22 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap and let us know what else you’d like to see in upcoming releases.

Learn more

Read Improve Docker Compose Modularity with include.

Get the latest release of Docker Desktop.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Sentiment Analysis and Insights on Cryptocurrencies Using Docker and Containerized AI/ML Models

Learning about and sharing ways in which those in the Docker community leverage Docker to drive innovation is always exciting. We are learning about many interesting AI/ML solutions that use Docker to accelerate development and simplify deployment. 

In this blog post, Saul Martin shares how Prometeo.ai leverages Docker to deploy and manage the machine learning models behind its cryptocurrency sentiment analysis for traders, allowing its developers to focus on innovation rather than infrastructure and deployment management.

The digital assets market, which is famously volatile and swift, necessitates tools that can keep up with its speed and provide real-time insights. At the forefront of these tools is Prometeo.ai, which has harnessed the power of Docker to build a sophisticated, high-frequency sentiment analysis platform. This tool sifts through the torrent of emotions that drive the cryptocurrency market, providing real-time sentiments of the top 100 assets, which is a valuable resource for hedge funds and financial institutions.

By leveraging Docker’s containerization capabilities, Prometeo.ai deploys and manages complex machine learning models with ease, making it an example of modern, robust, scalable architecture.

In this blog post, we will delve into how Prometeo.ai is utilizing Docker for its sentiment analysis tool, highlighting the critical aspects of its data collection, machine learning model implementations, storage, and deployment processes. This exploration will give you a clear understanding of how Docker can transform machine learning application deployment, presenting a case study in the form of Prometeo.ai.

Data collection and processing: High-frequency sentiment analysis with Docker

Prometeo.ai’s comprehensive sentiment analysis capability hinges on an agile, near real-time data collection and processing infrastructure. This framework captures, enriches, and publishes an extensive range of sentiment data from diverse platforms:

Stream/data access: Platform-specific data pipelines, each running in its own siloed Docker container, harvest cryptocurrency-related discussions in real time.

Tokenization and sentiment analysis: The harvested data undergoes tokenization, transforming each content piece into a format suitable for analysis. An internal Sentiment Analysis API further enriches this tokenized data, inferring sentiment attributes from the raw information.

Publishing: Enriched sentiment data is published within one minute of collection, facilitating near real-time insights for users. During periods of content unavailability from a data source, the system generates and publishes an empty dictionary.

All these operations transpire within Docker containers, guaranteeing the necessary scalability, isolation, and resource efficiency to manage high-frequency data operations.
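The collect, tokenize, enrich, and publish steps above can be sketched in miniature. This is a hypothetical stand-in, not Prometeo.ai’s code: the keyword-counting sentiment scorer is a toy substitute for their internal Sentiment Analysis API, and all names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class EnrichedPost:
    asset: str
    tokens: list
    sentiment: str


def tokenize(text: str) -> list:
    # Real pipelines use a model-specific tokenizer; a whitespace split
    # is a stand-in here.
    return text.lower().split()


def score_sentiment(tokens: list) -> str:
    # Toy stand-in for the internal Sentiment Analysis API.
    bullish = {"moon", "pump", "bullish"}
    bearish = {"dump", "crash", "bearish"}
    score = sum(t in bullish for t in tokens) - sum(t in bearish for t in tokens)
    return "bullish" if score > 0 else "bearish" if score < 0 else "neutral"


def process_batch(asset: str, posts: list) -> dict:
    # When a source yields no content, publish an empty dictionary,
    # mirroring the behavior described above.
    if not posts:
        return {}
    items = []
    for text in posts:
        tokens = tokenize(text)
        items.append(EnrichedPost(asset, tokens, score_sentiment(tokens)).__dict__)
    return {"asset": asset, "items": items}
```

In production each stage would run in its own container, but the data flow between stages follows this shape.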

For efficient data storage, Prometeo.ai relies on:

NoSQL database: DynamoDB is used for storing minute-level sentiment aggregations. The primary key is defined such that it allows quick access to data based on time-range queries. These aggregations are critical for providing real-time insights to users and for hourly and daily aggregation calculations.

Object storage: For model retraining and data backup purposes, the raw data, including raw content, is exported in batches and stored in Amazon S3 buckets. This robust storage mechanism ensures data durability and aids in maintaining data integrity.

Relational database: Metadata related to different assets, including links, tickers, IDs, descriptions, and others, are hosted in PostgreSQL. This provides a relational structure for asset metadata and promotes accessible, structured access when required.
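A common way to support the time-range queries mentioned above is a composite primary key, with the asset as the partition key and a minute-truncated timestamp as the sort key. The sketch below illustrates that general pattern; the key names are assumptions, not Prometeo.ai’s actual schema.

```python
from datetime import datetime, timezone


def minute_key(asset: str, ts: datetime) -> dict:
    """Build a composite DynamoDB key: the asset partitions the data,
    and a minute-truncated ISO timestamp sorts it for range queries."""
    minute = ts.replace(second=0, microsecond=0, tzinfo=timezone.utc)
    return {"pk": f"ASSET#{asset}", "sk": minute.strftime("%Y-%m-%dT%H:%M")}


# A Query with KeyConditionExpression "pk = :a AND sk BETWEEN :start AND :end"
# then retrieves every minute-level aggregation for an asset in a time window.
```

Because the sort key is lexicographically ordered, BETWEEN conditions on it map directly to time ranges without scanning the table.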

NLP models

Prometeo.ai makes use of two Bidirectional Encoder Representations from Transformers (BERT) models, both of which operate within a Docker environment for natural language processing (NLP). The following models run multi-label classification pipelines that have been fine-tuned on an in-house dataset of 50k manually labeled tweets.

proSENT model: This model specializes in identifying 28 unique emotional sentiments. It owes its comprehensive language understanding to training on a corpus of more than 1.5 million unique cryptocurrency-related social media posts.

proCRYPT model: This model is fine-tuned for crypto sentiment analysis, classifying sentiments as bullish, bearish, or neutral. The deployment architecture encapsulates both these models within a Docker container alongside a FastAPI server. This internal API acts as the conduit for running inferences.

To ensure a seamless and efficient build process, Hugging Face’s model hub is used to store the models. The models and their binary files are retrieved directly from Hugging Face during the Docker container’s build phase, so large model binaries never need to be checked into source control or copied into the build context. This streamlines the build process and contributes to the overall operational efficiency of the application.
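A Dockerfile following this approach might look roughly like the sketch below. The model IDs, file names, and server setup are illustrative assumptions; the post does not show Prometeo.ai’s actual Dockerfile.

```dockerfile
FROM python:3.10-slim

RUN pip install --no-cache-dir transformers torch fastapi uvicorn

# Fetch model weights from the Hugging Face hub at build time, so they are
# cached in an image layer instead of being stored in the source repository.
# 'org/proCRYPT' is a placeholder model ID, not a real hub entry.
RUN python -c "from transformers import AutoModelForSequenceClassification, AutoTokenizer; \
    AutoModelForSequenceClassification.from_pretrained('org/proCRYPT'); \
    AutoTokenizer.from_pretrained('org/proCRYPT')"

COPY server.py /app/server.py
WORKDIR /app
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]
```

Because the download happens in its own RUN instruction, rebuilding after a code change reuses the cached model layer rather than re-downloading the weights.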

Deployment

Prometeo.ai’s deployment pipeline is composed of GitHub Actions, AWS CodeDeploy, and accelerated computing instances. This pipeline forms a coherent system for efficiently handling application updates and resource allocation:

GitHub Actions: The pipeline begins with GitHub Actions workflows that automatically trigger a fresh deployment whenever changes are pushed to the production branch. This ensures the application always runs the most recent, vetted code.

AWS CodeDeploy: The next phase involves AWS CodeDeploy, which is triggered once GitHub Actions has built the Docker image and pushed it to the Elastic Container Registry (ECR). CodeDeploy then automatically deploys the updated image to the GPU-optimized instances. This orchestration ensures smooth rollouts and establishes a reliable rollback plan if necessary.

Accelerated computing: Prometeo leverages NVIDIA Tesla GPUs for the computational prowess needed for executing complex BERT models. These GPU-optimized instances are tailored for NVIDIA-CUDA Docker image compatibility, thereby facilitating GPU acceleration, which significantly expedites the processing and analysis stages.

Below is a snippet demonstrating the configuration to exploit the GPU capabilities of the instances:

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          capabilities: [gpu]

Please note that the base image’s CUDA version must match the CUDA version reported on the host by nvidia-smi:

FROM nvidia/cuda:12.1.0-base-ubuntu20.04

To maintain optimal performance under fluctuating load conditions, an autoscaling mechanism is incorporated. This solution perpetually monitors CPU utilization, dynamically scaling the number of instances up or down as dictated by the load. This ensures that the application always has access to the appropriate resources for efficient execution.

Conclusion

By harnessing Docker’s containerization capabilities and compatibility with NVIDIA-CUDA images, Prometeo.ai successfully manages intensive, real-time emotion analysis in the digital assets domain. Docker’s role in this strategy is pivotal, providing an environment that enables resource optimization and seamless integration with other services.

Prometeo.ai’s implementation demonstrates Docker’s potential to handle sophisticated computational tasks. The orchestration of Docker with GPU-optimized instances and cloud-based services exhibits a scalable and efficient infrastructure for high-frequency, near-real-time data analysis.

Do you have an interesting use case or story about Docker in your AI/ML workflow? We would love to hear from you and maybe even share your story.

Learn more

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Container Security and Why It Matters

Are you thinking about container security? Maybe you are on a security team trying to manage rogue cloud resources. Perhaps you work in the DevOps space and know what container security is, but you want to make security triage of your containers less painful for everyone involved.

In this post, we’ll look at security for containers in a scalable environment, how deployment to that environment can affect your rollout of container security, and how Docker can help.

What is container security?

Container security is knowing that a container image you run in your environment includes only the libraries, base image, and any custom bits you declare in your Dockerfile, and not malware or known vulnerabilities. (We’d also love to say no zero days, but such is the nature of the beast.)

You want to know that those libraries used to build your image and any base image behind it come from sources you expect — open source or otherwise — and are free from critical vulnerabilities, malware, and other surprises. 

The base image is usually a common image (for example, Alpine Linux, Ubuntu, or BusyBox) that serves as a building block upon which other companies add their own image layers. Think of an image layer as a step in the install process. Whenever you take a base image and add new libraries or steps to it, you are essentially creating a new image.
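As a minimal illustration of that layering (the base image and file names here are arbitrary examples):

```dockerfile
# Start from a common, minimal base image.
FROM alpine:3.18

# Each instruction below adds a new layer on top of the base.
RUN apk add --no-cache python3    # layer: install a runtime
COPY app.py /srv/app.py           # layer: add your custom bits
CMD ["python3", "/srv/app.py"]
```

The resulting image is the base image plus your layers, which is why both the base and every addition matter for security.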

We’ve talked about the most immediate piece of container security, the image layers, but how is the image built and what is the source of those image layers?

Container image provenance

Here’s where container security gets tricky: the image build and source tracking process. You want assurances that your images, libraries, and any base images you depend on contain what you expect them to and not anything nefarious. So you should care about image provenance: where an image gets built, who builds it, and where it gets stored. 

You should pay attention to any infrastructure or automation used to build your images, which typically means continuous integration (CI) tooling such as GitHub Actions, AWS CodeBuild, or CircleCI. You need to ensure that any workloads running your image builds are on build environments with minimal access and a minimal attack surface. You need to consider who has access to your GitHub Actions runners, for example. Do you need to create a VPN connection from your runner to your cloud account? If so, what are the security protections on that VPN connection? Consider the confidentiality and integrity of your image pipeline carefully. 

To put it more directly: Managing container provenance in cloud workloads can make deployments easier, but it can also make it easier to deploy malware at scale if you aren’t careful. The nature of the cloud is that it adds complexity, not necessarily security.

Software Bill of Materials (SBOM) attestations can also help ensure that only what you want is inside your images. With an SBOM, you can review a list of all the libraries and dependencies used to build your image and ensure the versioning and content matches what you expect by viewing an SBOM attestation. Docker Engine provides this with docker sbom and Docker BuildKit provides it in versions newer than 0.11. 
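For illustration, the commands below show how these SBOM features are typically invoked. Treat this as a sketch (the image and tag names are placeholders) and check the current Docker documentation for the exact flags in your version:

```shell
# Generate an SBOM for an image with the docker sbom plugin
# (bundled with recent Docker Desktop releases):
docker sbom alpine:3.18

# With BuildKit 0.11 or newer via buildx, attach an SBOM
# attestation to the image at build time:
docker buildx build --sbom=true -t myorg/myapp:latest .
```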

Other considerations with SBOM attestations include attestation provider trust and protection from man-in-the-middle attacks, such as replacing libraries in the image. Docker is working to create signed SBOM attestations for images to create strong assurances around SBOM to help strengthen this part of image security.

You also want to consider software composition analysis (SCA) against your images to ensure open source tooling and licenses are as expected. Docker Official Images, for example, have a certified seal of provenance behind them for your base image, which provides assurance around a base image you might be using.

Vulnerability and malware scanning

And what about potential CVEs and malware? How do you scan your images at scale for those issues? 

A number of static scanning tools are available for CVE scanning, and some provide dynamic malware scanning. When researching tools in this space, consider what you use for your image repository, such as Docker Hub, Amazon Elastic Container Registry (ECR), Artifact Registry, or an on-premises/in-colocation option like Nexus. Depending on the dynamics and security controls you have in place on your registry, one tooling option might make more sense than another. For example, AWS ECR offers some static vulnerability scanning out of the box. Some other options bundle software composition analysis (SCA) scanning of images as well. 

The trick is to find a tool with the right signal-to-noise mix for your team. For example, you might want static scanning but minimal false positives and the ability to create exclusions. 

As with any static vulnerability scanning tool, the Common Vulnerability Scoring System (CVSS) score of a vulnerability is just a starting point. Only you and your team can determine the exploitability, possible risks, and attack surface of a particular vulnerability and whether those factors outweigh the potential effects of upgrading or changing an image deployed at scale in your environment.

In other words, a scanning tool might find some high or critical (per CVSS scoring) vulnerabilities in some of your images. Still, those vulnerabilities might not be exploitable because the affected images are only used internally inside a virtual private cloud (VPC) in your environment with no external access. But you’d want to ensure that the image stays internal and isn’t used for production. So guardrails, monitoring, and gating that keep that image confined to internal workloads are a must. 

Finally, imagine an image that is pervasive and used across all your workloads. The effort to upgrade that image might take several sprint cycles for your engineering teams to safely deploy and require service downtime as you unravel the library dependencies. Regarding vulnerability rating for the two examples — an internal-only image and a pervasive image that is difficult to upgrade — you might want to lower the priority of the vulnerability in the former and slowly track progress toward remediating the latter. 

Docker’s Security Team is intimately familiar with two of the biggest blockers security teams face: time and resources. Your team might not be able to triage and remediate all vulnerabilities across production, development, and staging environments, especially if your team is just starting its journey with container security. So start with what you can and must do something about: production images.

Production vs. non-production

Only container images that have gone through appropriate approval and automation workflows should be deployed in production. Like any mature CI/CD workflow, this means thorough testing in non-production environments, scanning before release to production, and monitoring and guardrails around images that are already live in production with things like cloud resource tagging, version control, and appropriate role-based access control around who can approve an image’s deployment to production. 

At its root, this means that security teams that have not previously waded into the infrastructure or DevOps teams’ ocean of work in your company’s cloud accounts should do so. Just as DevOps culture has caused a shift for developers in handling infrastructure, scaling, and service decisions in the cloud, the same shift is happening in the security community with DevSecOps culture and security engineering. Container security resides in the middle of this intersection.

Not only does your tool choice need to be a best fit for your environment’s landscape; your ability to collaborate with your infrastructure, engineering, and DevOps teams matters even more for this work. To reiterate: to get a good handle on gating production deployments, with solid automation and monitoring tied to those deployments and resources, security teams must familiarize themselves with this space and get comfortable in this intersection. Good tooling can make all the difference in fostering that culture of collaboration, especially for a security team new to this space.

Container security tools: What to look for

Like any well-thought-out tool selection, sometimes what matters most is not the number of bells and whistles a tool offers but the tool’s fit to your organization’s needs and gaps.

Avoid container security tools that promise to be the silver bullet. Instead, think of tools that will help your team conquer small challenges today and work to build on goals for the larger challenges down the road. (Security folks know that any tool on the market promising to be a silver bullet is just selling something and isn’t a reality with the ever-changing threat landscape.)

In short, container security tools should enable your workflow, build trust, and facilitate cross-team collaboration from Engineering to Security to DevOps, rather than becoming a landscape of noise and overwhelming visuals for your engineers. And here’s where Docker Scout can help.

Docker Scout

Docker engineers have been working on a new product to help increase container security: Docker Scout. Scout gives you the list of discovered vulnerabilities in your container images and offers guidance for remediation in an iterative small-improvements style. You can compare your scores from one deployment to the next and show improvement to create a sense of accomplishment for your teams, not an overwhelming bombardment of vulnerabilities and risk that seems insurmountable.

Docker Scout lets you set target goals for your images and markers for iterative improvement. You can define different goals for production images versus development or staging images so that each environment gets the level of security it needs.

Conclusion

As with most security problems, there is no silver bullet with container security. The technical, operational, and organizational moving pieces that go into protecting your company’s container images often reside at the boundaries between teams, functions, and responsibilities. This adds complexity to an already complex problem. Rather than further adding to the burdens created by this complexity, you should look for tools that enable your teams to work together and reach a deeper understanding of where goals, risks, and priorities overlap and coexist.

Even more importantly, look for container security solutions that are clear about what they can offer you and extend help in areas where they do not have offerings. 

Whether you are a security team member new to the ocean of DevOps and container security or have been in these security waters for a while, Docker is here to help support you and get to more stable waters. We are beside you in this ocean and trying to make the space better for ourselves, our customers, and developers who use Docker all over the world.

Learn more

Get the latest release of Docker Desktop.

Try Docker Scout.

Learn about Docker Security.

Generate the SBOM for Docker images.

Learn about SBOM attestations.

Check out Docker Official Images.

Visit Docker Hub.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

Amazon Connect Introduces a Flows UI Toolbar and the Ability to Add Notes

The Amazon Connect flow designer now includes a toolbar with shortcuts to new editing features such as undo (including a history of previous actions) and redo, as well as to existing shortcuts such as copy and paste. You can now also add notes to a flow, so you can, for example, document what the flow does or keep a to-do list of the updates you plan to make. You can attach notes to a specific flow block and search for notes using the toolbar.
Source: aws.amazon.com

AWS Enables Balance Due and Invoice Amount Information in the Billing Console

In the AWS Billing console, you can now view the “Balance due” and the “Invoice amount” for an invoice. The “Balance due” column shows the total outstanding amount for each invoice, while the “Invoice amount” column shows the value of the invoice at the time it was issued to you. The balance due information is updated at regular intervals as payments are settled by AWS.
Source: aws.amazon.com

Amazon Connect Scheduling Adds Support for Agent Time-Off Balances and Group Allowances

Amazon Connect scheduling now offers new time-off balance and group allowance capabilities that help contact center managers and agents manage time off more efficiently. Before this launch, managers had to manually check time-off balances before approving or rejecting requests, and agents had to contact their managers by email or through third-party tools to request or change time off. With this launch, managers can easily bulk-import agents’ time-off balances and group allowances from third-party HR systems (for example, 120 hours of vacation time, 40 hours of sick time) and choose either an automated or a manual approval workflow for their groups. Agents can request time off and receive automatic approvals (or rejections) based on their time-off balances and group allowances, as well as other absence rules.
Source: aws.amazon.com

Amazon Connect Scheduling Now Includes Automated Flexible-Day Scheduling

With Amazon Connect scheduling, contact center managers can now automatically generate schedules for agents with a combination of fixed and flexible working days per week; that is, agents have certain mandatory working days, while other days are scheduled as needed. Before this launch, managers had to manually adjust a portion of agents’ schedules to match their flexible work contracts and regional labor laws. With this launch, the system automatically proposes flexible schedules, so managers can create more optimized, union-compliant staffing plans, freeing up valuable time for more important tasks.
Source: aws.amazon.com