Meet Studio: Your New Favorite Way to Develop WordPress Locally

Say goodbye to manual tool configuration, slow site setup, and clunky local development workflows, and say hello to Studio by WordPress.com, our new, free, open source local WordPress development environment.

We’ve built Studio to be the fastest and simplest way to build WordPress sites locally.

Designed to empower developers, designers, and site builders, Studio offers a seamless solution for creating and running WordPress sites directly on your local machine, as well as sharing work-in-progress sites with your clients, teams, and colleagues.

Check out a few of our favorite features in the video below:

A new way to develop WordPress locally, available for free

Studio is now available to use for free on Mac*, and you can get up and running with a new local site in just a few minutes:

1. Download Studio for Mac.

2. Install and open Studio.

3. Click Add site, and you’re done!

Once you have a local site running, you can access WP Admin, the Site Editor, global styles, and patterns, all with just one click—and without needing to remember and enter a username or password.

You can even open your local sites in your favorite development tools, such as VS Code, PhpStorm, Terminal, and Finder, making it even easier to add Studio to your existing development workflow.

Plus, Studio is open source; feel free to fork away on GitHub.

*A Windows version of Studio is coming soon, and you can request early access here. 

Effortlessly share your work and keep moving forward

In the realm of web development, showcasing local work has often been a challenge when projects live solely on your machine. With Studio’s demo sites, you have a convenient, built-in solution for sharing your progress with your team, clients, or designers. 

These publicly-accessible demo sites, hosted on WordPress.com, are a convenient way to share your work without the need for complex server setups or lengthy deployments. In less than 15 seconds, you can have a shareable link to your local site that stays active for seven days.

The best part? Demo sites can be refreshed to reflect your latest build, allowing you to easily convey any updates or changes!

Breaking free from traditional constraints

Unlike traditional local environment tools like MAMP or Docker, Studio takes a fresh approach to local WordPress development. Studio is a lightweight and efficient solution that minimizes overhead and maximizes simplicity by forgoing the need for web servers, MySQL servers, or virtualization technologies.

Behind the scenes, Studio uses WordPress Playground, which runs PHP as a WebAssembly binary directly on your machine. Thanks to this technology, there is no need for a traditional web server, making your development experience much quicker and smoother.

Say goodbye to complex setups and compatibility issues. Studio makes it easier than ever to build and manage WordPress sites locally.

Let’s get building

At WordPress.com, we’re committed to making your website management experience seamless. In the last few years alone, we launched staging sites with synchronization features, SSH and WP-CLI access, global edge caching, GitHub Deployments, and more. 

Studio is yet another powerful feature to add to your toolkit. Stay tuned for more exciting updates, and remember to follow our blog to stay in the loop.

And, of course, download Studio today. Your local development workflow will thank you.

Major kudos to the Studio team on this launch! Antonio Sejas, Antony Agrios, Kateryna Kodonenko, Philip Jackson, Carlos García Prim, David Calhoun, Derek Blank, Siobhan Bamber, Tanner Stokes, Matt West, Adam Zielinski, Brandon Payton, Berislav Grgicak, Alexa Peduzzi, Jeremy Massel, Gio Lodi, Olivier Halligon, Matthew Denton, Ian Stewart, Daniel Bachhuber, Kei Takagi, Claudiu Filip, Niranjan Uma Shankar, Noemí Sánchez, and our beta testers.
Source: RedHat Stack

Better Debugging: How the Signal0ne Docker Extension Uses AI to Simplify Container Troubleshooting

Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided?

In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? 

Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently. 

Introducing Signal0ne Docker extension: Streamlined debugging for Docker

The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving: an AI agent analyzes your containers’ logs and state, and the extension presents a clear, concise summary of what’s happening inside them, pinpointing potential issues and even suggesting solutions. 

Developing applications these days involves more than a block of code executed in a vacuum; it is a complex system of dependencies and user flows that need debugging from time to time. AI can help filter out the system noise and focus on surfacing data about specific issues, so that developers can debug faster and better. 

Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps.

Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.

How does the Docker Desktop extension work?

Between AI copilots tightly integrated into IDEs that help write code and browser-based AI chats that explain software development concepts in a Q&A format, one piece is still missing: logs and runtime system data. 

The Signal0ne Docker Desktop extension consists of three components: two hosted on the user’s local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user’s local environment for containers with invalid states, runtime issues, or warnings and errors in the logs. After discovering an issue, it collects additional data from the container definition for enhanced analysis. 

Once the agent has discovered an issue, the data is sent to the backend service, where a combination of a pre-trained LLM and a solution-search retrieval service performs the analysis. The results can be viewed in the Signal0ne extension UI and include the following (a rough sketch of this output shape follows the list): 

Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.

Solutions summary — One paragraph describing possible solutions to the analyzed issue, with an explanation of why each solution was proposed.

Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).
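To make that output shape concrete, here is a small, purely illustrative Java model of one analysis entry; the record and field names are assumptions of mine, not Signal0ne’s actual types.

import java.util.List;

// Hypothetical data model of a single analysis entry, mirroring the three
// elements listed above; Signal0ne's real types and field names may differ.
public record IssueAnalysis(
        String containerName,         // container the issue was discovered in
        String logSummary,            // short outline of what is happening in the container
        String solutionsSummary,      // one paragraph of proposed fixes and the reasoning behind them
        List<String> solutionSources  // links to community sources, e.g., GitHub issues or StackOverflow posts
) {}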

Watch this short video demonstration for more details: 

Technical details

The issue analysis pipeline is powered by the LangChain tooling framework, which utilizes the open source Mistral 7B LLM together with an LLM-optimized web search that browses existing solutions on the open web (Figure 1).

Figure 1: Signal0ne architecture overview.

The following section shows sample issue analysis pipeline output:

Logs from Docker container:

$ irb
>> "12341:asdf"[/\d+/]
# => "12341"
>> "12341:asdf"[/\d*/]
# => "12341"
>> "12341:asdf"[0..5]
# => "12341:"
>> "12341:asdf"[0...5]
# => "12341"
>> "12341:asdf"[0, ':']
TypeError: can't convert String into Integer
        from (irb):5:in

Summary: 

The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.

Predicted solutions: 

“To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:

1. Using regular expressions:

To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

str = "12341:asdf"
substring = str.scan(/[\d]+/)
puts substring.first
# Output: ["12341"]
puts substring.first[0]
# Output: 12341

2. Slicing:

Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"
substring = str[0...str.index(":")]
puts substring
# Output: "12341"
puts substring.to_i
# Output: 12341

These two methods should help you extract the desired substring from the given string without encountering an error.

Getting started

Getting started with the Signal0ne Docker Desktop extension is a straightforward process that lets developers start benefiting from AI-assisted debugging right away. 

Here are the steps for installing Signal0ne Docker extension:

1. Install Docker Desktop.

2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).

Figure 2: Signal0ne extension installation from the marketplace.

3. In the Filters drop-down, select the Utility tools category.

4. Find Signal0ne and then select Install (Figure 3).

Figure 3: Extension installation process.

5. Log in after the extension is installed (Figure 4).

Figure 4: Signal0ne extension login screen.

6. Start developing your apps, and if you face issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you.

Make sure the Signal0ne agent is enabled by toggling it on (Figure 5):

Figure 5: Agent settings tab.

Figure 6 shows the summary and sources:

Figure 6: Overview of the inspected issue.

Proposed solutions and sources are shown in Figures 7 and 8. Solution sources redirect you to a webpage with the predicted solution:

Figure 7: Overview of proposed solutions to the encountered issue.

Figure 8: Overview of the list of helpful links.

If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).

Figure 9: You can leave feedback about analysis output for further improvements.

To explore the Signal0ne Docker Desktop extension without involving your own containers, consider experimenting with dummy containers using the Docker Compose file below and observe how the logs are analyzed and how helpful the resulting insights are:

services:
  broken_bulb: # C# application that cannot start properly
    image: 'Signal0neai/broken_bulb:dev'
  faulty_roger: # Python API server that cannot reach its database
    image: 'Signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting a website with a misconfiguration
    image: 'Signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # Python webserver with a bug
    image: 'Signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'

broken_bulb: This service uses the image Signal0neai/broken_bulb:dev. It’s a C# application that throws System.NullReferenceException during startup. Thanks to that application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.

faulty_roger: This service uses the image Signal0neai/faulty_roger:dev. It is a Python API server that is trying to connect to an unreachable database on localhost.

smoked_server: This service utilizes the image Signal0neai/smoked_server:dev. The smoked_server service is an Nginx instance that returns 403 Forbidden when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.

invalid_api_call: A Python API service with a bug in one of its endpoints. To generate an error, call http://127.0.0.1:5000/create-table after running the container, then follow Signal0ne’s analysis and try to debug the issue.

Conclusion

Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly.

By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications.

Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.

Learn more

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

A Promising Methodology for Testing GenAI Applications in Java

In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers.

Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();
    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels.

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop.

In this article, I will share a methodology that I find promising for testing GenAI applications.

Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. 

An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. 

In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I’ll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval.

Instead of dividing documents into chunks and creating embeddings of those chunks, this project uses an LLM to generate a summary of each document; the embedding is generated from that summary.

When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user’s message will be augmented with the original document.

This way, there’s no need to configure document chunks, tune the number of chunks to retrieve, or worry about whether the way the user’s message is augmented makes sense. If there is a document that covers what the user is asking about, it will be included in the message sent to the LLM.
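To make this flow concrete, here is a minimal sketch of the summarize-then-embed idea using LangChain4j primitives. It reflects my own assumptions rather than the project’s actual code: the DocumentSummarizer interface, the store wiring, and the 0.8 minimum score are all illustrative.

import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.EmbeddingStore;

import java.util.List;

class SummaryRag {

    private final EmbeddingModel embeddingModel;      // any LangChain4j embedding model
    private final EmbeddingStore<TextSegment> store;  // e.g., pgvector or an in-memory store
    private final DocumentSummarizer summarizer;      // hypothetical LLM-backed summarizer

    SummaryRag(EmbeddingModel embeddingModel, EmbeddingStore<TextSegment> store, DocumentSummarizer summarizer) {
        this.embeddingModel = embeddingModel;
        this.store = store;
        this.summarizer = summarizer;
    }

    // Ingestion: embed an LLM-generated summary, but keep the full document as the stored payload.
    void ingest(String documentText) {
        String summary = summarizer.summarize(documentText);
        Embedding summaryEmbedding = embeddingModel.embed(summary).content();
        store.add(summaryEmbedding, TextSegment.from(documentText));
    }

    // Query time: semantic search against the summary embeddings, then augment the
    // user's question with the matching original document (if a match clears the threshold).
    String augment(String question) {
        Embedding questionEmbedding = embeddingModel.embed(question).content();
        List<EmbeddingMatch<TextSegment>> matches = store.findRelevant(questionEmbedding, 1, 0.8);
        if (matches.isEmpty()) {
            return question;
        }
        String document = matches.get(0).embedded().text();
        return question + "\n\nUse the following document to answer the question:\n" + document;
    }

    interface DocumentSummarizer {
        String summarize(String documentText); // hypothetical helper backed by an LLM
    }
}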

Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j.

For setting up the project, I followed the steps outlined in the guides Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers.

I also use Testcontainers Desktop to facilitate database access, verify the generated embeddings, and review the container logs.
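Following those guides, the local setup typically comes down to a test configuration that declares the required containers plus a test-scoped main class that starts the application with them. The sketch below is mine, not the project’s code, and it assumes a PostgreSQL/pgvector container as the embedding store; treat the image, class, and bean names as placeholders.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.PostgreSQLContainer;

@TestConfiguration(proxyBeanMethods = false)
class ContainersConfig {

    // Assumption: embeddings live in PostgreSQL with the pgvector extension.
    @Bean
    @ServiceConnection
    PostgreSQLContainer<?> postgres() {
        return new PostgreSQLContainer<>("pgvector/pgvector:pg16");
    }
}

// Test-scoped entry point: runs the real application against the container above.
class TestApplication {
    public static void main(String[] args) {
        SpringApplication.from(Application::main)   // 'Application' stands in for the app's real main class
                .with(ContainersConfig.class)
                .run(args);
    }
}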

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we could settle for verifying that the response includes certain keywords, which is insufficient and prone to errors.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response.

An alternative is to employ cosine similarity to compare the embeddings of a “reference” response and the actual response, providing a more semantic form of evaluation. 

This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. The closer the vectors point in the same direction (a cosine near 1), the more semantically similar the “reference” response and the actual response are.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
    - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
    - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
    - Answer must be less than 5 sentences
    """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    double cosineSimilarity = getCosineSimilarity(reference, answer);

    assertThat(cosineSimilarity).isGreaterThan(0.8);
}

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. 

However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. 

This proposal involves defining detailed validation criteria and using an LLM as a “Validator Agent” to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information.

By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
    - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
    - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
    - Answer must be less than 5 sentences
    """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user’s question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
    - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
    - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
    - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
    - Answer must be less than 5 sentences
    """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();

    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);

    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn’t differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {

    @SystemMessage("""
        ### Instructions
        You are a strict validator.
        You will be provided with a question, an answer, and a reference.
        Your task is to validate whether the answer is correct for the given question, based on the reference.

        Follow these instructions:
        - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
        - Respond with 'yes' if the answer is correct
        - Respond with 'no' if the answer is incorrect
        - If you are unsure, simply respond with 'unsure'
        - Respond with 'no' if the answer is not clear or concise
        - Respond with 'no' if the answer is not based on the reference

        Your response must be a json object with the following structure:
        {
            "response": "yes",
            "reason": "The answer is correct because it is based on the reference provided."
        }

        ### Example
        Question: Is Madrid the capital of Spain?
        Answer: No, it's Barcelona.
        Reference: The capital of Spain is Madrid
        ###
        Response: {
            "response": "no",
            "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
        }
        """)
    @UserMessage("""
        ###
        Question: {{question}}
        ###
        Answer: {{answer}}
        ###
        Reference: {{reference}}
        ###
        """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I’m using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.
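For completeness, turning that interface into a working object is a one-liner with LangChain4j’s AiServices. The snippet below is only a sketch of the wiring; the choice of chat model (the same demo OpenAI model as in the first example) is an assumption, not necessarily what the project uses:

import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

class ValidatorAgentFactory {

    static ValidatorAgent create() {
        var chatModel = OpenAiChatModel.builder()
                .apiKey("demo")              // demo key as in the earlier example; use a real key in practice
                .modelName("gpt-3.5-turbo")
                .build();
        // AiServices generates an implementation of the annotated interface,
        // sends the templated prompts, and parses the JSON reply into ValidatorResponse.
        return AiServices.create(ValidatorAgent.class, chatModel);
    }
}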

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. 

The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Learn more

Check out the GenAI Stack to get started with adding AI to your apps. 

Subscribe to the Docker Newsletter.

Get the latest release of Docker Desktop.

Vote on what’s next! Check out our public roadmap.

Have questions? The Docker community is here to help.

New to Docker? Get started.

Source: https://blog.docker.com/feed/

AWS HealthOmics Announces Support for Reading Sequence Stores via Amazon S3 APIs

We are excited to announce that AWS HealthOmics now supports reading sequence store objects using Amazon S3 APIs. AWS HealthOmics is a fully managed service that enables healthcare and life sciences organizations to store, query, and analyze omics data and derive insights from it to improve health and advance scientific discovery. With this release, customers can more easily integrate HealthOmics data stores into their bioinformatics systems while benefiting from domain-specific metadata, cost savings, and scalability.
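As a rough illustration of what reading via the S3 API means in practice (and not an official AWS example), the following sketch uses the AWS SDK for Java v2 to fetch an object with a standard GetObject call. The bucket and key values are placeholders; in practice they come from the S3 URI that HealthOmics exposes for a read set.

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.nio.file.Path;

public class ReadSequenceStoreObject {
    public static void main(String[] args) {
        // Placeholder values: substitute the bucket and key from the S3 URI
        // that HealthOmics reports for your sequence store read set.
        String bucket = "example-healthomics-access-point-alias";
        String key = "example/readSet/source1.fastq.gz";

        try (S3Client s3 = S3Client.create()) {
            GetObjectRequest request = GetObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .build();
            // Standard S3 GetObject call; downloads the object to a local file.
            s3.getObject(request, Path.of("readset.fastq.gz"));
        }
    }
}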
Source: aws.amazon.com