Self-report: Robots say they are not planning a rebellion
They present themselves as harmless, but do they really mean it? At a United Nations conference, robots commented on the topic of world domination. (robots, AI)
Source: Golem
What does the new BSI president think about chat control, hackbacks, and zero-days? Interior Minister Faeser is likely to have been pleased with the answers. A report by Friedhelm Greis (BSI, internet)
Source: Golem
OpenAI has officially announced the general availability of the GPT-4 API. That does not mean access is free. (GPT-4, server applications)
Source: Golem
Amazon wants to pay 1.7 billion euros for Roomba maker iRobot. The acquisition is now under investigation. (iRobot, Amazon)
Source: Golem
Through a security vulnerability dubbed TootRoot, hackers could take over entire Mastodon instances and gain root access on the servers. (security vulnerability, servers)
Source: Golem
For the time being, OceanGate will not offer any more expeditions. The company came under criticism after a disastrous dive to the Titanic that left five people dead. (companies, business)
Source: Golem
The Lemokey brand is meant to stand for good gaming keyboards at Keychron. The Lemokey L3, for example, can be used wirelessly via a 2.4-GHz dongle. (keyboard, input device)
Source: Golem
Docker Desktop 4.21 is now available and includes Docker Init support for Rust, support for new Wasm runtimes, enhancements to Docker Scout dashboards, the Builds view (Beta), performance and filesystem enhancements to Docker Desktop on macOS, and more. Docker Desktop 4.21 also uses substantially less memory, allowing developers to run more applications simultaneously on their machines without relying on swap.
Added support for new Wasm runtimes
Docker Desktop 4.21 adds support for the following Wasm runtimes: Slight, Spin, and Wasmtime. These runtimes can be downloaded on demand when the containerd image store is enabled. The following steps outline the process:
In Docker Desktop, navigate to the settings by clicking the gear icon.
Select the Features in development tab.
Check the boxes for Use containerd for pulling and storing images and Enable Wasm.
Select Apply & restart.
When prompted for Wasm Runtimes Installation, select Install.
After installation, these runtimes can be used to run Wasm workloads locally with the corresponding flags, for example: --runtime=io.containerd.spin.v1 --platform=wasi/wasm32
Docker Init (Beta) added support for Rust
In the 4.21 release, we’ve added Rust server support to Docker Init. Docker Init is a CLI command in beta that simplifies the process of adding Docker to a project. (Learn more about Docker Init in our blog post: Docker Init: Initialize Dockerfiles and Compose files with a single CLI command.)
You can try Docker Init with Rust by updating to the latest version of Docker Desktop and typing docker init in the command line while inside a target project folder.
The Docker team is working on adding more languages and frameworks for this command, including Java and .Net. Let us know if you want us to support a specific language or framework. We welcome feedback as we continue to develop and improve Docker Init (Beta).
Docker Scout dashboard enhancements
The Docker Scout Dashboard helps you share the analysis of images in an organization with your team. Developers can now see an overview of their security status across all their images from both Docker Hub and Artifactory (more registry integrations coming soon) and get remediation advice at their fingertips. Docker Scout analysis helps team members in roles such as security, compliance, and operations to know what vulnerabilities and issues they need to focus on.
Figure 1: A screenshot of the Docker Scout vulnerabilities overview
Visit the Docker Scout vulnerability dashboard to get end-to-end observability into your supply chain.
Docker Buildx v0.11
The Docker Buildx component has been updated to a new version, enabling many new features. For example, you can now load multi-platform images into the Docker image store when the containerd image store is enabled.
The buildx bake command now supports matrix builds, allowing you to define multiple configurations of the same build target that can all be built together.
There are also multiple new experimental commands for better debugging support for your builds. Read more from the release changelog.
Builds (Beta)
Docker Desktop 4.21 includes our Builds view beta release. Builds view gives you visibility into the active builds currently running on your system and enables analysis and debugging of your completed builds.
All builds started with docker build or docker buildx build commands will automatically appear in the Builds view. From there, you can inspect all the properties of a build invocation, including timing information, build cache usage, Dockerfile source, etc. Builds view also provides you full access to all of the logs and properties of individual build steps.
If you are working with multiple Buildx builder instances (for example, running builds inside a Docker container or Kubernetes cluster), Builds view includes a new Builders settings view to make it even easier to manage additional builders or set default builder instances.
Builds view is currently in beta as we continue to improve it. To enable it, go to Settings > Features in development > Turn on Builds view.
Figure 2: Builds view — List of active and completed builds
Figure 3: Builds view — Build details with logs visible
Figure 4: Builds view — Builder settings with default builder expanded
Faster startup and file sharing for macOS
Launching Docker Desktop on Apple Silicon Macs is at least 25% quicker in 4.21 compared to previous Docker Desktop versions. Previously the start time would scale linearly with the amount of memory allocated to Docker, which meant that users with higher-spec Macs would experience slower startup. This bug has been fixed and now Docker starts in four seconds on Apple Silicon.
Docker Desktop 4.21 uses VirtioFS by default on macOS 12.5+, which provides substantial performance gains when sharing host files with containers (for example, via docker run -v). The time taken to build the Redis engine drops from seven minutes on Docker Desktop 4.20 to only two minutes on Docker Desktop 4.21, for example.
Conclusion
Upgrade now to explore what’s new in the 4.21 release of Docker Desktop. Do you have feedback? Leave feedback on our public GitHub roadmap and let us know what else you’d like to see.
Learn more
Get the latest release of Docker Desktop.
Have questions? The Docker community is here to help.
New to Docker? Get started.
Source: https://blog.docker.com/feed/
In today’s fast-paced digital era, conversational AI chatbots have emerged as a game-changer in delivering efficient and personalized user interactions. These artificially intelligent virtual assistants are designed to mimic human conversations, providing users with quick and relevant responses to queries.
A crucial aspect of building successful chatbots is the ability to handle frequently asked questions (FAQs) seamlessly. FAQs form a significant portion of user queries in various domains, such as customer support, e-commerce, and information retrieval. Being able to provide accurate and prompt answers to common questions not only improves user satisfaction but also frees up human agents to focus on more complex tasks.
In this article, we’ll look at how to use the open source Rasa framework along with Docker to build and deploy a containerized, conversational AI chatbot.
Meet Rasa
To tackle the challenge of FAQ handling, developers turn to sophisticated technologies like Rasa, an open source conversational AI framework. Rasa offers a comprehensive set of tools and libraries that empower developers to create intelligent chatbots with natural language understanding (NLU) capabilities. With Rasa, you can build chatbots that understand user intents, extract relevant information, and provide contextual responses based on the conversation flow.
Rasa allows developers to build and deploy conversational AI chatbots and provides a flexible architecture and powerful NLU capabilities (Figure 1).
Figure 1: Overview of Rasa.
Rasa is a popular choice for building conversational AI applications, including chatbots and virtual assistants, for several reasons. For example, Rasa is an open source framework, which means it is freely available for developers to use, modify, and contribute to. It provides a flexible and customizable architecture that gives developers full control over the chatbot’s behavior and capabilities.
Rasa’s NLU capabilities allow you to extract intent and entity information from user messages, enabling the chatbot to understand and respond appropriately. Rasa supports different language models and machine learning (ML) algorithms for accurate and context-aware language understanding.
Rasa also incorporates ML techniques to train and improve the chatbot’s performance over time. You can train the model using your own training data and refine it through iterative feedback loops, resulting in a chatbot that becomes more accurate and effective with each interaction.
Additionally, Rasa can scale to handle large volumes of conversations and can be extended with custom actions, APIs, and external services. This capability allows you to integrate additional functionalities, such as database access, external API calls, and business logic, into your chatbot.
Why containerizing Rasa is important
Containerizing Rasa brings several important benefits to the development and deployment process of conversational AI chatbots. Here are four key reasons why containerizing Rasa is important:
1. Docker provides a consistent and portable environment for running applications.
By containerizing Rasa, you can package the chatbot application, its dependencies, and runtime environment into a self-contained unit. This approach allows you to deploy the containerized Rasa chatbot across different environments, such as development machines, staging servers, and production clusters, with minimal configuration or compatibility issues.
Docker simplifies the management of dependencies for the Rasa chatbot. By encapsulating all the required libraries, packages, and configurations within the container, you can avoid conflicts with other system dependencies and ensure that the chatbot has access to the specific versions of libraries it needs. This containerization eliminates the need for manual installation and configuration of dependencies on different systems, making the deployment process more streamlined and reliable.
2. Docker ensures the reproducibility of your Rasa chatbot’s environment.
By defining the exact dependencies, libraries, and configurations within the container, you can guarantee that the chatbot will run consistently across different deployments.
3. Docker enables seamless scalability of the Rasa chatbot.
With containers, you can easily replicate and distribute instances of the chatbot across multiple nodes or servers, allowing you to handle high volumes of user interactions.
4. Docker provides isolation between the chatbot and the host system and between different containers running on the same host.
This isolation ensures that the chatbot’s dependencies and runtime environment do not interfere with the host system or other applications. It also allows for easy management of dependencies and versioning, preventing conflicts and ensuring a clean and isolated environment in which the chatbot can operate.
Building an ML FAQ model demo application
By combining the power of Rasa and Docker, developers can create an ML FAQ model demo that excels in handling frequently asked questions. The demo can be trained on a dataset of common queries and their corresponding answers, allowing the chatbot to understand and respond to similar questions with high accuracy.
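As a toy illustration of the retrieval idea behind such an FAQ bot (this is not Rasa's implementation, and all questions and answers below are made up), a few lines of Python can match a user query to the stored question with the largest word overlap:

```python
# Toy FAQ retrieval sketch (not Rasa's implementation): return the
# canned answer whose stored question shares the most words with
# the user's query. All questions and answers here are made up.
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how can i contact support": "Email support@example.com (a hypothetical address).",
}

def answer(query: str) -> str:
    words = set(query.lower().split())
    # Pick the stored question with the largest word overlap.
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    return FAQ[best]

print(answer("How do I reset my password?"))
# Use the 'Forgot password' link on the login page.
```

A real chatbot replaces this bag-of-words overlap with a trained language-understanding model, but the input/output contract is the same: a free-form question in, a curated answer out.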
In this tutorial, you’ll learn how to build an ML FAQ model demo using Rasa and Docker. You’ll set up a development environment to train the model and then deploy the model using Docker. You will also see how to integrate a WebChat UI frontend for easy user interaction. Let’s jump in.
Getting started
The following key components are essential to completing this walkthrough:
Docker Desktop
Figure 2: Docker container with pre-installed Rasa dependencies and Volume mount point.
Deploying an ML FAQ demo app is a simple process involving the following steps:
Clone the repository.
Set up the configuration files.
Initialize Rasa.
Train and run the model.
Bring up the WebChat UI app.
We’ll explain each of these steps below.
Cloning the project
To get started, clone the repository by running the following command:
git clone https://github.com/dockersamples/docker-ml-faq-rasa
docker-ml-faq-rasa % tree -L 2
.
├── Dockerfile-webchat
├── README.md
├── actions
│   ├── __init__.py
│   ├── __pycache__
│   └── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.yml
│   ├── rules.yml
│   └── stories.yml
├── docker-compose.yaml
├── domain.yml
├── endpoints.yml
├── index.html
├── models
│   ├── 20230618-194810-chill-idea.tar.gz
│   └── 20230619-082740-upbeat-step.tar.gz
└── tests
    └── test_stories.yml

6 directories, 16 files
Before we move to the next step, let’s look at each of the files one by one.
File: domain.yml
This file describes the chatbot's domain and includes crucial data such as intents, entities, actions, and responses. It outlines the conversation's structure, including the user's input and the templates for the bot's responses.
version: "3.1"

intents:
  - greet
  - goodbye
  - affirm
  - deny
  - mood_great
  - mood_unhappy
  - bot_challenge

responses:
  utter_greet:
    - text: "Hey! How are you?"
  utter_cheer_up:
    - text: "Here is something to cheer you up:"
      image: "https://i.imgur.com/nGF1K8f.jpg"
  utter_did_that_help:
    - text: "Did that help you?"
  utter_happy:
    - text: "Great, carry on!"
  utter_goodbye:
    - text: "Bye"
  utter_iamabot:
    - text: "I am a bot, powered by Rasa."

session_config:
  session_expiration_time: 60
  carry_over_slots_to_new_session: true
As shown previously, this configuration file includes intents, which represent the different types of user inputs the bot can understand. It also includes responses, which are the bot’s predefined messages for various situations. For example, the bot can greet the user, provide a cheer-up image, ask if the previous response helped, express happiness, say goodbye, or mention that it’s a bot powered by Rasa.
The session configuration sets the expiration time for a session (in this case, 60 seconds) and specifies whether the bot should carry over slots (data) from a previous session to a new session.
File: nlu.yml
The NLU training data is defined in this file. It includes example user inputs and the intents and entities that go with them. The NLU model, which maps user inputs to the right intents and entities, is trained on this data.
version: "3.1"
nlu:
- intent: greet
  examples: |
    - hey
    - hello
    - hi
    - hello there
    - good morning
    - good evening
    - moin
    - hey there
    - let's go
    - hey dude
    - goodmorning
    - goodevening
    - good afternoon
- intent: goodbye
  examples: |
    - cu
    - good by
    - cee you later
    - good night
    - bye
    - goodbye
    - have a nice day
    - see you around
    - bye bye
    - see you later
- intent: affirm
  examples: |
    - yes
    - y
    - indeed
    - of course
    - that sounds good
    - correct
- intent: deny
  examples: |
    - no
    - n
    - never
    - I don't think so
    - don't like that
    - no way
    - not really
- intent: mood_great
  examples: |
    - perfect
    - great
    - amazing
    - feeling like a king
    - wonderful
    - I am feeling very good
    - I am great
    - I am amazing
    - I am going to save the world
    - super stoked
    - extremely good
    - so so perfect
    - so good
    - so perfect
- intent: mood_unhappy
  examples: |
    - my day was horrible
    - I am sad
    - I don't feel very well
    - I am disappointed
    - super sad
    - I'm so sad
    - sad
    - very sad
    - unhappy
    - not good
    - not very good
    - extremly sad
    - so saad
    - so sad
- intent: bot_challenge
  examples: |
    - are you a bot?
    - are you a human?
    - am I talking to a bot?
    - am I talking to a human?
This configuration file defines several intents, which represent different types of user inputs that the chatbot can recognize. Each intent has a list of examples, which are example phrases or sentences that users might type or say to express that particular intent.
You can customize and expand upon this configuration by adding more intents and examples that are relevant to your chatbot’s domain and use cases.
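To make the role of these examples concrete, here is a deliberately naive intent matcher in Python. It only illustrates the idea; Rasa's real NLU pipeline trains featurizers and a classifier instead, and the training data below is a trimmed-down copy of the file above:

```python
# Deliberately naive intent matcher: score each intent by how many
# of its training examples appear verbatim in the user's message.
# Rasa's real pipeline trains featurizers and a classifier instead.
from collections import defaultdict

TRAINING_DATA = {  # trimmed-down copy of data/nlu.yml
    "greet": ["hey", "hello", "good morning"],
    "goodbye": ["bye", "goodbye", "see you later"],
    "bot_challenge": ["are you a bot?", "am I talking to a bot?"],
}

def classify(message: str) -> str:
    message = message.lower()
    scores = defaultdict(int)
    for intent, examples in TRAINING_DATA.items():
        for example in examples:
            if example.lower() in message:
                scores[intent] += 1
    # No match at all falls back to a default intent.
    return max(scores, key=scores.get) if scores else "nlu_fallback"

print(classify("hello there"))     # greet
print(classify("are you a bot?"))  # bot_challenge
```

Substring matching breaks as soon as users paraphrase ("howdy", "u a robot?"), which is exactly why Rasa trains a statistical model on these examples rather than matching them literally.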
File: stories.yml
This file is used to define the training stories, which serve as examples of user-chatbot interactions. Each story is made up of a series of user inputs, bot responses, and the accompanying intents and entities.
version: "3.1"

stories:

- story: happy path
  steps:
  - intent: greet
  - action: utter_greet
  - intent: mood_great
  - action: utter_happy

- story: sad path 1
  steps:
  - intent: greet
  - action: utter_greet
  - intent: mood_unhappy
  - action: utter_cheer_up
  - action: utter_did_that_help
  - intent: affirm
  - action: utter_happy

- story: sad path 2
  steps:
  - intent: greet
  - action: utter_greet
  - intent: mood_unhappy
  - action: utter_cheer_up
  - action: utter_did_that_help
  - intent: deny
  - action: utter_goodbye
The stories.yml file contains a few training stories for the Rasa chatbot. These stories represent different conversation paths between the user and the chatbot. Each story consists of a series of steps, where each step corresponds to an intent or an action.
Here’s a breakdown of steps for the training stories in the file:
Story: happy path
User greets with an intent: greet
Bot responds with an action: utter_greet
User expresses a positive mood with an intent: mood_great
Bot acknowledges the positive mood with an action: utter_happy
Story: sad path 1
User greets with an intent: greet
Bot responds with an action: utter_greet
User expresses an unhappy mood with an intent: mood_unhappy
Bot tries to cheer the user up with an action: utter_cheer_up
Bot asks if the previous response helped with an action: utter_did_that_help
User confirms that it helped with an intent: affirm
Bot acknowledges the confirmation with an action: utter_happy
Story: sad path 2
User greets with an intent: greet
Bot responds with an action: utter_greet
User expresses an unhappy mood with an intent: mood_unhappy
Bot tries to cheer the user up with an action: utter_cheer_up
Bot asks if the previous response helped with an action: utter_did_that_help
User denies that it helped with an intent: deny
Bot says goodbye with an action: utter_goodbye
These training stories are used to train the Rasa chatbot on different conversation paths and to teach it how to respond appropriately to user inputs based on their intents.
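Conceptually, the stories teach a policy that maps the conversation state and the latest user intent to the bot's next actions. The following sketch hard-codes that mapping for the three stories above; Rasa learns the policy from stories.yml instead of hard-coding it, so this is an illustration only:

```python
# Hand-coded version of what the three stories teach: a mapping from
# (last bot action, user intent) to the bot's next actions. Rasa
# learns this policy from stories.yml rather than hard-coding it.
POLICY = {
    ("start", "greet"): ["utter_greet"],
    ("utter_greet", "mood_great"): ["utter_happy"],
    ("utter_greet", "mood_unhappy"): ["utter_cheer_up", "utter_did_that_help"],
    ("utter_did_that_help", "affirm"): ["utter_happy"],
    ("utter_did_that_help", "deny"): ["utter_goodbye"],
}

def respond(last_action, intent):
    return POLICY.get((last_action, intent), ["utter_default"])

# Replay the "sad path 2" story step by step:
state = "start"
for intent in ["greet", "mood_unhappy", "deny"]:
    actions = respond(state, intent)
    print(intent, "->", actions)
    state = actions[-1]  # next turn conditions on the last bot action
```

A learned policy generalizes beyond the exact paths in the table, which is the point of training on stories rather than enumerating every possible conversation.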
File: config.yml
The configuration parameters for your Rasa project are contained in this file.
# The config recipe.
# https://rasa.com/docs/rasa/model-configuration/
recipe: default.v1

# The assistant project unique identifier
# This default value must be replaced with a unique assistant name within your deployment
assistant_id: placeholder_default

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
# # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
# # If you'd like to customize it, uncomment and adjust the pipeline.
# # See https://rasa.com/docs/rasa/tuning-your-model for more information.
#   - name: WhitespaceTokenizer
#   - name: RegexFeaturizer
#   - name: LexicalSyntacticFeaturizer
#   - name: CountVectorsFeaturizer
#   - name: CountVectorsFeaturizer
#     analyzer: char_wb
#     min_ngram: 1
#     max_ngram: 4
#   - name: DIETClassifier
#     epochs: 100
#     constrain_similarities: true
#   - name: EntitySynonymMapper
#   - name: ResponseSelector
#     epochs: 100
#     constrain_similarities: true
#   - name: FallbackClassifier
#     threshold: 0.3
#     ambiguity_threshold: 0.1

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
#   - name: MemoizationPolicy
#   - name: RulePolicy
#   - name: UnexpecTEDIntentPolicy
#     max_history: 5
#     epochs: 100
#   - name: TEDPolicy
#     max_history: 5
#     epochs: 100
#     constrain_similarities: true
Here is a breakdown of the configuration file:
1. Assistant ID:
assistant_id: placeholder_default
This placeholder value should be replaced with a unique identifier for your assistant.
2. Rasa NLU configuration:
language: en
Specifies the language used for natural language understanding.
Pipeline:
Defines the pipeline of components used for NLU processing.
The pipeline is currently commented out, and the default pipeline is used.
The default pipeline includes various components like tokenizers, featurizers, classifiers, and response selectors.
If you want to customize the pipeline, you can uncomment the lines and adjust the pipeline configuration.
3. Rasa core configuration:
Policies:
Specifies the policies used for dialogue management.
The policies are currently commented out, and the default policies are used.
The default policies include memoization, rule-based, and TED (Transformer Embedding Dialogue) policies.
If you want to customize the policies, you can uncomment the lines and adjust the policy configuration.
File: actions.py
The custom actions that your chatbot can execute are contained in this file. Actions include retrieving data from an API, communicating with a database, or running any other custom business logic.
# This file contains your custom actions which can be used to run
# custom Python code.
#
# See this guide on how to implement these actions:
# https://rasa.com/docs/rasa/custom-actions

# This is a simple example for a custom action which utters "Hello World!"

# from typing import Any, Text, Dict, List
#
# from rasa_sdk import Action, Tracker
# from rasa_sdk.executor import CollectingDispatcher
#
#
# class ActionHelloWorld(Action):
#
#     def name(self) -> Text:
#         return "action_hello_world"
#
#     def run(self, dispatcher: CollectingDispatcher,
#             tracker: Tracker,
#             domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
#
#         dispatcher.utter_message(text="Hello World!")
#
#         return []
Explanation of the code:
The ActionHelloWorld class extends the Action class provided by the rasa_sdk.
The name method defines the name of the custom action, which in this case is “action_hello_world”.
The run method is where the logic for the custom action is implemented.
Within the run method, the dispatcher object is used to send a message back to the user. In this example, the message sent is “Hello World!”.
The return [] statement indicates that the custom action has completed its execution.
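To see the action's logic run without a Rasa installation, here is a self-contained sketch that replaces rasa_sdk's Action base class and CollectingDispatcher with minimal stand-ins (the stub dispatcher is our own simplification, not the real SDK class):

```python
# Runnable sketch of the commented-out example above. The stub
# dispatcher below is our own stand-in for rasa_sdk's
# CollectingDispatcher, so the action logic can run without Rasa.
class CollectingDispatcher:
    """Stub: collects the messages the action sends to the user."""
    def __init__(self):
        self.messages = []

    def utter_message(self, text):
        self.messages.append(text)

class ActionHelloWorld:
    def name(self):
        return "action_hello_world"

    def run(self, dispatcher, tracker=None, domain=None):
        dispatcher.utter_message(text="Hello World!")
        return []  # no events (e.g. slot updates) to apply

dispatcher = CollectingDispatcher()
ActionHelloWorld().run(dispatcher)
print(dispatcher.messages)  # ['Hello World!']
```

In a real deployment the action server runs separately, and the Rasa server calls it over the webhook configured in endpoints.yml.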
File: endpoints.yml
The endpoints for your chatbot are specified in this file, including any external services or the webhook URL for any custom actions.
Initializing Rasa
This command initializes a new Rasa project in the current directory ($(pwd)):
docker run -p 5005:5005 -v $(pwd):/app rasa/rasa:3.5.2 init --no-prompt
It sets up the basic directory structure and creates essential files for a Rasa project, such as config.yml, domain.yml, and data/nlu.yml. The -p flag maps port 5005 inside the container to the same port on the host, allowing you to access the Rasa server. rasa/rasa:3.5.2 refers to the Docker image for the specific version of Rasa you want to use.
Training the model
The following command trains a Rasa model using the data and configuration specified in the project directory:
docker run -v $(pwd):/app rasa/rasa:3.5.2 train --domain domain.yml --data data --out models
The -v flag mounts the current directory ($(pwd)) inside the container, allowing access to the project files. The --domain domain.yml flag specifies the domain configuration file, --data data points to the directory containing the training data, and --out models specifies the output directory where the trained model will be saved.
Running the model
This command runs the trained Rasa model in interactive mode, enabling you to test the chatbot’s responses:
docker run -v $(pwd):/app rasa/rasa:3.5.2 shell
The command loads the trained model from the models directory in the current project directory ($(pwd)). The chatbot will be accessible in the terminal, allowing you to have interactive conversations and see the model’s responses.
Verify Rasa is running:
curl localhost:5005
Hello from Rasa: 3.5.2
Now you can send a message and test your model with curl:
curl --location 'http://localhost:5005/webhooks/rest/webhook' \
  --header 'Content-Type: application/json' \
  --data '{
    "sender": "Test person",
    "message": "how are you ?"}'
Running WebChat
The following command deploys the trained Rasa model as a server accessible via a WebChat UI:
docker run -p 5005:5005 -v $(pwd):/app rasa/rasa:3.5.2 run -m models --enable-api --cors "*" --debug
The -p flag maps port 5005 inside the container to the same port on the host, making the Rasa server accessible. The -m models flag specifies the directory containing the trained model. The --enable-api flag enables the Rasa API, allowing external applications to interact with the chatbot. The --cors "*" flag enables cross-origin resource sharing (CORS) to handle requests from different domains. The --debug flag enables debug mode for enhanced logging and troubleshooting.
docker run -p 8080:80 harshmanvar/docker-ml-faq-rasa:webchat
Open http://localhost:8080 in the browser (Figure 3).
Figure 3: WebChat UI.
Defining services using a Compose file
Here’s how our services appear within a Docker Compose file:
services:
  rasa:
    image: rasa/rasa:3.5.2
    ports:
      - 5005:5005
    volumes:
      - ./:/app
    command: run -m models --enable-api --cors "*" --debug
  webchat:
    image: harshmanvar/docker-ml-faq-rasa:webchat
    build:
      context: .
      dockerfile: Dockerfile-webchat
    ports:
      - 8080:80
Your sample application has the following parts:
The rasa service is based on the rasa/rasa:3.5.2 image.
It exposes port 5005 to communicate with the Rasa API.
The current directory (./) is mounted as a volume inside the container, allowing the Rasa project files to be accessible.
The command run -m models --enable-api --cors "*" --debug starts the Rasa server with the specified options.
The webchat service is based on the harshmanvar/docker-ml-faq-rasa:webchat image. It builds the image using the Dockerfile-webchat file located in the current context (.). Port 8080 on the host is mapped to port 80 inside the container to access the webchat interface.
You can clone the repository or download the docker-compose.yaml file directly from GitHub.
Bringing up the container services
You can start the WebChat application by running the following command:
docker compose up -d --build
Then, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:
docker compose ps
NAME                            IMAGE                                  COMMAND                  SERVICE   CREATED         STATUS         PORTS
docker-ml-faq-rassa-rasa-1      harshmanvar/docker-ml-faq-rasa:3.5.2   "rasa run -m models …"   rasa      6 seconds ago   Up 5 seconds   0.0.0.0:5005->5005/tcp
docker-ml-faq-rassa-webchat-1   docker-ml-faq-rassa-webchat            "/docker-entrypoint.…"   webchat   6 seconds ago   Up 5 seconds   0.0.0.0:8080->80/tcp
Viewing the containers via Docker Dashboard
You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application (Figure 4):
Figure 4: View containers with Docker Dashboard.
Conclusion
Congratulations! You’ve learned how to containerize a Rasa application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you quickly build and deploy an ML FAQ Demo Model app in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing.
Check out Rasa on DockerHub.
Source: https://blog.docker.com/feed/
Amazon Aurora Serverless v2, the next version of Aurora Serverless, is now available in Asia Pacific (Melbourne).
Source: aws.amazon.com