Why I Still Use jQuery

jQuery is a household name among web developers who have been around the block. Initially released in 2006, it took the web development world by storm with its easy, intuitive syntax for navigating a document, selecting DOM elements, handling events, and making AJAX requests. At its peak in 2015, jQuery was used on 62.7 percent of the top one million websites and 17 percent of all Internet websites.

A decade later, jQuery is not the shiny new kid on the block anymore. Most of the original pain points jQuery solved, such as DOM manipulation and inconsistent browser behavior, are gone thanks to modern browser APIs. 

But jQuery is still widely used. According to SimilarWeb, as of August 11, 2025, nearly 195 million websites use it. That means many developers, like me, still use it every day. And like me, you might prefer it in certain cases. 

So, in this article, I’ll share when it still makes sense to use jQuery and when it doesn’t. Don’t worry: I’m not arguing we should replace React with jQuery. And I’m not here to romanticize 2008. In 2025, I simply still find myself reaching for jQuery because it’s the right tool for the job.

A Brief History of jQuery

To determine when it makes sense to use jQuery and when it doesn’t, it helps to know why it was created in the first place and what problems it aimed to solve.

When John Resig launched jQuery at BarCamp NYC in January 2006, the web was a different place. Features we take for granted today were absent from most browsers:

No querySelectorAll: Selecting DOM elements across browsers was messy. In the mid-2000s, none of the available selection methods, such as getElementById or getElementsByTagName, could select elements using complex CSS queries.

Inconsistent event handling: addEventListener wasn’t universal. While browsers like Firefox, Safari, and Chrome supported the W3C event model with addEventListener, Internet Explorer (before IE9) used Microsoft’s proprietary model with attachEvent. The two models differed in almost every functional aspect.

Fragmented AJAX: Different browsers had different APIs for XMLHttpRequest. While browsers like Firefox and Safari offered the familiar XMLHttpRequest, Internet Explorer (before IE7) exposed network capabilities to JavaScript through ActiveX objects. This meant you needed a stack of if-else blocks just to make an AJAX request (see the sketch after this list).

CSS manipulation quirks: In the 2000s and early 2010s, many CSS features were implemented inconsistently across browsers, which made it difficult to manipulate CSS with JS.
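
To make that concrete, here is roughly what the cross-browser AJAX dance looked like before jQuery. This is a sketch of the era's common pattern; the ActiveX ProgID shown is one of several that were in use:

// Pre-jQuery AJAX: feature-detect the browser's XHR flavor before every request
function createXHR() {
  if (window.XMLHttpRequest) {
    return new XMLHttpRequest();                   // Firefox, Safari, IE7+
  }
  return new ActiveXObject('Microsoft.XMLHTTP');   // older Internet Explorer
}

var xhr = createXHR();
xhr.open('GET', '/data', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log(xhr.responseText);
  }
};
xhr.send();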

jQuery solved all of this with a single streamlined, chainable API for DOM traversal, event handling, and AJAX, with consistent behavior across browsers. It was far simpler than writing cross-browser native JavaScript at the time. These qualities made jQuery the go-to JavaScript library of the 2010s, powering everything from personal blogs to Fortune 500 sites. In 2012, a W3Techs survey found jQuery running on 50 percent of all websites, and by 2015, 62.7 percent of the top one million websites used it.

Where jQuery Still Makes Sense

Although jQuery’s glory days are clearly behind us, it still works well in some situations. Here are the scenarios where I still choose jQuery:

Legacy Projects

Even now, a W3Techs survey shows that jQuery is used on 77.8 percent of the top 10 million websites in 2025. This is mostly legacy usage: old apps that keep jQuery because switching to a more modern framework is a costly endeavour. The version statistics make this clear. In a 2023 survey across 500 organizations, only 44 percent used maintained versions (3.6.0 or newer), while 59 percent ran older versions (1.x to 3.5.1); the categories overlap because many organizations run more than one version.

I maintain a few legacy projects like these that were written with jQuery, and I can tell you why they’re still around: they just work. So as the adage goes, “If it ain’t broke, don’t fix it.” 

Many large enterprises, government sites, corporate intranets, and many WordPress plugins and themes still rely on jQuery. Rewriting these sites to pure JavaScript or a modern framework is a time-consuming, expensive endeavour that can also introduce new challenges and bugs. Most of the time, all that effort and risk aren’t worth the relatively small benefits in the short term.

The truth is this: the codebase I inherited, built in the jQuery era, works. The business logic is robust, the profit margins are healthy, and—most surprisingly—shipping new features feels like slipping into a worn leather jacket: unfashionable, but comfortable. – Marc Boisvert-Duprés

That said, most jQuery plugins are no longer actively maintained or have been deprecated, so depending on them is a security risk. Abandoned plugins may become incompatible or insecure as browsers continue to evolve. So, legacy projects that use jQuery and jQuery plugins should eventually migrate away from jQuery.

Quick Prototyping without Build Tools

Developers often need to prototype very simple frontend apps, be it for throwaway demos, internal tools, or proof-of-concept pages. Sometimes the spec may even require a very basic frontend with minimal interactivity (for example, a static page with a simple form and a button).

jQuery is a perfect choice for these situations. Simply drop in a <script> tag from a CDN and get animations, DOM manipulation, and AJAX in minutes—no need for npm, bundlers, transpilers, or complicated frameworks with hundreds of dependencies. It’s also great for running quick commands from the DevTools console, especially if you want to experiment with an app. 
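
For example, a throwaway demo can live in a single HTML file. Here is a minimal sketch: the form, the ids, and the /api/search endpoint are hypothetical, and the CDN URL points at the current 3.7.x release line:

<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<form id="search"><input name="q"><button>Go</button></form>
<pre id="result"></pre>
<script>
  // Intercept the form submit, fire an AJAX request, and fade the result in
  $('#search').on('submit', function (e) {
    e.preventDefault();
    $.getJSON('/api/search', $(this).serialize(), function (data) {
      $('#result').hide().text(JSON.stringify(data, null, 2)).fadeIn();
    });
  });
</script>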

But why not use a more modern but lightweight framework like Alpine.js? Personally, I’m intimately familiar with jQuery: I’ve used it since the beginning of my web development journey. I love its simplicity and ease of use. The minor improvements a new framework can make in this scenario don’t offset the time spent learning a new tool.

Complex DOM Manipulation in Different Browser Contexts

Hopefully, you don’t have to support older browsers that lack the standard querySelector, or browsers like Internet Explorer, notorious for their non-standard behavior. Unfortunately, some of us still need to maintain apps that run on these browsers.

While native JS is perfectly fine for modern browsers, some of us build things that have to run on older embedded browsers (think kiosk software, aging enterprise or university intranets, or web apps inside legacy desktop apps). There, jQuery’s normalization saves you from manual polyfilling, and its CSS selector engine lets you perform complex DOM manipulations easily.

Simple Animations without CSS Keyframes

As someone who primarily works on backend apps, I don’t often need to code animations for the frontend. But when I do need basic chained animations (fading, sliding, sequencing multiple elements, etc.), jQuery’s .animate() is simpler to write than juggling CSS animations and JS event callbacks.
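
Here is a minimal sketch of the kind of sequencing I mean (the element ids are placeholders). The calls stack up on the element’s fx queue and run one after another, with no keyframes or transitionend bookkeeping:

// Fade in, then slide away, then kick off the next element in sequence
$('#banner')
  .fadeIn(400)
  .slideUp(300)
  .queue(function (next) {
    $('#details').fadeIn(200); // runs only after the two animations above finish
    next();                    // let the queue continue
  });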

Simple AJAX with HTML Server Responses

I was recently tasked with upgrading an ancient app with a PHP backend. Imagine my surprise when I discovered that the server returns HTML fragments rather than JSON. In this case, jQuery’s .load() and .html() methods can be simpler and more efficient than writing fetch() boilerplate with DOM parsing.

For example, I can extract a DOM element from the results of an AJAX request, and load it into an element like so:

// Replace #comments with just the #comments-list from the server response
$('#comments').load('/article/1 #comments-list');

Whereas the same thing in native JS would be:

fetch('/article/1')
  .then(res => res.text())
  .then(html => {
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const comments = doc.querySelector('#comments-list');
    document.querySelector('#comments').innerHTML = comments.outerHTML;
  });

While the jQuery syntax is more straightforward, both approaches do the same thing under the hood, so there’s no real performance gain. The jQuery version also carries the overhead of loading the jQuery library. So it’s a tradeoff between simplicity and bundle size.

When You Should Not Use jQuery

While jQuery still makes sense in some situations, there are some cases where I would never use jQuery.

Building a Modern, Component-Driven Frontend

If I’m building a modern frontend app with lots of reactivity and reusable components, I’d use a modern framework like React or Vue and let it manage the DOM.

Frameworks like React, Vue, Svelte, and Angular expect to control the DOM themselves, whether through a virtual DOM or their own change detection. Direct DOM manipulation with jQuery conflicts with their data-binding approach, causing state mismatches and bugs.

For example, in React, calling $('#el').html('…') bypasses React’s virtual DOM, so React won’t know about the change. This inevitably leads to bugs that are difficult to diagnose.
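
A minimal sketch of the anti-pattern (a hypothetical component, shown only to illustrate what not to do):

import { useEffect } from 'react';
import $ from 'jquery';

function Status() {
  useEffect(() => {
    // Mutates DOM that React believes it owns. React never sees this change,
    // and its next render will silently overwrite it.
    $('#status').html('<b>Done</b>');
  }, []);
  return <p id="status">Working...</p>;
}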

When Simple Vanilla JS Is Enough

Most of jQuery’s once-killer features, such as selectors, AJAX, events, and animations, are now native in JavaScript:

document.querySelectorAll() replaces $().

fetch() replaces $.ajax().

element.classList replaces .addClass()/.removeClass().

element.animate() handles animations.

If I’m just toggling classes or making a fetch call, adding jQuery is wasteful.
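
In code, the swap is nearly one-to-one. A sketch, with placeholder selectors and endpoint; the fetch lines assume an async context (an async function or a module):

// was $('.card').toggleClass('highlight')
document.querySelectorAll('.card')
  .forEach(card => card.classList.toggle('highlight'));

// was $.getJSON('/api/items', handler)
const res = await fetch('/api/items');
const items = await res.json();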

Targeting Modern Browsers Only

jQuery’s major draw between 2008 and 2015 was its cross-browser compatibility, which was necessary due to quirks in IE6–IE9. It simply wasn’t practical to write browser-specific JS for all the different versions of IE. With jQuery, the quirks were abstracted away.

Now that IE has been discontinued, that advantage no longer matters.

So if the app I’m working on needs to support only modern browsers, I don’t need most of jQuery’s compatibility layer.

Projects Already Using Modern Tooling

Mixing jQuery and framework code leads to a “hybrid monster” that’s difficult to maintain. 

jQuery can conflict with existing frameworks and cause hard-to-fix bugs. If my project is already written in another framework, I avoid adding jQuery to it.

Alternatives to jQuery

Sometimes, I need a few of jQuery’s features but can’t justify including the whole library. Here are some libraries I use in cases like these.

DOM Selection and Traversal

Native DOM API (most common replacement) using document.querySelector() and document.querySelectorAll()

Cash: jQuery-like API, tiny (~10KB), works with modern browsers

Zepto.js: lightweight jQuery-compatible library for mobile-first projects

AJAX/HTTP Requests

Native fetch() API

Axios: promise-based HTTP client with interceptors and JSON handling.

Event Handling

Native events using element.addEventListener()

delegate-it: small utility for jQuery-style event delegation (see the sketch after this list)
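
If you’d rather not add a dependency, jQuery-style delegation is only a few lines of native code. A sketch, where #list and .delete are placeholder selectors:

// One listener on the container handles clicks on any current or future
// .delete button inside it (what $('#list').on('click', '.delete', fn) did)
document.querySelector('#list').addEventListener('click', (e) => {
  const btn = e.target.closest('.delete');
  if (btn) btn.closest('li').remove();
});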

Animations

CSS transitions and animations (native, GPU-accelerated)

Web Animations API (see the sketch after this list)

GSAP: Powerful animation library, much more capable than .animate() in jQuery.
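
For simple fades and sequences, the native Web Animations API is close to jQuery’s in spirit. A sketch; the id and keyframes are placeholders:

// element.animate() returns an Animation whose .finished promise makes
// chaining straightforward (roughly what a jQuery fadeIn/slide chain did)
const el = document.querySelector('#banner');
el.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 400, fill: 'forwards' })
  .finished
  .then(() => el.animate([{ transform: 'translateY(-8px)' }],
                         { duration: 300, fill: 'forwards' }));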

Utilities

Lodash: collection iteration, object/array utilities, throttling, debouncing

Day.js: date manipulation in a tiny package (instead of jQuery’s date plugins)

All-in-One Mini jQuery Replacements

If you still like a single API but want it lighter than jQuery:

Umbrella JS: ~3KB, jQuery-like API

Bliss: focuses on modern features, syntactic sugar, and chaining

Cash: as mentioned above, the closest modern equivalent

jQuery Still Has a Job

In 2025, jQuery isn’t the cutting-edge choice for building complex, highly interactive single-page applications that it was in the 2010s, and that’s perfectly fine. While modern frameworks dominate the headlines, jQuery remains a reliable, well-understood tool that solves the problems it was designed for, simply and effectively.

In the end, the “right” tool is the one that meets your project’s needs, and for countless developers and businesses, jQuery continues to be that.
Source: https://blog.docker.com/feed/

How to add MCP Servers to OpenAI’s Codex with Docker MCP Toolkit

AI assistants are changing how we write code, but their true power is unleashed when they can interact with specialized, high-precision tools. OpenAI’s Codex is a formidable coding partner, but what happens when you connect it directly to your running infrastructure?

Enter the Docker MCP Toolkit.

The Docker MCP Toolkit, built on the Model Context Protocol (MCP), acts as a secure bridge, allowing AI models like Codex to safely discover and use any of the 200+ MCP servers in the trusted catalog curated by Docker.

In this post, we’ll walk through an end-to-end demo, just like our Claude Code and Gemini CLI tutorials. But this time, we’re pairing Codex with Neo4j MCP servers.

First, we’ll connect Codex to the Neo4j server using the MCP Toolkit. Then, we’ll show a fun example: building a graph of Pokémon species and their types, and exploring the data visually. While playful, this example highlights how Codex + MCP can be applied to real-world, semi-structured data pipelines.

Read on to see how a generic AI assistant, when supercharged with Docker and MCP, can evolve into a specialized data engineering powerhouse!

Why use Codex with Docker MCP

While Codex provides powerful AI capabilities and MCP provides the protocol, Docker MCP Toolkit makes automated data modeling and graph engineering practical. Without containerization, building a knowledge graph means managing local Neo4j installations, dealing with database driver versions, writing boilerplate connection and authentication code, and manually scripting the entire data validation and loading pipeline. A setup that should take minutes can easily stretch into hours for each developer.

Docker MCP Toolkit eliminates this friction:

200+ pre-built MCP servers in the Catalog

One-click deployment through Docker Desktop

Neo4j Data Modeling MCP for schema design and validation

Neo4j Cypher MCP for direct database queries and ingestion

Secure credential management for database passwords

Consistent configuration across Mac, Windows, and Linux

Automatic updates when new server versions are released

We built Docker MCP Toolkit to meet developers where they are. If you’re using Codex, you should be able to engineer a knowledge graph without wrestling with database infrastructure.

The Setup: Connecting Codex to Neo4j Tools

Prerequisites

First, we need to give Codex access to the specialized Neo4j tools. 

Install Codex and run it at least once to get authentication out of the way

Install Docker Desktop 4.40 or later

Enable MCP Toolkit 

Step 1: Add the Neo4j MCP Servers

The Neo4j Cypher and Data Modeling servers are available out-of-the-box in the main MCP Toolkit catalog.

In Docker Desktop, navigate to the MCP Toolkit tab.

Click the Catalog tab.

Search for “Neo4j” and click + Add for both the Neo4j Cypher and Neo4j Data Modeling servers.

They will now appear in your “My servers” list.

Step 2: Connect Codex to the MCP Toolkit

With our tools ready, we run a one-time command to make Codex aware of the MCP Toolkit:

docker mcp-client configure codex

We can also do this from the Docker Desktop UI: navigate to the Clients tab and smash that Connect button next to Codex and any other assistants you use:

Docker will edit the corresponding configuration files, and the next time Codex starts, it will connect to the MCP Toolkit and you’ll have the tools at your disposal!

Step 3: Start and Configure Neo4j

We still need to configure the Neo4j Cypher MCP server to connect to the Neo4j database. We’ll use Codex to start the database and set up the connection. First, we ask Codex to create the container:

› Spin up a Neo4j container for me in Docker please.

Codex will run the necessary Docker command and get our Neo4j container running. You can of course do this manually, use a cloud service, or download the Neo4j Desktop application, but since we’re having fun in Codex, why not make it do these mundane things for us?
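
The command Codex runs will look something like this (a sketch, using the official neo4j image with the standard HTTP and Bolt ports and the neo4j/password credentials that show up later in this walkthrough):

# Start Neo4j with HTTP (7474) and Bolt (7687) exposed
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:latest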

With Neo4j available in the container, we now need to configure the Neo4j Cypher MCP server to connect to it. (The Neo4j Data Modeling MCP server works without any configuration.) To simplify this, you can take a screenshot of the configuration form and ask Codex how to fill it in to connect to the container we just started.

Next, we asked Codex for help, pointing it to a screenshot of the configuration page:

› There is a screenshot in this directory. It has the information we need to configure the MCP server to connect to the Neo4j database. Please look at it and give me the information to configure the MCP server to connect to our Neo4j container.

Codex inspected the running container and determined its authentication details (e.g., NEO4J_AUTH=neo4j/password). It then provided the correct configuration values. We just need to update the Neo4j Cypher MCP server configuration in the Docker Desktop UI with the neo4j-cypher.url (neo4j://localhost:7687) and the password.

Fill in the form for the Neo4j Cypher MCP server in the Docker MCP Toolkit, then restart Codex.

Use Case: Building a Pokémon Graph with Codex + Neo4j

Now we’re ready to tackle our original problem of building the graph of Pokémon species. First of all, enter the YOLO mode by typing /approvals and selecting the mode where it has freedom to do whatever it pleases:

Next, we can throw the full power of Codex models at our task. We give it the full problem statement: 

› Here's the page with all pokemon in generation 1: https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_index_number_in_Generation_I
and here's the types information: https://bulbapedia.bulbagarden.net/wiki/Type

we're going to create a graph with data about these pokemon (ignore MissingNo entries – they are a glitch and ignore other glitched pokemon).
you have a neo4j container running, and the MCP tools for modelling and cypher.

Codex will accept the task, break it down into a plan, and start cranking at it. 

There are a few interesting moments in how it approaches the task, though much depends on the model you configure Codex with. For me it was “model: gpt-5-codex”, which tends to use scripting rather than chaining curl commands, for example.

Here’s the plan you might see if you’re following along:

• I’m preparing to build a Pokemon graph by fetching data from a website using network access. Since this is a complex task, I’ll create a detailed plan involving data gathering, modeling
entities like Pokémon and Types, defining relationships such as evolutions and type effectiveness, and then using tools like MCP Docker to build the Neo4j graph. The first step is to draft
• Updated Plan
└ □ Inspect requirements and gather Generation I Pokémon data with types and evolution chains
□ Define Neo4j graph data model for Pokémon, Types, and relationships
□ Load data into Neo4j via Cypher ingestion and validate

Then it immediately started cranking out Python scripts with requests, BeautifulSoup, and other web-scraping friends to fetch the data from the provided wiki. Interestingly, the Pokémon data is not actually on a single page but spread across subpages for every Pokémon species, so scraping with Python makes a ton of sense over curling the pages and spending tokens on processing the data in the model. And it leaves the scripts behind, so you can verify and reproduce the results if needed.

Eventually the model proceeds to analyze the data, and here the MCP tools become really helpful. You can see below that it’s calling the MCP_DOCKER.validate_data_model from the server we configured earlier.

I'm outlining a comprehensive data model for Pokemon, types, and their relationships with precise property types and units converted for better clarity. I plan to generate ingestion-ready
JSON files via a Python script to integrate cleanly with Neo4j ingestion tools, ensuring the data pipeline is robust and structured.
└ wrote data model to data/gen1_pokemon_model.json

• Called MCP_DOCKER.validate_data_model({"data_model":"data/gen1_pokemon_model.json","return_validated":true})
└ Input validation error: 'data/gen1_pokemon_model.json' is not of type 'object'
….

It took Codex a few attempts to figure out the data model JSON schema and produce one for the Pokémon that satisfied the Neo4j Data Modeling MCP server.

Then it returned to Python for creating the data ingestion script and loaded the data into the Neo4j instance.

A few MCP tool calls later, Codex is querying the data with Cypher (the query language for graph databases), which it can do because it has access to the Neo4j Cypher MCP server. With it, Codex and the MCP servers can answer analytical questions about our data.

– Greedy type-coverage search suggests trios such as (Rhydon, Parasect, Dragonite) or (Rhydon, Parasect, Jynx) hit 13 of the 15 defending types super-effectively; no trio can cover Normal/Rock simultaneously because Normal has no offensive 2× matchup.

Now what’s really fun about Neo4j is that it comes with a terrific console where you can explore the data. 

While our Neo4j container with the Pokémon data is still running we can go to http://localhost:7474, enter neo4j/password credentials and get to explore the data in a visual way. 

Here for example is a subset of the Pokémon and their type relationships.

And if you know Cypher, or have an AI assistant that can generate Cypher queries (and verify they work with an MCP tool call), you can generate more complex projections of your data. For example, this query (generated by Codex) shows all Pokémon, their evolution relationships, and their primary/secondary types.

MATCH (p:Pokemon)
CALL {
  WITH p
  OPTIONAL MATCH (p)-[:EVOLVES_TO*1..]->(evo:Pokemon)
  WITH collect(DISTINCT evo) AS evos
  RETURN [e IN evos WHERE e IS NOT NULL | {node: e, relType: 'EVOLVES_TO'}] AS evolutionConnections
}
CALL {
  WITH p
  OPTIONAL MATCH (p)-[:HAS_TYPE]->(type:Type)
  WITH type
  ORDER BY type.name // ensures a stable primary/secondary ordering
  RETURN collect(type) AS orderedTypes
}
WITH p, evolutionConnections, orderedTypes,
     CASE WHEN size(orderedTypes) >= 1 THEN orderedTypes[0] END AS primaryType,
     CASE WHEN size(orderedTypes) >= 2 THEN orderedTypes[1] END AS secondaryType
WITH p,
     evolutionConnections +
     CASE WHEN primaryType IS NULL THEN [] ELSE [{node: primaryType, relType: 'HAS_PRIMARY_TYPE'}] END +
     CASE WHEN secondaryType IS NULL THEN [] ELSE [{node: secondaryType, relType: 'HAS_SECONDARY_TYPE'}] END AS connections
UNWIND connections AS connection
RETURN p AS pokemon,
       connection.node AS related,
       connection.relType AS relationship
ORDER BY pokemon.name, relationship, related.name;

Turn Your AI Coding Assistant into a Data Engineer, Architect, Analyst and More

While this Pokémon demo is a fun example, it’s also a practical blueprint for working with real-world, semi-structured data. Graph databases like Neo4j are especially well-suited for this kind of work. Their relationship-first model makes it easier to represent the complexity of real-world systems.

In this walkthrough, we showed how to connect OpenAI’s Codex to the Neo4j MCP Servers via Docker MCP Toolkit, enabling it to take on multiple specialized roles:

Data Engineer: Writing Python to scrape and transform web data

Data Architect: Designing and validating graph models using domain-specific tools

DevOps Engineer: Starting services and configuring tools based on its environment

Data Analyst: Running complex Cypher and Python queries to extract insights

In your own projects, you might ask your AI assistant to “Analyze production logs and identify the cause of performance spikes,” “Migrate the user database schema to a new microservice,” or “Model our product catalog from a set of messy CSVs.”

Summary

The Docker MCP Toolkit bridges the gap between powerful AI coding agents and the specialized tools they need to be truly useful. By providing secure, one-click access to a curated catalog of over 200 MCP servers, it enables AI agents to interact with real infrastructure, including databases, APIs, command-line tools, and more. Whether you’re automating data workflows, querying complex systems, or orchestrating services, the MCP Toolkit equips your assistant to work like a real developer. If you’re building with AI coding assistants and want them to go beyond code generation, it’s time to start integrating with the tools your stack already relies on!

Learn more

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers

Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)

Read our tutorial on How to Add MCP Servers to Claude Code with Docker MCP Toolkit

Read our tutorial on How to Add MCP Servers to Gemini CLI with Docker MCP Toolkit

Source: https://blog.docker.com/feed/

AWS Systems Manager Patch Manager launches security updates notification for Windows

AWS Systems Manager announces the launch of security updates notification for Windows patching compliance, which helps customers identify security updates that are available but not approved by their patch baseline configuration. This feature introduces a new patch state called “AvailableSecurityUpdate” that reports security patches of all severity levels that are available to install on Windows instances but do not meet the approval rules in your patch baseline.

As organizations grow, administrators need to maintain secure systems while controlling when patches are applied. The security updates notification helps prevent situations where customers could unintentionally leave instances unpatched when using features like ApprovalDelay with large values. By default, instances with available security updates are marked as Non-Compliant, providing a clear signal that security patches require attention. Customers can also configure this behavior through their patch baseline settings to maintain existing compliance reporting if preferred.

This feature is available in all AWS Regions where AWS Systems Manager is available. To get started with security updates notification for Windows patching compliance, visit the AWS Systems Manager Patch Manager console. For more information about this feature, refer to our user documentation or update your patch baseline with the details here. There are no additional charges for using this feature beyond standard AWS Systems Manager pricing.
Source: aws.amazon.com

Customer managed KMS keys now available for Automated Reasoning checks

AWS announces support for customer managed AWS Key Management Service (KMS) keys in Automated Reasoning checks in Amazon Bedrock Guardrails. This enhancement enables you to use your own encryption keys to protect policy content and tests, giving you full control over key management. Automated Reasoning checks in Amazon Bedrock Guardrails is the first and only generative AI safeguard that helps correct factual errors from hallucinations using logically accurate and verifiable reasoning that explains why responses are correct.

This feature enables organizations in regulated industries like healthcare, financial services, and government to adopt Automated Reasoning checks while meeting compliance requirements for customer-owned encryption keys. For example, a financial institution can now use Automated Reasoning checks to validate loan processing guidelines while maintaining full control over the encryption keys protecting their policy content. When creating an Automated Reasoning policy, you can now select a customer managed KMS key to encrypt your content rather than using the default key.

Customer managed KMS key support for Automated Reasoning checks is available in all AWS Regions where Amazon Bedrock Guardrails is offered: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris). To get started, see the following resources:

Automated Reasoning checks user guide
Amazon Bedrock Guardrails product page
AWS Key Management Service developer guide
Create an Automated Reasoning policy in the Bedrock console

Source: aws.amazon.com

Amazon EC2 C8g instances now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in the AWS Europe (Milan) and AWS Asia Pacific (Hong Kong, Osaka, Melbourne) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances.

Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors.

C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances. To get started, see the AWS Management Console.
Source: aws.amazon.com