Now Available: Multi Target Replication on SAP HANA for Red Hat Enterprise Linux

The Red Hat Enterprise Linux (RHEL) High Availability Add-On, which is included with Red Hat Enterprise Linux for SAP Solutions, is an automated high availability solution that increases reliability and stability by eliminating single points of failure and minimizing unscheduled downtime in scale-up and scale-out SAP HANA, SAP S/4HANA, and SAP NetWeaver deployments. 
Source: CloudForms

Latest Mirantis OpenStack for Kubernetes Delivers Support for Centrally-Managed, Distributed Edge Deployments

Provides a broader range of deployment options and enhanced connectivity. CAMPBELL, Calif., September 9, 2021 — Mirantis, the open cloud company, today announced the availability of Mirantis OpenStack 21.4, which includes a number of important enhancements that enable enterprises to support a broader range …
Source: Mirantis

Sqlcommenter now extending the vision of OpenTelemetry to databases

Database observability is important to every DevOps team. In order to troubleshoot a slow-running application, developers, DBAs, data engineers, or SREs use a variety of tools for Application Performance Monitoring (APM) that need access to database activity. This makes it imperative for database telemetry to be easily accessible and integrated seamlessly with your choice of tooling to get end-to-end observability. And that's why today we're announcing that we are merging Sqlcommenter, an open source object-relational mapping (ORM) auto-instrumentation library, with OpenTelemetry, an open source observability framework. This merge will enable application-focused database observability with open standards.

To easily correlate application and database telemetry, we open-sourced Sqlcommenter earlier this year, which was followed by great adoption from the developer community. Sqlcommenter enables ORMs to augment SQL statements before execution with comments containing information about the application code that caused their execution. This simplifies the process of correlating slow queries with source code and provides insights into backend database performance. Sqlcommenter also allows OpenTelemetry trace context information to be propagated to the database, enabling correlation between application traces and database query plans. The original post includes an example query log with SQL comments added by Sqlcommenter for the Sequelize ORM.

Application developers can use observability information from Sqlcommenter to analyze slow query logs, or that observability information can be integrated into other products, such as Cloud SQL Insights or APM tools from Datadog, Dynatrace, and Splunk, to provide application-centric monitoring.

Extending the vision of OpenTelemetry to databases

OpenTelemetry, which is now the second most active Cloud Native Computing Foundation (CNCF) open-source project behind Kubernetes, makes it easy to create and collect telemetry data from your services and software, then forward that data to a variety of Application Performance Monitoring tools. But before today, OpenTelemetry lacked a common standard by which application tags and traces could be sent to databases and correlated with an application stack. To extend the vision of OpenTelemetry to databases, we merged Sqlcommenter with OpenTelemetry to unlock a rich choice of database observability tools for developers.

Bogdan Drutu, co-founder of OpenCensus and OpenTelemetry and Senior Principal Software Engineer at Splunk, offered his perspective: "With Google Cloud's Sqlcommenter contribution to OpenTelemetry, a vendor-neutral open standard and library will enable a rich ecosystem of Application Performance Monitoring tools to easily integrate with databases, unlocking a rich choice of tools for database observability for developers."

OpenTelemetry, Google Cloud and our partners

We believe that a healthy observability ecosystem is necessary for end-to-end application stack visibility, and this is reflected in our continued commitment to open-source initiatives. This belief is shared by other key contributors to the ecosystem, including Datadog, Dynatrace, and Splunk.

Datadog: "Visibility into database performance and its impact on applications is critical to engineering. A poorly performing query can impact every other layer of the stack, making troubleshooting difficult. Sqlcommenter bridges the gap between application requests and database queries, allowing APM users to troubleshoot requests in their entirety at all levels, from frontend to data tiers. As early contributors to OpenTelemetry, we are extremely pleased to see this contribution from Google Cloud, as it brings us closer to the vision of open-standards-based observability." – Ilan Rabinovitch, SVP Product, Datadog

Dynatrace: "Observing datasets across thousands of companies, we've identified database access patterns and performance as among the top reasons for poorly performing applications. Dynatrace, a core contributor to OpenTelemetry, natively supports telemetry data generated by Sqlcommenter in problem analysis, change detection, and real-time optimization analytics. We see the combination of Sqlcommenter and OpenTelemetry helping developers understand the impact of their database queries and making it easier to collaborate to optimize application performance." – Alois Reitbauer, VP, Chief Technology Strategist, Dynatrace

Splunk: "Splunk and Google have been behind OpenTelemetry since day one, and the merging of Sqlcommenter into OpenTelemetry means Splunk Observability Cloud customers can further empower developers with application-centric database monitoring, accelerating their DevOps journeys for databases." – Morgan McLean, Co-founder of OpenCensus & OpenTelemetry and Director of Product Management, Splunk

An example of how Cloud SQL Insights uses Sqlcommenter to simplify observability for developers

Troubleshooting databases is hard in modern application architectures

Today's microservice-based architectures redefine an application as an interconnected mesh of services, many of which are third-party and/or open source. This can make it challenging to understand the source of system performance issues. When a database is involved, it becomes even harder to correlate application code with database performance. Cloud SQL Insights leverages Sqlcommenter to simplify database troubleshooting in distributed architectures.

Application-centric database monitoring

If you are an on-call engineer and an alert goes off indicating a database problem, it can be hard to identify which microservices may be impacting your databases. Existing database monitoring tools only provide a query-centric view, leaving a disconnect between applications and queries. To empower developers to monitor their databases through the lens of an application, Cloud SQL Insights uses the information sent by Sqlcommenter to identify the top application tags (model, view, controller, route, user, host, etc.) sent by the application. In the Insights dashboard, users get a holistic view of performance organized by business function rather than by query, making it easy to identify which service is causing the database to be slow.

End-to-end tracing

Knowing which microservice and query is causing the problem is not enough; you also need to quickly determine which part of the application code is responsible. To get an end-to-end application trace, Sqlcommenter allows OpenTelemetry trace context information to be propagated to the database. With Cloud SQL Insights, query plans are generated as traces using the traceparent context information from the SQL comments. Since the trace ID is created by the application, and the parent span ID and trace ID are sent to the database as SQL comments, end-to-end tracing from application to database is now possible.
The original post also shows application trace spans from OpenTelemetry alongside query plan trace spans from the Node.js Express Sqlcommenter library.

Contributing to Sqlcommenter

Today Sqlcommenter is available for the Python, Java, Node.js, and Ruby languages and supports the Django, SQLAlchemy, Hibernate, Knex, Sequelize, and Rails ORMs. All of these Sqlcommenter libraries will be available as part of the CNCF project. With OpenTelemetry community support there is an opportunity to extend it to many more languages and ORMs. You can join the OpenTelemetry Slack channel or check out the Special Interest Groups (SIGs) community.

Related article: Introducing Sqlcommenter: An open source ORM auto-instrumentation library – Sqlcommenter is an open source library that enables ORMs to augment SQL statements with comments before execution.
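
To make the comment format concrete, here is a minimal, hypothetical Python sketch of how an ORM hook in the spirit of Sqlcommenter could append key='value' comments, including a W3C traceparent value, to a SQL statement before execution. The helper name, the tag set, and the query are illustrative assumptions, not the library's actual API.

```python
import urllib.parse


def add_sqlcomment(sql: str, tags: dict) -> str:
    """Append Sqlcommenter-style key='value' comments to a SQL statement.

    Values are URL-encoded and wrapped in a trailing /* ... */ comment,
    which the database ignores but tools like Cloud SQL Insights can parse.
    """
    encoded = ",".join(
        f"{key}='{urllib.parse.quote(str(value), safe='')}'"
        for key, value in sorted(tags.items())
    )
    return f"{sql} /*{encoded}*/"


# Hypothetical application tags plus a W3C trace context value.
tags = {
    "controller": "OrderController",
    "route": "/orders",
    "framework": "sequelize",
    "traceparent": "00-5bd66ef5095369c7b0d1f8f4bd33716a-c532cb4098ac3dd2-01",
}

print(add_sqlcomment("SELECT * FROM orders WHERE user_id = 42", tags))
# -> SELECT * FROM orders WHERE user_id = 42 /*controller='OrderController',framework='sequelize',route='%2Forders',traceparent='00-5bd66ef5095369c7b0d1f8f4bd33716a-c532cb4098ac3dd2-01'*/
```

Because the annotation is just a SQL comment, a tool reading the query log or the query plan can recover the route, controller, and trace context without any change to the database itself.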
Source: Google Cloud Platform

Ad agencies choose BigQuery to drive campaign performance

Advertising agencies are faced with the challenge of providing the precision data that marketers require to make better decisions at a time when customers' digital footprints are rapidly changing. They need to transform customer information and real-time data into actionable insights that inform clients on what to execute to ensure the highest campaign performance. In this post, we'll explore how two of our advertising agency customers are turning to Google BigQuery to innovate, succeed, and meet the next generation of digital advertising head on.

Net Conversion eliminated legacy toil to reach new heights

Paid marketing and comprehensive analytics agency Net Conversion has made a name for itself with its relentless attitude and data-driven mindset. But like many agencies, Net Conversion felt limited by traditional data management and reporting practices. A few years ago, Net Conversion was still using legacy data servers to mine and process data across the organization, and analysts relied heavily on Microsoft Excel spreadsheets to generate reports. The process was lengthy, fragmented, and slow, especially when spreadsheets exceeded the million-row limit.

To transform, Net Conversion built Conversionomics, a serverless platform that leverages BigQuery, Google Cloud's enterprise data warehouse, to centralize all of its data and handle all of its data transformation and ETL processes. BigQuery was selected for its serverless architecture, high scalability, and integration with tools that analysts were already using daily, such as Google Ads, Google Analytics, and Data Hub. After moving to BigQuery, Net Conversion discovered surprising benefits that streamlined reporting processes beyond initial expectations. For instance, many analysts had started using Google Sheets for reports, and BigQuery's native integration with Connected Sheets gave them the power to analyze billions of rows of data and generate visualizations right where they were already working.

"If you're still sending Excel files that are larger than 1MB, you should explore Google Cloud." – Kenneth Eisinger, Manager of Paid Media Analytics at Net Conversion

Since modernizing its data analytics stack, Net Conversion has saved countless hours that can now be spent on taking insights to the next level. Plus, BigQuery's advanced data analytics capabilities and robust integrations have opened up new roads to offer more dynamic insights that help clients better understand their audiences. For instance, Net Conversion recently helped a large grocery retailer launch a more targeted campaign that significantly increased downloads of its mobile application. The agency was able to better understand and predict customers' needs by analyzing buyer behavior across the website, the mobile application, and purchase history. Net Conversion analyzed website data in real time with BigQuery, ran analytics on mobile app data through Firebase's integration with BigQuery, and enriched these insights with sales information from the grocery retailer's CRM to generate propensity models that accurately predicted which customers would be most likely to install the mobile app.

WITHIN helped companies weather the COVID storm

WITHIN is a performance branding company focused on helping brands maximize growth by fusing marketing and business goals together in a single funnel.
During the COVID-19 health crisis, WITHIN became an innovator in the ad agency world by sharing real-time trends and insights with customers through its Marketing Pulse Dashboard. This dashboard was part of the company's path to adopting BigQuery for data analytics transformation. Prior to using BigQuery, WITHIN used a PostgreSQL database to house its data and manual reporting. Not only was the team responsible for managing and maintaining the server, which took focus away from data analytics, but query latency issues often slowed them down. BigQuery's serverless architecture, blazing-fast compute, and rich ecosystem of integrations with other Google Cloud and partner solutions made it possible to query rapidly, automate reporting, and completely get rid of CSV files.

Using BigQuery, WITHIN is able to run customer lifetime value (LTV) analytics and quickly share the insights with clients in a collaborative Google Sheet. To improve the effectiveness of campaigns across marketing channels, WITHIN further segments the data into high- and low-LTV cohorts and shares the predictive insights with clients for in-platform optimizations. By distilling these LTV insights from BigQuery, WITHIN has been able to use them to empower campaigns on Google Ads, with a few notable success stories:

- WITHIN worked with a pet food company to analyze historical transactional data and model the predicted LTV of new customers. They found significant differences between product categories and between autoship and single-order customers, and they implemented LTV-based optimization. As a result, they saw a 400% increase in average customer LTV.
- WITHIN helped a coffee brand increase its customer base by 560%, with the projected 12-month LTV of newly acquired customers jumping a staggering 1280%.

Through integration with Google AI Platform Notebooks, BigQuery also advanced WITHIN's ability to use machine learning (ML) models. Today, the team can build and deploy models to predict dedicated campaign impact across channels without moving the data. The integration of clients' LTV data through Google Ads has also influenced how WITHIN structures clients' accounts and how it makes performance optimization decisions. Now, WITHIN can capitalize on the entire data lifecycle: ingesting data from multiple sources into BigQuery, running data analytics, and empowering people with data by automatically visualizing it right in Google Data Studio or Google Sheets.

"A year ago, we delivered client reporting once a week. Now, it's daily. Customers can view real-time campaign performance in Data Studio — all they have to do is refresh." – Evan Vaughan, Head of Data Science at WITHIN

Having a consistent nomenclature and being able to stitch together a unified code name has allowed WITHIN to scale its analytics. Today, WITHIN has been able to create an internal Media Mix Modeling (MMM) tool with the help of Google Cloud that it is trialing with clients. An unseen benefit of BigQuery was that it put WITHIN in a position to remain nimble and spot trends before other agencies when COVID-19 hit. This aggregated view of data allowed WITHIN to provide unique insights to serve its customers better and advise them on rapidly evolving conditions.

Ready to modernize your data analytics?
Learn more about how Google BigQuery unlocks the insights hidden in your data.

Related article: Query BIG with BigQuery: A cheat sheet – Organizations rely on data warehouses to aggregate data from disparate sources, process it, and make it available for data analysis in s…
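
As a rough illustration of the LTV cohort segmentation described above, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names, as well as the 500-unit threshold, are hypothetical placeholders rather than WITHIN's actual schema.

```python
from google.cloud import bigquery

# Hypothetical dataset and table names, used purely for illustration.
QUERY = """
WITH customer_ltv AS (
  SELECT
    customer_id,
    SUM(order_value) AS ltv_12m
  FROM `my-project.analytics.orders`  -- placeholder table
  WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 12 MONTH)
  GROUP BY customer_id
)
SELECT
  customer_id,
  ltv_12m,
  IF(ltv_12m >= 500, 'high_ltv', 'low_ltv') AS cohort  -- simple threshold split
FROM customer_ltv
"""

client = bigquery.Client()           # uses application default credentials
rows = client.query(QUERY).result()  # runs the query and waits for completion

for row in rows:
    print(row.customer_id, row.cohort, row.ltv_12m)
```

The same result set could just as easily be surfaced in Connected Sheets or Data Studio instead of printed, which is the pattern the agencies above describe.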
Source: Google Cloud Platform

To the cloud and beyond! Migration Enablement with Google Cloud’s Professional Services Organization

Google Cloud's Professional Services Organization (PSO) engages with customers to ensure effective and efficient operations in the cloud, from the time they begin considering how cloud can help them overcome their operational, business, or technical challenges, to the time they're looking to optimize their cloud workloads. We know that all parts of the cloud journey are important and can be complex. In this blog post, we want to focus specifically on the migration process and how PSO engages in a myriad of activities to ensure a successful migration.

As a team of trusted technical advisors, PSO approaches migrations in three phases:

- Pre-Migration Planning
- Cutover Activities
- Post-Migration Operations

While this post will not cover in detail all of the steps required for a migration, it will focus on how PSO engages in specific activities to meet customer objectives, manage risk, and deliver value. We will discuss the assets, processes, and tools that we leverage to ensure success.

Pre-Migration Planning

Assess Scope

Before the migration happens, you will need to understand and clarify the future state that you're working towards. From a logistical perspective, PSO will help you with capacity planning to ensure sufficient resources are available for your envisioned future state. While migrating to the cloud does eliminate many of the physical, logistical, and financial concerns of traditional data centers and colocations, it does not remove the need for active management of quotas, preparation for large migrations, and forecasting. PSO will help you forecast your needs in advance and work with the capacity team to adjust quotas, manage resources, and ensure availability.

Once the future state has been determined, PSO will also work with the product teams to determine any gaps in functionality. PSO captures feature requests across Google Cloud services and makes sure they are understood, logged, tracked, and prioritized appropriately with the relevant product teams. From there, PSO works closely with the customer to determine any interim workarounds that can be leveraged while waiting for the feature to land, as well as providing updates on the upcoming roadmap.

Develop Migration Approach and Tooling

Within Google Cloud, we have a library of assets and tools we use to assist in the migration process. These assets have helped us complete migrations for other customers efficiently and effectively. Based on the scoping requirements and the tooling available to assist in the migration, PSO will recommend a migration approach. We understand that enterprises have specific needs, differing levels of complexity and scale, and regulatory, operational, or organizational challenges that need to be factored into the migration. PSO will help customers think through the different migration options and how all of these considerations will play out.

PSO will work with the customer team to determine the best approach for moving servers from on-prem to Google Cloud, walking customers through options such as refactoring, lift-and-shift, or new installs. From there, the customer can determine the best fit for their migration. PSO will provide guidance on best practices and use cases from other customers with similar needs. Google offers a variety of cloud-native tools that can assist with asset discovery, the migration itself, and post-migration optimization.
As one example, PSO will help project managers determine the tooling that best accommodates the customer's requirements for migrating servers, and will engage the Google product teams to ensure the customer fully understands the capabilities of each tool and the best fit for the use case. Google understands that, from a tooling perspective, one size does not fit all, so PSO will work with the customer to determine the best migration approach and tooling for different requirements.

Cutover Activities

Once all of the planning activities have been completed, PSO will assist in making sure the cutover is successful. During and leading up to critical customer events, PSO can provide proactive event management services, which deliver increased support and readiness for key workloads. Beyond having a solid architecture and infrastructure on the platform, support for this infrastructure is essential, and Technical Account Managers (TAMs) will help ensure that there are additional resources to support and unblock the customer when challenges arise.

As part of event management activities, PSO liaises with the Google Cloud Support Organization to ensure quick remediation and high resilience when challenges arise. A war room is usually created to facilitate quick communication about the critical activities and roadblocks that come up. These war rooms give customers a direct line to the support and engineering teams that will triage and resolve their issues.

Post-Migration Activities

Once cutover is complete, PSO will continue to provide support in areas such as incident management, capacity planning, continuous operational support, and optimization to ensure the customer is successful from start to finish. PSO will serve as the liaison between the customer and Google engineers. If support cases need to be escalated, PSO will ensure the appropriate parties are involved and work to get the case resolved in a timely manner.

Through operational rigor, PSO will work with the customer to determine whether certain Google Cloud services will benefit the customer's objectives. If services will add value, PSO will help enable them so that they align with the customer's goals and current cloud architecture. Where there are gaps in services, PSO will proactively work with the customer and Google engineering teams to close them by enabling additional functionality. PSO will also continue to work with the engineering teams to regularly review the customer's cloud architecture and provide recommendations to ensure an optimal, cost-efficient design that adheres to Google's best-practices guidelines.

Aside from migrations, PSO is also responsible for providing continuous Google Cloud training to customers. PSO will work with the customer to jointly develop a learning roadmap that ensures the customer has the necessary skills to deliver successful projects on Google Cloud.

Conclusion

Google PSO will be actively engaged throughout the customer's cloud journey to ensure the necessary guidance, methodology, and tools are presented to the customer. PSO will engage in a series of activities from pre-migration planning to post-migration support, in key areas ranging from capacity planning, to ensure sufficient resources are allocated for future workloads, to supporting technical troubleshooting cases.
PSO will serve as a long-term trusted advisor who will be the voice of the customer and help ensure the reliability and stability of the customer's Google Cloud environment.

Get in touch if you'd like to engage with our PSO team on your migration, or get started with a free discovery and assessment of your current IT landscape.

Reference material
- Migration service kit
- Migration trip reports
Source: Google Cloud Platform

The Magic Behind the Scenes of Docker Desktop

With all the recent changes, quite a few people have been talking about Docker Desktop and trying to understand what it actually does on their machines. A few people have asked, "Is it just a container UI?"

Great developer tools are magic for new developers and save experienced developers a ton of time. This is what we set out to do with Docker Desktop. Docker Desktop is designed to let you build, share and run containers as easily on Mac and Windows as you do on Linux. Docker handles the tedious and complex setup so you can focus on writing code. 

Some of the magic Docker Desktop takes care of for developers includes:

- A secure, optimized Linux VM that runs Linux tools and containers
- Seamless plumbing into the host OS, giving containers access to the filesystem and networking
- Bundled container tools, including Kubernetes, Docker Compose, BuildKit, and vulnerability scanning
- The Docker Dashboard for visually managing all your container content
- A simple one-click installer for Mac and Windows
- Preconfigured sane and secure defaults
- Automatic incremental updates to keep your system running securely

Let’s dive into some of these in more detail!

Start with a single package 

Starting from the top, Docker Desktop comes as one single package for Mac or Windows: a single installer that, in one click, sets up everything you need to use Docker in seconds.

But what is it that Docker Desktop is installing when it does this?

Built securely and maintained by Docker

At the heart of Docker Desktop we have a lightweight LinuxKit VM that Docker manages for you. 

This means we can take care of tricky issues with real customer impact, as with the previous work on Docker Desktop for Mac. As well as setting up this VM, Docker Desktop keeps it up to date for you over time by applying kernel patches and other security fixes as required. This gives you the peace of mind that you don't have another machine image to manage in your estate; Docker looks after it for you. This VM is where all of the Linux tools that we include run, and where, in turn, all of your Linux containers run when you are using Docker Engine.

On Windows we run this VM under WSL 2, which lets us give all of your WSL 2 distros access to Docker simply by toggling them on in the UI. If you want to learn more about the WSL 2 backend, check out Introducing the Docker Desktop WSL 2 Backend. On Mac (on both Intel and M1 machines), we are currently transitioning away from our previous HyperKit implementation to Apple's new Virtualization framework to run this VM.

Docker Desktop also provides a graphical interface to manage the settings for this VM. On Mac we provide tools to change the resources it has access to (CPU, RAM, etc.), and on Windows we provide tools to choose which distros can access it. Being in a VM also means we can limit which areas of the file system on your host machine can be accessed by the containers running in the VM. This is great for security, as you know exactly which files anything you run in containers could possibly access, and you can keep this locked down.

Integrating with the host machine 

Just having a VM doesn't make this magic. As most of you who have used Docker Desktop will have noticed, you don't need to "go into a VM" to use Docker; instead, everything works as if it were running natively on your local machine. This is achieved through networking and file system integrations with the VM that make it feel like a seamless piece of your local machine.

With networking, Docker Desktop maps your local host ports to those in the VM, meaning you can run a container on, say, port 80 in the VM and access it from the browser on your local host to see what you are running. Along with this, it uses VPNKit to keep networking seamless, as if each container were running as a native app on the host, even when your IT department has configured a complicated VPN policy or requires the use of network proxies. Docker Desktop automates all of this and provides a simple UI for making changes as you need.
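
As a small illustration of that port plumbing, here is a sketch using the Docker SDK for Python (the docker package); the image, host port, and container name are arbitrary examples.

```python
import docker

client = docker.from_env()  # talks to the Docker Desktop engine via the local socket/named pipe

# Publish container port 80 on host port 8080; Docker Desktop wires this
# through the VM so http://localhost:8080 works directly on your Mac or Windows host.
web = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
    name="desktop-port-demo",
)

print(web.status)  # "created" or "running"

# Clean up when done.
web.stop()
web.remove()
```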

Along with networking, we also have file system integration: Docker Desktop sets up bind mounts from your host to the VM, giving you access to your local files (as you choose!) inside the VM. Filesystem change notifications (fsnotify/inotify) work transparently, automatically triggering a page reload when source code changes. Docker Desktop also lets you route back from the container to the host, allowing Docker containers to access local services running on the host. If you want to learn more about the file sharing implementation on Mac, check out Dave's deep dive blog post, Deep Dive Into the New Docker Desktop filesharing Implementation Using FUSE.
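
To illustrate both directions of that integration, here is a sketch, again using the Docker SDK for Python. The bind-mounted path and the service assumed to be listening on host port 5000 are examples; host.docker.internal is the special name Docker Desktop resolves to the host.

```python
import os
import docker

client = docker.from_env()

# Bind-mount the current directory into the container; edits on the host
# show up immediately inside /app thanks to Docker Desktop's file sharing.
output = client.containers.run(
    "alpine:3.18",
    command=["ls", "/app"],
    volumes={os.getcwd(): {"bind": "/app", "mode": "rw"}},
    remove=True,
)
print(output.decode())

# Reach a service running on the host (e.g. a dev server assumed on port 5000).
# Docker Desktop resolves host.docker.internal to the host machine.
output = client.containers.run(
    "alpine:3.18",
    command=["wget", "-qO-", "http://host.docker.internal:5000/health"],
    remove=True,
)
print(output.decode())
```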

The best container tools included 

All of this integration into the VM is great, but without the contents of the VM it wouldn't give you much. This is why we install, and keep up to date, the best Linux container tooling for you inside the VM.

What most people think of as the 'Docker' experience is a lot more now than just the Docker Engine; it is a setup of multiple tools that together produce a seamless environment for developers to work with their containers. The heart of this is still the Docker Engine, an OCI-compatible container runtime included as part of Docker Desktop. Docker Desktop also bundles the Docker CLI to provide access to it, and includes Docker Compose 2.0 as well, allowing developers to work with their favorite multi-container manifest format locally.

Docker Desktop also includes BuildKit and buildx as part of the Docker CLI, giving developers access to faster builds and empowering them to build for x86 or ARM from any local machine. Along with this, Docker Desktop includes tools for scanning your images for vulnerabilities (docker scan), for working with your content and teams on Docker Hub (hub-tool), and the ability to connect and deploy to AWS ECS and Microsoft Azure ACI straight from the CLI (docker context).
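
To make the multi-architecture build workflow concrete, here is a minimal sketch that drives the buildx CLI from Python. The image name and platform list are placeholder examples, and --push assumes you are already logged in to a registry.

```python
import subprocess

# Placeholder image name; replace with your own repository.
IMAGE = "example.com/myorg/myapp:latest"

# buildx (bundled with Docker Desktop) builds for several CPU architectures
# at once, using QEMU emulation for the non-native ones.
cmd = [
    "docker", "buildx", "build",
    "--platform", "linux/amd64,linux/arm64",
    "--tag", IMAGE,
    "--push",  # push the multi-arch manifest to the registry
    ".",       # build context: current directory containing a Dockerfile
]

subprocess.run(cmd, check=True)
```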

These aren't the only Linux container tools in Docker Desktop. We appreciate that there is a great community of tools, and we are continuing to review which of them we should also include as part of the developer experience. The first of these to be introduced was support for Kubernetes (K8s) in Docker Desktop: in one click you can install and set up K8s, with a load balancer and access to your local image store, ready to run clusters.
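
As a quick way to poke at that built-in cluster from code, here is a sketch using the official Kubernetes Python client. It assumes Kubernetes has been enabled in Docker Desktop, which creates a kubectl context conventionally named docker-desktop.

```python
from kubernetes import client, config

# Load the kubeconfig context that Docker Desktop creates when K8s is enabled.
config.load_kube_config(context="docker-desktop")

v1 = client.CoreV1Api()

# The single-node cluster should report one node, typically named "docker-desktop".
for node in v1.list_node().items:
    name = node.metadata.name
    version = node.status.node_info.kubelet_version
    print(f"node={name} kubelet={version}")
```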

Graphical controls 

All of these core components of Docker Desktop come with a simple graphical interface to help you control and manage their settings. Nestled in the menu bar on Mac and the system tray on Windows, you will find the Docker Desktop whale icon, which lets you open settings, control core actions, and jump into the Docker Dashboard.

The Docker Dashboard provides you with a simplified UI to manage your core Docker components on Docker Desktop. The Docker Dashboard now supports the management of Docker images locally and in Docker Hub, management of local running containers and the ability to manage and explore your Docker volumes. 

Portable developer tooling

Docker Desktop also includes new features like Dev Environments. With Dev Environments developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Dev Environment, they can share their work-in-progress code and dependencies in one click via Docker Hub. They can then switch between their developer environments or their teammates’ environments, moving between branches to look at work-in-progress changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment.

Multi-architecture support

Along with all of these tools, Docker Desktop supports you in using them on whatever system architecture you choose. With support for Apple M1 ARM Macs and QEMU included in Docker Desktop, you are able to build and use multi-architecture images (Linux x86, ARM, Windows) on whatever platform you are working on, out of the box.
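
As a sketch of that emulation in action, the snippet below pulls the arm64 variant of an image and runs it regardless of the host architecture; the image and tag are arbitrary examples, and Docker Desktop's bundled QEMU does the heavy lifting for non-native platforms.

```python
import docker

client = docker.from_env()

# Explicitly pull the arm64 variant of Alpine, even on an x86 host;
# Docker Desktop's bundled QEMU makes it runnable via emulation.
image = client.images.pull("alpine", tag="3.18", platform="linux/arm64")

# "uname -m" inside the container should print "aarch64" regardless of
# the host architecture.
output = client.containers.run(image, command=["uname", "-m"], remove=True)
print(output.decode().strip())
```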

As with all of these components, Docker keeps them in sync and working together, with the latest security fixes applied automatically for you. This keeps your team aligned, working with the same tools, and secure.

And with a Docker subscription, if you have issues getting any of these items to work for your team, you get support to unblock you and keep all of your developers productive.

Get started

To get started, download Docker Desktop for Mac or Windows. To learn more about using Docker for your developer workflows, check out our documentation on Orientation and setup. We are continuing to build new features for all Desktop users and are keen to hear what you need, so let us know on our roadmap!

Finally, we will be showing off some of the next generation of innovation across Docker, including new features and sneak previews for Docker Desktop, at our September Community All Hands meeting. The free event takes place Thursday, September 16th, from 8 AM to 11 AM Pacific time; register today.
Source: https://blog.docker.com/feed/

New Full-Text Search Capabilities for Non-String Data in Amazon Neptune

Amazon Neptune now supports searching on new data types, such as numbers and dates, in addition to strings, when full-text search is integrated with Elasticsearch. This enhancement allows Neptune customers to replicate non-string values to an Elasticsearch cluster, such as one provided by Amazon Elasticsearch Service, in order to run Gremlin or SPARQL queries that search on these values.
Source: aws.amazon.com