Introducing logical replication and decoding for Cloud SQL for PostgreSQL

Last week, we announced the Preview of Datastream, our serverless and easy-to-use change data capture and replication service. Datastream currently supports streaming low-latency data from Oracle and MySQL databases into Google Cloud services such as Cloud Spanner, Cloud SQL, and BigQuery.

We understand that our customers use different tools and technologies, and we want to meet them where they are. Logical replication and decoding, for example, are an inherent and commonly used part of the PostgreSQL ecosystem. Therefore, today we are excited to announce the public preview of logical replication and decoding for Cloud SQL for PostgreSQL. By releasing these capabilities and enabling change data capture (CDC) from Cloud SQL for PostgreSQL, we strengthen our commitment to building an open database platform that meets critical application requirements and integrates seamlessly with the PostgreSQL ecosystem.

Take, for example, a retailer's ecommerce system in which each order is saved in a database. Placing the order in the database is just one part of order processing. How does the inventory get updated? By leveraging CDC, downstream systems can be notified of such changes and act accordingly, in this case updating the inventory in the warehouse.

Another common use case is data analytics pipelines. Businesses want to perform analytics on the freshest data possible. For example, low stock on some products might need to kick off certain logistical processes, such as restocking or alerting. You can leverage logical decoding and replication to move the freshest data from your operational systems into your data pipelines, and from there to your analytics platform, with low latency.

What is logical replication and decoding?

Logical replication enables the mirroring of database changes between two Postgres instances in a storage-agnostic fashion.
Logical replication provides flexibility both in what data can be replicated between instances and in what versions those instances can be running. Logical decoding enables the capture of all changes to tables within a database in different formats, such as JSON or plaintext, to name a few. Once captured, the changes can be consumed through a streaming protocol or a SQL interface.

What problems can I solve with logical replication and decoding?

Here's what you can solve easily with logical replication and decoding:

- Selective replication of sets of tables between instances, so that only relevant data sets need be shared
- Selective replication of table rows between instances, mainly to reduce the size of the data
- Selective replication of table columns from the source, to remove non-essential or sensitive data
- Gathering and merging data from multiple sources to form a data lake
- Streaming fresh data from an operational database to a data warehouse for near-real-time analysis
- Upgrading instances between major versions with near-zero downtime

How can I participate in the public preview?

To get started, check out the documentation for this feature and the release notes. To use this feature in public preview, spin up a new instance of Postgres (any version is fine) and follow the instructions in the documentation.
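To make the decoding side concrete, here is a minimal Python sketch of a CDC consumer. The sample lines mimic the text format of PostgreSQL's test_decoding output plugin; the table name, columns, and parsing helper are our own hypothetical choices, and a real consumer would read these changes from a replication slot (for example via pg_logical_slot_get_changes or a streaming replication connection) rather than from hard-coded strings.

```python
import re

# Matches a change line in the style of the test_decoding plugin, e.g.:
#   table public.orders: INSERT: id[integer]:1 sku[text]:'widget'
LINE_RE = re.compile(r"^table (?P<table>\S+): (?P<op>INSERT|UPDATE|DELETE):(?P<cols>.*)$")
# Matches one "name[type]:value" column; values may be quoted strings.
COL_RE = re.compile(r"(\w+)\[[^\]]+\]:('(?:[^']|'')*'|\S+)")

def parse_change(line):
    """Turn one decoded text line into a structured CDC event, or None."""
    m = LINE_RE.match(line)
    if not m:
        return None  # BEGIN/COMMIT markers and other non-change lines
    cols = {name: value.strip("'") for name, value in COL_RE.findall(m.group("cols"))}
    return {"table": m.group("table"), "op": m.group("op"), "columns": cols}

# Hypothetical sample: one transaction inserting an order row.
sample = [
    "BEGIN 1234",
    "table public.orders: INSERT: id[integer]:1 sku[text]:'widget' qty[integer]:3",
    "COMMIT 1234",
]
events = [e for e in (parse_change(line) for line in sample) if e]
for e in events:
    # A downstream system could react here, e.g. decrement warehouse inventory.
    print(e["op"], e["table"], e["columns"])
```

A consumer like this is how the retail example above could be wired up: each INSERT on the orders table becomes an event that triggers the inventory update.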
Source: Google Cloud Platform

New Cloud TPU VMs make training your ML models on TPUs easier than ever

Today, we're excited to announce new Cloud TPU VMs, which make it easier than ever to use our industry-leading TPU hardware by providing direct access to TPU host machines, offering a new and improved user experience for developing and deploying TensorFlow, PyTorch, and JAX on Cloud TPUs. Instead of accessing Cloud TPUs remotely over the network, Cloud TPU VMs let you set up your own interactive development environment on each TPU host machine.

Now you can write and debug an ML model line by line using a single TPU VM, then scale it up on a Cloud TPU Pod slice to take advantage of the super-fast TPU interconnect. You have root access to every TPU VM you create, so you can install and run any code you wish in a tight loop with your TPU accelerators. You can use local storage, execute custom code in your input pipelines, and more easily integrate Cloud TPUs into your research and production workflows. In addition to the Cloud TPU integrations with TensorFlow, PyTorch, and JAX, you can even write your own integrations via a new libtpu shared library on the VM.

"Direct access to TPU VMs has completely changed what we're capable of building on TPUs and has dramatically improved the developer experience and model performance."
— Aidan Gomez, co-founder and CEO, Cohere

A closer look at the new Cloud TPU architecture

Until now, you could only access Cloud TPUs remotely. You would typically create one or more VMs that would then communicate with Cloud TPU host machines over the network using gRPC. By contrast, Cloud TPU VMs run on the TPU host machines that are directly attached to TPU accelerators. This new Cloud TPU system architecture is simpler and more flexible. In addition to major usability benefits, you may also achieve performance gains because your code no longer needs to make round trips across the datacenter network to reach the TPUs.
Furthermore, you may also see significant cost savings: if you previously needed a fleet of powerful Compute Engine VMs to feed data to remote hosts in a Cloud TPU Pod slice, you can now run that data processing directly on the Cloud TPU hosts and eliminate the need for the additional Compute Engine VMs.

What customers are saying

Early-access customers have been using Cloud TPU VMs since last October, and several teams of researchers and engineers have used them intensively since then. Here's what they have to say.

Alex Barron is a Lead Machine Learning Engineer at Gridspace. Gridspace provides an out-of-the-box product for observing, analyzing, and automating 100% of voice calls in real time. The company's software powers voice operations at USAA, Bloomberg, and Square, among other leading companies.

"At Gridspace we've been using JAX and Cloud TPU VMs to train massive speech and language models. These models power advanced analytics and automation capabilities inside our largest contact center customers. We saw an immediate 2x speedup over the previous Cloud TPU offering for training runs on the same size TPU and were able to scale to a 32-host v3-256 with no code changes. We've been incredibly satisfied with the power and ease of use of Cloud TPU VMs and we look forward to continuing to use them in the future."
— Alex Barron

James Townsend is a researcher at the UCL Queen Square Institute of Neurology in London. His team has been using JAX on Cloud TPU VMs to apply deep learning to medical imaging.

"Google Cloud TPU VMs enabled us to radically scale up our research with minimal implementation complexity. There is a low-friction pathway from implementing a model and debugging on a single TPU device up to multi-device and multi-host (pod-scale) training. This ease of use, at this scale, is unique, and is a game changer for us in terms of research possibilities.
I'm really excited to see the impact this work can have."
— James Townsend

Patrick von Platen is a Research Engineer at Hugging Face. Hugging Face is an open-source provider of natural language processing (NLP) technologies and the creator of the popular Transformers library. With Hugging Face, researchers and engineers can leverage state-of-the-art NLP models with just a couple of lines of code.

"At Hugging Face we've recently integrated JAX alongside TensorFlow and PyTorch into our Transformers library. This has enabled the NLP community to efficiently train popular NLP models, such as BERT, on Cloud TPU VMs. Using a single v3-8, it is now possible to pre-train a base-sized BERT model in less than a day using a batch size of up to 2048. At Hugging Face, we believe that providing easy access to Cloud TPU VMs will make pre-training of large language models possible for a much wider spectrum of the NLP community, including small start-ups as well as educational institutions."
— Patrick von Platen

Ben Wang is an independent researcher who works on Transformer-based models for language and multimodal applications. He has published open-source code for training large-scale transformers on Cloud TPU VMs and for orchestrating training over several Cloud TPU VMs with Ray.

"JAX on Cloud TPU VMs enables high-performance direct access to TPUs along with the flexibility to build unconventional training setups, such as pipeline-parallel training across preemptible TPU pod slices using Ray."
— Ben Wang

Keno Fischer is a core developer of the Julia programming language and co-founder of Julia Computing, where he leads a team applying machine learning to scientific modeling and simulation. He is the author of significant parts of the Julia compiler, including Julia's original TPU backend.

"The new TPU VM offering is a massive step forward for the usability of TPUs on the cloud.
By being able to take direct advantage of the TPU hardware, we are no longer limited by the bandwidth and latency constraints of an intermediate network connection. This is of critical importance in our work, where machine learning models are often directly coupled to scientific simulations running on the host machine."
— Keno Fischer

The Julia team is working on a second-generation Cloud TPU integration using the new libtpu shared library. Please sign up here to receive updates.

And finally, Shrestha Basu Mallick is a Product Manager on the Sandbox@Alphabet team, which has successfully adapted TPUs for classical simulations of quantum computers and to perform large-scale quantum chemistry computations.

"Thanks to Google Cloud TPU VMs, and the ability to seamlessly scale from 1 to 2048 TPU cores, our team has built one of the most powerful classical simulators of quantum circuits. The simulator is capable of evolving a wavefunction of 40 qubits, which entails manipulating one trillion complex amplitudes! TPU scalability has also been key to enabling our team to perform quantum chemistry computations of huge molecules, with up to 500,000 orbitals. We are very excited about Cloud TPUs."
— Shrestha Basu Mallick

Pricing and availability

Cloud TPU VMs are now available in preview in the us-central1 and europe-west4 regions. You can use single Cloud TPU devices as well as Cloud TPU Pod slices, and you can choose TPU v2 or TPU v3 accelerator hardware. Cloud TPU VMs are available for as little as $1.35 per hour per TPU host machine with our preemptible offerings. You can find additional pricing information here.

Get started today

You can get up and running quickly and start training ML models using JAX, PyTorch, and TensorFlow with Cloud TPUs and Cloud TPU Pods in any of our available regions. Check out our documentation to get started:

- JAX quickstart
- PyTorch quickstart
- TensorFlow quickstart
Source: Google Cloud Platform

AWS Wavelength is now compliant with System and Organization Controls (SOC)

All services in AWS Wavelength are now compliant with the AWS System and Organization Controls (SOC) 1, 2, and 3. The SOC report can be downloaded from AWS Artifact. AWS maintains its certifications through extensive audits of its controls. This ensures that information security risks that could affect the confidentiality, integrity, and availability of company and customer data are successfully addressed.
Source: aws.amazon.com

Introducing Amazon Kinesis Data Analytics Studio: Interactively query data streams and build stream processing applications powered by Apache Flink

Amazon Kinesis Data Analytics Studio is now available, enabling you to interactively query data streams in real time and to easily build and run stream processing applications using SQL, Python, and Scala. With a few clicks, you can launch a serverless notebook to run ad-hoc queries and live data exploration on data streams and get results in seconds. From the notebook interface, you can also easily build and deploy your code as a stream processing application with durable state and automatic scaling, continuously generating actionable real-time insights without additional development effort.
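To illustrate the kind of query such a notebook runs, here is a pure-Python sketch of a tumbling-window aggregation, the logic one might express in Studio as Flink SQL (for example with GROUP BY over a TUMBLE window). This is not the Flink or Kinesis API; the event schema and window size below are made-up examples of the underlying idea.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=10):
    """Count events per (window_start, key) over fixed, non-overlapping windows.

    `events` is an iterable of (timestamp_seconds, key) pairs; each event falls
    into exactly one window, the one whose start is ts rounded down to a
    multiple of window_seconds.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical clickstream sample: (timestamp, event type).
events = [(1, "checkout"), (4, "checkout"), (9, "search"), (12, "checkout")]
print(tumbling_window_counts(events))
# {(0, 'checkout'): 2, (0, 'search'): 1, (10, 'checkout'): 1}
```

A streaming engine like Flink computes the same grouping incrementally and emits each window's result as it closes, rather than over a finished list.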
Source: aws.amazon.com

AWS Launch Wizard adds support for SQL Server Always On failover cluster instances deployed on Amazon FSx for Windows File Server

You can now use AWS Launch Wizard to deploy Microsoft SQL Server Always On failover cluster instances (FCIs) with fully managed shared storage on Amazon FSx for Windows File Server (Amazon FSx). AWS Launch Wizard offers a simple console-based experience for deploying a highly available SQL Server FCI environment without having to manually identify and provision individual AWS resources. Using Amazon FSx as shared storage reduces the complexity and cost of operating your SQL Server Always On deployments.
Source: aws.amazon.com

Announcing Amazon CloudWatch Resource Health

Amazon CloudWatch Resource Health is a new feature that lets you automatically discover, manage, and visualize the health and performance of your Amazon EC2 (Amazon Elastic Compute Cloud) hosts across your applications in a single view. With Resource Health, you can visualize the health of your Amazon EC2 hosts in a map (or list) view by performance dimensions such as CPU or memory, and slice hundreds of hosts using tags and available filters such as instance type, instance state, and status check. This helps reduce mean time to resolution (MTTR) by making it easy to isolate EC2 hosts that are performing suboptimally.
Source: aws.amazon.com