AWS Secrets Manager now publishes secret usage metrics to Amazon CloudWatch

AWS Secrets Manager now publishes a metric for the number of secrets in your account to Amazon CloudWatch. With this feature, you can easily check how many secrets you are using in Secrets Manager. You can also set alarms for an unexpected increase or decrease in the number of secrets.
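As an illustration (not part of the announcement), a minimal sketch of such an alarm using boto3. The metric name SecretCount and namespace AWS/SecretsManager are assumptions based on the announcement; verify them against the Secrets Manager documentation. The threshold and SNS topic are hypothetical placeholders.

```python
# Minimal sketch: alarm on unexpected growth in the account's secret count.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="secrets-manager-secret-count-high",
    Namespace="AWS/SecretsManager",  # assumed namespace
    MetricName="SecretCount",        # assumed metric name
    Statistic="Average",
    Period=3600,                     # the metric is emitted periodically
    EvaluationPeriods=1,
    Threshold=100,                   # hypothetical: alert above 100 secrets
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)
```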
Source: aws.amazon.com

Radio Cloud Native – Week of May 5th, 2022

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news. This week they discussed:

- Kubernetes reaches 1.24
- CNCF considers working group to examine environmental impact
- Surveys on the state of the cloud marketplace
- Tests of Esperanto’s new RISC-V AI chip

You can watch the full replay below: … Continued
Source: Mirantis

Extending BigQuery Functions beyond SQL with Remote Functions, now in preview

Today we are announcing the Preview of BigQuery Remote Functions. Remote Functions are user-defined functions (UDFs) that let you extend BigQuery SQL with your own custom code, written and hosted in Cloud Functions, Google Cloud’s scalable, pay-as-you-go functions-as-a-service offering. A remote UDF accepts columns from BigQuery as input, performs actions on that input using a Cloud Function, and returns the result of those actions as a value in the query result. With Remote Functions, you can now write custom SQL functions in Node.js, Python, Go, Java, .NET, Ruby, or PHP. This means you can personalize BigQuery for your company and leverage the same management and permission models, all without having to manage a server.

In what type of situations could you use remote functions?

Before today, BigQuery customers could create UDFs in either SQL or JavaScript that ran entirely within BigQuery. While these functions are performant and fully managed from within BigQuery, customers expressed a desire to extend BigQuery UDFs with their own external code. Here are some examples of what they have asked for:

- Security and compliance: Use data encryption and tokenization services from the Google Cloud security ecosystem for external encryption and de-identification. We’ve already started working with key partners like Protegrity and CyberRes Voltage on using these external functions as a mechanism to merge BigQuery into their security platforms, which will help our mutual customers address strict compliance controls.
- Real-time APIs: Enrich BigQuery data using external APIs to obtain the latest stock price data, weather updates, or geocoding information.
- Code migration: Migrate legacy UDFs or other procedural functions written in Node.js, Python, Go, Java, .NET, Ruby, or PHP.
- Data science: Encapsulate complex business logic and score BigQuery datasets by calling models hosted in Vertex AI or other machine learning platforms.

Getting Started

Let’s go through the steps to use a BigQuery remote UDF.

Set up the BigQuery connection:

1. Create a BigQuery connection.
   a. You may need to enable the BigQuery Connection API.

Deploy a Cloud Function with your code:

1. Deploy your Cloud Function.
   a. You may need to enable the Cloud Functions API.
   b. You may need to enable the Cloud Build APIs.
2. Grant the BigQuery connection’s service account access to the Cloud Function.
   a. One way to find the service account is the bq CLI show command:

```
bq show --location=US --connection $CONNECTION_NAME
```

Define the BigQuery remote UDF:

1. Create the remote UDF definition within BigQuery.
   a. One way to find the endpoint name is the gcloud CLI functions describe command:

```
gcloud functions describe $FUNCTION_NAME
```

Use the BigQuery remote UDF in SQL:

1. Write a SQL statement as you would when calling any UDF.
2. Get your results! (A short Python sketch of this last step follows below.)
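As an illustration (not part of the original post), here is a minimal sketch of that last step using the google-cloud-bigquery Python client; the dataset and function name demo.my_remote_udf are hypothetical placeholders for a remote UDF you have already defined:

```python
# Minimal sketch: invoking a (hypothetical) remote UDF from a query.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      name,
      `demo.my_remote_udf`(name) AS transformed
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    LIMIT 10
"""

# BigQuery batches rows into calls to the Cloud Function;
# the results come back as ordinary columns.
for row in client.query(query).result():
    print(row["name"], row["transformed"])
```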
How remote functions can help you with common data tasks

Let’s take a look at some examples of how using BigQuery with remote UDFs can help accelerate development and enhance data processing and analysis.

Encryption and decryption

As an example, let’s create a simple custom encryption and decryption Cloud Function in Python. The encryption function receives the data and returns an encrypted, base64-encoded string. In the same Cloud Function, the decryption function receives an encrypted, base64-encoded string and returns the decrypted string. A data engineer can then enable this functionality in BigQuery.

The Cloud Function receives the data as an HTTP request and determines which function you want to invoke. The additional userDefinedContext fields allow you to send additional pieces of data to the Cloud Function:

```python
import base64
import json

from Crypto.Cipher import AES  # pycryptodome


def remote_security(request):
    request_json = request.get_json()
    mode = request_json['userDefinedContext']['mode']
    calls = request_json['calls']
    not_extremely_secure_key = 'not_really_secure'
    if mode == "encryption":
        return encryption(calls, not_extremely_secure_key)
    elif mode == "decryption":
        return decryption(calls, not_extremely_secure_key)
    return json.dumps({"Error in Request": request_json}), 400
```

The result is returned in a specific JSON-formatted response that BigQuery parses:

```python
def encryption(calls, not_extremely_secure_key):
    return_value = []
    for call in calls:
        data = call[0].encode('utf-8')
        cipher = AES.new(
            not_extremely_secure_key.encode('utf-8')[:16],
            AES.MODE_EAX
        )
        cipher_text = cipher.encrypt(data)
        # Prepend the nonce so decryption can recover it; the slice
        # strips the b'...' wrapper from the repr.
        return_value.append(
            str(base64.b64encode(cipher.nonce + cipher_text))[2:-1]
        )
    return json.dumps({"replies": return_value})


def decryption(calls, not_extremely_secure_key):
    # The original post omits this counterpart; it is sketched here so
    # the dispatcher above is complete. Split off the 16-byte EAX nonce
    # that encryption() prepended, then decrypt.
    return_value = []
    for call in calls:
        raw = base64.b64decode(call[0])
        cipher = AES.new(
            not_extremely_secure_key.encode('utf-8')[:16],
            AES.MODE_EAX,
            nonce=raw[:16]
        )
        return_value.append(cipher.decrypt(raw[16:]).decode('utf-8'))
    return json.dumps({"replies": return_value})
```

This Python code is deployed to Cloud Functions, where it waits to be invoked. Let’s add the user-defined function to BigQuery so we can invoke it from a SQL statement. The additional user_defined_context is sent to Cloud Functions as extra context in the request payload, so you can map multiple remote functions to one endpoint:

```sql
CREATE OR REPLACE FUNCTION `<project-id>.demo.decryption` (x STRING) RETURNS STRING
REMOTE WITH CONNECTION `<project-id>.us.my-bq-cf-connection`
OPTIONS (
  endpoint = 'https://us-central1-<project-id>.cloudfunctions.net/remote_security',
  user_defined_context = [("mode", "decryption")]
)
```

Once we’ve created our functions, users with the right IAM permissions can use them in SQL on BigQuery. If you’re new to Cloud Functions, be aware that there are very small delays known as “cold starts”. The neat thing is that you can call APIs as well, which is how our partners at Protegrity and Voltage enable their platforms to perform encryption and decryption of BigQuery data.

Calling APIs to enrich your data

Users such as data analysts can easily use these user-defined functions without needing other tools and without moving data out of BigQuery. You can enrich your dataset with many more APIs, for example the Google Cloud Natural Language API, to analyze sentiment in your text without having to use another tool:

```python
import json

from google.cloud import language_v1


def call_nlp(calls):
    return_value = []
    client = language_v1.LanguageServiceClient()
    for call in calls:
        text = call[0]
        document = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        sentiment = client.analyze_sentiment(
            request={"document": document}
        ).document_sentiment
        return_value.append(str(sentiment.score))
    return json.dumps({"replies": return_value})
```

Once the Cloud Function is deployed and the remote UDF definition is created in BigQuery, you can invoke the NLP API and use the data it returns in your queries, as sketched below.
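To round out the example (an illustration, not from the original post), registering and calling this sentiment function might look like the following; the function name demo.analyze_sentiment, the connection, and the table demo.product_reviews are hypothetical placeholders:

```python
# Illustration: defining the remote UDF for call_nlp and using it in a
# query. All names here are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE FUNCTION `demo.analyze_sentiment` (x STRING) RETURNS STRING
    REMOTE WITH CONNECTION `<project-id>.us.my-bq-cf-connection`
    OPTIONS (
      endpoint = 'https://us-central1-<project-id>.cloudfunctions.net/call_nlp'
    )
""").result()

rows = client.query("""
    SELECT review, `demo.analyze_sentiment`(review) AS sentiment_score
    FROM `demo.product_reviews`
""").result()

for row in rows:
    print(row["sentiment_score"], row["review"])
```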
Custom Vertex AI endpoints

Data scientists can integrate Vertex AI endpoints and other APIs for custom models, all from the SQL console. Remember that remote UDFs are meant for scalar execution. You can deploy a model to a Vertex AI endpoint, which is just another API, and then call that endpoint from Cloud Functions:

```python
import json

from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import predict
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

# Vertex AI endpoint details (placeholders)
project = "<project-id>"
location = "us-central1"
endpoint_id = "<endpoint-id>"
client_options = {"api_endpoint": f"{location}-aiplatform.googleapis.com"}


def predict_classification(calls):
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    endpoint = client.endpoint_path(
        project=project, location=location, endpoint=endpoint_id
    )
    # Call the endpoint for each input row
    replies = []
    for call in calls:
        content = call[0]
        instance = predict.instance.TextClassificationPredictionInstance(
            content=content,
        ).to_value()
        instances = [instance]
        parameters_dict = {}
        parameters = json_format.ParseDict(parameters_dict, Value())
        response = client.predict(
            endpoint=endpoint, instances=instances, parameters=parameters
        )
        # The original post stops at the predict call; collecting the
        # predictions into BigQuery's expected reply shape is added here
        # for completeness.
        replies.append(str(response.predictions))
    return json.dumps({"replies": replies})
```

Try it out today

Try out BigQuery remote UDFs today!
Source: Google Cloud Platform

Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases

Enterprises are struggling to free themselves from legacy database systems and need an alternative option for modernizing their applications. Today at Google I/O, we’re thrilled to announce the preview of AlloyDB for PostgreSQL, a fully managed, PostgreSQL-compatible database service that provides a powerful option for modernizing your most demanding enterprise database workloads. In our performance tests, AlloyDB was more than four times faster than standard PostgreSQL for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster than Amazon’s comparable service for transactional workloads. This makes AlloyDB a powerful new option for transitioning off of legacy databases.

As organizations modernize their database estates in the cloud, many struggle to eliminate their dependency on legacy database engines. In particular, enterprise customers are looking to standardize on open systems such as PostgreSQL to eliminate expensive, unfriendly licensing and the vendor lock-in that comes with legacy products. However, running and replatforming business-critical workloads on an open source database can be daunting: teams often struggle with performance tuning, disruptions caused by vacuuming, and managing application availability. AlloyDB combines the best of Google’s scale-out compute and storage, industry-leading availability, security, and AI/ML-powered management with full PostgreSQL compatibility, paired with the performance, scalability, manageability, and reliability benefits that enterprises expect for their mission-critical applications.

As noted by Carl Olofson, Research Vice President, Data Management Software, IDC: “Databases are increasingly shifting into the cloud and we expect this trend to continue as more companies digitally transform their businesses. With AlloyDB, Google Cloud offers large enterprises a big leap forward, helping companies to have all the advantages of PostgreSQL, with the promise of improved speed and functionality, and predictable and transparent pricing.”

AlloyDB is the next major milestone in our journey to support customers’ heterogeneous migrations. For example, we recently added Oracle-to-PostgreSQL schema conversion and data replication capabilities to our Database Migration Service, while our new Database Migration Program helps you accelerate your move to the cloud with tooling and incentive funding.

“Developers have many choices for building, innovating and migrating their applications. AlloyDB provides us with a compelling relational database option with full PostgreSQL compatibility, great performance, availability and cloud integration. We are really excited to co-innovate with Google and can now benefit from enterprise grade features while cost-effectively modernizing from legacy, proprietary databases.”
—Bala Natrajan, Sr. Director, Data Infrastructure and Cloud Engineering, PayPal

Let’s dive into what makes AlloyDB unique

With AlloyDB, we’re tapping into decades of experience designing and managing some of the world’s most scalable and available database services, bringing the best of Google to the PostgreSQL ecosystem. At AlloyDB’s core is an intelligent, database-optimized storage service built specifically for PostgreSQL. AlloyDB disaggregates compute and storage at every layer of the stack, using the same infrastructure building blocks that power large-scale Google services such as YouTube, Search, Maps, and Gmail.
This unique technology allows it to scale seamlessly while offering predictable performance. Additional investments in analytical acceleration, embedded AI/ML, and automatic tiering of data mean that AlloyDB is ready to handle any workload you throw at it, with minimal management overhead.

Finally, we do all this while maintaining full compatibility with PostgreSQL 14, the latest version of the advanced open source database, so you can reuse your existing development skills and tools and migrate your existing PostgreSQL applications with no code changes, benefiting from the entire PostgreSQL ecosystem. Furthermore, by using PostgreSQL as the foundation of AlloyDB, we’re continuing our commitment to openness while delivering differentiated value to our customers.
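To make that compatibility concrete (an illustration, not from the announcement): because AlloyDB is PostgreSQL-compatible, existing PostgreSQL drivers and tools connect unchanged. A minimal sketch using the standard psycopg2 driver; the host, credentials, and table queried here are hypothetical placeholders:

```python
# Minimal sketch: connecting to AlloyDB exactly as you would to any
# PostgreSQL 14 server. Host, credentials, and table are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5",       # AlloyDB instance IP (placeholder)
    port=5432,             # standard PostgreSQL port
    dbname="postgres",
    user="postgres",
    password="...",        # store real credentials in a secret manager
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # reports PostgreSQL compatibility
    print(cur.fetchone()[0])

    # Existing application queries run unchanged.
    cur.execute("SELECT COUNT(*) FROM orders WHERE status = %s", ("open",))
    print(cur.fetchone()[0])
```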
“We have been so delighted to try out the new AlloyDB for PostgreSQL service. With AlloyDB, we have significantly increased throughput, with no application changes to our PostgreSQL workloads. And since it’s a managed service, our teams can spend less time on database operations, and more time on value added tasks.”
—Sofian Hadiwijaya, CTO and Co-Founder, Warung Pintar

With AlloyDB you can modernize your existing applications with:

1. Superior performance and scale

AlloyDB delivers superior performance and scale for your most demanding commercial-grade workloads. AlloyDB is four times faster than standard PostgreSQL and two times faster than Amazon’s comparable PostgreSQL-compatible service for transactional workloads. Multiple layers of caching, automatically tiered based on workload patterns, give customers best-in-class price/performance.

2. Industry-leading availability

AlloyDB provides a high-availability SLA of 99.99%, inclusive of maintenance. AlloyDB automatically detects and recovers from most database failures within seconds, independent of database size and load. AlloyDB’s architecture also supports non-disruptive instance resizing and database maintenance: the primary instance can resume normal operations in seconds, while replica pool updates are fully transparent to users. This ensures that customers have a highly reliable, continuously available database for their mission-critical workloads.

“We are excited about the new PostgreSQL-compatible database. AlloyDB will bring more scalability and availability with no application changes. As we run our e-commerce platform and its availability is important, we are specially expecting AlloyDB to minimize the maintenance downtime.”
—Ryuzo Yamamoto, Software Engineer, Mercari (Souzoh, Inc.)

3. Real-time business insights

AlloyDB delivers analytical queries up to 100 times faster than standard PostgreSQL. This is enabled by a vectorized columnar accelerator that stores data in memory in an optimized columnar format for faster scans and aggregations. This makes AlloyDB a great fit for business intelligence, reporting, and hybrid transactional and analytical (HTAP) workloads. Even better, the accelerator is auto-populated, so you can improve analytical performance with the click of a button.

“At PLAID, we are developing KARTE, a customer experience platform. It provides advanced real-time analytics capabilities for vast amounts of behavioral data to discover deep insights and create an environment for communicating with customers. AlloyDB is fully compatible with PostgreSQL and can transparently extend column-oriented processing. We think it’s a new powerful option with a unique technical approach that enables system designs to integrate isolated OLTP, OLAP, and HTAP workloads with minimal investment in new expertise. We look forward to bringing more performance, scalability, and extensibility to our analytics capabilities by enhancing data integration with Google Cloud’s other powerful database services in the future.”
—Takuya Ogawa, Lead Product Engineer, PLAID

4. Predictable, transparent pricing

AlloyDB makes keeping costs in check easier than ever. Pricing is transparent and predictable, with no expensive, proprietary licensing and no opaque I/O charges. Storage is automatically provisioned, and customers are only charged for what they use, with no additional storage costs for read replicas. A free, ultra-fast cache, automatically provisioned in addition to instance memory, lets you maximize price/performance.

5. ML-assisted management and insights

Like many managed database services, AlloyDB automatically handles database patching, backups, scaling, and replication for you. But it goes several steps further by using adaptive algorithms and machine learning for PostgreSQL vacuum management, storage and memory management, data tiering, and analytics acceleration. It learns about your workload and intelligently organizes your data across memory, an ultra-fast secondary cache, and durable storage. These automated capabilities simplify management for DBAs and developers.

AlloyDB also empowers customers to better leverage machine learning in their applications. Built-in integration with Vertex AI, Google Cloud’s artificial intelligence platform, allows users to call models directly within a query or transaction. That means high throughput, low latency, and augmented insights, without having to write any additional application code.

Get started with AlloyDB

A modern database strategy plays a critical role in developing great applications faster and delivering new experiences to your customers. The AlloyDB launch is an exciting milestone for Google Cloud databases, and we’re thrilled to see how you use it to drive innovation across your organization and regain control and freedom of your database workloads.

To learn more about the technology innovations behind AlloyDB, check out this deep dive into its intelligent storage system. Then visit cloud.google.com/alloydb to get started and create your first cluster. You can also review the demos and launch announcements from Google I/O 2022.

Related article: AlloyDB for PostgreSQL under the hood: Intelligent, database-aware storage. In this technical deep dive, we take a look at the intelligent, scalable storage system that powers AlloyDB for PostgreSQL.
Source: Google Cloud Platform