Investigating the usage of GCP Service Accounts

Service accounts on Google Cloud are used when a workload needs to access resources or conduct actions without end-user involvement. There are multiple methods of authenticating with service accounts, including attaching a service account to a Google Compute Engine instance, impersonating a service account, or using a service account with a key file, an option that should be carefully considered. A common objective is to achieve keyless service account architectures on Google Cloud, but this can be difficult across an entire organization. There are a number of reasons why teams may opt to generate service account keys, ranging from developer validation to third-party software integration requirements. In this post, we will look at ways you can reduce risk when the use of service account keys can't be avoided. We'll focus on understanding the usage of a service account within Google Cloud, which can reduce the risk of unintended application failures when rotating service account keys. Let's get started!

Guidance from the CIS Benchmarks

The Center for Internet Security (CIS) Benchmarks provide a set of recommended security hardening guidelines. For Google Cloud Platform, CIS has most recently published the CIS Google Cloud Platform Foundation Benchmark version 1.2.0, which offers a series of controls, descriptions, rationales, impacts, audit steps, and remediations to improve your overall security posture across foundational GCP services.

Let's take a look at a direct example from Section 1.7 of the CIS Google Cloud Platform Foundation Benchmark version 1.2.0. The benchmark reads: "Ensure user-managed/external keys for service accounts are rotated every 90 days or less (Automated)."

This section is accompanied by a rationale, impact, CLI commands and more, providing valuable insight into the what, the why, and the how of meeting this benchmark. CIS indicates that rotating service account keys "reduces the window of opportunity" for access associated with a potentially compromised account. The remediation is to rotate your service account keys:

1. Audit and identify keys older than 90 days
2. Delete keys in scope
3. Create a new key (if needed)

Looking at the above three steps, the process is seemingly straightforward. However, there are nuanced use cases that you should be aware of. When deleting a service account key, you effectively eliminate all current and future access for the associated private key. This can result in unintended consequences, such as loss of access for applications, pipelines, or third-party integrations that depended on the underlying key file. So how can we guard against this?

Investigating the access rights and usage of a Service Account

One method is to conduct an investigation of the access and usage of the GCP service account and service account key. Let's bring in three GCP services: Policy Analyzer, Policy Intelligence, and Cloud Logging. This tooling can help us identify the impact of deleting our intended service account key. For our investigation, we should consider the following questions:

1. What can this service account do? (Policy Analyzer)

IAM roles in GCP define what a service account can do. Roles are inherited hierarchically from the organization node to folders to projects. IAM roles can also be defined on many resources within a GCP project, including GCS buckets, KMS key rings, service accounts, and more. Writing a script to determine all IAM bindings that a service account holds can be quite tedious. The pseudocode could look something like the sketch below.
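(This is an illustrative sketch only; get_iam_policy and record are placeholder helpers rather than actual API calls, and the service account email uses the same placeholders as the rest of this post.)

for each project in the organization:
    for each resource in the project:
        policy = get_iam_policy(resource)
        for each binding in policy.bindings:
            if "serviceAccount:[YOUR_SA]@[YOUR_PROJECT_ID].iam.gserviceaccount.com" in binding.members:
                record(resource, binding.role)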
If your organization happens to have more than just a few resources, this approach quickly becomes too tiresome to process meaningfully. So what can we do? That's where Policy Analyzer comes in to save the day! Policy Analyzer enables access visibility for audit-related tasks and allows for queries across the entire organization. The key components of a Policy Analyzer query are the query scope, the principal (in our case, the service account), and a set of advanced options. The result will be the set of all roles on all resources that the service account has been granted across the entire organization. For ease and reuse, here is a templated link: https://console.cloud.google.com/iam-admin/analyzer/query;identity=serviceAccount:[YOUR_SA]@[YOUR_PROJECT_ID].iam.gserviceaccount.com;expand=groups;scopeResource=[YOUR_ORG_ID];scopeResourceType=2/report?organizationId=[YOUR_ORG_ID]&supportedpurview=project

This query can assist significantly in determining the range of access that this service account may have. The service account could have access in a single GCP project, access at the organization level, or access across arbitrary resources. Using Policy Analyzer enables us to fully understand where our service account may be used.

2. When was this Service Account last used? (Policy Intelligence)

Understanding when our service account was last used can also be valuable. For example, if our service account has not been used in the past year, it is likely safe to assume that it can be deleted. The Policy Intelligence Activity Analyzer allows us to query the activity of a service account, and supports querying Service Account Last Authentication or Service Account Key Last Authentication for a given GCP project. Knowing that the service account authenticated during a given observation period is an indicator of whether the service account has been used. If a service account has recently been used (I'll leave the definition of recent up to you and your organization), you may want to exercise more caution before deleting the service account key. Additionally, using Policy Intelligence to identify the last authentication for a key can provide even more granularity if you happen to have more than one service account key (this is not recommended practice). On the other hand, if we do not see recent activity for the queried service account, we can have more confidence that deleting the service account is unlikely to have unintended consequences. Policy Intelligence is project-scoped, so if our service account has roles across multiple projects, we will need to run this query within each project. Also, Policy Intelligence returns a result from a given observation period. The observation period may not include the most recent activity (such as an activity from a few minutes ago), so we can also refer to Cloud Logging…

3. What has this service account done recently? (Cloud Logging)

In order to determine everything our service account has done recently, we will need to leverage Cloud Logging. We will query for a list of activities over a specific timeframe, and for a given service account key, we can go one step further and narrow the same query to that key. Sketches of both queries are shown below.
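As a minimal sketch (the service account email and timestamp are placeholders; adjust them to your environment), a Logs Explorer query for all audit-logged activity by the service account could look like:

protoPayload.authenticationInfo.principalEmail="[YOUR_SA]@[YOUR_PROJECT_ID].iam.gserviceaccount.com"
timestamp >= "2021-11-01T00:00:00Z"

To narrow the results to a single key, the audit log's authenticationInfo also records which key signed the request, so a key-scoped variant might add a filter such as:

protoPayload.authenticationInfo.serviceAccountKeyName:"[KEY_ID]"

Treat these filters as a starting point rather than a definitive query for your environment.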
When conducting this Cloud Logging-based investigation, there are a few watchpoints. First, not all log types in GCP (for example, data access logs or VPC flow logs) are enabled by default. If we want that level of granularity, we need to ensure the corresponding logs are enabled on the relevant resources. Second, querying across multiple projects may be tedious, so we may wish to make a risk-based decision around the key rotation or, alternatively, export our logs to a centralized location through a logging sink and analyze them holistically in BigQuery or a similar tool.

Concluding thoughts

You should now have a good sense of some of the risks around service account key use and strategies to mitigate them. First, try to avoid service account key creation whenever possible. Using GCP-managed options such as workload identity, the virtual machine's attached service account, or service account impersonation can limit the number of use cases requiring generation of service account keys. Second, if you do have user-managed service account keys, ensure that you rotate them. 90 days is a reasonable baseline, but ultimately it is better to automate rotation on a shorter cycle. Finally, if you have a project with service account keys whose intended usage you do not know (as might happen in a single project shared across many users), leverage the three GCP tooling recommendations of log queries, Policy Analyzer, and Policy Intelligence to reduce the chance of unintended application failures occurring upon key deletion. While these recommendations do not provide a foolproof mechanism for identifying usage of your service accounts, they should improve your confidence in your ability to safely rotate your service account keys and reduce risk in the process.
Source: Google Cloud Platform

Betting on Cloud SQL pays off for oddschecker

oddschecker is the UK's largest sports odds comparison website. We collect bookmaker pricing for various sports teams, games, and best prices in multiple territories and present it to our customers in a collated view. This unique view benefits our customers because of the pricing differences between one bookmaker and another. In the UK, we work with 25 different bookmakers, and we recently launched a U.S. site as well as Spanish and Italian platforms, each with their own set of bookmakers. In the spring of 2018, we successfully migrated to Google Cloud. We were interested in Google from the start because we knew they would be far better at managing our database needs than we ever could be. We decided to take advantage of their managed services: Cloud SQL for MySQL, Memorystore for Redis, and Google Kubernetes Engine (GKE). As a result of the move, we've been able to free up developer time that used to be spent on the day-to-day hassles of database management. Because of our move to Cloud SQL, we have set the organization up to make further architecture and roadmap innovations in the future.

Google Cloud's managed services were a sure bet

We chose Google Cloud because it supports the most popular engines, MySQL, PostgreSQL and SQL Server, which means we can work the way we want to. Specifically, we opted for Cloud SQL primarily because of its ease of use. We were originally using on-premises databases, so we had to have large, custom virtual machines (VMs), disks, and cards to get the power we needed. Prior to the migration to Google Cloud, we were running in a private data center on custom hardware using MySQL 5.2, with about half a terabyte of data. When testing Cloud SQL, we replicated our system and found Cloud SQL performed the way we were hoping it would. Though we had initially considered a hybrid migration, where we would run parts through the cloud and chip away at making the full move over time, we ultimately decided to go all in with Google Cloud. We spun up an entire oddschecker infrastructure in parallel, then did an overnight migration. After backing up and restoring to Cloud SQL, everything was ready to go.

Cloud SQL covers the performance spread and then some

We were counting on Cloud SQL to meet our performance demands while radically transforming customer experiences, and it delivered everything we needed. The oddschecker site features an aggregation of platforms with a convenient, collated view that shows the odds and the status of each game. We pull 8,000 updates a second from our different operators because our customers need to get prices quickly; otherwise they're out of date, irrelevant, and, ultimately, bad for business. Things are now running on a large, single Cloud SQL database: a 64-CPU machine with a terabyte of storage that auto-adjusts its size as needed (we use about 800 gigabytes of that consistently). We're able to handle a couple thousand reads per second on average and about half that number of writes, running on MySQL 7.0. Because it's our one source of truth for all onsite data, the database is critical. This content includes tips and articles, as well as our user database for new onsite customer registrations. We also have our hierarchy. It's like a tree of information that structures each sport and team, all the way down to the markets and the matches customers bet on.
In addition, we keep sports data, odds, and commentary; in short, basically every data point on the site comes from that Cloud SQL database.

GKE provides the juice for delivering prices to customers

Currently, more than 90% of our workload, including our website, runs on GKE, with a few VMs running some of our legacy kit as well. We have multiple abstractions in our GKE clusters, and we pull information from the various platforms, each through their own APIs. When data comes in via the ingress, our API reroutes it down to the services underneath. From there, it eventually falls into our proxies and on to Cloud SQL. On average, we try to deliver a price to customers within seven seconds of its publication by an operator. That's complicated by some language processing that needs to happen, since bookmakers often call the same teams by different names. We have to do aggregation as well as the odds comparison through a complex, homegrown mapping system that normalizes the data. Once again, Google Cloud delivered by making it possible for us to come through for our customers in a big way.

Memorystore for Redis delivers the cache, consistent key value stores, and more

As for Memorystore for Redis, we're using versions 3.2 and 4.0, with different teams using it for different purposes. We have 16 Memorystore instances running, with our API being one of the largest. Other instances include the caching of site content, price updates, and unmapped bets. We also use it as a key value store and to keep some of our services stateless. This way, if we want to autoscale, we have a consistent key value store that doesn't live inside the app, so we can quickly share odds data across services. For some data, we don't want to be hitting the databases as much, so we have it sitting in Redis instances for ease of lookup. If you look at the odds data grid for each game on our site, it shows the market: all the prices for a specific bet. One line across is one key value, stored in Redis, which we can look up and share across services.

Removing headaches and breaking monoliths

As we suspected, Google Cloud is better at managing, patching, and upgrading our database than we'd ever be. Since our migration, we haven't had to touch the database, except for some minor sizing. As for future goals, we'd like to break up our monolithic database because it's become less nimble over the years, in part to alleviate blast radius concerns. Over the past three years, we've been building functional platforms, like our recently completed, reengineered backend aggregation platform. It performs the same use cases so we can start chipping away at the monolith.

The instance contains 30-40 critical and non-critical databases, and we don't want to have read replicas or failover on all of them. Another goal is to move away from storing content in the database, which accounts for about half a terabyte. By adopting more of a microservices architecture, our services could be more flexible if each had their own part of the database. For example, we could have a completely different Cloud SQL profile for each. Some databases are write-heavy, some are read-heavy, and some are both. We could have custom, individually scalable machine types that cater to those use cases and provide a general improvement in functionality across the site.
We could also begin breaking apart the database so we have the freedom to change when we know that it's not going to impact the other databases.

In conclusion, Cloud SQL and Google Cloud's suite of fully integrated managed services helped oddschecker make a painless move to the cloud and meet the demands of odds comparison, where every second matters. Learn more about oddschecker and Cloud SQL. You can also check out how BBVA is using Cloud SQL for its next-generation IT initiatives.
Source: Google Cloud Platform

Google Cloud IDS signature updates to help detect CVE-2021-44228 Apache Log4j vulnerability

NIST has announced a recent vulnerability (CVE-2021-44228) in the Apache Log4j library. To help with detection, Google Cloud IDS customers can now monitor and detect attempted exploits of CVE-2021-44228.

Background

The Apache Log4j utility is a commonly used component for logging requests. On December 9, 2021, a vulnerability was reported that could allow a system running Apache Log4j version 2.14.1 or below to be compromised and allow an attacker to execute arbitrary code. On December 10, 2021, NIST published a critical Common Vulnerabilities and Exposures alert, CVE-2021-44228. More specifically, Java Naming and Directory Interface (JNDI) features used in configuration, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI-related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from remote servers when message lookup substitution is enabled. If you have workloads you believe may be vulnerable, you can use Google Cloud IDS to help monitor for and detect exploit attempts in your Google Cloud environment. You can read further details on the NIST website here and in Google Cloud's security advisory here.

Addressing the Apache Log4j vulnerability with Google Cloud IDS

Google Cloud's Cloud IDS is a native network-based threat detection product that helps detect exploit attempts and evasive techniques at both the network and application layers, including buffer overflows, remote code execution, protocol fragmentation, and obfuscation. The detection capability is built with Palo Alto Networks threat detection technology. To help our customers monitor for this threat, the Cloud IDS team has worked with Palo Alto Networks and the Google Cybersecurity Action Team to analyze this issue and update the Cloud IDS detection systems to help detect the most common types of Log4j exploit attempts in customers' GCP environments. For customers currently using Cloud IDS, this has been automatically deployed as of 12-12-2021 at 9:00 PM UTC, and no further action is required to enable it. Any new deployments or new Cloud IDS endpoints will have this monitoring enabled by default. The alerts from these detections have a severity level of Critical, so all Cloud IDS endpoints will alert on these detections with no configuration changes required to your IDS endpoint threat severity profile.

Monitoring for potential threats

After you set up Cloud IDS to monitor traffic to and from application workloads that may be exploited due to this vulnerability, you can quickly search for alerts related to CVE-2021-44228 in the Cloud IDS console by using a filter on "Threat Name: Apache Log4j Remote Code Execution Vulnerability".

Cloud IDS detections of threats exploiting the Apache Log4j vulnerability

Threat details for Log4j threats detected by Cloud IDS

In addition to monitoring for Log4j threat alerts in the Cloud IDS console, you can also view more detailed logs in Cloud Logging; more details about Cloud IDS logs can be found here. For example, a quick way to pull recent entries that mention this threat from the command line is sketched below.
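This is a minimal sketch rather than an official Cloud IDS query: it simply text-searches recent log entries for the threat name that Cloud IDS attaches to these detections, and it assumes the gcloud CLI is authenticated against the project containing your IDS endpoint.

gcloud logging read '"Apache Log4j Remote Code Execution Vulnerability"' \
    --project=[YOUR_PROJECT_ID] --freshness=7d --limit=20

If you know the Cloud IDS threat log's resource type and payload fields in your environment, you can narrow the filter accordingly rather than relying on a free-text match.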
Addressing potential threats

In addition to updating your systems to the latest version of Apache Log4j, customers using Google Cloud Armor can enable a new pre-configured WAF rule to help block requests to vulnerable infrastructure. Complete Cloud IDS product documentation for configuring Cloud IDS and filtering on alerts and logs is available here:

Configuring Cloud IDS
Viewing Cloud IDS Logs

Please contact Google Cloud's technical support or your Google Cloud account team for assistance if required. Additionally, you can seek support assistance in the Google Cloud Platform Community Slack Channel under gcp-security for non-urgent questions.
Source: Google Cloud Platform

Google Cloud Armor WAF rule to help mitigate CVE-2021-44228 Apache Log4j vulnerability

NIST has announced a recent vulnerability (CVE-2021-44228) in the Apache Log4j library. To help mitigate the effects of this vulnerability, Google Cloud Armor customers can now deploy a new preconfigured WAF rule that will help detect and, optionally, block attempted exploits of CVE-2021-44228.

Background

The Apache Log4j utility is a commonly used component for logging requests. On December 9, 2021, a vulnerability was reported that could allow a system running Apache Log4j version 2.14.1 or below to be compromised and allow an attacker to execute arbitrary code. On December 10, 2021, NIST published a critical Common Vulnerabilities and Exposures alert, CVE-2021-44228. More specifically, JNDI features used in configuration, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI-related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from remote servers when message lookup substitution is enabled. If you have workloads you believe may be vulnerable, review Google Cloud's mitigation steps below. You can determine your exposure by reading further details on the NIST website here.

Addressing the Apache Log4j vulnerability with Cloud Armor

Google Cloud's Cloud Armor provides Denial of Service and Web Application Firewall (WAF) protection for applications and services hosted on Google Cloud, on your premises, or elsewhere. The Cloud Armor team has worked closely with the Google Cybersecurity Action Team to analyze this issue and prepare a response. To help our customers address the Log4j vulnerability, we have introduced a new preconfigured WAF rule called "cve-canary" which can help detect and block exploit attempts of CVE-2021-44228. Cloud Armor customers can deploy the new rule into a new or existing Cloud Armor security policy by following the instructions below. In order to detect or help mitigate exploit attempts of this CVE, you will need to create a new rule in your Cloud Armor security policy leveraging the preconfigured WAF rule called "cve-canary". The rule can be created and inserted into a new or existing Cloud Armor security policy via the Google Cloud Console or the gcloud CLI.

WAF rule in Console

A sample gcloud command line that creates a rule with a deny action at priority 12345, blocking the exploit attempts in an existing security policy, is sketched below.
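The following is an illustrative sketch; the policy name is a placeholder, and you should confirm the exact flags against the current gcloud reference before running it.

gcloud compute security-policies rules create 12345 \
    --security-policy=[YOUR_POLICY_NAME] \
    --expression="evaluatePreconfiguredExpr('cve-canary')" \
    --action=deny-403 \
    --description="Block CVE-2021-44228 exploit attempts"

Here, deny-403 returns an HTTP 403 to matching requests; you could instead choose a different deny status depending on how you prefer to respond.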
Monitoring, detecting, and analyzing potential threats

If you need to monitor your Cloud Armor-protected endpoints for exploit attempts without necessarily blocking the traffic, you can deploy the above rule in preview mode. Deploying the rule in preview mode allows you to receive Cloud Logging event logs indicating that the rule was triggered, while Cloud Armor does not block the request. To configure preview mode for any rule, you can set the preview flag to enabled in the UI or CLI.

Preview mode in Cloud Armor

To analyze suspicious requests, you can enable Cloud Armor's verbose logging capability in the relevant policy. With verbose logging enabled, Cloud Armor's logs will contain additional information about where in the incoming request the suspicious signature appeared, as well as a snippet of the suspicious signature and the field it appeared in.

Example log message of a blocked exploit attempt with verbose logging enabled

Finally, if your protected workload receives requests with content-type='application/json', like a REST API, then you will need to enable JSON parsing in your security policy to ensure Cloud Armor parses the JSON in a POST request's body to detect exploit attempts. More detailed Cloud Armor product documentation for configuring the above capabilities is available here:

Configuring Cloud Armor Security Policies
Using preconfigured WAF rules
Preview Mode
Verbose Logging
JSON Parsing

Please contact Google Cloud's technical support or your Google Cloud account team for assistance with applying the mitigation steps described above. Additionally, you can seek support assistance in the Google Cloud Platform Community Slack Channel under gcp-security for non-urgent questions.
Source: Google Cloud Platform

Leading with Google Cloud & Partners to modernize infrastructure in manufacturing

In the manufacturing sector, flexibility and stability are crucial to delivering business outcomes. In a dynamic and unpredictable world, however, manufacturers must constantly contend with forces that can disrupt supply chains and cause instability. Mitigating challenges like these is part of life for manufacturers. This is why many of them implement highly sophisticated management strategies and operational processes designed to minimize machine failures, prevent bottlenecks, adapt quickly to changing customer demands, and maintain operational efficiency. For the past 18 months, COVID-19 has brought with it even more unpredictability by creating recession-like conditions and causing demand issues and unprecedented complexity in the supply chain. In this new world, manufacturers are turning to digital initiatives that can help overcome supply chain, cost, product development, and operational pressure.

Manufacturers are betting on advanced IT

Google's own research shows that the top five areas where AI is currently deployed in day-to-day operations are quality inspection, supply chain management, risk management, product/production line quality checks, and inventory management. Manufacturers are clearly eager to adopt proven technology solutions that enable more precise supply chain intelligence and forecasting, and drive a culture of innovation with collaboration tools and digital skills enablement. Google Cloud and Google Workspace present opportunities to achieve both goals. Google Cloud's AI solutions enable manufacturing organizations to drive optimization, increase output quality, and maximize uptime across their business, bringing new levels of flexibility and stability to their operations. This is why Google Cloud and some of its key partners are focused on providing manufacturers with the solutions and operational intelligence they need to minimize disruption, drive collaboration, and achieve new levels of success. Working with a Google Cloud partner allows manufacturers to maintain their focus on day-to-day operations, tap into a wider range of technologies, and work seamlessly with Google Cloud services and tools for quicker ramp-up times and more value-add capabilities. Let me show you how these partners are solving real-world business challenges.

Transforming manufacturing through real-time machine health insights

Augury combines artificial intelligence and the Internet of Things (IoT), powered by Google Cloud, to make machines more reliable, reduce their environmental impact, and enhance human productivity. With the help of DoiT, a top Google Cloud partner, Augury's cloud-based solution deploys IoT devices that are connected to manufacturing machines around the world. Transforming manufacturing plants using real-time machine health insights saves customers millions of dollars through superior insight into the health and performance of their machines, and reduces machine failure by 75% through AI and IoT. These devices continuously send data to the cloud, where it is analyzed by machine learning algorithms, resulting in insights that are immediately provided back to their customers.

With the help of DoiT, our transition to Google Cloud was smooth and efficient. DoiT engineers were there to answer all of our questions, consulted with us on best practices for Google Cloud deployment, and helped us solve any issues that arose, quickly and efficiently.
Gal Shaul, co-founder and Chief Technology Officer, Augury

Leveraging cloud services to modernize the property access experience

SentriLock is a specialized manufacturer that helps real estate owners, managers, and others make building access more seamless through next-gen lockboxes. Cloudreach was engaged to help migrate SentriLock to the cloud, train their teams on best practices, and deliver a set of templates covering Google Cloud projects, VPC configuration, subnets, route tables, firewall rules, a VPN, and VPN peering to deploy 'Infrastructure as Code' materials on Google Cloud. As a result of this cloud migration, SentriLock was able to modernize its infrastructure by quickly, effectively, and securely leveraging Google Cloud in an enterprise-ready format. The educational workshops empowered SentriLock's team to accelerate time-to-value from cloud adoption and gain a deep understanding of cloud security best practices.

As part of our digital transformation, we wanted to build our new infrastructure so we could scale and minimize disruptions. Cloudreach examined our existing infrastructure and helped us deploy on Google Cloud. Their guidance was vital to the success of this project.

Eric Gatton, Systems Engineer, SentriLock

Accommodating a shift in business demands with easy-to-use Google communication tools

As a manufacturer of essential products, Origami was allowed to operate during the lockdown to prevent supply shortages, and mill operators needed the ability to quickly inform the production team about bottlenecks. On its legacy email server, Origami encountered frequent problems, such as missing or undeliverable emails and running out of email storage space. With the help of Shivaami, Origami was able to speed up the flow of communication with Google Workspace, resulting in 24/7 remote email access for employees and customers, while meeting the demand for quality tissue products worldwide. Origami was also able to promote employee efficiency, as employees no longer have to worry about the server space issues that previously caused disruption to their workflow.

"Shivaami eased our transition to Google Workspace with a smooth onboarding process and took away the stress from employees who had problems getting emails before the cloud," says Pranali. "Without the worry of email access, they can work more freely and focus on what matters, their work."

Pranali Sikchi, Brand Development Manager, Origami India Pvt. Ltd.

Partner specializations create unique opportunities for manufacturers

These three examples show that Google Cloud helps manufacturers achieve their digital transformation goals with intelligent, data-driven solutions that are extended by our ecosystem of partners. One of the beauties of working with a partner is the instant access to the expertise and experience necessary to align challenges with solutions and aspirations with reality. We continue to add thousands of people across Google Cloud to ensure our partners and customers receive all the support needed to thrive and win. Looking for a solution-focused partner in your region who has achieved Expertise and/or Specialization in your industry? Search our Global Partner Directory. Not yet a Google Cloud partner?
Visit Partner Advantage and learn how to become one today! Learn more about how Google Cloud is transforming manufacturing to meet changing customer expectations at our Google Cloud Next archives.
Source: Google Cloud Platform

Supporting digital ownership and decentralization with Dapper Labs and Google Cloud

Businesses, developers and end-users are increasingly embracing blockchain applications. The benefits and opportunities that decentralized technologies offer are becoming evident: true digital ownership and emerging markets of digital goods powered by transparent, reliable, and verifiable transactions. At the very heart of this ongoing mass adoption are networks like Flow – a consumer-friendly, high-throughput blockchain that provides great user experiences at scale, without sacrificing decentralization or sustainability. Originally developed by Dapper Labs, Flow is now home to the first blockchain application to reach one million users – NBA Top Shot – and with the NFL, LaLiga and UFC, even more consumer-scale experiences are joining an already vibrant ecosystem of mainstream users. The foundation enabling these levels of scalability is Flow's multi-node architecture, which increases efficiency by pipelining load across specialized node types. With the promising potential of this type of protocol design, I'm excited that Dapper Labs has selected Google Cloud as its hyperscale cloud partner to ensure performance, reliability and decentralization for the next wave of mainstream users on Flow.

Through this partnership, Dapper Labs will deploy an Execution node on Google Cloud to increase the speed of the overall network and assure reliable performance even in times of large traffic spikes. This step ensures that Flow's users will continue to enjoy low transaction fees and fast finalization times even as millions of newcomers join the network. In addition, Dapper Labs will tap the Google Cloud Marketplace to offer services for end-users to set up their own dedicated Consensus or Verification node on Flow. This step motivates active participation in the network and thus heightens the level of overall decentralization. Since Google Cloud infrastructure runs completely carbon neutral, this approach to hosting nodes is not only much easier, but also much more sustainable. Through all this, Google Cloud will enable greater levels of performance, reliability and security for Flow users, without needing to compromise on decentralization or sustainability.

Flow and its multi-node architecture that scales seamlessly

Flow achieves high levels of scalability and decentralization with a novel multi-node architecture. Rather than letting every single node do the entire work of computation and consensus, the network's load is pipelined across multiple node types: Collection nodes batch the work, Consensus nodes secure the work, Execution nodes do the work, and Verification nodes check the work. This architecture is highly efficient because each node type specializes in a specific task. Relatively few resource-intensive Execution nodes can run the computations of each block as fast as possible, while a much higher number of Consensus and Verification nodes allows anyone to participate in block sealing or verification. In short, the small number of powerhouses of the network are continuously watched by a far greater number of nodes that analyze and verify every bit of their work. This pipelining allows for better efficiency across all parts of the network, while preserving high levels of decentralization.

Google Cloud x Flow: Connecting performance and decentralization

Google Cloud will enable both performance and accessibility for Flow and developers.
First, Google Cloud will operate a high-performance Execution node to run Flow computations at scale; and second, we will offer services to end-users that enable them to take part in the network themselves. Execution nodes demand the highest levels of performance because they are dedicated to running user transactions as quickly as possible and at large scale, bringing low-latency access for developers and lightning speed to consumer blockchain applications. Additionally, users will be able to operate a Flow node directly through solutions offered on Google Cloud Marketplace, enabling them to run various testnet configurations of Flow to test, experiment, and look for problems that may only show up at scale, without posing any risk to the main network.

Enabling sustainable growth

Growing in a sustainable manner is top of mind for many businesses, but it is particularly relevant in the blockchain space, where the ability to run and scale with minimal carbon output is critical. Google Cloud is carbon neutral today, and aims to be carbon-free by 2030, ensuring that developers and partners can interact with the Flow network using the industry's cleanest cloud. Google Cloud infrastructure and services will ensure that Flow can scale securely and reliably to billions of users. As that happens, Google Cloud will ensure that data, applications, games, and even digital assets like NFTs are supported by a stable, secure, and trusted global network.
Source: Google Cloud Platform

Store more and worry less with 31 day retention in Pub/Sub

Pub/Sub stores messages reliably at any scale. Publish a log entry or an event and you don't have to worry about when it is processed. If your subscribers, event handlers, or consumers are not keeping up, we'll hold the messages until they are ready. Bugs made it past your integration test? Not to worry: just seek back and replay. But you had to worry a little: until today, you had up to a week to fix the code and process everything. And if you wanted to use historical data to compute some aggregate state, such as a search index, you had to use a separate storage system. In fact, we noticed that many of our users stored raw copies of all messages in GCS or BigQuery, just in case. This is reliable and inexpensive, but requires a separate reprocessing setup in case you actually need to look at older data.

Starting today, Pub/Sub can store your messages in a topic for up to 31 days. This gives you more time to debug subscribers. It also gives you a longer time horizon of events to backtest streaming applications or initialize state in applications that compute state from an event log. Using the feature is simple. The interfaces and pricing are unchanged. You can just set a larger value for a topic's message retention duration. For example, you can configure extended retention of an existing topic using the gcloud CLI:

gcloud pubsub topics update myTopic --message-retention-duration 31d

Or use the settings in the Topic Details page in the Cloud Console.

One limitation of this feature is that you cannot extend storage retention for an individual subscription beyond 7 days. This limits the control individual subscription owners have over storage. The limit comes with benefits: controlling storage costs is simpler, and so is limiting access to older data across multiple applications. We'd love to hear how you've used this feature or how it came short of your needs. Let us know by posting a message to the pubsub-discuss mailing list or creating a bug.
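As a closing aside, here is a minimal sketch of the replay workflow that the longer retention window makes more useful; the subscription name and timestamp are placeholders, and the command assumes the subscription is attached to a topic that retains messages back to that time:

gcloud pubsub subscriptions seek mySubscription --time=2021-11-15T00:00:00Z

Seeking to a timestamp marks messages received before that time as acknowledged and messages received after it as unacknowledged, so retained messages are redelivered to your subscriber.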
Source: Google Cloud Platform

Software-Defined community cloud – a new way to “Government Cloud”

Google has a long history and deep commitment to innovation in the public sector and regulated markets including healthcare, financial services, and telecommunications, to name a few. Recently, we've made significant advances in our security and compliance offerings and capabilities in order to better enable government and government supply chain customers to adopt Google Cloud. Specifically, our Assured Workloads product implements a novel approach to help customers meet compliance and sovereignty requirements: a software-defined community cloud.

What is a community cloud?

A community cloud is defined by NIST SP 800-145 as:

Cloud infrastructure [that] is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

The approach has been used for decades and offers potential benefits such as:

- Members of the community operate under the same set of security controls.
- The ability to support attributes like citizenship and authorization controls while maintaining limited physical and/or logical access to resources.
- The ability to support data localization and some data sovereignty requirements based on the location of the community cloud's data centers.
- A clearly defined perimeter security model encompassing the community cloud.

Challenges with legacy community cloud implementations

The definition and objectives of community clouds are well-intentioned, but implementations of community clouds have often failed to meet specific objectives or have required significant tradeoffs for adopters. Most community clouds to date have relied on physical separation as the primary means of establishing a security perimeter. While this approach offers benefits in simplicity and segregation, there are downsides. A perimeter security model, also referred to as a "castle wall" model, often doesn't yield significant advances in security, manageability, or compliance. The shortcomings of a perimeter model as the primary mode of protection are acknowledged across the industry and have accelerated adoption of alternative approaches such as Zero Trust architectures. This is the case for compliance as well – in models that are tied to physical perimeters (e.g., legacy "Gov Clouds"), control assumptions at the perimeter can lead to control failures on the interior and, in turn, to potentially serious security problems. Having created a physical community cloud in the past, Google sought a new way to provide the benefits above along with scalable and lasting compliance implementations.

Software-defined community cloud

Like virtualization for servers or software-defined networking for switching and routing hardware, a software-defined community cloud is designed to deliver the benefits of a community cloud in a more modern architecture. Google Cloud's approach provides security and compliance assurances without the strict physical infrastructure constraints of legacy approaches. Google Cloud's software-defined community clouds are implemented using a combination of technologies referred to in aggregate as "Assured Workloads." With Assured Workloads, Google Cloud can:

- Define communities around shared mission, security and compliance requirements, and policy.
- Separate those community projects from other projects.
- Add or remove capabilities from a community's boundary with policy-controlled and audited configuration changes.

A minimal example of standing up such a boundary from the command line is sketched below.
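This is an illustrative sketch only, not a prescriptive command; the organization ID, billing account, region, and compliance regime are placeholders, and you should confirm the flags against the current gcloud assured workloads reference for the regime you need.

gcloud assured workloads create \
    --organization=[YOUR_ORG_ID] \
    --location=us-central1 \
    --display-name="community-boundary-example" \
    --compliance-regime=FEDRAMP_MODERATE \
    --billing-account=billingAccounts/[YOUR_BILLING_ACCOUNT_ID]

Projects created inside the resulting boundary inherit the regime's data residency, personnel, and security control constraints, which is what turns a per-project enclave into a member of the software-defined community.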
This software-defined approach yields several potential benefits to customers. But first, let's consider how community cloud characteristics map to traditional and software-defined implementations.

Software-defined community cloud as a new type of "Government Cloud"

In Google Cloud Platform (GCP), a project is an isolated, logical grouping of "infrastructure primitives." In this context, an infrastructure primitive is any atomic unit of capacity in GCP – a virtual machine (VM), a persistent disk (PD), a storage bucket, etc. Projects are "global resources" that can be assigned infrastructure primitives from any region or zone. Every project is, by default, isolated from other customers' projects. Low-level resources like hypervisors, blocks in our distributed blockstore that underlies Google Cloud Storage (GCS), and other components are isolated with resource abstractions that enforce the isolation both logically and cryptographically.

A Private Cloud deployment model is described in NIST SP 800-145 as:

Cloud infrastructure [that] is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

When a project is created within GCP, the infrastructure primitives that are assigned to that project are scoped to only that project. This scoping of infrastructure primitives effectively creates an "enclave" per project. When overlaid with Assured Workloads constraints for data residency, support personnel attributes, and security controls common to that community, these per-project private cloud enclaves become software-defined community clouds.

Benefits of a software-defined community cloud

The approach Google Cloud has taken brings multiple benefits in addition to meeting security and compliance requirements. New hardware, new services, and improvements to existing services can be made available faster than in traditional community clouds. The process by which new cloud technology can be onboarded and made available is also faster. Overall efficiency is improved in this model due to the scale of infrastructure available to the community; this can translate to improved availability and performance. Security enhancements can be scaled and implemented more quickly.

Moving forward

At Google Cloud, we continue to advance the capabilities that enable our customers to create and operate within software-defined community clouds. Learn more about the capabilities currently delivered through Assured Workloads here.
Source: Google Cloud Platform

commercetools powers modern, scalable digital commerce solutions with MongoDB Atlas on Google Cloud

Powering modern retail operations is no easy task – consumer and business preferences shift constantly, disruptions abound, and demand can significantly fluctuate daily or even hourly. But within this environment, there are almost limitless opportunities for retailers that have an agile IT environment that can efficiently and proactively meet changing consumer demands.

At commercetools, we pride ourselves on giving businesses what they need to succeed in the digital era. As a cloud-native company, we constantly look toward the future, innovating on our platform and working to better serve our customers. To accomplish our goals and remain as agile and accessible as possible, we base our platform on microservices.

For years we have taken advantage of being fully cloud native as a means to optimize our microservices infrastructure and take an API-first approach to management. Given constant innovations in cloud computing, we also look for the latest and greatest tools to further enhance the services we offer our customers. One area that has become particularly important in recent years is the ability to efficiently auto-scale and manage resources. That means being able to get even more performance out of our platform, which is why we chose to work with Google Cloud and MongoDB.

Building on years of success

Google Cloud has long been our trusted cloud services provider, and we've used MongoDB since 2014. We saw opportunities to extend these relationships and the technologies used to improve everyday operations across our business. This in part included implementing MongoDB Atlas to optimize database scalability and efficiency.

A big advantage of running MongoDB Atlas on Google Cloud is the integration between the database and Google Cloud services including Google Kubernetes Engine (GKE), BigQuery, and the Cloud AI Platform. Since these solutions work so well together, the commercetools team is confident that performance will be exceptional regardless of the tasks we need to run.

Given retail-specific factors such as seasonality and constantly shifting consumer preferences, we need to provide retailers with the confidence that our platform runs properly at all times. The combination of Google Cloud solutions and MongoDB Atlas as a fully managed database enables us to achieve our goals and deliver outstanding services to our customers.

When it comes to scale, we have certain customers that process tens of thousands of transactions per minute. We need to make sure all of those orders execute flawlessly, with all accompanying data captured and managed. Accomplishing this requires that every step is fully integrated into the right area of our technology stack—something that Google Cloud and MongoDB make a lot easier.

MongoDB's application data platform also provides a whole suite of useful tools including MongoDB Charts, backups, and enhanced security capabilities. All are key to our success and important to our customers. In MongoDB, we can also scale horizontally or vertically, handle spikes through auto-scaling, and never worry about data security or retention. Without MongoDB Atlas on Google Cloud, this wouldn't be possible.

Because of this, we can quickly and reliably roll out new features, sites, touch points, and channels in step with our customers' demands.
The agility supported by Google Cloud and MongoDB Atlas allows commercetools to stand out from our competitors.

Equally important to our success is the level of automation we build into our environment thanks to Google Cloud, MongoDB Atlas, and our own engineers. We've been able to take a more modern approach to performance testing, with everything monitored and fixed in real time. For retailers, this is vital because many often wait until huge events like Black Friday, when it's too late to test the power and reliability of an environment. Automating cluster management, deployments, and testing is essential for us to scale with our customers when they need us. With the automation offered in Google Cloud and MongoDB Atlas, we are always prepared for anything our customers need.

All of this adds up to an unmatched ability to provide customers with roughly 300 APIs that they and our partners can mix and match to achieve the standout commerce experience they envision for their customers. This is what separates commercetools from our competitors. We are not just a one-size-fits-all tech stack. Instead, we give retailers the options to build their own solutions that map to their exact needs—and the opportunities their customers present.

Addressing challenges across industries

While we are certainly rooted in retail, the need for digital transformation is evident in every industry, and our platform is flexible enough to adapt to many needs. For example, companies with smaller IT departments can work with our partners to accelerate the roll-out of new digital services and enhanced customer experiences. Our partners can project manage as needed and use our APIs to tailor the solution to each company's unique needs. At the same time, companies with larger IT operations can have their own teams use our APIs to more quickly achieve their own goals.

The success of our approach to building and managing commerce platforms was underscored during the COVID-19 pandemic, when so many businesses scrambled to strengthen their digital footprint amid lockdowns and immense economic shifts. Let's say a company wanted to build an additional channel to enable curbside pickup. With a monolithic system, an IT team might have worked six months or more to get that service up and running. For many companies, this would have been untenable given their resources, and they would have lost market share.

The beauty of commercetools is that we are flexible and accessible enough for any company, regardless of size or industry, to quickly deploy the services they need to keep pace with new market opportunities. It's all thanks to our open architecture and cloud-native infrastructure, which Google Cloud and MongoDB are a big part of.

Our success highlights the power of microservices supported by highly integrated solutions including GKE, BigQuery, and MongoDB Atlas. Through these partnerships, we can provide trials in which a customer can go to our website and explore everything we offer, spin up environments, deploy tools, and more. This is a huge differentiator for us. If we were locked down by our cloud infrastructure or database provider, we'd be unable to offer these types of readily accessible, high-performing self-service capabilities.

We have accomplished a lot through our work with Google Cloud and MongoDB, and we are excited to continue transforming approaches to commerce in retail, healthcare, and many other industries in the future.
Source: Google Cloud Platform

Join Google Cloud Research Innovators to accelerate scientific projects

Researchers using Google Cloud are invited to apply for the second cohort of the Research Innovators program. In December 2020, Google Cloud launched the Research Innovators program to help established and next-generation researchers advance scientific breakthroughs through the latest cloud technologies. By providing access to Google Cloud and Google specialists, the program seeks to speed up new discoveries, increase collaboration, and deepen support for researchers. Now in its second year, the Research Innovators program will offer more frequent networking opportunities for participants and specialized tracks to encourage cross-disciplinary work in genomics, sustainability, and social impact projects. Applications for the second cohort opened on November 22, 2021.

Bringing the cloud to the clouds

The inaugural cohort of 33 Google Cloud Research Innovators, spanning 30 institutions and eight countries across both industry and academia, are addressing some of the most urgent scientific challenges facing our world today. One of our Research Innovators, Dr. Tapio Schneider, a Senior Research Scientist at NASA's Jet Propulsion Lab and the Theodore Y. Wu Professor of Environmental Science and Engineering at Caltech, is using Google Cloud's vast computational resources to improve large-scale Earth system models. His team's Climate Machine leverages recent advances in the computational and data sciences to learn directly from a wealth of Earth observations from space and the ground. The Climate Machine will harness more data than ever before, providing a new level of accuracy to predictions of droughts, heat waves, and rainfall extremes.

Applying machine learning to identify social inequalities

Dr. Teodora Szasz, Computational Scientist at the University of Chicago, is another Research Innovator using Google's cloud solutions to tackle urgent social issues. Using AutoML's machine-learning capabilities to identify and categorize images, she and a team at the MiiE (Messages, Identity, and Inclusion in Education) Lab are measuring the changing representation of race, gender, and age in children's books over the last century. Their work shows that while diversity in representation has improved, inequalities persist, which impacts how children learn about society and social norms.

"Images are important to children even before they can read," Dr. Szasz explains. "If they can recognize themselves in characters then they can imagine themselves in different futures." The MiiE team moved their research to Google Cloud "because it's easy to work with millions of files and load them into AutoML," she says. "We could develop a model in one day that's optimized for our needs, and I can trust that it won't break. Google Cloud offers infrastructure built for scaling and efficiency. Without it, this work would take much longer." The Research Innovators program, she adds, put her in touch with colleagues at other institutions using similar tools, so they could learn from each other. "The program goes beyond Google," she says. It has helped her achieve results faster, which in turn makes publishing and funding easier, and helps this work get the attention it deserves.

Reaching out to tomorrow's innovators

With access to Google experts and support from peers, Research Innovators are able to fast-track their research for real-world impact.
They receive additional Google Cloud academic research credits, support to share their work, speaking opportunities, complimentary admission to Google conferences, and more.

You can discover more about the current Research Innovators and their projects here. To pursue this unique opportunity, apply by January 14, 2022. To get started with Google Cloud, apply for free credits towards your research.
Source: Google Cloud Platform