Introducing Cloud Workstations: Managed and Secure Development Environments in the Cloud

With the unprecedented increase in remote collaboration over the last two years, development teams have had to find new ways to collaborate, driving increased demand for tools that address the productivity challenges of this new reality. This distributed way of working also introduces new security risks, such as data exfiltration (information leaving the company's boundaries). For development teams, this means protecting the source code and data that serve as intellectual property for many companies. At Google Cloud Next, we introduced the public Preview of Cloud Workstations, which provides fully managed and integrated development environments on Google Cloud. Cloud Workstations is a solution focused on accelerating developer onboarding and increasing the productivity of developers' daily workflows in a secure manner, and you can start using it today simply by visiting the Google Cloud console and configuring your first workstation.

Cloud Workstations: Just the facts

Cloud Workstations provides managed development environments with built-in security, developer flexibility, and support for many popular developer tools, addressing the needs of enterprise technology teams.

Developers can quickly access secure, fast, and customizable development environments anywhere, via a browser or from their local IDE. With Cloud Workstations, you can enforce consistent environment configurations, greatly reducing developer ramp-up time and addressing "works on my machine" problems. Administrators can easily provision, scale, manage, and secure development environments for their developers, providing them access to services and resources that are private, self-hosted, on-prem, or even running in other clouds.
Cloud Workstations makes it easy to scale development environments and helps automate everyday tasks, enabling greater efficiency and security. Cloud Workstations focuses on three core areas:

- Fast developer onboarding via consistent environments
- Customizable development environments
- Security controls and policy support

Fast developer onboarding via consistent environments

Getting developers started on a new project can take days or weeks, with much of that time spent setting up the development environment. The traditional model of local setup may also lead to configuration drift over time, resulting in "works on my machine" issues that erode developer productivity and stifle collaboration.

To address this, Cloud Workstations provides a fully managed solution for creating and managing development environments. Administrators or team leads can set up one or more workstation configurations as their teams' environment templates. Updating or patching the environments of hundreds or thousands of developers is as simple as updating their workstation configuration and letting Cloud Workstations handle the updates. Developers can create their own workstations simply by selecting among the configurations to which they were granted access, making it easy to ensure consistency. When developers start writing code, they can be certain that they are using the right version of their tools.

Customizable development environments

Developers use a variety of tools and processes optimized to their needs. We designed Cloud Workstations to be flexible when it comes to tool choice, enabling developers to use the tools they're most productive with, while enjoying the benefits of remote development. Here are some of the capabilities that enable this flexibility:

Multi-IDE support: Developers use different IDEs for different tasks, and often customize them for maximum efficiency.
Cloud Workstations supports multiple managed IDEs, such as IntelliJ IDEA Ultimate, PyCharm Professional, GoLand, WebStorm, Rider, Code-OSS, and many more. We've also partnered with JetBrains so that you can bring your existing licenses to Cloud Workstations. These IDEs are provided via optimized browser-based or local-client interfaces, avoiding the challenges of general-purpose remote desktop tools, such as latency and limited customization.

Container-based customization: Beyond IDEs, development environments also comprise libraries, IDE extensions, code samples, and even test databases and servers. To help ensure your developers get the tools they need quickly, you can extend the Cloud Workstations container images with the tools of your choice.

Support for third-party DevOps tools: Every organization has its own tried and tested tools, including Google Cloud services such as Cloud Build, but also third-party tools such as GitLab, TeamCity, or Jenkins. By running Cloud Workstations inside your Virtual Private Cloud (VPC), you can connect to tools self-hosted in Google Cloud, on-prem, or even in other clouds.

Security controls and policy support

With Cloud Workstations, you can extend the same security policies and mechanisms you use for your production services in the cloud to your developer workstations.
Here are some of the ways that Cloud Workstations helps to ensure the security of your development environments:

- No source code or data is transferred to or stored on local machines.
- Each workstation runs on a single dedicated virtual machine, for increased isolation between development environments.
- Identity and Access Management (IAM) policies are automatically applied and follow the principle of least privilege, helping to limit workstation access to a single developer.
- Workstations can be created directly inside your project and VPC, allowing you to help enforce policies like firewall rules or scheduled disk backups.
- VPC Service Controls can be used to define a security perimeter around your workstations, constraining access to sensitive resources and helping prevent data exfiltration.
- Environments can be automatically updated after a session reaches a time limit, so that developers get any updates in a timely manner.
- Fully private ingress/egress is also supported, so that only users inside your private network can access your workstations.

What customers and partners are saying

"We have hundreds of developers all around the world that need to be able to be connected anytime, from any device. Cloud Workstations enabled us to replace our custom solution with a more secure, controlled and globally managed solution." — Sebastien Morand, Head of Data Engineering, L'Oréal

"With traditional full VDI solutions, you have to take care of the operating system and other factors which are separate from the developer experience. We are looking for a solution that solves problems without introducing new ones." — Christian Gorke, Head of Cyber Center of Excellence, Commerzbank

"We are incredibly excited to partner tightly with Google Cloud on their Cloud Workstations initiative, which will make remote development with JetBrains IDEs available to Google Cloud users worldwide.
We look forward to working together on making developers more productive with remote development while improving security and saving computation resources." — Max Shafirov, CEO, JetBrains

Get started today

Try Cloud Workstations today by visiting your console, or learn more on our webpage, in our documentation, or by watching this Cloud Next session. Cloud Workstations is a key part of our end-to-end Software Delivery Shield offering. To learn more about Software Delivery Shield, visit this webpage.
Source: Google Cloud Platform

What’s new in Firestore from Cloud Next and Firebase Summit 2022

Developers love Firestore because of how fast they can build an application end to end. Over 4 million databases have been created in Firestore, and Firestore applications power more than 1 billion monthly active end users using Firebase Auth. We want to ensure developers can focus on productivity and an enhanced developer experience, especially when their apps are experiencing hyper-growth. To achieve this, we've made updates to Firestore that are all aimed at developer experience, supporting growth, and reducing costs.

COUNT function

We've rolled out the COUNT() function, which gives you the ability to perform cost-efficient, scalable count aggregations. This capability supports use cases like counting the number of friends a user has, or determining the number of documents in a collection. For more information, check out our Powering up Firestore to COUNT() cost-efficiently blog.

Query Builder and Table View

We've rolled out Query Builder to enable users to visually construct queries directly in the console across the Google Cloud and Firebase platforms. The results are also shown in a table format to enable deeper data exploration. For more information, check out our Query Builder blog.

Scalable backend-as-a-service (BaaS)

Firestore BaaS has always been able to scale to millions of concurrent users consuming data with real-time queries, but until now, there has been a limit of 10,000 write operations per second per database. While this is plenty for most applications, we are happy to announce that we are now removing this limit and moving to a model where the system scales up automatically as your write traffic increases. For applications using Firestore as a backend-as-a-service, we've removed the limits for write throughput and concurrent active connections. As your app takes off with more users, you can be confident that Firestore will scale smoothly.
For more information, check out our Building Scalable Real-Time Applications with Firestore blog.

Time-to-live

To help you efficiently manage storage costs, we've introduced time-to-live (TTL), which enables you to pre-specify when documents should expire and rely on Firestore to automatically delete expired documents. For more information, check out our blog: Manage Storage Costs Using Time-to-Live in Firestore.

Additional Features for Performance and Developer Experience

In addition, the following features have been added to further improve performance and developer experience:

- Tags have been added to enable developers to tag databases, along with other Google Cloud resources, to apply policy and observe group billing.
- Cross-service security rules allow secure sharing of Cloud Storage objects by referencing Firestore data in Cloud Storage security rules.
- The offline query (client-side) indexing Preview enables more performant client-side queries by indexing data stored in the web and mobile cache. Read the documentation for more information.

What's next

Get started with Firestore.
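As a concrete starting point, the COUNT() aggregation and the TTL pattern described above can be sketched with the Python server client library. This is a hedged sketch, not official sample code: it assumes google-cloud-firestore 2.7+ (where the count aggregation landed), and the collection names `users` and `sessions` and the field name `expireAt` are invented for illustration. TTL itself is enabled by configuring a policy on the chosen timestamp field in the console or with gcloud; the client only writes the timestamp.

```python
import datetime


def count_users(db):
    """Server-side COUNT() aggregation: only the count crosses the wire,
    not the matching documents (assumes google-cloud-firestore >= 2.7)."""
    agg = db.collection("users").count(alias="total")
    result = agg.get()  # nested list of AggregationResult objects
    return result[0][0].value


def expire_at(days_from_now, now=None):
    """Compute the timestamp to store in the TTL field; Firestore deletes the
    document automatically once the configured TTL field is in the past."""
    now = now or datetime.datetime.now(tz=datetime.timezone.utc)
    return now + datetime.timedelta(days=days_from_now)


def write_session(db, session_id, payload, days=7):
    """Write a document that the TTL policy on 'expireAt' will clean up."""
    db.collection("sessions").document(session_id).set(
        {**payload, "expireAt": expire_at(days)}
    )
```

With a real client (`from google.cloud import firestore; db = firestore.Client()`), `count_users(db)` returns the collection size without reading every document, which is what makes the aggregation cost-efficient.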
Source: Google Cloud Platform

Meet Google Cloud at Supercomputing 2022

Google Cloud is excited to announce our participation in the Supercomputing 2022 (SC22) conference in Dallas, TX, from November 13th to 18th, 2022. Supercomputing is the premier conference for high performance computing and is a great place to see colleagues, learn about the latest technologies, and meet with vendors, partners, and HPC users. We're looking forward to returning to Supercomputing fully for the first time since 2019 with a booth, talks, demos, labs, and much more.

We're excited to invite you to meet Google's architects and experts in booth #3213, near the exhibit floor entrances. If you're interested in sitting down with our HPC team for a private meeting, please let us know at hpc-sales@google.com. Whether it's your first time speaking with Google ever, or your first time seeing us at Supercomputing, we are looking forward to meeting with you. Bring your tough questions, and we'll work together to solve them.

In the booth, we'll have lab stations where you can get hands-on with Google Cloud labs covering topics ranging from HPC to machine learning and quantum computing. Come check out one of our demo stations to dive into the details of how Google Cloud and our partners can help handle your toughest workloads. We'll also have a full schedule of talks from Google, Cloud HPC partners, and Google Cloud users hosted in our booth theater.

Be sure to visit our booth to review the full booth talk schedule. Here is a sneak peek at a few talks and speakers we have scheduled:

- Using GKE as a Supercomputer – Louis Bailleul, Petroleum Geo-Services
- Google Cloud HPC Toolkit – Carlos Boneti, Google Cloud; Michael Wilde, Parallel Works; Suresh Andani, Sr. Director, AMD
- Quantum Computing at Google – Kevin Kissell, Google Cloud
- Tensor Processing Units (TPUs) on Slurm – Nick Ihli, SchedMD
- Women in HPC Panel – Cristin Merritt, Women in HPC; Annie Ma-Weaver, Google Cloud
- DAOS on GCP – Margaret Lawson, Google Cloud; Dean Hildebrand, Google Cloud

There will also be talks, tutorials, and other events hosted by Google staff throughout the conference, including:

- Tutorial: Parallel I/O in Practice, co-hosted by Brent Welch
- Exhibitor Forum talk: HPC Best Practices on Google Cloud, hosted by Ilias Katsardis
- Storage events co-organized by Dean Hildebrand, including the IO500 Birds of a Feather (list of top HPC storage systems), the DAOS Birds of a Feather (emerging HPC storage system), and a DAOS on GCP talk in the Intel booth
- Keynote by Arif Merchant at the Parallel Data Systems Workshop
- Converged Computing: Bringing Together the HPC and Cloud Communities BoF, Bill Magro – panelist
- Ethics in HPC BoF, co-organized by Margaret Lawson
- Cloud operating model: Challenges and opportunities, Annie Ma-Weaver – panelist

Google Cloud is also excited to sponsor Women in HPC at SC22, and we look forward to seeing you at the Women in HPC Networking Reception, the WHPC Workshop, and Diversity Day. If you'll be attending Supercomputing, reach out to your Google account manager or the HPC team to let us know. We look forward to seeing you there.
Source: Google Cloud Platform

Real-time Data Integration from Oracle to Google BigQuery Using Striim

Editor's note: In this guest blog, we have the pleasure of inviting Alok Pareek, Founder & EVP Products at Striim, to share the latest experimental results from a performance study on real-time data integration from Oracle to Google Cloud BigQuery using Striim.

Relational databases like Oracle are designed to store data, but they aren't well suited for supporting analytics at scale. Google Cloud BigQuery is a serverless, scalable cloud data warehouse that is ideal for analytics use cases. To ensure timely and accurate analytics, it is essential to be able to continuously move data streams to BigQuery with minimal latency. The best way to stream data from databases to BigQuery is through log-based Change Data Capture (CDC). Log-based CDC works by directly reading the transaction logs to collect DML operations, such as inserts, updates, and deletes. Unlike other CDC methods, log-based CDC provides a non-intrusive approach to streaming database changes that puts minimal load on the database.

Striim, a unified real-time data integration and streaming platform, comes with out-of-the-box log-based CDC readers that can move data from various databases (including Oracle) to BigQuery in real time. Striim enables teams to act on data quickly, producing new insights, supporting optimal customer experiences, and driving innovation. In this blog post, we will outline experimental results cited in Striim's recent white paper, Real-Time Data Integration from Oracle to Google BigQuery: A Performance Study.

Building a Data Pipeline from Oracle to Google BigQuery with Striim: Components

We used the following components to build a data pipeline to move data between an Oracle database and BigQuery in real time.

Oracle CDC Adapters

A Striim adapter is a process that connects the Striim platform to a specific type of external application or file.
Adapters enable various data sources to be connected to target systems with streaming data pipelines for real-time data integration. Striim comes with two Oracle CDC adapters to help manage different workloads:

- The LogMiner-based Oracle CDC Reader uses Oracle LogMiner to ingest database changes on the server side and replicate them to the streaming platform. This adapter is ideal for low and medium workloads.
- The OJet adapter uses a high-performance log mining API to support high volumes of database changes on the source and replicate them in real time. This adapter is ideal for high-volume, high-throughput CDC workloads.

With two types of Oracle adapters to choose from, when is it advisable to use one over the other? Our results show that if your DB workload profile is between 20GB and 80GB of CDC data per hour, the LogMiner-based Oracle CDC Reader is a good choice. If you work with a higher amount of data, then the OJet adapter is better; currently, it's the fastest Oracle CDC reader available. Here's a table and chart that shows the latency (read-lag) for both adapters:

BigQuery Writer

Striim's BigQuery Writer is designed to save time and storage; it takes advantage of partitioned tables on the target BigQuery system and supports partition pruning in its merge queries.

Database Workload

For our experiment, we used a custom-built, high-scale database workload simulation. This workload, SwingerMultiOps, is based on Swingbench, a popular workload for Oracle databases. It's a multithreaded JDBC (Java Database Connectivity) application that generates concurrent DB sessions against the source database. We took the Order Entry (OE) schema of the Swingbench workload. In SwingerMultiOps, we continued to add more tables until we reached a total of 50 tables, each comprising varying data types.

Building the Data Pipeline: Steps

We built the data pipeline for our experiment following these steps.

1. Configure the source database and profile the workload

Striim's Oracle adapters connect to Oracle server instances to mine redo data, so it's important to have the source database instance tuned for optimum redo mining performance. Here's what you need to keep in mind about the configuration:

- Profile the DB workload to measure the load it generates on the source database.
- Set redo log sizes to a reasonably large value of 2G per log group.
- For the OJet adapter, set a large size for the DB streams_pool_size to mine redo as quickly as possible.
- For an extremely high CDC data rate of around 150 GB/hour, set streams_pool_size to 4G.

2. Configure the Oracle adapter

For both adapters, the default settings are enough to get started. The only configuration required is to set the DB endpoints to read data from the source database. Based on your needs, you can use Striim to perform any of the following:

- Handle large transactions
- Read and write data to a downstream database
- Mine from a specific SCN or timestamp

Regardless of which Oracle adapter you choose, only one adapter is needed to collect all data streams from the source database. This practice helps to cut the overhead incurred by running both adapters.

3. Configure the BigQuery Writer

Use the BigQuery Writer to configure how your data moves from source to target. For instance, you can set your writers to work with a specified dataset to move large amounts of data in parallel. For better performance, you can use multiple BigQuery Writers to integrate incoming data in parallel; using a router ensures that events are distributed such that a single event isn't sent to multiple writers. Tuning the number of writers and their properties helps to ensure that data is moved from Oracle to BigQuery in real time. Since we're dealing with large volumes of incoming streams, we configured 20 BigQuery Writers in our experiment. There are many other BigQuery Writer properties that can help you to move and control data.
You can learn about them in detail here.

How to Execute the Striim App and Analyze Results

We used a Google BigQuery dataset to run our data integration infrastructure. We performed the following tasks to run our simulation and capture results for analysis:

1. Start the Striim app on the Striim server.
2. Start monitoring the app components using the Tungsten console by passing a simple script.
3. Start the database workload.
4. Capture all DB events in the Striim app, and let the app commit all incoming data to the BigQuery target.
5. Analyze the app performance.

The Striim UI image below shows our app running on the Striim server. From this UI, we can monitor the app's throughput and latency in real time.

Results Analysis: Comparing the Performance of the Two Oracle Readers

At the end of the DB workload run, we looked at our captured performance data and analyzed the performance. Details are tabulated below for each of the source adapter types.

*LEE => Lag End-to-End

The charts below show how the CDC reader lag varies with the input rate as the workload progresses on the DB server.

Lag chart for Oracle Reader:

Lag chart for OJet Reader:

Use Striim to Move Data in Real Time to Google Cloud BigQuery

This experiment showed how to use Striim to move large amounts of data in real time from Oracle to BigQuery. Striim offers two high-performance Oracle CDC readers to support data streaming from Oracle databases. We demonstrated that Striim's OJet Oracle reader is optimal for larger workloads, as measured by read-lag, end-to-end lag, and CPU and memory utilization. For smaller workloads, Striim's LogMiner-based Oracle reader offers excellent performance. For more in-depth information, please refer to the white paper, check out a demo or Striim's Marketplace listing, or contact Striim.
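To recap the mechanics discussed above, here is a minimal, self-contained Python sketch of two of the ideas in this pipeline: hash-routing change events so that each event is handled by exactly one writer, and applying insert/update/delete CDC events to a target table with merge (upsert) semantics. This is not Striim code; the event shape, key-based routing function, and in-memory "table" are invented purely for illustration.

```python
import hashlib

# A CDC event roughly as a log-based reader might emit it (shape is illustrative).
# "op" is one of "insert", "update", "delete"; "key" is the row's primary key.
events = [
    {"op": "insert", "key": 1, "row": {"id": 1, "status": "NEW"}},
    {"op": "insert", "key": 2, "row": {"id": 2, "status": "NEW"}},
    {"op": "update", "key": 1, "row": {"id": 1, "status": "SHIPPED"}},
    {"op": "delete", "key": 2, "row": None},
]

NUM_WRITERS = 4  # the experiment above used 20 BigQuery Writers


def route(event, num_writers=NUM_WRITERS):
    """Deterministically map an event to exactly one writer by hashing its key,
    so no event is ever sent to two writers (the router's job)."""
    digest = hashlib.md5(str(event["key"]).encode()).hexdigest()
    return int(digest, 16) % num_writers


def apply_merge(target, batch):
    """Apply a batch of CDC events to a dict keyed by primary key, mimicking
    the merge a warehouse writer performs: insert/update upsert, delete removes."""
    for ev in batch:
        if ev["op"] in ("insert", "update"):
            target[ev["key"]] = ev["row"]   # upsert
        elif ev["op"] == "delete":
            target.pop(ev["key"], None)     # idempotent delete
    return target


# Partition events across writers; because routing is per key, all events for a
# given row land in the same batch and keep their original order.
batches = {w: [] for w in range(NUM_WRITERS)}
for ev in events:
    batches[route(ev)].append(ev)

target_table = {}
for w in range(NUM_WRITERS):
    apply_merge(target_table, batches[w])
```

After the run, `target_table` holds only row 1 in its final SHIPPED state: row 2 was inserted and then deleted within the same writer's batch. Keying the router on the primary key is what makes parallel writers safe here, since cross-key ordering does not affect the final merged state.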
Source: Google Cloud Platform

Can writing code be emotional? Google Cloud’s Kelsey Hightower says yes

Editor's note: Kelsey Hightower is Google Cloud's Principal Developer Advocate, meeting customers, contributing to open source projects, and speaking at internal and external events on cutting-edge technologies in cloud computing. A deep thinker and a charismatic speaker, he's also unusually adept at championing a rarely noticed aspect of software engineering: it's really emotional stuff.

So how does one become Google Cloud's Principal Developer Advocate?

A big part of the role is elevating people. I speak and give demos at conferences as well as contribute to and participate in open source projects, which allows me to get to know a lot of different communities. I'm always trying to learn new things, which involves asking people if they can teach me something, or if we can learn together. I also try to spend a lot of time with customers, working on getting a strong sense of what it's like to be in different positions on a team and working with our products to solve problems. It's the best way I know to build trust and help people succeed.

Is this something you can learn, or does it take a certain type of person?

My career is built around learning to make people successful, starting with myself. I left college when I saw the courses were generically sending people up a ladder. I read a test prep book for CompTIA A+, a qualification that gives people a good overview of the IT world. I passed, and got a job and a mentor at BellSouth. We'd troubleshoot, learn the fundamentals, and use our imaginations to solve problems. After that I opened an electronics store 30 miles south of Atlanta, making sure I stocked things people really needed, such as new modems and surge protectors anticipating the next lightning storm; I was always thinking about customers' problems. On weekends I held free courses for people who'd bought technical books. When you teach something, you learn too.
My customers and students didn't have a lot of money, but they wanted the best computing experience at the lowest possible cost. I moved on from there, learning more about software and systems and doing a lot of work in open source Python, configuration management, and eventually Kubernetes. A lot of what I'm doing hasn't changed on a fundamental level: I'm helping people, elevating people, and learning.

What has doing this work taught you?

Creating good software is very emotional. No, really. I can feel it when I'm doing a live demo of a serverless system and I point out that there are no virtual machines. The audience sighs because the big pain point is gone. I feel it in myself when I encounter a new open source project and I can tell what it could mean for people. I try to bottle that, and bring that feeling to customer meetings, demos, or whiteboards. It's like I have a new sense of possibility, and I can feel people react to that. When I'm writing code, I feel like someone who's cooking something good and can't wait for people to taste it: "I can't wait for them to try this code, they are going to love this!"

A few years ago I started our Empathetic Engineering practice, which enables people at Google Cloud to get a better sense of what it's like for customers to work with our technology. The program has had a lot of success, but I think one of the most important payoffs is that people are happier when they feel they are connecting on a deeper level with the customers.
Source: Google Cloud Platform