How to transfer your data to Google Cloud

So you’ve decided to migrate your business to the cloud, good call! Now comes the question of transferring the data. Here’s what you need to know about transferring your data to Google Cloud, and what tools are available.

Any number of factors can motivate your need to move data into Google Cloud, including data center migration, machine learning, content storage and delivery, and backup and archival requirements. When moving data between locations it’s important to think about reliability, predictability, scalability, security, and manageability. Google Cloud provides four major transfer solutions that meet these requirements across a variety of use cases.

Google Cloud data transfer options

You can get your data into Google Cloud using any of four major approaches:

Cloud Storage transfer tools: These tools help you upload data directly from your computer into Google Cloud Storage, and you would typically use them for small transfers of up to a few TB. They include the Google Cloud Console UI, the JSON API, and the gsutil command-line interface. gsutil is an open-source command-line utility for scripted transfers from your shell, and it also lets you manage Cloud Storage buckets. It can operate in rsync mode for incremental copies and in streaming mode for pushing script output, and it supports multi-threaded and multi-processing transfers for large data moves. Use it in place of the UNIX cp (copy) command, which is not multithreaded.

Storage Transfer Service: This service enables you to quickly import online data into Cloud Storage from other clouds, from on-premises sources, or from one bucket to another within Google Cloud. You can set up recurring transfer jobs to save time and resources, and it can scale to tens of Gbps. To automate the creation and management of transfer jobs, you can use the Storage Transfer API or client libraries in the language of your choice. Compared to gsutil, Storage Transfer Service is a managed solution that handles retries and provides detailed transfer logging. Transfers are fast because the data moves over high-bandwidth network pipes, and the on-premises transfer service minimizes transfer time by utilizing the maximum available bandwidth and applying performance optimizations.

Transfer Appliance: This is a great option if you want to migrate a large dataset and don’t have a lot of bandwidth to spare. Transfer Appliance enables seamless, secure, and speedy data transfer to Google Cloud. For example, a 1 PB data transfer can be completed in just over 40 days using the Transfer Appliance, compared to the three years it would take to complete an online transfer over a typical 100 Mbps network. Transfer Appliance is a physical box that comes in two form factors: TA40 (40 TB) and TA300 (300 TB). The process is simple: first, you order the appliance through the Cloud Console. Once it is shipped to you, you copy your data to the appliance (via a file copy over NFS), where the data is encrypted and secured. Finally, you ship the appliance back to Google for transfer into your Cloud Storage bucket, and the data is then erased from the appliance. Transfer Appliance is highly performant because it uses all solid-state drives, minimal software, and multiple network connectivity options.

BigQuery Data Transfer Service: With this option your analytics team can lay the foundation for a BigQuery data warehouse without writing a single line of code. It automates data movement into BigQuery on a scheduled, managed basis.
It supports several third-party sources along with transfers from Google SaaS apps, external cloud storage providers, and data warehouses such as Teradata and Amazon Redshift. Once that data is in, you can use it right inside BigQuery for analytics, machine learning, or just warehousing.

Conclusion

Whatever your use case for data transfer may be, getting it done fast, reliably, securely, and consistently is important. And no matter how much data you have to move, where it’s located, or how much bandwidth you have, there is an option that can work for you. For a more in-depth look, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev.
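For readers who want to script the first option above, here is a minimal sketch of a small upload using the Cloud Storage Python client library (the programmatic counterpart to a simple gsutil cp). The bucket name, object path, and local file are placeholders, and it assumes the google-cloud-storage package is installed and default credentials are configured.

```python
# Minimal sketch: upload a local file to a Cloud Storage bucket.
# Assumes the google-cloud-storage package is installed and that
# Application Default Credentials are available in the environment.
# Bucket and object names below are placeholders.
from google.cloud import storage

def upload_file(bucket_name: str, source_path: str, destination_name: str) -> None:
    client = storage.Client()                 # picks up default credentials and project
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination_name)      # object path inside the bucket
    blob.upload_from_filename(source_path)    # streams the local file to Cloud Storage
    print(f"Uploaded {source_path} to gs://{bucket_name}/{destination_name}")

if __name__ == "__main__":
    upload_file("my-example-bucket", "backup.tar.gz", "backups/backup.tar.gz")
```

For anything beyond a few TB, the managed options described above (Storage Transfer Service or Transfer Appliance) are the better fit, since they handle retries, logging, and bandwidth at scale for you.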
Source: Google Cloud Platform

How capital markets can prepare for the future with AI

Editor’s note: This post originally appeared on Forbes BrandVoice.

In capital markets, the stakes have been raised for participants to establish value, win loyalty, and expand their share of wallet. An organization’s data analytics capabilities, combined with artificial intelligence and machine learning, can open new opportunities in these areas. But many organizations are still using data strategies from the past, which limits their ability to harness data to its full potential and make the right business decisions. Without the ability to accurately predict business outcomes with the help of AI, market makers are left to rely on hunches and educated decision-making when predicting the unknown.

Firms are increasingly recognizing the benefits of technology, and partnering with modern tech providers is key to realizing those benefits. But challenges still exist for firms looking to deploy ML at scale. Below, we’ll look at some of those challenges, along with tools and best practices that can help capital markets firms adopt and benefit from AI and ML strategies.

Challenges in the data-to-ML journey

At a high level, the challenges faced in capital markets when performing AI are similar to those in other industries. The first set of challenges comes with the data itself. Unstructured data accounts for 90% of enterprise data, and many enterprises face the limitations of on-premises and legacy applications that don’t work well with newer cloud-based tools. Also, a high number of data silos spread across capital markets are common due to growth through acquisitions, a time-consuming distraction that limits efficiency and decision-making. Data science is not hamstrung by the velocity of messages, nor the volume, but by the huge variety of disparate data sources.

Other challenges include the views and varying levels of resistance regarding the value of data by various stakeholders within the enterprise; the restrictions of regulatory environments; and the limited cloud skills of an enterprise’s IT teams. ML operations can also be challenging as firms enter this emerging technology area.

Adopting and benefiting from AI and ML strategies: Tools and best practices

1. Before you perfect AI, get good at analytics

Effective AI and ML depend upon a strong and flexible data analytics platform, which may first need some rearchitecting of its infrastructure. Without a strong core data infrastructure, it’s hard to perform data science in production. For enterprises that have adopted traditional data analytics platforms living on local servers, challenges abound, and the blue dollar costs (those charged back within the company) go far beyond software licensing. These enterprises have to expend costs and resources on monitoring, performance tuning, upgrading, resource provisioning, and scalability. Business-critical data sources may not be easily accessible by data scientists, blocking business-critical decision-making. All of these obstacles leave less time and room for gleaning analysis and insights from the data.

With a serverless, cloud-based data analytics model, the vast majority of infrastructure maintenance and patching is handled by the cloud provider. This enables your data team to devote more time and resources to analysis and insights. Highly performant and integrated cloud technologies can help enterprises overcome data silos, establish a single code base, and contribute to a more collaborative workplace culture.
They can also be designed to provide more real-time insights, an invaluable building block of ML and AI. In short, effective core data infrastructure is a competitive advantage over other organizations that remain stuck in silos and servers.

2. Get started by prioritizing a business goal

In the past several years alone, a number of common use cases for AI have arisen in the capital markets sector. Here are some specific examples, and how AI can help:

- Dynamically learn how best to place orders across venues with algorithmic execution.
- Recognize potential triggers for unscheduled events with predictive data analytics to forecast events.
- Generate multi-dimensional risk and exposure data analytics with real-time risk analysis.
- Use ML to help gain insight into the selection process via algorithms for asset selection.
- Determine client needs/opportunities using social media sentiment analysis.
- Build systems that can respond to client inquiries via speech-to-text natural language processing.
- Extract key data from unstructured or semistructured documents with natural language document analysis services.
- Generate performance and financial data commentary reporting with natural language generation for document writing.
- Identify complex trading patterns in large datasets with market abuse and financial crime surveillance.

Though it’s tempting to focus exclusively on the benefits that tech can bring to data analytics, the immediate opportunity for how enterprises can fully benefit from AI rests in how humans and AI can work together. ML-based data analytics is more powerful when paired with human judgment and intuition. Recent advancements in tech have made computers faster, data storage cheaper, and access to algorithms more democratized. But human experience and judgment can contribute to and expand upon accurate, insightful data analysis, whether that be in medicine or in financial markets. Model explainability and fairness are concrete examples of where human experience is critical to successful AI (more on that below). When designing an AI system for use cases like the ones listed above, don’t divorce it from the benefits of human wisdom.

3. Structure your team for better data decisions

Finding, retrieving, and preprocessing data can be the most time-consuming part of building ML models; over 80% of model-building effort goes here. This challenge is not unique to financial services, but addressing it is a necessary prerequisite for ML, and affords a competitive advantage. Structuring your organization and internal teams to tackle this challenge will increase your odds of success, but will require some planning and careful thought.

Simply put, the purpose of a data science team is to facilitate better decision-making using data. Keep this in mind when deciding how to best structure your data science and AI/ML teams, as well as who they’ll be reporting to. It’s also important to consider where your organization currently sits in its data and AI journeys. Consider culture, size, and the ways the company has grown. Is your enterprise centralized or decentralized? Is it federated? Do you employ consultants?

When defining team roles, consider how your flow of data is structured, and where those roles would be of most efficient use. Also, don’t limit yourself: different roles don’t necessarily require different employees. People can perform different roles, as long as the roles are clearly defined.
4. Understand the concepts of explainability and fairness

There are two important considerations to keep in mind when structuring your organization for data analysis and AI. The first is explainability. We want AI systems to produce results as expected, with transparent explanations and reasons for the decisions they make. This is known as explainability; it is a high priority here at Google, as well as a growing area of concern for enterprises when it comes to designing their AI systems. Explainability increases trust in the decisions of AI systems, and a number of best practices have evolved to ensure that trust. These include closely auditing your work and data science processes; monitoring what’s called “model drift” (also referred to as “concept drift”); including accuracy metrics; and ensuring reproducibility of features.

Fairness is another important topic in AI. An algorithm is said to show fairness if its results are independent of certain variables, especially those that may be considered sensitive. These include individual traits that shouldn’t correlate with the outcome, like ethnicity, gender, sexual orientation, or disability. An accurate model may learn or even amplify problematic pre-existing biases in the data based on those traits. Identifying appropriate fairness criteria for a system requires accounting for UX, cultural, social, historical, political, legal, and ethical considerations, several of which may involve tradeoffs. Best practices for fairness include:

- Designing your model using concrete goals.
- Monitoring goals through time for your system to work fairly across anticipated use cases, for example in a number of different languages or in a range of different age groups.
- Using representative datasets to train and test your model.
- Using a diverse set of testers.
- Thinking about the model’s performance across different subgroups (a small sketch of this check follows below).

Building your roadmap for the future with AI/ML

Capital markets’ rich history of using cutting-edge technology now includes AI to open new opportunities in the sector. Foresight and planning will ensure the best results from ML and AI; they shouldn’t be an afterthought for your organization. That means building a strong core infrastructure for data analysis first, planning the structure of internal teams that will use data and AI, and using flexible, cloud-based tools to optimize results.

When introducing new AI/ML strategies, IT leaders must ensure that they integrate and fit with existing modernization efforts, as opposed to being a bolt-on afterthought. This will lead to a true integration of AI/ML and business.
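To illustrate the last fairness practice above, here is a minimal sketch of comparing a model’s accuracy across subgroups. It assumes pandas and scikit-learn are available; the column names, example data, and the accuracy-gap threshold are illustrative placeholders rather than anything prescribed in the post.

```python
# Minimal sketch: compare model accuracy across subgroups of a sensitive attribute.
# Assumes pandas and scikit-learn are installed; column names, example data,
# and the 0.05 gap threshold are illustrative placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return the model's prediction accuracy for each subgroup."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["label"], g["prediction"])
    )

# Example: a small evaluation frame with true labels, model predictions,
# and a sensitive attribute used only for the fairness check.
eval_df = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
})

per_group = accuracy_by_subgroup(eval_df, "group")
print(per_group)

# Flag large accuracy gaps between subgroups for human review.
if per_group.max() - per_group.min() > 0.05:
    print("Warning: accuracy differs noticeably across subgroups; review the model and data.")
```

Checks like this are exactly where human judgment complements the metrics: deciding which subgroups matter and what size of gap is acceptable is a product and policy decision, not a purely technical one.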
Source: Google Cloud Platform

Google Cloud announces new region to support growing customer base in Israel

Google has long looked to Israel for globally impactful technologies including popular Search features, Waze, Live Caption, Duplex, and flood forecasting. At our Decode with Google 15RAEL event last week, we celebrated 15 years of Google innovation in Israel and our longstanding support of the country’s vibrant startup ecosystem. Over the years, we’ve expanded our enterprise investments in the country, too. In addition to our more than a decade of investment in the space, Google has acquired Israel-based companies like Alooma, Elastifile, and Velostrata, and Uri Frank joined Google Cloud last month to lead our server chip design team from our offices in Tel Aviv and Haifa. As we continue to meet growing demand for cloud services in Israel, we’re excited to announce that a new Google Cloud region is coming to Israel to make it easier for customers to serve their own users faster, more reliably, and more securely.

Our global network of Google Cloud regions is the foundation of the cloud infrastructure we’re building to support our customers. With 25 regions and 76 zones around the world, we deliver high-performance, low-latency services and products for Google Cloud’s enterprise and public sector customers. With each new Google Cloud region, customers get access to secure infrastructure, smarter analytics tools, an open platform, and the cleanest cloud in the industry.

Having a region in Israel will help accelerate innovation for customers of all sizes, including PayBox, a digital wallet application owned by Discount Bank, one of Israel’s largest banks. “When we acquired PayBox, our goal was to improve the security and the user experience for its products, but we also wanted to keep the startup’s agility and innovation. Google Cloud has enabled us to do just that,” said Sarit Beck-Barkai, Managing Director of PayBox at Discount Bank.

“We are very excited that leading vendors like Google are investing and launching a local cloud region in Israel. This will make a significant change in the technology landscape of the public-sector, enterprise, and SMB markets in Israel. Matrix is proud to be a major part of the transition to the cloud,” said Moti Gutman, CEO at Matrix, a technology services company and Google Cloud partner.

“In the last year, Panorays more than tripled its customer base and scaled its infrastructure, practically at the click of a button. Google Cloud made it easy for us to scale without worrying about DevOps, which meant that our engineers could focus on developing new and better features for our customers. The new region launching in Israel will allow us to serve our local customer base even better, as we’ll be able to experience higher availability and deploy resources in specific regions, thus reducing latency,” said Demi Ben-Ari, Co-founder and CTO of Panorays, a third-party security platform and Google Cloud customer.

“This new cloud region will provide even better access and growth potential for our mutual customers with tech hubs in the region. We are serving hyper-growth companies who need Google Cloud’s services and will benefit greatly from this regional presence,” said Yoav Toussia-Cohen, CEO of DoiT International.

When it launches, the Israel region will deliver a comprehensive portfolio of Google Cloud products to private and public sector organizations locally. We look forward to welcoming you to the Israel region, and we’re excited to support your growing business on our platform.
Learn more about our global cloud infrastructure, including new and upcoming regions, here.
Source: Google Cloud Platform

AWS Secrets Manager provides a provider for the Kubernetes Secrets Store CSI Driver

AWS Secrets Manager today introduced the AWS Secrets and Configuration Provider (ASCP), a plugin for the industry-standard Kubernetes Secrets Store CSI Driver. With ASCP, applications running in Kubernetes pods can easily retrieve secrets from AWS Secrets Manager without requiring custom code. Once installed, ASCP ensures that your applications always receive the latest version of your rotated secrets, so you automatically benefit from Secrets Manager’s rotation and lifecycle management capabilities without additional coding effort. ASCP also enables easy and secure access to your configurations in AWS SSM Parameter Store. The AWS Secrets and Configuration Provider is available for download today. Read the blog to learn more.
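As a hedged illustration of the “no custom code” point: once ASCP has synced a secret into the pod’s filesystem via the CSI driver, application code only has to read a local file rather than call the Secrets Manager API. The mount path and secret name below are placeholders; they depend on how the CSI volume and SecretProviderClass are configured in your cluster.

```python
# Minimal sketch: an application reading a secret that ASCP has mounted into the pod.
# The mount path and file name are placeholders; they depend on how the CSI volume
# and SecretProviderClass are configured in your cluster.
from pathlib import Path

SECRETS_MOUNT = Path("/mnt/secrets-store")  # assumed volume mount path

def read_secret(name: str) -> str:
    """Read a secret value from the file the CSI driver keeps up to date."""
    return (SECRETS_MOUNT / name).read_text().strip()

if __name__ == "__main__":
    db_password = read_secret("my-db-password")  # placeholder secret name
    print("Loaded secret of length", len(db_password))  # avoid printing the value itself
```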
Source: aws.amazon.com

Third-party software built for AWS Control Tower is now available in the AWS Control Tower console, powered by AWS Marketplace

We are excited to announce that AWS Control Tower customers can discover a curated collection of third-party software built for AWS Control Tower directly within the AWS Control Tower console. You can choose from professional services and software, including solutions for identity management, security in multi-account environments, centralized networking, operational intelligence, security information and event management (SIEM), and cost management, as well as offerings for custom guardrails, Account Factory, regulatory compliance, and industry-specific solutions (for example, Internet of Things, data lakes, and so on).
Source: aws.amazon.com

Amazon Aurora PostgreSQL patches 1.9.2 / 2.7.2 / 3.4.2 / 4.0.2 are now available

Patches 1.9.2, 2.7.2, 3.4.2, and 4.0.2 are now available to customers using the Amazon Aurora PostgreSQL-Compatible Edition. Detailed release notes can be found in our version documentation. You can apply the new patch version in the AWS Management Console, via the AWS CLI, or via the RDS API. Detailed instructions are available in our technical documentation.
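As a hedged sketch of the API route mentioned above (shown here with Python and boto3 rather than the raw RDS API), an engine upgrade might look like the following. The cluster identifier and engine version string are placeholders; the exact version value for your patch level comes from the release documentation.

```python
# Minimal sketch: apply a new Aurora PostgreSQL engine version to a cluster via the RDS API.
# Assumes boto3 is installed and AWS credentials are configured.
# The cluster identifier and engine version below are placeholders; use the version
# string for your desired patch level from the Aurora release documentation.
import boto3

rds = boto3.client("rds")

response = rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-postgres-cluster",  # placeholder cluster name
    EngineVersion="10.14",                             # placeholder version string
    ApplyImmediately=True,                             # apply now instead of waiting for the maintenance window
)

print(response["DBCluster"]["Status"])
```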
Source: aws.amazon.com

Amazon Translate increases the size limit for parallel data from 1 GB to 5 GB

Amazon Translate is a fully managed neural machine translation service that delivers high-quality, affordable, and customizable translations in real time. Today we are announcing that Amazon Translate has increased the size limit for parallel data (PD) from 1 GB to 5 GB. PD is used in Active Custom Translation (ACT), a feature that gives you more control over your machine translation output. You create PD simply by providing your translation examples in TMX, TSV, or CSV format. Amazon Translate then uses your PD together with your batch translation job to customize the translation output at runtime. With the increased PD size, you can use more data to customize your ACT output.
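To make the ACT workflow concrete, here is a minimal boto3 sketch that registers parallel data and references it from a batch translation job. All names, S3 URIs, the IAM role ARN, and the language codes are placeholders, and it assumes boto3 is installed with credentials configured.

```python
# Minimal sketch: register parallel data and use it in a batch translation job (Active Custom Translation).
# Assumes boto3 is installed and AWS credentials are configured; all names, S3 URIs,
# the IAM role ARN, and language codes are placeholders.
import boto3

translate = boto3.client("translate")

# Register the parallel data (translation examples) stored in S3.
translate.create_parallel_data(
    Name="my-parallel-data",                     # placeholder name
    ParallelDataConfig={
        "S3Uri": "s3://my-bucket/examples.csv",  # placeholder location of the CSV/TSV/TMX file
        "Format": "CSV",
    },
)

# Start a batch translation job that customizes its output with the parallel data.
translate.start_text_translation_job(
    JobName="my-act-job",                                                  # placeholder job name
    InputDataConfig={"S3Uri": "s3://my-bucket/input/", "ContentType": "text/plain"},
    OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateJobRole",   # placeholder role
    SourceLanguageCode="en",
    TargetLanguageCodes=["de"],
    ParallelDataNames=["my-parallel-data"],
)
```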
Source: aws.amazon.com

Amazon SNS expands the set of message filtering operators

Message filtering in Amazon Simple Notification Service (Amazon SNS) offers a way to simplify your pub/sub messaging architecture by offloading message filtering logic from your subscriber systems and message routing logic from your publisher systems. Amazon SNS message filtering provides a set of matching operators that let you filter messages based on their attribute keys or attribute values.
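For context, here is a minimal sketch of attaching a filter policy to an SNS subscription with boto3, so that only matching messages are delivered to that subscriber. The subscription ARN and the attribute names and values are placeholders, and the operators shown are just examples of common matching operators rather than a list of the newly added ones.

```python
# Minimal sketch: attach a filter policy to an SNS subscription so only matching
# messages are delivered. Assumes boto3 is installed and credentials are configured;
# the subscription ARN and the attribute names/values are placeholders, and the
# operators shown are examples of common matching operators.
import json
import boto3

sns = boto3.client("sns")

filter_policy = {
    "event_type": ["order_placed"],           # exact string match
    "region": [{"anything-but": ["test"]}],   # everything except the given value
    "amount": [{"numeric": [">=", 100]}],     # numeric comparison
}

sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:orders:placeholder-subscription-id",
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps(filter_policy),
)
```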
Source: aws.amazon.com