Migrating Teradata and other data warehouses to BigQuery

Traditional, on-premises data warehouses collect and store what is often an organization's most valuable data, which helps drive growth and innovation. Organizations depend on this data to make informed and timely decisions that can shape the future of their business. But traditional data warehouses can be expensive, hard to maintain, and unable to keep up with business needs, and as data rapidly increases in volume, velocity, and variety, keeping up only gets harder. That's why businesses are turning to BigQuery, our highly scalable and serverless enterprise data warehouse, to perform fast, real-time analysis of their data.

When migrating your data warehouse, you're moving what is essentially the center of gravity of your entire data analytics and business intelligence environment. Many business applications depend on your data warehouse for reports, data feeds, and dashboards, and the users of those applications expect minimal to no disruption during the migration. With all this in mind, we've created a new data warehouse migration guide to walk you through data warehouse migrations with as little complexity and risk as possible. The guide offers prescriptive, end-to-end guidance for securely migrating legacy data warehouses to BigQuery. Although some sections are specific to migrations from Teradata, the vast majority of the guide applies to any enterprise data warehouse migration.

Building the migration framework

A migration can be a complex and lengthy endeavor, but planning makes it simpler. As part of the migration guide, you'll find our suggested structured framework for data warehouse migrations, based on Agile principles. The framework facilitates the application of project management best practices, helping to deliver incremental and tangible business value while managing risk and minimizing disruptions. It adheres to the phases shown in the following diagram, with more details below:

1. Prepare and discover: In this initial phase, the focus is on preparation and discovery. It's about giving yourself and your stakeholders an early opportunity to discover the use cases you're planning for BigQuery, raise initial concerns, and, importantly, conduct an initial analysis of the expected benefits.

2. Assess and plan: This phase takes the input from the prepare-and-discover phase, assesses it, and uses it to plan the migration. It can be broken down into the following tasks:

- Assess the current state
- Catalog and prioritize use cases
- Define measures of success
- Create a definition of "done"
- Design and propose a proof of concept (POC), a short-term state, and an ideal end state
- Create time and cost estimates
- Identify and engage a migration partner (if applicable)

You can find more details on these tasks in the guide.

3. Execute: After you've gathered information about your legacy data warehouse platform and created a prioritized backlog of use cases, you can group the use cases into workloads and proceed with the migration in iterations. An iteration can consist of a single use case, a few separate use cases, or a number of use cases pertaining to a single workload. Which option you choose depends on the interconnectivity of the use cases, any shared dependencies, and the resources you have available to undertake the work.

For example, a use case might have the following relationships and dependencies:

- Purchase reporting can stand alone and is useful for understanding monies spent and requesting discounts.
- Sales reporting can stand alone and is useful for planning marketing campaigns.
- Profit and loss reporting, however, depends on both purchases and sales, and is useful for determining the company's value.

With each use case, you'll want to decide whether it will be offloaded or fully migrated. Offloading focuses on time to delivery, where speed is the top priority; fully migrating is about ensuring that all upstream dependencies are also migrated. The following diagram shows the execution process and flow in greater detail.

During the execute phase, the work to fully migrate or offload the use case or workload should focus on one or more of the following steps. Our guide includes a document dedicated to each of these steps:

Setup and data governance: Setup is the foundational work required to let the use cases run on Google Cloud Platform (GCP). It can include configuration of your GCP projects, network, virtual private cloud (VPC), and data governance. Data governance is a principled approach to managing data during its lifecycle, from acquisition to use to disposal. Take a look at the data governance document to help define your governance program in the cloud, which should include an outline of the policies, procedures, responsibilities, and controls surrounding your data activities.

Migrate schema and data: The schema and data transfer document provides extensive information on how you can move your data to BigQuery and offers recommendations for updating your schema to take full advantage of BigQuery's features. The associated quickstart guides you step by step through an actual schema and data migration from Teradata to BigQuery.
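To make that step concrete, here is a minimal sketch of what the BigQuery side of a schema and data migration can look like, using the google-cloud-bigquery Python client. The project, dataset, table, and bucket names are hypothetical, and mapping a Teradata partitioned primary index onto BigQuery partitioning and clustering is just one common pattern, not the only option:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Define the target table. Instead of a Teradata partitioned primary
# index, use BigQuery-native date partitioning plus clustering.
table = bigquery.Table(
    "my-project.sales.orders",
    schema=[
        bigquery.SchemaField("order_id", "INT64"),
        bigquery.SchemaField("customer_id", "INT64"),
        bigquery.SchemaField("order_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(field="order_date")
table.clustering_fields = ["customer_id"]
table = client.create_table(table)

# Load data previously exported from Teradata into Cloud Storage as Avro.
load_job = client.load_table_from_uri(
    "gs://my-export-bucket/orders/*.avro",
    table,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.AVRO,
    ),
)
load_job.result()  # block until the load job completes
```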
Translate queries: The query translation document addresses some of the challenges you might encounter while migrating SQL queries from Teradata to BigQuery, and explains when SQL translation is required. The associated quickstart takes you through an exercise to translate queries from Teradata SQL to the SQL:2011-compliant standard SQL supported by BigQuery, starting with manual translation and evolving into a more automated approach. The SQL translation reference details the similarities and differences in SQL syntax between Teradata and BigQuery.
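To give a flavor of what such a translation involves, here is a small, invented example: a Teradata query that uses SEL, ADD_MONTHS, and QUALIFY, rewritten into standard SQL and run through the google-cloud-bigquery Python client. The table and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Teradata original (shown for comparison; it will not run on BigQuery):
#
#   SEL customer_id, ADD_MONTHS(order_date, 3) AS due_date
#   FROM sales.orders
#   QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id
#                              ORDER BY order_date DESC) = 1;
#
# Standard SQL translation: SEL becomes SELECT, ADD_MONTHS becomes
# DATE_ADD, and QUALIFY is rewritten as a filter over a subquery.
translated_sql = """
SELECT customer_id,
       DATE_ADD(order_date, INTERVAL 3 MONTH) AS due_date
FROM (
  SELECT customer_id, order_date,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY order_date DESC) AS rn
  FROM `sales.orders`
)
WHERE rn = 1
"""

for row in client.query(translated_sql).result():
    print(row.customer_id, row.due_date)
```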
Migrate business applications: Depending on your organization, your business applications might include dashboards, reports, and operational pipelines. The reporting and analysis document explains how you can take advantage of the full suite of business intelligence tools and applications integrated with BigQuery, including the reporting and analysis applications that you may be using with your legacy data warehouse.

Migrate data pipelines: The data pipelines document helps you understand what a data pipeline is, what procedures and patterns it can employ, and which migration options and technologies are available in relation to the larger data warehouse migration.

Optimize performance: The performance optimization document helps you understand the factors that can affect performance in BigQuery and helps you apply essential techniques to improve it. (A brief illustration of one such technique appears at the end of this post.)

Verify and validate: At the end of each iteration, validate that the use case was successfully migrated according to your definition of done. Verify that data governance concerns have been met, that the schema and data have been migrated, and that business applications are producing the expected results.

Understanding the migration architecture

After each iteration in the execution phase, you'll likely have some use cases offloaded to BigQuery, some fully migrated, and some still in your on-premises data warehouse. This iterative approach is enabled by an architecture in which your data warehouse and BigQuery are actively used in parallel, which lets you take the migration one step at a time, breaking down its complexity and reducing risk. The next diagram illustrates this architecture, showing Teradata working on-premises and BigQuery on GCP, where both can ingest from the source systems, integrate with your business applications, and provide access to the users who need it. Importantly, the diagram also shows data being synchronized from Teradata to BigQuery.

The data warehouse migration guide provides a wealth of prescriptive guidance so you can structure your migration project carefully and undertake each of its challenges in a systematic manner. Our professional services organization and our partners are ready to assist you further in your migration journey, no matter how complex it may be. And check out our migration offer for help creating a streamlined path to a modern data warehouse.
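As promised above, here is a brief, hypothetical sketch of one essential BigQuery performance technique: selecting only the columns you need and filtering on the partitioning column so BigQuery can prune partitions, with a dry run to estimate how much data a query would scan before you actually run it. The table and column names are invented:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Select only the needed columns and filter on the partitioning column
# (order_date) so BigQuery can prune partitions and scan less data.
sql = """
SELECT customer_id, SUM(amount) AS total
FROM `sales.orders`
WHERE order_date BETWEEN '2019-01-01' AND '2019-03-31'
GROUP BY customer_id
"""

# A dry run estimates the bytes scanned without executing the query.
dry_run = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
print(f"This query would scan {dry_run.total_bytes_processed / 1e9:.2f} GB")
```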
Source: Google Cloud Platform

Web application vulnerability scans for GKE and Compute Engine are generally available

As the number of platforms you build and run your applications on increases, so does the challenge of understanding which applications you have deployed and their security state. Without visibility, it can be difficult to know whether there are latent vulnerabilities in your applications, much less how to fix them.

Today, we're excited to announce the general availability of Cloud Security Scanner for Google Kubernetes Engine (GKE) and Compute Engine, joining Cloud Security Scanner for App Engine. Now, no matter where you run your applications on Google Cloud, you can quickly gain insight into your web app's vulnerabilities and take action before a bad actor can exploit them.

Web application vulnerabilities can be introduced during the development process: an app's security framework can be set up incorrectly, an app can be deployed incorrectly into a production environment, or systems can be left unpatched or out of date. Cloud Security Scanner can surface a wide range of web application vulnerabilities as findings; here are a few examples of its capabilities:

- Identify and notify you of common external vulnerabilities in your applications, such as Flash injection or mixed content
- Detect vulnerabilities such as cross-site scripting bugs due to JavaScript breakage
- Alert you to accessible Git and SVN repositories
- Surface mixed-content vulnerabilities that a man-in-the-middle attacker could exploit to gain full access to the website that loads the resource or to monitor users' actions
- Notify you if an application appears to be transmitting a password field in plain text, or if it has HTTP header issues, including misspellings, mismatched values in duplicated security headers, or invalid headers

Cloud Security Scanner surfaces these vulnerabilities as findings in Cloud Security Command Center (Cloud SCC), our cloud security posture management (CSPM) tool, so you can gain visibility into misconfigurations, vulnerabilities, and threats and quickly respond to them from a centralized dashboard. When you click on a finding, you see a description of the issue and an actionable recommendation on how to fix it and prevent it in the future.

Cloud Security Scanner is not on by default. To activate it, complete this quickstart and then go to Security Sources within Cloud SCC to ensure it's active. You can also create customized scans for your applications using the Cloud Security Scanner UI (or programmatically, as sketched at the end of this post). Once Cloud Security Scanner is on, it scans your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible. The scans run using the Chrome and Safari browsers, as well as the browsers embedded in Blackberry and Nokia phones. For more flexibility, you can also schedule scans.

For additional protection of your applications running on GKE instances, you can also use Container Registry vulnerability scanning to discover vulnerable container images before they are deployed into production.

It's easy to get started with Cloud Security Scanner and protect your applications. If you are new to GCP, start your free GCP trial and enable Cloud SCC, then Cloud Security Scanner. If you are an existing customer, simply enable Cloud Security Scanner from Security Sources in Cloud SCC and start using it for free. For more information on Cloud Security Scanner, read our documentation.
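For teams that prefer automation over the UI, the following is a minimal, hedged sketch of what creating and starting a custom scan could look like through the Web Security Scanner API's Python client (google-cloud-websecurityscanner). The project ID and starting URL are placeholders, and the exact client surface may vary between library versions:

```python
from google.cloud import websecurityscanner_v1

client = websecurityscanner_v1.WebSecurityScannerClient()

# Hypothetical project and target URL; the URL must belong to an
# App Engine, GKE, or Compute Engine resource in the project.
scan_config = websecurityscanner_v1.ScanConfig(
    display_name="storefront-scan",
    starting_urls=["https://shop.example.com/"],
    max_qps=10,  # throttle requests so the scan does not overload the app
)

created = client.create_scan_config(
    request={"parent": "projects/my-project", "scan_config": scan_config}
)

# Start a scan run immediately instead of waiting for a schedule.
run = client.start_scan_run(request={"name": created.name})
print(run.name, run.execution_state)
```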
Source: Google Cloud Platform

Quic: Curl supports HTTP/3

The extremely widespread download and transfer tool Curl now supports the HTTP/3 protocol for the first time; HTTP/3 is carried over the likewise new Quic standard. For now, though, the technology is still highly experimental. (HTTP, Internet)
Source: Golem

Healthcare services company simplifies architecture, lowers costs with IBM Cloud

Patients, insurance organizations, and regulatory bodies have high expectations of healthcare organizations. Processes are complex, exposed to numerous risks, and must comply with a growing number of healthcare laws.
Reliable recording of information, assurance of processes, and clear communication about both are essential, but often fall short.
The Patient Safety Company (TPSC) breaks down barriers with its quality and risk management platform. The software platform hosts custom cloud solutions for data gathering, workflow management and process automation. Each solution identifies and analyzes risks, discovers trends and facilitates continuous quality improvement.
Seeking a way to support long-term business growth
Used by more than 500 healthcare clients and available in eight languages, our cloud-based software monitors, collects, and stores data about quality and safety, and makes the information available through customized dashboards that suit users' specific needs in real time.
To adhere to location-specific healthcare legislation, such as mandates regarding data being stored in-country, we were working with several local cloud providers to support our clients throughout Europe, North America and Australia. However, maintaining the different architectures in different clouds was costly. To align with our global growth plan, we began looking for a partner who could support regional healthcare regulations with a global standard for hosting centers.
IBM was the only vendor we considered that could provide data centers all over the world. Of course, our data centers were certified and compliant previously, but we now have increased visibility. With IBM, there’s now one global standard for our hosting centers. Plus, our customers feel more confident in The Patient Safety Company thanks to the security and governance assurances that accompany the IBM Cloud.
Another very important reason we chose IBM is that we are moving from a traditional LAMP-stack environment toward a modern, containerized architecture. We'll be able to take advantage of the IBM Cloud Kubernetes Service to further improve stability, speed up deployment, and add functionality as required.
Expanding into new markets
We migrated our software platform to the IBM Cloud one cloud at a time. In some cases, we migrated the established architecture; in other cases, we started with a new architecture. Regardless of the starting point, the end result we're working toward is that all instances of The Patient Safety Company cloud will run on the IBM Cloud with the same architecture. We expect the move to decrease our hosting costs by 40 percent and improve our uptime from 95 percent to 99 percent or more.
Using the IBM Cloud Kubernetes Service gives healthcare organizations the flexibility they need to respond quickly to change, making adjustments as necessary to serve their evolving information needs.
The Patient Safety Company is now poised for growth in our target regions, where both economic and healthcare developments make quality and risk management a necessity.
Additionally, we are aiming for other markets besides healthcare, such as oil and gas. Our solution for these other markets, built on the same foundation, is called RiskSync.
Read the case study for more details.
Source: Thoughts on Cloud