Building a healthy and secure software supply chain

Securing the software supply chain is now an everyday concern for developers. As attackers increasingly target open-source components as a way to compromise the software supply chain, developers hold the keys to making their projects as secure as they can be. That’s why Docker continues to invest heavily in our developer tools like Docker Desktop and secure supply chain offerings such as Docker Official Images and Docker Verified Publisher content.

This Tuesday, August 17, Docker CTO Justin Cormack and Head of Developer Relations Peter McKee will cover what it takes to securely develop from code to cloud. The webinar will provide a comprehensive overview of software security, including what a software supply chain attack is, key principles for identifying the weakest link, and the stages of effectively securing the software supply chain.

As Justin told Dark Reading last month:  

“Every time you use software that you didn’t write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you thought it is, and that it is trustworthy not hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious.”

This is a webinar you don’t want to miss. Register today.

The post Building a healthy and secure software supply chain appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Partner Advantage two-year readout!

Last month marked the two-year anniversary of Google Cloud Partner Advantage. I want to thank our fast-growing ecosystem of global partners for their hard work, imagination, and energized commitment, and to reflect on how much we’ve accomplished together. In 2019, we kicked off by building a multi-year action plan together with partners, added some innovative Googleyness, and have since remained laser focused on our core principles — ensuring simplicity, fostering collaboration, focusing on the customer, and sustaining a growth mindset.

We also continue to measure partner success in three fundamental ways that set us apart in a highly competitive market:

- Ensuring that Google Cloud and our partners are each aligned to the same business goals and strategies
- Providing partners with the opportunities to earn and showcase their skills to the market
- Empowering partners to demonstrate differentiated value through customer success stories, certifications, Net Promoter Score (newly added this year!), and more

I am very pleased to share that to date the results have been fantastic, thanks to an ecosystem based on trust and collaboration:

- The average size of partner-involved deals more than doubled from 2019 to 2020.
- We onboarded almost 3x more indirect resellers in the first three quarters of 2020 compared to the same period in 2019.
- Partner-created pipeline in the mid-market segment grew more than 200% year over year from 2019 to 2020.
- Partners were involved in 3x more customer deals in 2020 than in 2018.
- The number of enterprise customer accounts with a partner attached increased by 50% from 2019 to 2020.
- Our partner ecosystem has grown by more than 400% in the last two years.

We’ve rapidly expanded key programmatic elements of Partner Advantage, such as incentives and Differentiation; worked with analysts and partners to design the most compelling offerings; integrated closely with key teams across Google Cloud; advanced our technical infrastructure; and deployed new features and growth drivers, from our Partner Advisors, to more formal certification and training options, to portal features that bring greater control and transparency to partners. We’ve also focused on ensuring that partners are part of every deal. Resources such as the internal and external partner directories allow Google Cloud sales teams to match partners to deals, help customers easily connect with the best partners for their needs, and allow partners to showcase their expertise and knowledge depth.
We highlight partner accomplishments by showcasing customer success stories, expertise by industry or solution area, and specialization in a major practice area, all to make it easier for our customers to find the right partner at the right time with the right skills for innovation and confidence. Check out the items below to learn more about what Partner Advantage has fueled and accomplished with our valued partners in the past two years.

Advancing the Partner Differentiation Journey

The Google Cloud Partner Differentiation Journey has always been the heart and soul of Partner Advantage. By providing partners with the tools, training, and insights they need to differentiate their business in a rapidly shifting global marketplace, we help partners offer more value to customers. In the two years since we launched Partner Advantage, partners have looked to our Differentiation Journey to achieve their goals and win:

- The number of Customer Success Stories published by partners has increased 250% since 2019. More than 3,800 are now online and accessible by customers.
- The number of partners with Specializations grew 70% through 2020. Earning Specializations helps unlock additional benefits and incentives.
- Our managed partners more than doubled their Expertise designations in 2020 over the prior year.

We’ve also partnered with Forrester to take a deeper look at the business opportunity Google Cloud offers to partners1. I’d encourage you to read the report if you haven’t already, as it contains some excellent data and insights you won’t find anywhere else.

Reinventing Partner Incentives

Incentives are one of the most important elements of Partner Advantage and a strong motivator for partner loyalty and investment. Since launch in 2019, our incentives portfolio has expanded significantly: it offers partners more opportunities to earn and grow their business, is easier for partners to leverage, and is more competitive. It’s all about winning business with our partners — together.
In fact, IDC projects2 that when you combine our incentives with other components of Partner Advantage, the future looks very profitable:

- The overall Google Cloud partner business opportunity is expected to increase by a factor of at least 3.6 by 2025.
- On a global basis, IDC expects partners to generate $5.32 USD in revenue for every $1 of Google Cloud revenue. Better still, they expect partner revenue to jump to $7.54 for every dollar Google takes in by 2025.

For our part, the Google Cloud Partner Advantage incentives are an attractive, competitive, and comprehensive portfolio of rewards across Sell, Service, and Build partners. Our partner investments include more than a 10x increase in partner incentives and funds since launch:

Google Workspace

- We focused on rewarding partners for new customer acquisition and on protecting partner investment, which has led to more than a 50% increase in win rate for partner-registered deals and a significant increase in partner-sourced pipeline.
- In 2021, we expanded the incentive portfolio to boost partner profitability for expanding into new markets and driving adoption that leads to customer success.
- We launched incentives for Distributors to expand into new geographies and new segments.

Google Cloud

- Beginning with the MSP Initiative, we’ve expanded the incentives portfolio in 2021 to offer attractive partner discounts, additional incentives for new customer acquisition, and rewards for partners who help their customers grow consumption.
- We have seen 40% more partners utilizing the funding for pre-sales engagements and deployments and for sales acceleration.
- And, in the summer of 2021, to expand our routes to market, we launched Distribution incentives for GCP.

We’re thrilled that our evolving resources and initiatives are strengthening our collaborative relationships with partners and helping to better serve our customers. That relationship is the cornerstone of our strategy as we drive innovation and grow our businesses together.
To learn more about Google Cloud’s partner program, click here.

1. The Google Cloud Business Opportunity For Partners, a commissioned Total Economic Impact™ study conducted by Forrester Consulting, January 2020
2. IDC eBook, sponsored by Google Cloud, Partner Opportunity in a Cloud World, doc #US46702120BROI, August 2020
Source: Google Cloud Platform

A technical solution producing highly-personalized investment recommendations using ML

Developed by SoftServe with the use of Google Cloud, the Investment Products Recommendation Engine (IPRE) is a solution designed to tackle common retail banking customer investment challenges. In particular, it makes investment recommendations based on BigQuery ML model capabilities. Big data pipelines are used to process investment data, and the environment setup is automated with Terraform. In this blog post we will take a closer look at the technical implementation of the solution.

Solution architecture

Let’s dive deeper into the technical part of the solution and consider its architecture. Components of the pattern architecture are split into three main areas, shown in Figure 1.

Figure 1. Investment product recommendation engine solution architecture

The Web-UI area is indicated by the green color and corresponds to the web application (a React.js application deployed in Cloud Run). The application demonstrates features of investment risk preferences and portfolio investment recommendations. The web application has its own database to respond to users’ requests.

The Data processing area is indicated by the beige color and corresponds to the data processing that performs data transformation and aggregation and puts the data into a BigQuery data lake. That part includes fetching data from external sources (Yahoo Finance is used as sample data), storing raw data in Cloud Storage, transforming data with Cloud Dataflow, and putting data into BigQuery. The data pipeline is orchestrated by Cloud Composer.

The Recommendation Engine area is indicated by the pink color and corresponds to the Recommendation Engine (RE). The RE provides portfolio optimization data for incoming requests from the web application. AutoML Tables models are used to make two different predictions:

- Investor risk preferences
- Investment recommendations

The solution is deployed on Google Cloud.
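To ground the Recommendation Engine’s portfolio-optimization role, here is a minimal, library-free sketch of the mean-variance idea it builds on. This is a textbook two-asset special case (the global minimum-variance portfolio), not SoftServe’s implementation, and the covariance and return figures are made up:

```python
def min_variance_weights(cov):
    """Closed-form global minimum-variance weights for two assets.

    cov is the 2x2 covariance matrix [[var1, cov12], [cov12, var2]].
    The weights are proportional to the inverse covariance matrix applied
    to a vector of ones, normalized to sum to 1.
    """
    (var1, cov12), (_, var2) = cov
    w1 = var2 - cov12
    w2 = var1 - cov12
    total = w1 + w2
    return w1 / total, w2 / total


def portfolio_stats(weights, exp_returns, cov):
    """Expected return and variance of a two-asset portfolio."""
    w1, w2 = weights
    ret = w1 * exp_returns[0] + w2 * exp_returns[1]
    var = (w1 ** 2 * cov[0][0] + w2 ** 2 * cov[1][1]
           + 2 * w1 * w2 * cov[0][1])
    return ret, var


# Hypothetical annualized figures for a stock fund and a bond fund.
cov = [[0.04, 0.01], [0.01, 0.09]]
weights = min_variance_weights(cov)
print(weights)
print(portfolio_stats(weights, [0.08, 0.05], cov))
```

A production optimizer would trade expected return against a target risk level across many assets; the closed form above only covers the two-asset minimum-variance corner of that problem.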
Terraform is used to set up all required components and establish the right communication between them.

IPRE workflow

The following steps are executed to provide users with investment recommendations based on their risk preferences:

1. The Investor Risk Preference Cloud Function generates users’ synthetic data and their preferences.
2. Capital market data is fetched from Yahoo Finance by the Cap Market Cloud Function and stored as raw data in Cloud Storage.
3. When new raw data is available in the bucket, a Cloud Dataflow job orchestrated by Cloud Composer is triggered. Dataflow stores the processed data in BigQuery.
4. BigQuery AutoML training jobs, orchestrated by Cloud Composer, are triggered after initial setup (and then daily) and create the corresponding BigQuery ML models.
5. Based on the available data, BigQuery AutoML generates potential investor risk preference profiles and investment recommendations, and puts them into Cloud Storage.
6. The risk preference profile is determined for the user who signed in to the web application, and recommendations are displayed based on the user’s investment profile. A separate UI fulfillment backend service provides the recommended data to the user.
7. Each day, when new capital market data is available, investment portfolio recommendations are updated with the same flow.

Data pipelines

The IPRE service relies on multiple data sources, both internal and external. The solution implements scalable data pipelines with technologies such as BigQuery, Cloud Storage, and Dataflow. All external raw data streams are aggregated in dedicated Cloud Storage buckets, and Cloud Functions trigger minor pre-processing scripts.
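As a minimal sketch of such a trigger function (the entry-point name, job parameters, and destination table are hypothetical, not taken from SoftServe’s code), a Storage-triggered function might assemble the parameters for the downstream Dataflow job like this:

```python
def on_raw_data_upload(event, context=None):
    """Cloud Storage finalize-event handler (hypothetical entry point).

    In the deployed pipeline, the returned body would be submitted to the
    Dataflow templates launch API; here we only assemble the parameters so
    the sketch stays self-contained.
    """
    bucket = event["bucket"]
    name = event["name"]  # e.g. "quotes/2021-08-17.csv"
    # Dataflow job names must be unique, so derive one from the object path.
    job_name = "load-" + name.replace("/", "-").replace(".", "-")
    return {
        "jobName": job_name,
        "parameters": {
            "input": f"gs://{bucket}/{name}",
            # Hypothetical destination table in the BigQuery data lake.
            "output_table": "ipre.capital_markets_raw",
        },
    }


print(on_raw_data_upload({"bucket": "ipre-raw", "name": "quotes/2021-08-17.csv"}))
```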
Writing an object to the Cloud Storage bucket triggers a Dataflow job that adds the new data to BigQuery. This type of architecture makes the ETL pipeline resilient to corrupt data and scalable to multiple data sources. Cloud Functions provide a clean, cost-effective way to migrate massive datasets from the data lake to the DWH.

Capital markets data

Historical market data is a crucial element of the recommendation service. A dedicated data pipeline job collects quotes of the selected securities from Yahoo Finance. All selected assets vary in return and risk, which allows IPRE to construct a wide range of portfolios to meet diverse investors’ preferences. After minor preprocessing, daily historical quotes (q) are turned into periodic returns. Returns of observations with a unique timestamp are written to Cloud Storage; this reduces egress and ensures that BigQuery does not receive duplicate data. During the first run of the script, all observations starting from 2017 make it into BigQuery, and subsequent runs provide incremental observations of the “unseen” data. In the final stage of the ETL, the processed data is written to BigQuery. Aggregating data in BigQuery allows other services to retrieve it in a cost-effective way.

Investors’ risk preferences

The investor risk preferences (IRP) dataset is a synthetic dataset containing historical records of thousands of existing retail investors. This dataset is a crucial component for making personalized recommendations based on an individual’s investment preferences. Risk aversion is the target variable of interest; average monthly income, education, loans, and deposits are among the 15 independent variables. Investors’ attributes are generated using different continuous distribution functions: Gamma, Gumbel, Gaussian, R-distributed, and others. A script produces monthly snapshots of investors’ attributes, resulting in 48,000 data points.
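The attribute generation described above can be sketched with standard-library distributions alone. Everything here (attribute names, distribution parameters, and the toy link between wealth and risk aversion) is illustrative, not SoftServe’s actual generator:

```python
import random


def investor_snapshot(investor_id, rng):
    """One monthly snapshot of a synthetic investor's attributes.

    Continuous attributes are drawn from Gamma and Gaussian distributions,
    mirroring the mixed-distribution approach described above.
    """
    income = rng.gammavariate(2.0, 2500.0)          # skewed monthly income
    deposits = max(0.0, rng.gauss(10000.0, 4000.0))
    loans = rng.gammavariate(1.5, 8000.0)
    education = rng.choice(["secondary", "bachelor", "master"])
    # Toy target: higher income and deposits loosely imply lower risk aversion.
    risk_aversion = max(0.0, min(1.0, rng.gauss(0.5, 0.15)
                                 - income / 100000.0 - deposits / 200000.0))
    return {"investor_id": investor_id, "income": income, "deposits": deposits,
            "loans": loans, "education": education,
            "risk_aversion": risk_aversion}


rng = random.Random(42)  # seeded for reproducibility
# 1,000 investors x 48 monthly snapshots = 48,000 data points.
dataset = [investor_snapshot(i, rng) for i in range(1000) for _ in range(48)]
print(len(dataset))  # 48000
```

Seeding the generator makes the 48,000-row dataset reproducible across runs, which matters when the downstream BigQuery ML models are retrained on it.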
The Cloud Function triggers generation of the dataset upon the first launch of IPRE. Dataflow migrates the generated dataset from Cloud Storage to BigQuery.

Machine learning advanced analytics

The machine learning (ML) workflow is as follows:

1. Raw data is preprocessed and uploaded to Cloud Storage. A Dataflow job is registered through Cloud Composer. Processed data is uploaded to BigQuery with a predefined data schema and format.
2. A Pub/Sub trigger starts the training of AutoML and ARIMA models. The training is performed with the integrated BigQuery ML tools.
3. When training has completed, the system triggers the inference process. Individual risk preferences and ticker prices are predicted using the uploaded BigQuery data as input.
4. Predicted results are saved to Cloud Storage to cache them and make the data reusable.
5. Results are published through the recommendation engine, which is deployed on Cloud Run, and prediction results are sent to the end user.

The workflow is shown in Figure 2.

Figure 2. Machine learning workflow

IPRE implementation features

The solution is designed to be highly reproducible, with minimal manual effort required to set up all services. Users of the web application can create several wallets and switch among them. In addition to working with wallets, users can see investment recommendations and their portfolio with detailed statistics. The application’s back end is a service developed using the Django Framework.
The service, which acts as a bridge between the IPRE and the web application, is responsible for working with wallets, managing transactions, and showing the user’s portfolio. The ML interface pipeline is designed with ease of deployment in mind, so the solution can be deployed on Google Cloud with just one click.

Better investing with IPRE

Using Google Cloud, SoftServe developed the IPRE solution, and within it implemented an end-to-end automated ML model that can be deployed in one click. SoftServe’s Investment Products Recommendation Engine serves as a pivotal point in increasing the cross-selling potential of investment products to retail banking customers. It establishes a bridge between retail banking investors, who are non-finance professionals, and the complexity of modern capital markets investment vehicles. The solution applies ML technology for micro-segmentation of user groups based on their risk preferences to provide a highly personalized selection of investment products to each user.

The IPRE makes investment recommendations based on BigQuery ML model capabilities and uses big data pipelines to process investment data. The environment setup is automated by Terraform, and the solution incorporates a fully automated ML process. Extensive pattern automation will help developers easily switch to implementation and explore different configuration options.

If you want to dive deeper into the solution or implement your own IPRE with GCP, please check out the pattern details or reach out to the Google Cloud or SoftServe team for more information.

Related article: Solving Banking challenges with highly personalized investment recommendations using AI (how SoftServe used Google Cloud to make investing easier by creating a data-driven solution to balance risk and expected ROI).
Source: Google Cloud Platform

Solving Banking challenges with highly personalized investment recommendations using AI

Data science is one of today’s key priorities for finance industry leaders. Data scientists harness knowledge to draw meaning from data, to turn data into information, and to translate information into practical insights that bring a better understanding of how to gain customer loyalty, minimize churn, and grow revenue. In this blog post we will look at a comprehensive investment banking solution that builds a bridge between retail investors and the complexity of the capital markets.

Let’s explore how Google Cloud data and analytics services can be used to turn real-time insight into an automated process, creating frictionless digital experiences that help retail investors with little capital markets expertise. The solution developed by SoftServe provides users with personalized investment recommendations to help them make better decisions. Called the Investment Products Recommendation Engine (IPRE), the solution is designed to recommend the most suitable investment product by balancing an individual’s risk preferences and expected return on investment.

SoftServe’s IPRE collects and processes market data (e.g., quotes and daily or weekly open, high, low, and close prices) on available investment products such as stocks and bonds, powered by BigQuery and Cloud Functions. The IPRE prepares the raw data via Dataflow and constructs an optimal mean-variance portfolio for a given level of risk, so the investment portfolio is optimized to provide the highest expected return on investment for that risk level.

An investor’s risk appetite depends on various factors and may exhibit non-stationary evolution over time. To produce recommendations in accordance with the optimal risk level for an individual investor, SoftServe used an AutoML Tables model based on a variety of customer characteristics: level of income, level of savings, level of education, employment, geography, etc.
This approach provides more flexibility than classical investment theory metrics such as Constant Relative Risk Aversion (CRRA) and Constant Absolute Risk Aversion (CARA), consequently enabling the IPRE to unlock new customer segments.

Finally, after providing recommendations for a portfolio of optimal assets based on risk levels, the IPRE estimates the qualitative and quantitative characteristics of the portfolio. It computes sophisticated industry-grade investment metrics describing the marginal risks, Conditional Value at Risk (CVaR), diversification effects, Sharpe ratio, sensitivity of the portfolio to market fluctuations, and more.

Let’s take a look at a hypothetical user journey to better understand the purpose of the solution and the value it brings to market. Meet Felix, a 33-year-old architect whose dream is to buy his own flat in the next five years. He realizes he must accumulate more savings. A few months ago, Felix opened an account with the For-the-Future bank because of the smart investment feature in its mobile app, where he receives investment recommendations and can make decisions on the go. Felix has set a financial goal and built up a portfolio of investment funds aligned with his risk tolerance and his investment goals.

One day, on his way to work, Felix receives a personalized investment recommendation from the For-the-Future bank’s mobile app. The app is constantly working to help Felix reach his goal and does all the time-consuming work of collecting and processing market data. The machine learning model generates recommendations, including the expected rate of return, the popularity of the asset among people with portfolios like Felix’s, and information about the risk level that matches Felix’s portfolio. Felix can use that information to make a decision.

The process of using the app is quick and simple. Felix’s portfolio gets automatic updates with its total value and tracks its performance against his financial goals.
Felix continues on his way to work smiling to himself, knowing that he is a little bit closer to owning his dream home.

The technical implementation of the solution in Google Cloud incorporates Dataflow batch-processing pipelines as well as trained investment recommendation BigQuery Machine Learning (BQML) models and data analytics services such as BigQuery, Cloud Storage, and Pub/Sub. The solution is described in the blog post How to implement an Investment Product Recommendation solution in GCP.

In partnership with Google Cloud, SoftServe helps our clients solve complex problems with innovative solutions to achieve a faster time to market, increase ROI, and provide great user experiences.

To gain a broader understanding of the solution and see how its architecture works in real life, watch SoftServe’s user journey presented at Google Southeast Asia Financial Services Cloud OnAir: Creating aha moments in Financial Services.

Related article: A technical solution producing highly-personalized investment recommendations using ML.
Source: Google Cloud Platform

The Brexit vote: A case study in causal inference using machine learning

In this blog post, we’ll answer the question, “How did the Brexit vote impact the exchange rate between the British Pound and the US Dollar?” To do so, we’ll use causal inference techniques to estimate the impact of what statisticians call a “treatment,” in this case a policy decision. Please note that this is a technical blog post aimed at teaching concepts and tools with public data, not at drawing any political or economic conclusions. The techniques we’ll discuss here apply to all kinds of scenarios, such as the impact of a marketing campaign or product introduction on sales.

Causal inference is needed because we don’t have a controlled experiment for this scenario. An ideal experiment contains carefully matched groups that differ only in the explanatory variable being investigated. Many real-world situations in which we are trying to find meaning don’t meet those conditions. We’ll need to find another time series that closely follows the USD:GBP exchange rate but was not impacted by the Brexit vote. From this other time series, we’ll derive the counterfactual: what was expected to happen had the Brexit vote not occurred. We’ll estimate the effect as the difference between the counterfactual and actual time series.

Our scenario

After the Brexit vote on June 23, 2016, the British Pound (GBP) dropped from 1.48 versus the US Dollar (USD) to 1.36 the following day, and continued to decline. In contrast, the Euro:USD exchange rate did not change much, despite being highly correlated with the GBP:USD exchange rate: the daily values of the two exchange rates had a Pearson correlation coefficient of around 0.75 during the five-year period prior to the event. So, we’ll use the Euro:USD exchange rate as a control.

To estimate the effect, we’ll consider the following four weeks as the post-treatment period. We could extend this period further to estimate the full effect.
However, the longer the window we use, the more other factors come into play, and the more difficult it becomes to isolate the effect of the treatment alone. A chart of both exchange rates, with a shaded area indicating the post-treatment period, shows the contrast. The data is available from FRED, the Federal Reserve Economic Data site (US/UK Exchange Rate, US/Euro Exchange Rate).

Effect estimation with statistical modeling

Given the stark change in USD:GBP, how can we determine whether the Brexit vote was a factor, and how can we calculate the size of the effect? First, let’s use tfcausalimpact to estimate the effect. tfcausalimpact is a Python port of the R-based CausalImpact package; it is built on the TensorFlow Probability package and uses the Bayesian structural time series method. After the data has been loaded into a DataFrame, the analysis takes only a few lines of code. The resulting summary report indicates that the average treatment effect during the post-treatment period (i.e., the four weeks following the Brexit vote) is a drop of about 9%. The findings can also be visualized in a plot.

Effect estimation with machine learning

We’ll now explore an alternative machine learning approach using Vertex AI. Vertex AI, the unified platform for AI on Google Cloud, enables users to create AutoML or custom models for forecasting. We will create an AutoML forecasting model, which allows you to build a time-series forecasting model without code. Over the past few years, there have been multiple studies comparing statistical and machine learning approaches (e.g., Comparison of statistical and machine learning methods for daily SKU demand forecasting, Machine Learning vs Statistical Methods for Time Series Forecasting: Size Matters). It’s outside the scope of this article to discuss the topic in depth, but it’s worth noting that each approach has relative strengths, and it may be helpful to apply both in your analysis. This model will be used to derive the counterfactual time series.
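tfcausalimpact itself is driven with a DataFrame plus pre- and post-period boundaries (roughly `CausalImpact(data, pre_period, post_period)` followed by `summary()` and `plot()`). As a library-free illustration of the counterfactual idea shared by both approaches, the sketch below fits the treated series on the control over the pre-period with ordinary least squares (a deliberate simplification of the Bayesian structural time-series and AutoML models used in this post) and measures the relative post-period gap:

```python
def average_treatment_effect(control, treated, split):
    """Relative average treatment effect derived from a control series.

    Fits treated ~ a + b * control by ordinary least squares on the
    pre-treatment observations (indices < split), predicts the
    counterfactual for the post-treatment observations, and returns the
    relative gap between actual and counterfactual totals.
    """
    x, y = control[:split], treated[:split]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    counterfactual = [a + b * xi for xi in control[split:]]
    actual = treated[split:]
    return sum(actual) / sum(counterfactual) - 1.0


# Toy data: pre-period treated = 2 * control; post-period drops by 10%.
control = [1.00, 1.02, 0.99, 1.01, 1.03, 1.00, 1.02, 1.01]
treated = [2.00, 2.04, 1.98, 2.02, 1.854, 1.80, 1.836, 1.818]
print(round(average_treatment_effect(control, treated, split=4), 3))  # ≈ -0.1
```

On the toy data the pre-period relationship is exact, so the recovered effect matches the injected 10% drop; on real exchange-rate data the fit is noisy, and the interval estimates that tfcausalimpact provides become important.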
In other words, the model will produce a time series that aims to estimate what the USD:GBP exchange rate would have been had the Brexit event not happened. The model will use patterns from the Euro exchange rate, as well as the pre-intervention data from the UK exchange rate, to derive the counterfactual. In this case, we’re actually generating a hypothetical historical time series rather than forecasting a future one. With a counterfactual time series like this, policy-makers or business leaders can consider the retrospective impact of decisions they’ve made.

Let’s now explore how to implement the AutoML training process: the training job is created and run from prepared training data using the Vertex AI SDK. Vertex AI AutoML Forecasting estimated the counterfactual at a slightly higher level than tfcausalimpact, leading to a stronger treatment effect of -9.5% vs. -9.3%.

Conclusion

In this blog post, we’ve explored how to use causal inference to estimate the impact of an event, and we’ve looked at multiple approaches to performing the estimate. First, we used tfcausalimpact, which uses a Bayesian structural time series approach, to generate the counterfactual. Then, we used the forecasting service from Vertex AI for a deep learning-based approach.

If you’d like to try out this scenario yourself, all of the code is available on GitHub. From there, you can launch the notebook in GCP Notebooks or Colab. If you’d like to explore Vertex AI AutoML Forecasting in more depth, this codelab provides an end-to-end tutorial. Feel free to connect on LinkedIn or Twitter to continue the conversation!

Related article: New to ML: Learning path on Vertex AI (if you’re new to ML, or new to Vertex AI, this post walks through a few example ML scenarios to help you understand when to use which…).
Source: Google Cloud Platform