Application analysis in the DevSecOps life cycle

June is application analysis month in Red Hat's monthly Security series! Beginning in March 2021, the Red Hat Security Ecosystem team has provided a monthly introduction to a DevOps security topic to help you learn how Red Hat weaves together DevOps and security to master the force called DevSecOps.
Source: CloudForms

Getting started with MLOps: Selecting the right capabilities for your use case

Establishing a mature MLOps practice to build and operationalize ML systems can take years to get right. We recently published our MLOps framework to help organizations come up to speed faster in this important domain. As you start your MLOps journey, you might not need to implement all of these processes and capabilities. Some will have a higher priority than others, depending on the type of workload and the business value they create for you, balanced against the cost of building or buying processes or capabilities. To help ML practitioners translate the framework into actionable steps, this blog post highlights some of the factors that influence where to begin, based on our experience in working with customers. The following table shows the recommended capabilities (indicated by check marks) based on the characteristics of your use case, but remember that each use case is unique and might have exceptions. (For definitions of the capabilities, see the MLOps framework.)

Table: MLOps capabilities by use case characteristics

Your use case might have multiple characteristics. For example, consider a recommender system that's retrained frequently and that serves batch predictions. In that case, you need the data processing, model training, model evaluation, ML pipelines, model registry, and metadata and artifact tracking capabilities for frequent retraining. You also need a model serving capability for batch serving. In the following sections, we provide details about each of the characteristics and the capabilities that we recommend for them.

Pilot

Example: A research project for experimenting with a new natural language model for sentiment analysis.

For testing a proof of concept, your focus is typically on data preparation, feature engineering, model prototyping, and validation. You perform these tasks using the experimentation and data processing capabilities. Data scientists want to set up experiments quickly and easily, and to track and compare them. Therefore, you need the ML metadata and artifact tracking capability in order to debug, to provide traceability and lineage, to share and track experimentation configurations, and to manage ML artifacts. For large-scale pilots, you might also require dedicated model training and evaluation capabilities.

Mission-critical

Example: An equities trading model where model performance degradation in production can put millions of dollars at stake.

In a mission-critical use case, a failure of the training process or the production model has a significant negative impact on the business (a legal, ethical, reputational, or financial risk). The model evaluation capability is important to identify bias and fairness issues, as well as to provide explainability of the model. Additionally, monitoring is essential to assess the quality of the model during training and to assess how it performs in production. Online experimentation lets you test newly trained models against the one in production in a controlled environment before you replace the deployed model. Such use cases also need a robust model governance process to store, evaluate, check, release, and report on models and to protect against risks. You can enable model governance by using the model registry and metadata and artifact tracking capabilities. Additionally, datasets and feature repositories provide you with high-quality data assets that are consistent and versioned.
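The pilot and mission-critical scenarios above both lean on the ML metadata and artifact tracking capability. For teams that standardize on Vertex AI (introduced at the end of this post), a minimal sketch of experiment tracking might look like the following; the project, region, experiment, run name, and metric values are illustrative placeholders, not a prescribed setup.

    # A minimal sketch of experiment and metadata tracking with the Vertex AI SDK.
    # The project, region, experiment, run name, parameters, and metrics are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",          # placeholder project ID
        location="us-central1",        # placeholder region
        experiment="sentiment-pilot",  # groups related runs so they can be compared
    )

    # Each training attempt is tracked as a run so it can be compared and audited later.
    aiplatform.start_run("run-001")
    aiplatform.log_params({"model_type": "lstm", "learning_rate": 1e-3, "epochs": 5})

    # ... train and evaluate the model here ...

    aiplatform.log_metrics({"accuracy": 0.91, "f1": 0.88})
    aiplatform.end_run()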
Reusable and collaborative

Example: Customer Analytic Record (CAR) features that are used across various propensity modeling use cases.

Reusable and collaborative assets allow your organization to share, discover, and reuse AI data, source code, and artifacts. A feature store helps you standardize the processes of registering, storing, and accessing features for training and serving ML models. Once features are curated and stored, they can be discovered and reused by multiple data science teams. Having a feature store helps you avoid reengineering features that already exist, and it saves time on experimentation. You can also use tools to unify data annotation and categorization. Finally, by using ML metadata and artifact tracking, you help provide consistency, testability, security, and repeatability of the ML workflows.

Ad hoc retraining

Example: An object detection model to detect various car parts, which needs to be retrained only when new parts are introduced.

In ad hoc retraining, models are fairly static and you do not retrain them except when model performance degrades. In these cases, you need the data processing, model training, and model evaluation capabilities to train the models. Additionally, because your models are not updated for long periods, you need model monitoring. Model monitoring detects data skews, including schema anomalies, as well as data and concept drifts and shifts. Monitoring also lets you continuously evaluate your model performance, and it alerts you when performance decreases or when data issues are detected.

Frequent retraining

Example: A fraud detection model that's trained daily in order to capture recent fraud patterns.

Use cases for frequent retraining are ones where model performance relies on changes in the training data. The retraining might be based on time intervals (for example, daily or weekly), or it might be triggered by events, such as new training data becoming available. For this scenario, you need ML pipelines to connect multiple steps like data extraction, preprocessing, and model training. You also need the model evaluation capability to ensure that the accuracy of the newly trained model meets your business requirements. As the number of models you train grows, both a model registry and metadata and artifact tracking help you keep track of the training jobs and model versions.

Frequent implementation updates

Example: A promotion model with frequent changes to the architecture to maximize conversion rate.

Frequent implementation updates involve changes to the training process itself. That might mean switching to a different ML framework, changing the model architecture (for example, LSTM to Attention), or adding a data transformation step to your training pipeline. Such changes in the foundation of your ML workflow require controls to ensure that the new code is functional and that the new model matches or outperforms the previous one. Additionally, a CI/CD process accelerates the time from ML experimentation to production and reduces the possibility of human error. Because the changes are significant, online experimentation is necessary to ensure that the new release is performing as expected. You also need other capabilities such as experimentation, model evaluation, model registry, and metadata and artifact tracking to help you operationalize and track your implementation updates.
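To make the ML pipelines capability above a bit more concrete, here is a rough sketch of a retraining pipeline defined with Kubeflow Pipelines (KFP v2), the SDK that also underpins Vertex AI Pipelines. The component bodies, URIs, metric value, and pipeline name are placeholders rather than a working training workflow.

    # A minimal sketch of a retraining pipeline with Kubeflow Pipelines (KFP v2).
    # Component bodies, URIs, the metric value, and the pipeline name are placeholders.
    from kfp import compiler, dsl

    @dsl.component
    def extract_data() -> str:
        # In a real pipeline this would pull fresh training data (e.g., from a warehouse).
        return "gs://my-bucket/training-data"  # placeholder URI

    @dsl.component
    def train_model(data_uri: str) -> str:
        # Train on the extracted data and return the candidate model's location.
        return "gs://my-bucket/models/candidate"  # placeholder URI

    @dsl.component
    def evaluate_model(model_uri: str) -> float:
        # Compute an evaluation metric to gate promotion of the candidate model.
        return 0.93  # placeholder accuracy

    @dsl.pipeline(name="frequent-retraining")
    def retraining_pipeline():
        data = extract_data()
        model = train_model(data_uri=data.output)
        evaluate_model(model_uri=model.output)

    # Compile to a spec that a time- or event-triggered scheduler can submit.
    compiler.Compiler().compile(retraining_pipeline, "retraining_pipeline.json")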
Batch serving

Example: A model that serves weekly recommendations to a user who has just signed up for a video-streaming service.

For batch predictions, there is no need to score in real time. You precompute the scores and store them for later consumption, so latency is less of a concern than in online serving. However, because you process a large amount of data at a time, throughput is important. Batch serving is often a step in a larger ETL workflow that extracts, pre-processes, scores, and stores data. Therefore, you need the data processing capability and ML pipelines for orchestration. In addition, a model registry can provide your batch serving process with the latest validated model to use for scoring.

Online serving

Example: A RESTful microservice that uses a model to translate text between multiple languages.

Online inference requires tooling and systems that can meet latency requirements. The system often needs to retrieve features, perform inference, and then return the results according to your serving configurations. A feature repository lets you retrieve features in near real time, and model serving allows you to easily deploy models as an endpoint. Additionally, online experiments help you test new models with a small sample of the serving traffic before you roll the model out to production (for example, by performing A/B testing).
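To round out the serving discussion, here is a rough sketch of registering, deploying, and querying a model with the Vertex AI SDK; the display name, artifact location, serving container image, machine type, and request instances are placeholders, not a recommended configuration.

    # A rough sketch of online model serving with the Vertex AI SDK.
    # Display name, artifact location, container image, machine type, and the
    # prediction instance below are illustrative placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # placeholders

    # Register the trained model in the model registry.
    model = aiplatform.Model.upload(
        display_name="translation-model",
        artifact_uri="gs://my-bucket/models/candidate",  # placeholder artifact path
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"  # placeholder image
        ),
    )

    # Deploy the registered model behind a low-latency endpoint for online serving.
    endpoint = model.deploy(machine_type="n1-standard-2")

    # Request an online prediction; for the batch serving pattern, model.batch_predict()
    # covers the precompute-and-store workflow instead.
    print(endpoint.predict(instances=[{"text": "Hola, mundo"}]))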
Get started with MLOps using Vertex AI

We recently announced Vertex AI, our unified machine learning platform that helps you implement MLOps to efficiently build and manage ML projects throughout the development lifecycle. You can get started using the following resources:

MLOps: Continuous delivery and automation pipelines in machine learning
Getting started with Vertex AI
Best practices for implementing machine learning on Google Cloud

Acknowledgements: I'd like to thank all the subject matter experts who contributed, including Alessio Bagnaresi, Alexander Del Toro, Alexander Shires, Erin Kiernan, Erwin Huizenga, Hamsa Buvaraghan, Jo Maitland, Ivan Nardini, Michael Menzel, Nate Keating, Nathan Faggian, Nitin Aggarwal, Olivia Burgess, Satish Iyer, Tuba Islam, and Turan Bulmus. A special thanks to the team that helped create this, Donna Schut, Khalid Salama, and Lara Suzuki, and to Mike Pope for his ongoing support.

Related article: Google Cloud unveils Vertex AI, one platform, every ML tool you need. Google Cloud launches Vertex AI, a managed platform for experimentation, versioning, and deploying ML models into production.

Source: Google Cloud Platform

Cloud Career Jump Start: our virtual certification readiness program

It's no question that the cloud computing industry is booming and cloud experts are in high demand. In 2020, 67 percent of organizations surveyed in the IDG Cloud Computing Report added new cloud roles and functions (source). So there are plenty of cloud jobs out there, but if you don't have years of experience, you might find it difficult to get one. This is especially true for members of underrepresented communities, who face other kinds of systemic barriers in education and employment.

To help meet this demand and democratize opportunity, Google Cloud is introducing its first virtual Certification Journey Learning program for underrepresented communities across the United States—Cloud Career Jump Start. The program is free of charge for participants, and includes technical and professional skill development and resources with an aim to provide a readiness path to certification via the Google Cloud Associate Cloud Engineer exam.

Google Cloud's Certification Journey Learning Program

The program offers free access to the Google Cloud Associate Cloud Engineer training, which supports preparation for the certification exam. This industry-recognized certification helps job applicants validate their cloud expertise, elevate their careers, and transform businesses with Google Cloud technology. Learn more about the exam by watching the Certification Prep: Associate Cloud Engineer webinar.

Designed specifically for Computer Science or Information Systems-related majors (or those with relevant experience), the 12-week program offers:

Exclusive free access to Google Cloud Associate Cloud Engineer-related training to support preparation for the certification exam.
Guided progress through the certification prep training on Qwiklabs.
Office hours hosted by Google Cloud technical experts.
Technical workshops, including sessions with Googlers who earned cloud industry certifications and have built impactful careers in cloud tech, such as Google Cloud's Kelsey Hightower (a Principal Developer Advocate) and Jewel Langevine (a Solution Engineer).

Earn your Associate Cloud Engineer certification

Curious if you qualify to apply for the program? We've put together a guide below to help:

What does an Associate Cloud Engineer do? Associate Cloud Engineers deploy applications, monitor operations, and manage enterprise solutions. They also use the Google Cloud Console and the command-line interface to perform common platform-based tasks and to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud.

Who's eligible? We are encouraging applicants from groups that have been historically underrepresented in tech, including Black, Latinx, Native American, and Indigenous communities. Applicants should have six or more months of hands-on experience in a Computer Information Systems-related major, and/or relevant experience through online courses, boot camps, hackathons, or internships. Although the ability to program is not required, familiarity with the following IT concepts is highly recommended: virtual machines, operating systems, storage and file systems, networking, databases, programming, and working with Linux at the command line.

How do you know if you are ready for the program?
To get an idea of what the program will cover:

Watch the Certification Prep: Associate Cloud Engineer webinar
Complete the Skill IQ Assessment via Pluralsight
Review the Associate Cloud Engineer Certification Exam Guide
Take the Associate Cloud Engineer Practice Exam

Jump start a career in cloud infrastructure and technology

Following the hands-on training, Google Cloud will offer an additional nine months of career development resources and activities. This includes an online support community; mentoring with Googlers and partners including SADA, EPAM Systems Inc., and Slalom; resume and interviewing support from Google Recruiting & Staffing; and additional career workshops. We are also partnering with a number of Black-owned and -operated, nonprofit, and cloud training organizations like Kura Labs.

Cloud certifications open doors

Want to hear more about this program from two Googlers who completed it? We asked Kelsey and Jewel to share more on how certifications helped them launch their careers in the cloud industry:

"IT certifications introduced me to the game; opportunities and hard work helped me change it." – Kelsey Hightower, Staff Developer Advocate, Google Cloud Platform

"Becoming certified as a cloud computing professional combined with prayer, networking, and practice has kept me moving on my purposeful and rewarding career path as an engineer." – Jewel Langevine, Solution Engineer, Google Cloud Solutions Studio

Kelsey and Jewel's wisdom doesn't end there. They play a central role in the program, sharing more about how they navigated certifications and leveraged them for success.

Apply today to Cloud Career Jump Start

The program is now live in the United States, with plans to expand to other regions in the coming months. It is completely virtual, and all training is on-demand so that participants can access their coursework anytime, anywhere via the web or a mobile device. To determine whether you (or someone you know) would be a great fit for Cloud Career Jump Start, check out our guidelines and apply.

Related article: In case you missed it: All our free Google Cloud training opportunities from Q1. Since January, we've introduced a number of no-cost training opportunities to help you grow your cloud skills. We've brought them togethe…
Source: Google Cloud Platform

Remote, hybrid, or office work? It's easier with no-code apps

As offices reopen, businesses aren't keeping to a single formula. Many are embracing hybrid models, with employees splitting time between home and the office. Some are staying 100% remote. Others are returning the full workforce to offices with a greater focus on digitization. Regardless of their model, one thing is certain: technology is more central than ever to organizations' ability to work smarter and more effectively. In particular, organizations need technology that can be tailored to specific business needs. For example, a retail store manager may need a way to ensure improved sanitation practices, or office workers may need to create an app to manage desk reservations. Google Workspace can help by bringing together everything you need to get things done, all in one place—including the ability to build custom apps and automations with AppSheet.

AppSheet, Google Cloud's no-code development platform, lets employees—even those with no programming experience—reclaim their time and talent by building custom apps and automations. Building solutions with AppSheet is often easier and faster than traditional coding, as AppSheet leverages Google's machine learning expertise to help you quickly go from idea to solution, helping your business match the pace and agility that today's landscape requires.

In this article, we'll explore how you can use AppSheet to create custom solutions that will help your business adapt to shifting business needs. To illustrate, we'll focus on a use case many businesses face: reservation and management of meeting rooms and other office resources. This process can easily be adapted for a wide variety of use cases, from apps for retail pickup to incident reporting and more.

Simplify and centralize with apps

The first thing to consider is whether your solution would be best managed via a custom interface. If you're trying to simplify a process for someone, such as inspections or inventory tracking, or centralize information, such as resource portals or events calendars, creating a custom app is the way to go.

For this example app, we created a simple Google Sheet to hold our backend data. Sheets are ideal backend data sources when we know the amount of data will not grow exponentially, especially when piloting to a first office or group of users. We can switch our data seamlessly at any time to a scalable Google Cloud SQL database or another database if needed. The example app has several worksheets to manage the different buildings, rooms, checkpoints, and people (including their roles).
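As a side note, if you later want to inspect or seed this Sheet-backed data from code (AppSheet itself needs no code at all), a rough sketch with the gspread library might look like the following; the credentials path, spreadsheet title, and worksheet names are assumptions based on the example above.

    # A rough sketch of inspecting the Sheet that backs the example app, using gspread.
    # The credentials path, spreadsheet title, and worksheet names are placeholders;
    # AppSheet reads the same Sheet directly without any code.
    import gspread

    gc = gspread.service_account(filename="service-account.json")  # placeholder credentials
    sheet = gc.open("Office Management")  # placeholder spreadsheet title

    # The backend is organized as one worksheet per entity type.
    for name in ("Buildings", "Rooms", "Checkpoints", "People"):
        worksheet = sheet.worksheet(name)
        records = worksheet.get_all_records()  # each row becomes a dict keyed by the header row
        print(f"{name}: {len(records)} rows")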
We can build an app in AppSheet directly from our Sheet by selecting Tools > AppSheet > Create an App. After creating our app, we see an overview of the data that we've connected, how pieces of data relate to each other, the views that are presented to the user, and a live interactive preview of the app. Refining the app is intuitive. When you input keywords, for instance, AppSheet surfaces appropriate functionality options that can be further customized. If we have address or location data, it's automatically turned into a map view that we can further customize.

Our office management app will offer building managers and cleaners administrative features for managing equipment, disinfection schedules, and maintenance information, and it will also offer views for employees to check occupancy and safely reserve desks and equipment. Opening an office and then a room in the app gives us an overview of both the floorplan and the occupancy and reservations for spaces and equipment in the room.

Users can then reserve spaces and equipment in the app, without having to first search for a form or page on the intranet, or even log onto their work computer. The reservation is an automation routine in AppSheet, which performs pre-configured actions when triggered. As illustrated in the screenshot above, when the action is triggered either by pressing a button in the app, scanning a QR code, or tapping an NFC tag, the action simply sets the "Reserved by" and "Reserved at" columns to the current user and timestamp.

Streamline with automations

AppSheet Automation, which was recently made generally available, makes it possible to reduce repetitive tasks and streamline processes even further. In this example, one challenge you may encounter is that users often forget to release a desk or resource when they're finished, which means those reservations stay blocked until an administrator releases all blocked reservations. However, it's also important that resources aren't immediately released, as they first need to be cleaned and disinfected before being made available to new users.

AppSheet Automation can solve this challenge by periodically checking all resources that have been reserved for longer than four hours, then alerting the user to either re-reserve them or release them to be cleaned. If there is no answer, the automation can trigger the resources to be cleaned, then released. We can configure this as a recurring event, and create a bot that sends the notifications and triggers actions based on the user input (or lack of input).
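For readers who think in code, here is a conceptual sketch, in plain Python rather than AppSheet configuration, of the reserve-and-auto-release flow the bot above implements; the column names and the four-hour window mirror the example, while the data structures and actions are otherwise hypothetical.

    # A conceptual sketch (plain Python, not AppSheet configuration) of the reservation
    # and auto-release logic described above. Column names and the 4-hour window mirror
    # the example app; the notification and cleaning steps stand in for the bot's actions.
    from datetime import datetime, timedelta

    RESERVATION_LIMIT = timedelta(hours=4)

    def reserve(resource: dict, user: str) -> None:
        # The app's reserve action records who holds the resource and when it was taken.
        resource["Reserved by"] = user
        resource["Reserved at"] = datetime.utcnow()

    def sweep(resources: list[dict]) -> None:
        # The scheduled bot checks long-held reservations and prompts or releases them.
        now = datetime.utcnow()
        for resource in resources:
            reserved_at = resource.get("Reserved at")
            if reserved_at and now - reserved_at > RESERVATION_LIMIT:
                print(f"Notify {resource['Reserved by']}: re-reserve or release {resource['Name']}")
                # If the user does not respond, mark for cleaning, then clear the reservation.
                resource["Status"] = "Needs cleaning"
                resource["Reserved by"] = None
                resource["Reserved at"] = None

    desks = [{"Name": "Desk 12", "Reserved by": None, "Reserved at": None, "Status": "Available"}]
    reserve(desks[0], "alex@example.com")
    sweep(desks)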
Optimize for multiple roles

Creating an app for one user is one thing; creating an app that fits the needs of multiple types of users is another. AppSheet has you covered for this kind of complexity because it lets you build multiple roles into a single app. In our office management app, for example, the office manager can have a particular view and dashboard, and workers in the office can have their own views and functionalities front and center. This makes it possible to automate tasks for all types of users with a single app.

Likewise, AppSheet Automation lets you easily react to real-time changes in data, send notifications to groups of users, and initiate additional workflows using data shared between the app and the users. If a group needs to reserve a conference space for an important late-breaking meeting, for example, there might be ripple effects across a number of user groups. The ability to automate all these updates and interactions can ultimately save a lot of time and effort.

AppSheet also supports advanced features such as QR codes, NFC, and Bluetooth sensor integration. This could help users check into locations or workspaces, further streamlining the task of helping people safely navigate and collaborate within the office.

Mission critical solutions, faster than ever

In the past, the type of app explored in this article would have required a central data store, distribution to specific groups of users, management and dashboard capabilities, as well as smartphone sensor integration for NFC and image capturing. This would have necessitated a mammoth implementation project and a team of professional coders. But with no-code, you can effectively design and implement these requirements in hours and days rather than months and years. Changes can be made on the fly in any browser, including pushing out new versions to users.

Though AppSheet opens up app building and digital automations to any knowledge worker, it also includes robust security and governance controls, ensuring that as you manage getting employees back into the office, you don't neglect IT security. Try out your own back-to-the-office app for free today at AppSheet.com, or explore and copy our sample app.
Source: Google Cloud Platform

Accelerate Google Cloud database migration assessments with EPAM’s migVisor

Editor's note: Today, we're announcing the Database Migration Assessment and a partnership with software development company EPAM, giving Google Cloud customers access to migVisor to conduct a database migration assessment.

Today, we're announcing our latest offering—the Database Migration Assessment—a Google Cloud-led project to help customers accelerate their deployment to Google Cloud databases with a free evaluation of their environment.

A comprehensive approach to database migrations

In 2021, Google Cloud continues to double down on its database migration and modernization strategy to help our customers de-risk their journey to the cloud. In this blog, we share our comprehensive migration offering, which includes people expertise, processes, and technology.

People: Google Cloud's Database Migration and Modernization Delivery Center is led by Google database experts who have strong database migration skills and a deep understanding of how to deploy on Google Cloud databases for maximum performance, reliability, and improved total cost of ownership (TCO).

Process: We've standardized an approach to assessing databases which streamlines migrating and modernizing data-centric workloads. This process shortens the duration of migrations and reduces the risk of migrating production databases. Our migration methodology addresses priority use cases such as zero-downtime, heterogeneous, and non-intrusive serverless migrations. Combined with a clear path to database optimization using Cloud SQL Insights, this gives customers a complete assessment-to-migration solution.

Technology: Customers can use third-party tools like migVisor to do assessments for free, as well as native Google Cloud tools like Database Migration Service (DMS) to de-risk migrations and accelerate their biggest projects.

"This assessment helped us de-risk migration plans, with phases focused on migration, modernization and transformation. The assessment output has become the source of truth for us and we continuously refer to it as we make plans for the future." – Vismay Thakkar, VP of infrastructure, Backcountry

Accelerate database migration assessments with migVisor from EPAM

To automate the assessment phase, we've partnered with EPAM, a provider with strategic specialization in database and application modernization solutions. Their Database Migration Assessment tool, migVisor, is a first-of-its-kind cloud database migration assessment product that helps companies analyze database workloads and generate a visual cloud migration roadmap that identifies potential quick wins as well as areas of challenge. migVisor will be made available to customers and partners, allowing for the acceleration of migration timelines for Oracle, Microsoft SQL Server, PostgreSQL, and MySQL databases to Google Cloud databases.

"We believe that by incorporating migVisor as part of our key solution offering for cloud database migrations and enabling our customers to leverage it early on in the migration process, they can complete their migrations in a more cost-effective, optimized and successful way. For us, migVisor is a key differentiating factor when compared to other cloud providers." – Paul Miller, Database Solutions, Google Cloud

migVisor helps identify the best migration path for each database, using sophisticated scoring logic to rank databases according to the complexity of migrating to a cloud-centric technology stack. Users get a customized migration roadmap to help in planning. Backcountry is one such customer who embraced migVisor by EPAM.
"Backcountry is on a technology upgrade cycle and is keen to realize the benefit of moving to a fully managed cloud database. Google Cloud has been an awesome partner in helping us on this journey," says Vismay Thakkar, VP of infrastructure, Backcountry. "We used Google's offer for a complete Database Migration Assessment and it gave us a comprehensive understanding of our current deployment, migration cost and time, and post-migration opex. The assessment featured an automated process with rich migration complexity dashboards generated for individual databases with migVisor."

A smart approach to database modernization

We know a customer's migration path from on-premises databases to managed cloud database services ranges in complexity, but even the most straightforward migration requires careful evaluation and planning. Customer database environments often leverage database technologies from multiple vendors, across different versions, and can run into thousands of deployments. This makes manual assessment cumbersome and error prone. migVisor offers users a simple, automated collection tool to analyze metadata across multiple database types, assess migration complexity, and provide a roadmap to carry out phased migrations, thus reducing risk.

"Migrating out of commercial and expensive database engines is one of the key pillars and tangible incentive for reducing TCO as part of a cloud migration project," says Yair Rozilio, senior director of cloud data solutions, EPAM. "We created migVisor to overcome the bottleneck and lack of precision the database assessment process brings to most cloud migrations. migVisor helps our customers easily identify which databases provide the quickest path to the cloud, which enables companies to drastically cut on-premises database licensing and operational expenses."

Get started today

Using the Database Migration Assessment, customers will be able to better plan migrations, reduce risk and missteps, identify quick wins for TCO reduction, and review migration complexities and appropriately plan out the migration phases for best outcomes. Learn more about the Database Migration Assessment and how it can help customers reduce the complexity of migrating databases to Google Cloud.
Source: Google Cloud Platform

A blueprint for secure infrastructure on Google Cloud

When it comes to infrastructure security, every stakeholder has the same goal: maintain the confidentiality and integrity of their company's data and systems. Period.

Developing and operating in the cloud provides the opportunity to achieve these goals by being more secure and having greater visibility and governance over your resources and data. This is due to the relatively uniform environment of cloud infrastructure (as compared with on-prem) and its inherent service-centric architecture. In addition, cloud providers take on some of the key responsibilities for security, doing their part in a shared responsibility model. However, translating this shared goal into reality can be a complex endeavor for a few reasons. First, administering security in public clouds is unlike what you may be used to, as the infrastructure primitives (the building blocks available to you) and control abstractions (how you administer security policy) differ from on-premises environments. Additionally, ensuring you make the right policy decisions in an area as high-stakes and ever-evolving as security means that you'll likely spend hours researching and reading through documentation, perhaps even hiring experts. To partner with you and help address these challenges, Google Cloud built the security foundations blueprint to identify core security decisions and guide you with opinionated best practices for deploying a secured GCP environment.

What is the security foundations blueprint?

The security foundations blueprint is made up of two resources: the security foundations guide and the Terraform automation repository. For each security decision, the security foundations guide provides opinionated best practices to help you build a secure starting point for your Google Cloud deployment, and it can be read and used as a reference guide. The recommended policies and architecture outlined in the document can then be deployed through automation using the Terraform repository available on GitHub.

Who is it for?

The security foundations blueprint was designed with the enterprise in mind, including those with the strongest security requirements. However, the best practices are applicable to cloud customers of any size, and can be adapted or adopted in pieces as needed for your organization. As far as who in an organization will find it most useful, it is beneficial for many roles:

CISOs and compliance officers will use the security foundations guide as a reference to understand Google's key principles for cloud security and how they can be applied and implemented in their deployments.
Security practitioners and platform teams will follow the guide's detailed instructions and accompanying Terraform templates for applying best practices so that they can actively set up, configure, deploy, and operate their own security-centric infrastructure.
Application developers will deploy their workloads and applications on this foundational infrastructure through an automated application deployment pipeline provided in the blueprint.
What topics does it cover?

The security foundations blueprint continues to expand the topics it covers, with its most recent release in April 2021 including the following areas:

Google Cloud organization structure and policy
Authentication and authorization
Resource hierarchy and deployment
Networking (segmentation and security)
Key and secret management
Logging
Detective controls
Billing setup
Application security

Each of the security decisions addressed in these topics comes with background and discussion to support your own understanding of the concepts, which in turn enables you to customize the deployment to your own specific use case (if needed). The topics are useful separately, which makes it possible to pick and choose areas where you need recommendations, but they also work together. For example, by following the best practices for project and resource naming conventions, you will be set up for advanced monitoring capabilities, such as real-time notifications for compliance with custom policies.

How can I use it?

While the security foundations guide is incredibly valuable on its own, the real magic for a security practitioner or application developer comes from the ability to adopt, adapt, and deploy the best practices using templates in Terraform, a tool for managing Infrastructure as Code (IaC). For anyone new to IaC, simply put, it allows you to automate your infrastructure by writing code that configures and provisions your infrastructure. By using IaC to minimize the amount of manual configuration, you also benefit by limiting the possibility of human error in enforcing these components of your security policy.

The Security Foundations Blueprint as an automated deployment pipeline

The Terraform automation repo includes configuration that defines the environment outlined in the guide. You can apply the repo end-to-end to deploy the full security foundations blueprint, or use the included modules individually and modify them so that you can adopt just portions of the blueprint. It's important to note that there are a few differences between the policies for the sample organization outlined in the guide and what is deployed using the Terraform templates. Luckily, those few differences are outlined in the errata pages that are part of the Terraform automation repo.

So what should I do next?

We hope you'll continue following the journey of this blog series, where we'll dive deeper into the best practices provided throughout the topical sections of the guide, discuss the different ways in which the blueprints have helped enterprises secure their own cloud deployments, and take a look inside the Terraform templates to see how they can be adopted, adapted, and deployed. In the meantime, take a look at this recent Cloud blog post, which announces the launch of the blueprint's latest version and discusses the key security principles that steer the best practices. If you're ready to dive straight into the security foundations guide, you can start at the beginning or head to a topic in which you're particularly interested. Reviewing the guide in this way, you will be able to see for yourself the level of detail and discussion and, most importantly, the direct path it provides to move beyond recommendations and into implementation. We don't expect you to try to apply the blueprint to your security posture right away, but a great first step would be to fork the repo and deploy it in a new folder or organization.
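As a parting illustration of the "custom policies" point above, here is a hedged Python sketch of a naming-convention compliance check; the regex is a hypothetical pattern rather than the blueprint's actual convention, and the project IDs are placeholders.

    # A small illustration of the kind of custom compliance check that consistent naming
    # conventions enable. The regex is a hypothetical pattern, not the blueprint's actual
    # convention, and the project IDs below are placeholders.
    import re

    # Hypothetical convention: prj-<environment>-<team>-<suffix>
    NAMING_PATTERN = re.compile(r"^prj-(dev|nonprod|prod)-[a-z0-9]+-[a-z0-9]+$")

    def non_compliant(project_ids):
        """Return the project IDs that do not match the naming convention."""
        return [pid for pid in project_ids if not NAMING_PATTERN.match(pid)]

    projects = ["prj-prod-payments-core", "legacy-billing-app", "prj-dev-data-sandbox"]
    for pid in non_compliant(projects):
        # In practice this is where you would raise an alert or file a finding.
        print(f"Project {pid} does not follow the naming convention")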
Go forth, deploy, and stay safe out there.

Related article: Build security into Google Cloud deployments with our updated security foundations blueprint. Get step-by-step guidance for creating a secured environment with Google Cloud with the security foundations guide and Terraform blueprin…
Source: Google Cloud Platform

Secure Software Supply Chain Best Practices

Last month, the Cloud Native Computing Foundation (CNCF) Security Technical Advisory Group published a detailed document about Software Supply Chain Best Practices. You can get the full document from their GitHub repo. This was the result of months of work from a large team, with special thanks to Jonathan Meadows and Emily Fox. As one of the CNCF reviewers, I had the pleasure of reading several iterations and seeing it take shape and improve over time.

Supply chain security has gone from a niche concern to something that makes headlines, particularly after the SolarWinds "Sunburst" attack last year. Last week it was an important part of United States President Joe Biden's Executive Order on Cybersecurity. So what is it? Every time you use software that you didn't write yourself, often open source software that you use in your applications, you are trusting both that the software you added is what you think it is, and that it is trustworthy rather than hostile. Usually both these things are true, but when they go wrong, like when hundreds of people installed updates from SolarWinds that turned out to contain code to attack their infrastructure, the consequences are serious. As people have hardened their production environments, attacking software as it is written, assembled, built, or tested, before production, has become an easier route.

The CNCF Security paper started after discussions I had with Jonathan about what work needs to be done to make secure supply chains easier and more widely adopted. The paper does a really good job in explaining the four key principles:

First, every step in a supply chain should be "trustworthy" as a result of a combination of cryptographic attestation and verification (a minimal sketch of this idea follows this list).
Second, automation is critical to supply chain security. Automating as much of the software supply chain as possible can significantly reduce the possibility of human error and configuration drift.
Third, the build environments used in a supply chain should be clearly defined, with limited scope.
Fourth, all entities operating in the supply chain environment must be required to mutually authenticate using hardened authentication mechanisms with regular key rotation.
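To make the first principle concrete, here is a minimal sketch of attestation and verification using an Ed25519 signature from the Python cryptography library; in a real supply chain this role is played by dedicated tooling (Notary, Docker Content Trust, in-toto attestations) rather than hand-rolled code, and the manifest contents are placeholders.

    # A minimal sketch of cryptographic attestation and verification using the Python
    # "cryptography" library. Real supply chains use dedicated tooling; this only
    # illustrates the sign-then-verify principle, and the manifest is a placeholder.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The build system signs a manifest describing the artifact it produced.
    private_key = Ed25519PrivateKey.generate()
    artifact_manifest = b"myapp:1.2.3 sha256:placeholder-digest"
    signature = private_key.sign(artifact_manifest)

    # A later step in the chain verifies the attestation before using the artifact.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, artifact_manifest)
        print("artifact attestation verified")
    except InvalidSignature:
        print("artifact attestation rejected")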

In simpler language, this means that you need to be able to securely trace all the code you are using, the exact versions you are using, and where they came from, in an automated way so that there are no errors. Your build environments should be minimal, secure, and well defined, i.e. containerised. And you should be making sure everything is authenticated securely.
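One everyday piece of that traceability is pinning dependencies and verifying their digests before use. The sketch below checks a downloaded artifact against a pinned SHA-256 value; the file path and the expected digest are placeholders.

    # A small sketch of verifying a downloaded artifact against a pinned SHA-256 digest.
    # The file path and the expected digest are placeholders.
    import hashlib
    import sys

    EXPECTED_SHA256 = "0123456789abcdef" * 4  # placeholder pinned digest

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of("vendor/some-dependency-1.2.3.tar.gz")  # placeholder path
    if actual != EXPECTED_SHA256:
        sys.exit(f"digest mismatch: expected {EXPECTED_SHA256}, got {actual}")
    print("dependency digest verified")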

The majority of people do not meet all these criteria, making exact traceability difficult. The report has strong recommendations for environments that are more sensitive, such as those dealing with payments and other sensitive areas. Over time these requirements will become much more widely used because the risks are serious for everyone.

At Docker we believe in the importance of a secure software supply chain, and we are going to bring you simple tools that improve your security. We already set the standard with Docker Official Images, the most widely trusted images that developers and development teams use as a secure basis for their application builds. Additionally, we offer CVE scanning in conjunction with Snyk, which helps identify the many risks in the software supply chain. We are currently working with the CNCF, Amazon, and Microsoft on the Notary v2 project to update container signing, which we will ship in a few months. This is a revamp of Notary v1 and Docker Content Trust that makes signatures portable between registries, improves usability, and has broad industry consensus. We have more plans to improve security for developers and would love your feedback and ideas in our roadmap repository.
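As a small, hedged example of putting pinning into practice from code, the sketch below uses the Docker SDK for Python to pull an Official Image and print its repo digest, so builds can reference immutable content rather than a mutable tag; the image name and tag are just examples.

    # A small sketch using the Docker SDK for Python (docker-py): pull an Official Image
    # and print its repo digests so the exact content can be pinned in builds
    # (e.g., FROM python@sha256:...). The image name and tag are examples, and a running
    # Docker daemon is required.
    import docker

    client = docker.from_env()
    image = client.images.pull("python", tag="3.9-slim")

    # RepoDigests ties the tag you asked for to immutable, verifiable content.
    for digest in image.attrs.get("RepoDigests", []):
        print(digest)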
Source: https://blog.docker.com/feed/

Amazon Connect Customer Profiles launches in the Canada (Central) Region

Amazon Connect now lets you enable Amazon Connect Customer Profiles in the Canada (Central) Region, providing contact center agents with the most up-to-date information about the incoming contact so they can deliver faster, more personalized customer service. Customer Profiles automatically merges customer information from different applications such as Salesforce, Amazon S3, and ServiceNow into a unified customer profile that is made available to agents at the start of the customer interaction.
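For developers who want to look up these unified profiles programmatically, a rough boto3 sketch might look like the following, assuming a Customer Profiles domain and its data integrations are already set up; the domain name, search key, and value are placeholders.

    # A rough sketch of querying Customer Profiles with boto3 in the Canada (Central) Region.
    # The domain name, search key, and value are placeholders; the domain and its
    # integrations must already exist for your Amazon Connect instance.
    import boto3

    client = boto3.client("customer-profiles", region_name="ca-central-1")

    response = client.search_profiles(
        DomainName="my-connect-domain",   # placeholder domain
        KeyName="_email",                 # assumed standard search key
        Values=["customer@example.com"],  # placeholder value
    )

    for profile in response.get("Items", []):
        print(profile.get("ProfileId"), profile.get("FirstName"), profile.get("LastName"))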
Source: aws.amazon.com