Automotive industry: going on the offensive

Job cuts in the auto industry are brutal. At the same time, existing staff are being retrained for IT roles, or IT specialists are being hired. We asked manufacturers and suppliers how they are going about it. An analysis by Peter Ilg (cars, electric cars)
Source: Golem

Forbes embraces MongoDB on Google Cloud as part of digital-first strategy

It's often said that success is more about the journey than it is about your destination. But for a modern tech organization, just the opposite is true. Every technology journey gets judged by its outcome: does it contribute something of value to the business?

The emergence of cloud-based managed services didn't change the importance of this question for IT organizations. But the cloud has challenged IT to rethink which technology choices truly create business value—and which ones today aren't as compelling as they used to be.

Database systems are a great example of how this process is playing out for many IT organizations. The care and feeding of an on-premises database is one of the most expensive, demanding, and unforgiving IT functions. A typical enterprise will dedicate a small army of database administrators, along with a good part of its server, storage, network, and disaster recovery infrastructure, to keep business-critical data stores available and secure.

Modern database systems, including MongoDB, have been a big step forward—giving businesses a more flexible, scalable, and developer-friendly alternative to legacy relational databases. But there's an even bigger payoff with a solution such as MongoDB Atlas: a fully managed database-as-a-service (DBaaS) offering. It's an approach that gives businesses all of the advantages of a modern, scalable, highly available database, while freeing IT to focus on high-value activities.

Forbes and MongoDB Atlas: rising to the challenge of record-setting growth

Forbes is one example of what's possible when a tech organization executes a growth strategy that integrates a DBaaS solution with a cloud-native application architecture—in this case, migrating from self-managed MongoDB to MongoDB Atlas running on Google Cloud. Forbes was one of the first media brands to launch a web presence, and its team was already contending with a long run of record-breaking growth. In May 2020 alone, the company attracted more than 120 million unique visitors to Forbes.com. At the same time, however, Forbes was tasked with driving an aggressive growth and innovation strategy that included seven new online newsletters and an array of new services for both readers and journalists.

Forbes' decision to migrate from its on-premises MongoDB deployment to MongoDB Atlas running on Google Cloud was crucial to hitting its business technology goals. By adopting a cloud-native architecture, Forbes could also implement an intermediate abstraction layer that placed a stable API on top of the database. This allowed more freedom and flexibility to work with changing data structures while minimizing the risk of breaking the services that use the data. In addition, by pairing MongoDB Atlas with Google Cloud, Forbes could build its digital products on a reliable and highly scalable infrastructure—one that can seamlessly handle both upward and downward spikes in traffic, without the cost and complexity of overprovisioning.

And by adopting a fully managed DBaaS, Forbes got out from under the immense burden of managing, scaling, and securing an on-premises system. Resources the company had devoted to running database systems and infrastructure were now free to focus entirely on delivering high-value projects that drove growth and elevated the Forbes reader experience. "We did not want to be in the database management business," said Forbes CTO Vadim Supitskiy.
"We were now abstracted enough to focus solely on value delivery."

Forbes' decision to run MongoDB Atlas on Google Cloud yielded other important benefits. The company's application architecture, for example, relied on Kubernetes to orchestrate more than 50 microservices, which made Google Kubernetes Engine (GKE) a key source of IT value. Forbes unlocked additional value via integrations with other pieces of the Google Cloud software ecosystem, including its AI and machine learning capabilities, and with serverless applications built on App Engine that dramatically improved developer productivity and efficiency.

For Forbes, the move to MongoDB Atlas on Google Cloud is already showing results: release cycles for services have accelerated anywhere from 2x to 10x, while total cost of ownership for its database system has dropped by 25%. And the company's newly launched newsletters drove a 92% increase in overall newsletter subscriptions during 2020—a critical metric for any brand, and one that can be hard to push upward in a highly competitive industry.

An example like Forbes provides a good starting point for understanding how MongoDB Atlas and Google Cloud can create value for your IT organization when assessing DBaaS offerings on various cloud platforms.

Sizing up your database-as-a-service options: 4 key questions

1. Who is managing a database service, and how will they add value to an offering?

Look around online, and you'll find a number of vendors with cloud offerings that look, at first glance, very similar to MongoDB Atlas. Some of these actually use MongoDB, but they may or may not offer the latest version with the most complete capabilities. Other vendors don't actually use MongoDB but rather attempt to emulate it—and their efforts may fall short in unpredictable ways.

What's unique about MongoDB Atlas is that it's built, supported, and maintained by the core MongoDB engineering team. There's tremendous value in getting database support directly from MongoDB engineers and consulting services from engineers with multiple years of MongoDB experience.

MongoDB Atlas also holds a significant technology edge over third-party managed database services based on MongoDB. Compared to these competing offerings, only MongoDB Atlas supports all MongoDB features with full application compatibility, access to the latest MongoDB version, and the most complete JSON data type support.

2. Will a database service support a true cloud-native app strategy?

In theory, moving your database to the cloud opens the door to massive gains in performance, scalability, and innovation potential. In practice, not every managed database is equal in terms of being engineered and optimized as a cloud-native application.

Keeping this in mind, there are five key areas where you should expect any cloud database, including MongoDB Atlas, to offer a clear advantage over legacy database systems:
- Efficiency, including automated deployments and provisioning, setup and maintenance, and version upgrades.
- Performance, with on-demand scaling and real-time performance insights.
- Mission-critical reliability, including distributed fault tolerance and backup options.
- Security, with controls and features that meet current protocols and compliance standards.
- Productivity, with drivers, integrations, and native tools that keep developers focused and engaged.

3. Does a database service maximize your freedom and flexibility?
Every cloud provider wants your business, but the reality is that many companies want the ability to deploy databases, services, and data stores across multi-cloud and hybrid cloud environments. MongoDB Atlas offers true multi-cloud capabilities, with easy portability between clouds. MongoDB also supports public cloud, private cloud, on-premises, and hybrid deployments with MongoDB Enterprise Advanced. This degree of multi-cloud support also means that MongoDB Atlas is available across a total of more than 70 public-cloud global regions—a high-value capability for companies with specialized data security, governance, or compliance requirements.

4. How dependent is your business on your ability to provide a high-quality customer experience—anywhere, at any time?

Adopting a truly global database infrastructure isn't just about data security or disaster recovery. It's also a major piece of the puzzle for businesses focused on global growth, especially in regions where digital businesses often create less-than-stellar customer experiences due to performance and latency issues.

MongoDB Atlas addresses these types of performance issues with its use of global clusters—in essence, a cluster that includes different zones around the world to handle both reads and writes. By combining MongoDB Atlas with Google Cloud's virtual private cloud capabilities, it becomes possible to build applications that offer in-region latency for audiences almost anywhere in the world. And that's an advantage with game-changing potential for any business aiming to build a loyal and satisfied global customer base.

The goal: Getting the most from your cloud choices

There's tremendous value in migrating to a database service that frees up IT for projects that drive innovation and growth, and MongoDB Atlas meets that need. But questions like the ones above address the bigger challenge: choosing a DBaaS offering that truly taps into the full potential the cloud has to offer, and that leaves nothing off the table in terms of performance, growth, freedom, and flexibility. This is a much stricter and more difficult standard for any cloud application to meet, and it's the reason why MongoDB Atlas and Google Cloud offer some important and unique advantages.

Learn more about MongoDB Atlas on Google Cloud. Watch Vadim Supitskiy, Forbes' CTO, chat with Lena Smart, MongoDB CISO, about how Forbes set digital innovation standards with MongoDB and Google Cloud.

Related article: Announcing MongoDB Atlas free tier on GCP. The free tier offers a no-cost sandbox environment for MongoDB Atlas on GCP so you can test any potential MongoDB workloads and decide to…
Source: Google Cloud Platform

Healthcare gets more productive with new industry-specific AI tools

COVID-19 shined a light on the heroic efforts of front-line healthcare workers. But it also highlighted some of the challenges around managing healthcare data and interpreting unstructured digital text. For healthcare professionals, the process of reviewing and writing medical documents is incredibly labor-intensive, and the lack of intelligent, easy-to-use tools for the unique requirements of medical documentation leads to data-capture errors, a diminished patient-doctor experience, and physician burnout.

Today, we are excited to launch in public preview a suite of fully managed AI tools designed to help with these challenges: the Healthcare Natural Language API and AutoML Entity Extraction for Healthcare. These tools assist healthcare professionals with the review and analysis of medical documents in a repeatable, scalable way. We hope this technology will help reduce workforce burnout and increase healthcare productivity, both in the back office and in clinical practice.

Healthcare Natural Language API enables auto-summarization of medical insights

A significant pain point in the healthcare industry is that mission-critical medical knowledge is often stored in unstructured digital text—that is, content lacking metadata that can't be mapped into standard database fields. For example, social determinants of health like substance abuse or physical activity, and follow-up recommendations such as medication amounts or behavior suggestions, often reside within the unstructured text of a medical record. The main path to accessing such information is a manual review of the medical document by a healthcare professional.

With the Healthcare Natural Language API, enterprise customers can now better coordinate valuable medical insights captured in unstructured text, such as vaccinations or medications, that may otherwise be overlooked as patients move through their healthcare journeys. This can drive measurable outcomes by lowering the likelihood of redundant bloodwork or other tests, reducing operational spending, and improving the patient-doctor experience.

How does it work? The Healthcare Natural Language API identifies medical insights in documents, automatically extracting knowledge about medical procedures, medications, body vitals, or medical conditions. Using machine learning, the API identifies clinically relevant attributes based on the surrounding context. For example, it discerns medications prescribed in the past from medications prescribed for the future, and it picks up the likelihood of a specific symptom or diagnosis as captured in language nuances. It can also distinguish medical insights that pertain to the patient from information that pertains to a patient's relative.

To facilitate analysis of medical insights at scale, the Healthcare Natural Language API automatically normalizes medical information against an industry-standard knowledge graph such as Medical Subject Headings (MeSH) or the International Classification of Diseases (ICD). Human language is rich in concepts—often with overlapping meaning—yet analysis requires standardized data inputs. For example, the medical condition diabetes is commonly referred to as diabetes mellitus, while croup is also called laryngotracheobronchitis in specialist terms.
With the Healthcare Natural Language API, similar medical information gets normalized into a standardized medical knowledge graph.

Finally, the Healthcare Natural Language API enriches health applications that rely on the interpretation of unstructured digital text. For example, telehealth companies can deploy the Healthcare Natural Language API to identify the most relevant symptoms, pre-existing conditions, and medications from a transcribed doctor-patient conversation. Pharmaceutical and biotechnology customers may employ the Healthcare Natural Language API to optimize clinical trials by increasing the accuracy of patients matched against granular inclusion/exclusion protocol criteria. The Healthcare Natural Language API can also drive operational efficiencies in document-review workflows like Healthcare Effectiveness Data and Information Set (HEDIS) quality reporting or Hierarchical Condition Category (HCC) risk adjustment.

AutoML Entity Extraction for Healthcare facilitates custom information extraction

In addition to the Healthcare Natural Language API, we are launching AutoML Entity Extraction for Healthcare—an easy-to-use AI development platform that broadens access to AI across users with various technical backgrounds. AutoML Entity Extraction for Healthcare complements the coverage of insights available through the Healthcare Natural Language API.

Healthcare professionals may not have the technical expertise on hand to build their own tools for extracting information from digital documents. With AutoML Entity Extraction for Healthcare, we've made this much easier to do via a low-code interface, letting healthcare professionals build information extraction tools for gene mutations and socioeconomic factors, for example. AutoML Entity Extraction for Healthcare enriches digital health applications such as telemedicine, drug discovery, or clinical trials for rare diseases.

To help customers get started with AutoML Entity Extraction for Healthcare, we are open-sourcing a set of annotation guidelines for medical text. We hope the community will contribute to the refinement and expansion of these guidelines to mirror the ever-evolving nature of healthcare.

Accelerating impact through partners

To help deliver these solutions to providers, payers, and life science companies nationwide, Google Cloud is partnering with a number of key solutions providers. SADA, a Google Cloud solutions provider, believes the new tools will help healthcare customers implement medical analysis projects in days, not weeks. "The richest information about the health of a patient is typically not found within the structured fields of a medical record system. Instead, it is contained within the lengthy free-text notes that a clinician either types or dictates into the medical record in the course of care," says Michael Ames, Sr. Director of Healthcare and Life Sciences at SADA. "I'm very excited for the opportunities this suite of Healthcare Natural Language AI tools from Google Cloud will create."

What's next

Learn more about getting started with the Healthcare Natural Language API, which is free of charge for the next 30 days, until December 10th, 2020. To get started with the public preview of AutoML Entity Extraction for Healthcare, see our step-by-step guide and check out our website, or contact sales for more information.
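As a rough sketch of what a call to the Healthcare Natural Language API can look like from the command line during the public preview: the API version, region, endpoint path, and request field shown here are assumptions based on general Cloud Healthcare API conventions rather than details from this announcement, and the sample sentence is invented.

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  "https://healthcare.googleapis.com/v1beta1/projects/PROJECT_ID/locations/us-central1/services/nlp:analyzeEntities" \
  -d '{"documentContent": "Patient reports mild croup; prescribed 5 ml dexamethasone once daily."}'

The response would list the extracted entity mentions (conditions, medications, dosages) along with the normalized vocabulary codes discussed above, which is the form that downstream workflows such as HEDIS reporting or HCC risk adjustment can consume.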
Source: Google Cloud Platform

DORA and the shared pursuit of digital operational resilience in finance

If you are a financial entity in the European Union (EU), the new draft regulation from the European Commission on Digital Operational Resilience for the Financial Sector (DORA) is likely top of mind. DORA aims to consolidate and upgrade existing Information and Communications Technology (ICT) risk management requirements, and it also introduces a new framework for direct oversight of critical ICT service providers by financial regulators in the EU. Where the criteria are met, this would apply to cloud service providers like Google Cloud. It's important to know that DORA is still in draft and is going through the legislative process. As of today, DORA doesn't create any new requirements for financial entities or ICT service providers. Google Cloud is following the proposed regulation and is contributing to the collaborative dialogue that is shaping it, to help DORA achieve the European Commission's priorities.

Enhancing the digital resilience of the European financial system

DORA addresses a number of important topics for financial entities using ICT services, from incident reporting to operational resilience testing and third-party risk management, with the objective of enhancing the digital resilience of the European financial system.

Resilience and security are at the core of Google Cloud's operations. We firmly believe that migration to the public cloud can help financial entities improve their operational resilience and security posture. These benefits have come into full view during the COVID-19 pandemic — our technology and infrastructure have continued to support our customers without shortfalls. At the same time, the oversight framework for critical third-party providers under DORA could create a genuine opportunity to enhance understanding, transparency, and trust among ICT service providers, financial entities, and financial regulators, and ultimately stimulate innovation in the financial sector in Europe.

Google Cloud already supports our customers in many of the areas addressed in DORA:
- Incident reporting: To protect our customers' data, Google Cloud runs an industry-leading information security operation that combines stringent processes, a world-class team, and multi-layered information security and privacy infrastructure. Our Data incident response whitepaper outlines Google Cloud's approach to managing and responding to data incidents.
- Operational resilience and testing: Our global infrastructure, baseline controls, and security features offer strong tools that customers can use to achieve resilience on our services. We are also committed to open source standards. These solutions help customers control the availability of their workloads and run them wherever they want without being dependent on or locked into a single cloud provider. We also recognize that resilience must be tested. Google Cloud conducts our own rigorous testing, including penetration testing and disaster recovery testing, and empowers our customers to perform their own penetration testing. We also provide information about how customers can use our services in their disaster recovery planning in our Disaster Recovery Planning Guide.
- Third-party risk: We recognize that financial entities must consider outsourcing and third-party risk management requirements when using cloud services. Google Cloud's contracts for financial entities in the EU address the contractual requirements in the EBA outsourcing guidelines and the EIOPA cloud outsourcing guidelines.
We pay close attention as laws and regulatory expectations continue to evolve.

Policy engagement on the new framework

As the conversation around DORA progresses, we will continue to lend our view and technology expertise to policymakers and industry in a transparent manner, in particular advocating for the following:
- Harmonization and deduplication of requirements, including between DORA and existing frameworks like the European Supervisory Authorities' Outsourcing Guidelines and the NIS Directive.
- Requirements that are proportionate and fit-for-purpose, especially those that recognize the technological and operational realities of evolving ICT services in the cloud context.
- Technology neutrality and innovation, which we believe is always encouraged by open ecosystems and the free flow of data.
- An approach that would be consistent with a multi-tenant cloud environment and respect the security and integrity of our services for all customers, whether they are subject to DORA or not.

We are committed to being a constructive voice as we engage with stakeholders on the proposal. Open dialogue and sharing expertise and best practices will be key to DORA's effectiveness.
Source: Google Cloud Platform

Dataproc cooperative multi-tenancy

Data analysts run their BI workloads on Dataproc to generate dashboards, reports, and insights. Diverse sets of data analysts from various teams analyzing data drive the need for multi-tenancy for Dataproc workloads. Today, workloads from all users on a cluster run as a single service account, so every workload has the same data access. Dataproc Cooperative Multi-tenancy enables multiple users with distinct data access to run workloads on the same cluster.

A Dataproc cluster usually runs workloads as the cluster service account. Creating a Dataproc cluster with Dataproc Cooperative Multi-tenancy enables you to isolate user identities when running jobs that access Cloud Storage resources. The mapping of Cloud IAM users to service accounts is specified at cluster creation time, and many service accounts can be configured for a given cluster. This means that interactions with Cloud Storage are authenticated as the service account mapped to the user who submits the job, instead of the cluster service account.

Considerations

Dataproc Cooperative Multi-tenancy has the following considerations:
- Set up the mapping of Cloud IAM users to service accounts by setting the dataproc:dataproc.cooperative.multi-tenancy.user.mapping property. When a user submits a job to the cluster, the VM service account impersonates the service account mapped to this user and interacts with Cloud Storage as that service account, through the GCS connector.
- Requires GCS connector version 2.1.4 or later.
- Does not support clusters with Kerberos enabled.
- Intended for jobs submitted through the Dataproc Jobs API only.

Objectives

This blog demonstrates the following:
- Create a Dataproc cluster with Dataproc Cooperative Multi-tenancy enabled.
- Submit jobs to the cluster with different user identities and observe the different access rules applied when interacting with Cloud Storage.
- Verify, through Stackdriver logging, that interactions with Cloud Storage are authenticated with different service accounts.

Before You Begin

Create a Project
- In the Cloud Console, on the project selector page, select or create a Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- Enable the Dataproc API.
- Enable the Stackdriver API.
- Install and initialize the Cloud SDK.

Simulate a Second User

Usually you would have an actual second user, but you can also simulate one by using a separate service account. Since you are going to submit jobs to the cluster as different users, you can activate a service account in your gcloud settings to simulate a second user.

First, get your currently active account in gcloud. In most cases this is your personal account:

FIRST_USER=$(gcloud auth list --filter=status:ACTIVE --format="value(account)")

Then create a service account, grant it the permissions it needs to submit jobs to a Dataproc cluster, create a key for it, and use the key to activate it in gcloud (a sketch of these commands follows below). You can delete the key file after the service account is activated. Now if you run the following command:

gcloud auth list --filter=status:ACTIVE --format="value(account)"

you will see that this service account is the active account.
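A minimal sketch of those setup steps follows; the project ID and service account name are placeholders, and the choice of roles/dataproc.editor as the "proper permissions to submit jobs" is an assumption:

PROJECT=my-project
gcloud iam service-accounts create second-user-sa
SECOND_USER=second-user-sa@${PROJECT}.iam.gserviceaccount.com

# Allow the simulated second user to submit Dataproc jobs (role choice is an assumption)
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="serviceAccount:${SECOND_USER}" \
  --role="roles/dataproc.editor"

# Create a key and activate the account in gcloud; the key file can be deleted afterwards
gcloud iam service-accounts keys create /tmp/second-user-key.json --iam-account=${SECOND_USER}
gcloud auth activate-service-account ${SECOND_USER} --key-file=/tmp/second-user-key.json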
In order to proceed with the examples below, switch back to your original active account:

gcloud config set account ${FIRST_USER}

Configure the Service Accounts

Create three additional service accounts: one as the Dataproc VM service account, and the other two as the service accounts mapped to users (user service accounts). Note: we recommend using a per-cluster VM service account and only allowing it to impersonate the user service accounts you intend to use on the specific cluster.

Grant the iam.serviceAccountTokenCreator role to the VM service account on the two user service accounts, so it can impersonate them. Also grant the dataproc.worker role to the VM service account so it can perform the necessary work on the cluster VMs.

Create Cloud Storage Resources and Configure Service Accounts

Create a bucket and write a simple file to it:

echo "This is a simple file" | gsutil cp - gs://${BUCKET}/file

Grant only the first user service account, USER_SA_ALLOW, admin access to the bucket:

gsutil iam ch serviceAccount:${USER_SA_ALLOW}:admin gs://${BUCKET}

Create a Cluster and Configure Service Accounts

In this example, we will map the user "FIRST_USER" (the personal user) to the service account with GCS admin permissions, and the user "SECOND_USER" (simulated with a service account) to the service account without GCS access.

Note that cooperative multi-tenancy is only available in GCS connector version 2.1.4 and later. It is pre-installed on Dataproc image version 1.5.11 and up, but you can use the connectors initialization action to install a specific version of the GCS connector on older Dataproc images. The VM service account needs to call the generateAccessToken API to fetch access tokens for the job service accounts, so make sure your cluster has the right scopes; in the example below, I'll just use the cloud-platform scope.

Notes:
1. The user service accounts might need access to the config bucket associated with the cluster in order to run jobs, so make sure you grant them access.
2. On Dataproc clusters with 1.5+ images, Spark and MapReduce history files are sent by default to the temp bucket associated with the cluster, so you might want to grant the user service accounts access to this bucket as well.
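A sketch of the cluster-creation step, under stated assumptions: the region and image version are illustrative, USER_SA_DENY stands for the second (no-access) user service account, and the comma-separated user:service-account syntax for the mapping value is an assumption, so check the Dataproc documentation for the exact format before relying on it.

gcloud dataproc clusters create ${CLUSTER_NAME} \
  --region=${REGION} \
  --image-version=1.5 \
  --service-account=${VM_SA} \
  --scopes=cloud-platform \
  --properties="dataproc:dataproc.cooperative.multi-tenancy.user.mapping=${FIRST_USER}:${USER_SA_ALLOW},${SECOND_USER}:${USER_SA_DENY}"

The --service-account and --scopes flags cover the VM service account and cloud-platform scope mentioned above; the property name itself comes from the Considerations section.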
Run Example Jobs

Run a Spark job as "FIRST_USER". Since the mapped service account has access to the GCS file gs://${BUCKET}/file, the job succeeds. Now run the same job as "SECOND_USER". Since the mapped service account has no access to the GCS file gs://${BUCKET}/file, the job fails, and the driver output shows that this is because of permission issues: the service account used does not have storage.objects.get access to the GCS file.

Similarly for a Hive job (creating an external table in GCS, inserting records, then reading the records): run as user "FIRST_USER", it succeeds because the mapped service account has access to the bucket <BUCKET>. However, when querying the table employee as a different user, "SECOND_USER", the job uses the second user service account, which has no access to the bucket, and the job fails.

Verify Service Account Authentication with Cloud Storage Through Stackdriver Logging

First, check the usage of the first service account, which has access to the bucket. Make sure the gcloud active account is your personal account:

gcloud config set account ${FIRST_USER}

Find logs about access to the bucket using the service account with GCS permissions:

gcloud logging read "resource.type=\"gcs_bucket\" AND resource.labels.bucket_name=\"${BUCKET}\" AND protoPayload.authenticationInfo.principalEmail=\"${USER_SA_ALLOW}\""

The results show that permissions were always granted. Checking the service account that has no access to the bucket shows that access was never granted. And we can verify that the VM service account was never directly used to access the bucket (the following command returns 0 log entries):

gcloud logging read "resource.type=\"gcs_bucket\" AND resource.labels.bucket_name=\"${BUCKET}\" AND protoPayload.authenticationInfo.principalEmail=\"${VM_SA}\""

Cleanup

Delete the cluster:

gcloud dataproc clusters delete ${CLUSTER_NAME} --region ${REGION} --quiet

Delete the bucket:

gsutil rm -r gs://${BUCKET}

Deactivate the service account used to simulate a second user:

gcloud auth revoke ${SECOND_USER}

Then delete the service accounts.

Note
- The cooperative multi-tenancy feature does not yet work on clusters with Kerberos enabled.
- Jobs submitted by users without a service account mapped to them will fall back to using the VM service account when accessing GCS resources. However, you can set the `core:fs.gs.auth.impersonation.service.account` property to change the fallback service account. The VM service account will have to be able to call `generateAccessToken` to fetch access tokens for this fallback service account as well.

This blog demonstrates how you can use Dataproc Cooperative Multi-tenancy to share Dataproc clusters across multiple users.

Related article: Presto optional component now available on Dataproc. The Presto query engine optional component is now available for Dataproc, Google Cloud's fully managed Spark and Hadoop cluster service.
Source: Google Cloud Platform

Combining Snyk Scans in Docker Desktop and Docker Hub to Deploy Secure Containers

Last week, we announced that the Docker Desktop Stable release includes vulnerability scanning, the latest milestone in the container security solution that we are building with our partner Snyk. You can now run Snyk vulnerability scans directly from the Docker Desktop CLI. Combining this functionality with the Docker Hub scanning that we launched in October gives you the flexibility to include vulnerability scanning at multiple points of your development inner loop, and provides better tooling for deploying secure applications.

You can decide whether to run your first scans from the Desktop CLI or from the Hub. Customers that have used Docker for a while tend to prefer starting from the Hub. The easiest way to jump in is to configure your Docker Hub repos to automatically trigger scanning every time you push an image into a repo. This option is configurable for each repository, so you can decide how to onboard these scans into your security program. (Docker Hub image scanning is available only to Docker Pro and Team subscribers; for more information about subscriptions, visit the Docker Pricing Page.)

Once you enable scanning, you can view the scan results either in Docker Hub or directly from the Docker Desktop app, as described in this blog.

From the scan results summary you can drill down into the detailed data for each scan and get more information about each vulnerability type. The most useful part of that data is the Snyk recommendation on how to remediate the detected vulnerability, and whether a higher package version is available in which the specific vulnerability has already been addressed.

Detect, Then Remediate 

If you are viewing vulnerability data from Docker Desktop, you can start remediating vulnerabilities and testing remediations directly from your CLI. Triggering scans from Docker Desktop is simple: just run docker scan, and you can run iterative tests that confirm successful remediation before pushing the image back into the Hub.
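For example, a first pass over a locally built image might look like the following sketch; the image name and tag are placeholders, and the first run may prompt you to authenticate with the scan provider:

docker build -t myapp:mytag .
docker scan myapp:mytag
# fix the Dockerfile or bump vulnerable packages, then rebuild and re-scan
docker build -t myapp:mytag .
docker scan myapp:mytag

Repeating that build-and-scan loop locally is exactly the iterative remediation workflow described above, and it keeps the image clean before it ever reaches the Hub.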

For new Docker users, consider running your first scans from the Desktop CLI. The Docker Desktop Vulnerability Scanning CLI Cheat Sheet is a fantastic resource for getting started.

The CLI Cheat Sheet starts from the basics, which are also described in the Docker documentation page on Vulnerability scanning for Docker local images. These include steps for running your first scans, a description of the vulnerability information included with each scan result, and the docker scan flags that help you specify the scan results you want to view. Some of these docker scan flags are:

--dependency-tree – displays the list of all the underlying package dependencies that include the reported vulnerability
--exclude-base – runs an image scan without reporting vulnerabilities associated with the base image layer
--file (or -f) – includes the vulnerability data for the associated Dockerfile
--json – displays the vulnerability data in JSON format

The really cool thing about this Cheat Sheet is that it shows you how to combine these flags to create a number of options for viewing your data:

Show only high severity vulnerabilities from layers other than the base image:
$ docker scan myapp:mytag --exclude-base --file path/to/Dockerfile --json | jq '[.vulnerabilities[] | select(.severity=="high")]'

Show high severity vulnerabilities with a CVSSv3 network attack vector:
$ docker scan myapp:mytag --json | jq '[.vulnerabilities[] | select(.severity=="high") | select(.CVSSv3 | contains("AV:N"))]'

Show high severity vulnerabilities with a fix available:
$ docker scan myapp:mytag --json | jq '[.vulnerabilities[] | select(.nearestFixedInVersion) | select(.severity=="high")]'

Running the CLI scans and remediating vulnerabilities before you push your images to the Hub reduces the number of vulnerabilities reported in the Hub scan, providing your team with a faster and more streamlined build cycle.

To learn more about running vulnerability scanning on Docker images, you can watch the "Securing Containers Directly from Docker Desktop" session presented during SnykCon. This is a joint presentation by Justin Cormack, Docker security lead, and Danielle Inbar, Snyk product manager, discussing what you can do to leverage this new solution in your organization's security programs.
Source: https://blog.docker.com/feed/