Cloud makes it better: What's new and next for data security

Today’s digital economy offers a wealth of opportunities, but those opportunities come with growing risks. It has become increasingly important to manage the risks posed by the intersection of digital resilience and today’s risk landscape. Organizations around the world should be asking themselves: if a risk becomes a material threat, how can we help our employees continue to get work done efficiently and securely despite unexpected disruption, no matter where they are? This new era of “anywhere work” is not only a technology issue, but one that encompasses leadership support and cultural shifts.

In a recent webinar, Heidi Shey, principal analyst at Forrester, and Anton Chuvakin, senior staff, Office of the CISO at Google Cloud, had a spirited discussion about the future of data security. They agreed that this is an inflection point, when smart organizations are rethinking their entire security approach and using the opportunity to take a closer look at their security technology stack at the same time. Here are some of the trends they are seeing today.

Greater volume, more variety. The data that organizations generate is increasing not only in volume but in variety as well. Sensitive information can exist anywhere, including employee communications, messaging applications, and virtual meetings, making traditional techniques for classifying data, such as manual tagging, less effective. Organizations need to grow their risk intelligence by using artificial intelligence (AI) and machine learning (ML) to better identify and protect sensitive information. At the same time, employees are accessing enterprise data in multiple ways, on multiple devices, wreaking havoc on traditional security perimeters and anomaly detection.

A more strategic approach. Multiplying threat vectors and vulnerabilities often drive organizations into a losing game of whack-a-mole as they acquire more and more point solutions, which leads to information silos and visibility gaps.
While security modernization doesn’t require a rip-and-replace, its success depends on a more strategic approach to choosing and applying controls. Successful organizations are deliberate in creating an ecosystem of controls that interoperate and reduce data silos and visibility gaps.

Zero Trust. Central to any modern security strategy should be a Zero Trust approach to user and network access, not only for people but also for the growing number of internet-of-things (IoT) devices that exchange enterprise data. Zero Trust means that organizations no longer implicitly trust any user or device inside or outside the corporate perimeter, and rightly so. Rather, a company must verify that attempts to connect to a network or application are authorized before granting access. Zero Trust replaces the perimeter security model that divides a trusted internal network from an untrusted external network, including the virtual private networks (VPNs) used to access corporate data remotely. Unlike a traditional perimeter model, in which a network can become compromised if a hacker breaches the organization or a malicious insider attempts to steal a company’s sensitive data, a Zero Trust approach helps ensure users have access only to the specific resources they need at a given point in time.

Growing supply chain networks. As organizations expand their supply chains to increase resilience and efficiency, they need a way for vendors, customers, and other third parties to securely access the data and applications necessary to conduct business. A Zero Trust approach to access can provide a scalable solution to meet this need.

Enterprise security solutions with the speed, intelligence, and scale of Google. Cybersecurity is ever-evolving as new threats arise daily. Google Cloud’s approach takes advantage of Google’s experience securing more than 5 billion devices and keeping more people safe online than any other organization.
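The contrast between the Zero Trust model described above and the traditional perimeter model can be illustrated with a small sketch. This is a hypothetical toy policy check, not any vendor's implementation; the users, resources, and policy entries are invented for illustration:

```python
# Toy illustration of perimeter-based vs. Zero Trust access decisions.
# The policy model and names here are hypothetical, not any product's API.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool   # e.g., device posture check passed
    resource: str

# Hypothetical per-resource authorization policy.
POLICY = {
    ("alice", "payroll-db"),
    ("bob", "build-server"),
}

def perimeter_model(inside_network: bool) -> bool:
    # Traditional model: anyone inside the perimeter is implicitly trusted.
    return inside_network

def zero_trust_model(req: Request) -> bool:
    # Zero Trust: verify identity, device posture, and per-resource
    # authorization on every access attempt, regardless of location.
    return req.device_trusted and (req.user, req.resource) in POLICY

print(perimeter_model(inside_network=True))                     # broad access granted
print(zero_trust_model(Request("alice", True, "payroll-db")))   # True: authorized
print(zero_trust_model(Request("alice", True, "build-server"))) # False: not authorized
```

The point of the sketch is that the perimeter check asks only "where are you?", while the Zero Trust check asks "who are you, what are you on, and are you allowed to touch this specific resource, right now?"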
Google Cloud brings our pioneering approaches to cloud-first security to enterprises everywhere they operate, leveraging the unmatched scale of Google’s data processing, novel analytics approaches with artificial intelligence and machine learning, and a focus on eliminating entire classes of threats. By combining Google’s security capabilities with those of our ecosystem and alliance partners, including Cybereason, IPNet, ForgeRock, Palo Alto Networks, and SADA, we’re bringing businesses a full stack of powerful and effective solutions for managing data access, verifying identity, sharing signal information, and gaining visibility into vulnerabilities and threats. In concert with our ecosystem of partners, we will be working with Mandiant and its partners to deliver an end-to-end security operations suite with even greater capabilities to help you address the ever-changing threat landscape across your cloud and on-premises environments. In sum, Google Cloud brings you the tools, insight, and partnerships that can transform your security to meet the requirements of our rapidly transforming world.

To get a deeper dive into the trends and research driving this change, watch the “Cloud Makes it Better: What’s New and Next for Data Security” webinar with Forrester and Google Cloud.
Source: Google Cloud Platform

Accelerate speed to insights with data exploration in Dataplex

Data Exploration Workbench in Dataplex is now generally available. What exactly does it do? How can it help you? Read on.

Imagine you are an explorer embarking on an exciting expedition. You are intrigued by the possible discoveries and eager to get started on your journey. The last thing you need is the additional anxiety of running from pillar to post to get all the necessary equipment in place: protective clothing is torn, first aid kits are missing, and most of the expedition gear is malfunctioning. You end up spending more time collecting these items than on the actual expedition.

If you are a data consumer (a data analyst or data scientist), your data exploration journey is similar. You, too, are excited by the insights your data has in store. But, unfortunately, you also need to integrate a variety of tools to stand up the required infrastructure, get access to data, fix data issues, enhance data quality, manage metadata, query the data interactively, and then operationalize your analysis. Integrating all these tools to build a data exploration pipeline takes so much effort that you have little time left to explore the data and generate interesting insights. This disjointed approach to data exploration is the reason why 68% of companies[1] never see business value from their data. How can they? Their best data minds are busy spending 70% of their time[2] just figuring out how to make all these different data exploration tools work.

How is the Data Exploration Workbench solving this problem?

Now imagine you had access to all the best expedition equipment in one place. You could start your exploration instantly and have more freedom to experiment and uncover fascinating discoveries that will help humanity! Wouldn’t it be awesome if you too, as a data consumer, got access to all the data exploration tools in one place?
A single unified view that lets you discover and interactively query fully governed, high-quality data, with an option to operationalize your analysis? This is exactly what the Data Exploration Workbench in Dataplex offers. It provides a Spark-powered, serverless data exploration experience that lets data consumers interactively extract insights from data stored in Google Cloud Storage and BigQuery using Spark SQL scripts and open source packages in Jupyter notebooks.

How does it work?

Here is how the Data Exploration Workbench tackles the four most common pain points faced by data consumers and data administrators during the exploration journey:

Challenge 1: As a data consumer, you spend more time making different tools work together than generating insights.

Solution: The Data Exploration Workbench provides a single user interface where:

- You have one-click access to run Spark SQL queries using an interactive Spark SQL editor.
- You can leverage open source technologies such as PySpark, Bokeh, and Plotly to visualize data and build machine learning pipelines via JupyterLab notebooks.
- Your queries and notebooks run on fully managed, serverless Apache Spark sessions: Dataplex auto-creates user-specific sessions and manages the session lifecycle.
- You can save scripts and notebooks as content in Dataplex, enabling better discovery of and collaboration on that content across your organization. You can also govern access to content using IAM permissions.
- You can interactively explore data, collaborate over your work, and operationalize it with one-click scheduling of scripts and notebooks.

Challenge 2: Discovering the right datasets needed to kickstart data exploration is often a manual process that involves reaching out to other analysts or data owners.

Solution: “Do we have the right data to embark on further data analysis?” This is the question that kickstarts the data exploration journey.
With Dataplex, you can examine the metadata of the tables you want to query right from within the Data Exploration Workbench. You can further use the indexed search to understand not only the technical metadata but also the business and operational metadata, along with the data quality scores for your data. And finally, you can get deeper insights into your data by querying it interactively using the workbench.

Challenge 3: Finding the right query snippet to use. Analysts often don’t save and share useful query snippets in an organized or centralized way. Furthermore, once you have access to the code, you need to recreate the same infrastructure setup to get results.

Solution: The Data Exploration Workbench allows users to save Spark SQL queries and Jupyter notebooks as content and share them across the organization via IAM permissions. It provides a built-in notebook viewer that helps you examine the output of a shared notebook without starting a Spark session or re-executing the code cells. You can share not only the content of a script or a notebook, but also the environment where it ran, to ensure others can run it on the same underlying setup. This way, analysts can seamlessly collaborate and build on the analysis.

Challenge 4: Provisioning the infrastructure necessary to support different data exploration workloads across the organization is an inefficient process with limited observability.

Solution: Data administrators can pre-configure Spark environments with the right compute capacity, software packages, and auto-scaling/auto-shutdown configurations for different use cases and teams. They can govern access to these environments via IAM permissions and easily track usage and attribution per user or environment.

How can I get started?

To get started with the Data Exploration Workbench, visit the Explore tab in Dataplex. Choose the lake you want to explore, and the resource browser will list all the data tables (GCS and BigQuery) in the lake.
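To make that concrete, here is a minimal sketch of the kind of Spark SQL script a data consumer might write and save in the interactive editor once a table shows up in the resource browser. The zone, table, and column names below are purely illustrative, not from the product or the source article:

```sql
-- Illustrative only: profile the last 30 days of a hypothetical orders table
-- surfaced in the Dataplex resource browser.
SELECT
  customer_region,
  COUNT(*)         AS order_count,
  AVG(order_total) AS avg_order_total
FROM sales_zone.orders
WHERE order_date >= date_sub(current_date(), 30)
GROUP BY customer_region
ORDER BY order_count DESC;
```

A script like this could then be saved as content, shared via IAM permissions, and scheduled with one click, as described above.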
Before you start, make sure the lake where your data resides is federated with a Dataproc Metastore instance, and ask your data administrator to set up an environment and grant you the Developer role or the associated IAM permissions. You can then choose to query the data using Spark SQL scripts or Jupyter notebooks. You will be billed according to the Dataplex premium processing tier for the computational and storage resources used during querying.

Data Exploration Workbench is available in the us-central1 and europe-west2 regions. It will be available in more regions in the coming months.

1. Data Catalog Study, Dresner Advisory Services, LLC, June 15, 2020
2. https://www.anaconda.com/state-of-data-science-2020
Source: Google Cloud Platform

October Extensions Roundup: CI on Your Laptop and Hacktoberfest!

This month, we’ve got some new extensions so good, they’re scary! Docker Extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s take a look at some of the recent ones. And if you’d like to see everything available, check out our full Extensions Marketplace!

Drone CI

Do you need to build and test a container-friendly pipeline before sharing it with your team? Or do you need the ability to perform continuous integration (CI) or debug failing tests on your laptop? If the answer is yes, the Drone CI extension can help you! With the extension, you can:

- Import Drone CI pipelines and run them locally
- Run specific steps of a pipeline
- Monitor execution results
- Inspect logs

See it in action in the gif below!
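If you want a starting point to import, a minimal Drone pipeline definition (saved as a .drone.yml file in the repository root) might look like the sketch below; the build image and commands are illustrative, not from the extension's docs:

```yaml
# Minimal illustrative Drone CI pipeline (.drone.yml).
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: golang:1.19   # illustrative build image
    commands:
      - go vet ./...
      - go test ./...
```

With a file like this in place, the extension can import the pipeline and run it, or individual steps of it, locally in Docker Desktop.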

Open Source Docker Extensions

This month, Docker celebrated Hacktoberfest, a month-long celebration of open-source projects, their maintainers, and the entire community of contributors. During this event, Docker worked with the community to contribute to our open source Docker Extensions — and encourage developers to create their own open source extensions.

In fact, here’s a list of open source extensions available in our Marketplace:

- DDosify – High performance, open-source, and simple load testing tool written in Golang.
- Drone CI – Run Continuous Integration & Delivery Pipeline (CI/CD) from within Docker Desktop.
- GOSH – Build your decentralized and secure software supply chain with Docker and Git Open Source Hodler.
- JFrog – Scan your Docker images for vulnerabilities with JFrog Xray.
- Lacework Scanner – Enable developers with the insights to securely build their containers and minimize the vulnerabilities before the images go into production.
- Meshery – Design and operate your cloud native deployments with the Meshery extensible management plane.
- Mini Cluster – Run a local Apache Mesos cluster.
- Okteto – Remote development for Docker Compose.
- Open Source management tool for PostgreSQL – Use an embedded PGAdmin4 Open Source management tool for PostgreSQL.
- Oracle SQLcl client tool – Use an embedded version of the Oracle SQLcl client tool.
- RedHat OpenShift – Easily deploy and test applications onto OpenShift.
- Volumes Backup & Share – Backup, clone, restore, and share Docker volumes effortlessly.

Hacktoberfest generated a lot of great extensions from our community that aren’t yet available in the Marketplace. To check out these extensions, visit our Hacktoberfest GitHub repo. All of these extensions can be installed via the CLI; visit our docs to learn how.

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. We hope that these new extensions will make your life easier and that you’ll give them a try! Check out these resources for more info on extensions:

- Try October’s newest extensions by installing Docker Desktop for Mac, Windows, or Linux.
- Visit our Extensions Marketplace to see all of our extensions.
- Build your own extension with our Extensions SDK.
Source: https://blog.docker.com/feed/

New version of FreeRTOS Long Term Support released

Today we are pleased to announce the second release of FreeRTOS Long Term Support (LTS): FreeRTOS 202210.00 LTS. This release includes new libraries such as AWS IoT Fleet Provisioning and a Cellular LTE-M Interface for easier device provisioning and cellular connectivity. It also includes coreMQTT and FreeRTOS-Plus-TCP libraries with improved modularity and robustness. All libraries included in this FreeRTOS LTS release, as listed in this post, will receive security patches and critical bug fixes until October 2024. With an LTS release, you can keep your existing FreeRTOS codebase and avoid potential disruption from FreeRTOS version upgrades.
Source: aws.amazon.com

Amazon Detective helps shorten investigations of Amazon GuardDuty findings by grouping related findings

Starting today, Amazon Detective automatically groups related GuardDuty findings to help security analysts triage faster and build a more comprehensive security investigation. Detective uses machine learning (ML) to group related GuardDuty findings that might be ignored in isolation but that together reveal the lifecycle of an attack, making it easier for security analysts to identify advanced threats. On the Summary page, Detective displays related GuardDuty findings along with their severity and all affected AWS accounts and resources. In addition, Detective maps the progression of findings to the tactics, techniques, and procedures (TTPs) of the MITRE ATT&CK framework, a widely used framework for security and threat detection.
Quelle: aws.amazon.com