Cloud CISO Perspectives: October 2022

Welcome to October’s Cloud CISO Perspectives. This month, we’re focusing on our just-completed Google Cloud Next conference and Mandiant’s inaugural mWise Conference, and what our slate of cybersecurity announcements can reveal about how we are approaching the thorniest cybersecurity challenges facing the industry today. As I wrote in last month’s newsletter, a big part of our strategy involves integrating Mandiant’s threat intelligence with our own to help improve our ability to stop threats and to modernize the overall state of security operations faster than ever before. We focused on the democratization of SecOps to help provide better security outcomes for organizations of all sizes and levels of expertise. Therefore, it’s vital that our cybersecurity intelligence be an integral part of customer security strategies. This is all part of our vision of engineering advanced capabilities into our platforms and simplifying operations, so that stronger security outcomes can be achieved.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

Next ‘22 and mWise: In pursuit of the grand challenge

I recently wrote on my personal blog about the grind of routine security work, and the challenges security professionals face in moving forward through our daily tasks and toil to achieve a better security state. We focus on two fundamentals: We strive to achieve grand challenges and create exponential growth in security outcomes, and we remain equally focused on tactical improvements to reduce the wear and tear of the daily grind. Many of Google Cloud’s announcements at this year’s Next are the result of envisioning a new, improved security state, and working hard to achieve it.

At this year’s Next, we took a deep dive into our security philosophy, helped customers achieve their security goals with hands-on training, and made five major security announcements:

We introduced Chronicle Security Operations, which can help detect, investigate, and respond to cyberthreats with the speed, scale, and intelligence of Google.
We introduced Confidential Space, which can help unlock the value of secure data collaboration.
We introduced Software Delivery Shield, which can help improve software supply chain security.
We detailed our latest advancements in digital sovereignty, to address the growing demand for cloud solutions with high levels of control, transparency, and sovereignty.
And we introduced new and expanded Google Cloud partnerships with leaders across the security ecosystem.

We also revealed new capabilities across our existing slate of security products. These include:

Our Assured Open Source Software service, which we announced earlier this year, is now available in Preview.
The integration of groundbreaking technology from Foreseeti, which can help teams understand their exposure and prioritize contextualized vulnerability findings, will be coming soon to Security Command Center in Preview.
reCAPTCHA Enterprise will partner with Signifyd’s anti-fraud technology to bring to market a joint anti-fraud and abuse solution that can help enterprises reduce abuse, account takeovers, and payment fraud.
Palo Alto Networks customers can now pair Prisma Access with BeyondCorp Enterprise Essentials to help secure private and SaaS app access while mitigating threats with a secure enterprise browsing experience.
Google Workspace has received several security updates and advances. They bring data loss prevention (DLP) to Google Chat to help prevent sensitive information leaks, new Trust rules for Google Drive for more granular control of internal and external sharing, and client-side encryption in Gmail and Google Calendar to help address a broad range of data sovereignty and compliance requirements.
Google Cloud Armor, which was instrumental in stopping the largest Layer 7 DDoS attack to date, was named a Strong Performer in The Forrester Wave™: Web Application Firewalls, Q3 2022. This is our debut in the WAF Wave, and it’s encouraging to see the recognition for the product in this market segment.
New Private Service Connect capabilities available now in Preview include consumer-controlled security, routing, and telemetry to help enable more flexible and consistent policy for all services; support for on-prem traffic through Cloud Interconnects to PSC endpoints; support for hybrid environments; and five new partner managed services.
We are expanding our Cloud Firewall product line and introducing two new tiers: Cloud Firewall Essentials and Cloud Firewall Standard.

We want to help transform how organizations can secure themselves not just in the cloud but across all their environments. This also includes changing how security teams can engage and retain the support of their boards and executive teams. At the mWise Conference held in Washington, D.C., the week following Next ‘22, Kevin Mandia and I talked about the need for higher expectations of the board and CISO (and CIO) relationship to drive this transformation. We’ve written about the importance of this change here in this newsletter, and we at Google Cloud have suggested 10 questions that can help facilitate better conversations between CISOs and their boards.

As you’ve seen, it’s been a bumper set of announcements and content this month. That momentum will continue as we further build the Most Trusted Cloud, now in partnership with our new colleagues from Mandiant.

Google Cybersecurity Action Team highlights

Here are the latest updates, products, services, and resources from our security teams this month:

Security

How Cloud EKM can help resolve the cloud trust paradox: In the second of our “Best Kept Security Secrets” blog series, learn about Cloud External Key Manager, which can help organizations achieve even more control over their data in the cloud. Read more.

Announcing new GKE functionality for streamlined security management: To help make security easier to use and manage, our new built-in Google Kubernetes Engine (GKE) security posture dashboard provides security guidance for GKE clusters and containerized workloads, offers insights into vulnerabilities and workload configuration checks, and integrates event logging so you can subscribe to alerts and stream insight data elsewhere. Read more.

Introducing Sensitive Actions to help keep accounts secure: We operate in a shared fate model at Google Cloud, working in concert with our customers to help achieve stronger security outcomes. One of the ways we do this is to identify potentially risky behavior to help customers determine if action is appropriate. To this end, we now provide insights on what we are calling Sensitive Actions. Learn more.

How to secure APIs against fraud and abuse with reCAPTCHA Enterprise and Apigee X: A comprehensive API security strategy requires protection from fraud and abuse. Developers can prevent attacks, reduce their API security surface area, and minimize disruption to users by implementing Google Cloud’s reCAPTCHA Enterprise and Apigee X solutions. Read more.

Secure streaming data with Private Service Connect for Confluent Cloud: Organizations in highly regulated industries such as financial services and healthcare can now create fully segregated private data pipelines through a new partnership between Confluent Cloud and Google Cloud Private Service Connect. Read more.

3 ways Artifact Registry and Container Analysis can help optimize and protect container workloads: Our artifact management platform can help uncover vulnerabilities present in open source software, and here are three ways to get started. Read more.

Secure Cloud Run deployments with Binary Authorization: With Binary Authorization and Artifact Registry, organizations can easily define the right level of control for different production environments. Read more.

Backup and disaster recovery strategies for BigQuery: Cloud customers need to create a robust backup and recovery strategy for analytics workloads. We walk you through different failure modes and their impact on data in BigQuery, and examine several strategies. Learn more.

Industry updates

Cloud makes it better: What’s new and next for data security: In a recent webinar, Heidi Shey, principal analyst at Forrester, and Anton Chuvakin, senior staff, Office of the CISO at Google Cloud, had a spirited discussion about the future of data security. Here are some trends that they are seeing today. Read more.

How Chrome supports today’s workforce with secure enterprise browsing: Google Chrome’s commitment to security includes its ongoing partnership with our BeyondCorp Enterprise Zero Trust access solution. Here are three ways that Chrome protects your organization. Read more.

CUF boosted security, reduced costs, and drove energy savings with ChromeOS: José Manuel Vera, CIO of CUF, Portugal’s largest private healthcare provider, explains how ChromeOS securely enabled agile medical and patient care. Read more.

Compliance & Controls

Ensuring fair and open competition in the cloud: Cloud-based computing is one of the most important developments in the digital economy in the last decade, and Google Cloud supports openness and interoperability. We have been a leader in promoting fair and open licensing for our customers since the start of the cloud revolution. Here’s why.

Assured Workloads expands to new regions, gets new capabilities: Assured Workloads can help customers create and maintain controlled environments that accelerate running more secure and compliant workloads, including enforcement of data residency, administrative and personnel controls, and managing encryption keys. We’re expanding the service to Canada and Australia, and introducing new capabilities to automate onboarding and deploying regulated workloads. Read more.

Google Cloud Security Podcasts

We launched a new weekly podcast focusing on Cloud Security in February 2021. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. This month, they published a record nine must-listen podcasts:

Cloud security’s murky alphabet soup: Cloud security comes with its own dictionary of acronyms, and it may surprise you that not everybody’s happy with it. To help organizations with their cultural shift to the cloud, we discuss some of the most popular and contentious cloud security acronyms with Dr. Anna Belak, a director of thought leadership at our partner Sysdig. Listen here.

A CISO walks into the cloud: Frustrations, successes, and lessons from the top of the cloud: Along with data, security leaders also need to migrate to the cloud. We hear from Alicja Cade, director for financial services at our Office of the CISO, on her personal cloud transformation. Listen here.

Sharing The Mic In Cyber — Representation, Psychological Safety, and Security: A must-listen episode, this discussion digs into how DEIB intersects with psychological safety and cybersecurity, led by guest hosts Lauren Zabierek, acting executive director of the Belfer Center at the Harvard Kennedy School, and Christina Morillo, principal security consultant at Trimark Security. Listen here.

“Hacking Google,” Operation Aurora, and insider threats at Google: A wide-ranging conversation on insider threats at Google, the role that detection and response play in protecting our users’ trust, and the Google tool we call BrainAuth, with our own Mike Sinno, security engineering director, Google Detection and Response. Listen here.

How virtualization transitions can make cloud transformations better: What lessons for cloud transformation can we glean from the history of virtualization, now two decades old? Thiébaut Meyer, director at Google Cloud’s Office of the CISO, talks about how the past is ever-present in the future of cloud tech. Listen here.

As part of Next ‘22, Anton and Tim recorded four bonus podcasts centered on key cybersecurity themes:

Celebrate the first birthday of the Google Cybersecurity Action Team: Google Cloud CISO Phil Venables sits down to chat about the first year of GCAT and its focus on helping customers. Listen here.

Can we escape ransomware by migrating to the cloud?: Google Cloud’s Nelly Kassem, security and compliance specialist, dives deep into whether public clouds can play a role in stopping ransomware. Listen here.

Improving browser security in the hybrid work era: One of the unexpected consequences of the COVID-19 pandemic was the accelerated adoption of hybrid work. How modern browsers work with an existing enterprise stack is only one of the questions tackled by Fletcher Oliver, Chrome browser customer engineer. Listen here.

Looking back at Log4j, looking forward at software dependencies and open source security: Is another Log4j inevitable? What can organizations do to minimize their own risks? Are all open-source dependencies dependable? Hear the answers to these questions and more from Nicky Ringland, product manager for Google’s Open Source Insights. Listen here.

To have our Cloud CISO Perspectives post delivered every month to your inbox, sign up for our newsletter. We’ll be back next month with more security-related updates.
Quelle: Google Cloud Platform

How to build customer 360 profiles using MongoDB Atlas and Google Cloud for data-driven decisions

One of the biggest challenges for any retailer is to track an individual customer’s journey across multiple channels (online and in-store), devices, purchases, and interactions. This lack of a single view of the customer leads to a disjointed and inconsistent customer experience. Most retailers report obstacles to effective cross-channel marketing caused by inaccurate or incomplete customer data. Marketing efforts are also fragmented because user profile data does not provide a 360˚ view of the customer’s experience. Insufficient information leads to a lack of visibility into customer sentiment, which further hinders customer engagement and loyalty.

Creating a single view of the customer across the enterprise:

Helps with customer engagement and loyalty by improving customer satisfaction and retention through personalization and targeted marketing communications.
Helps retailers achieve higher marketing ROI by aggregating customer interactions across all channels and identifying and winning valuable new customers, resulting in increased revenues.

Customer 360˚ is a relationship cycle that consists of the many touchpoints where a customer meets the brand. The customer 360˚ solution provides an aggregated view of a customer. It collects all your customer data in one place, from the customer’s primary contact information to their purchasing history, interactions with customer service, and their social media behavior.

A single view of the customer records and processes the following data:

Behavior data: Customer behavior data, including the customer’s browsing and search behavior online through click-stream data, and the customer’s location if the app is location-based.
Transactional data: Online purchases, coupon utilization, in-store purchases, returns, and refunds.
Personal information: Personal information from online registration, in-store loyalty cards, and warranties is collated into a single view.
User profile data: Data profiling is used as part of the matching and deduplication process to establish a Golden Record. Profile segments can be utilized to enable marketing automation.

An enhanced customer 360˚ solution with machine learning models can provide retailers with key capabilities for user-based personalization, such as generating insights and orchestrating experiences for each customer.

On October 1st, 2022, we announced Dataflow templates that simplify moving and processing data between MongoDB Atlas and BigQuery. Dataflow is a truly unified stream and batch data processing system that’s serverless, fast, and cost-effective. Dataflow templates allow you to package a Dataflow pipeline for deployment, and they have several advantages over directly deploying a pipeline to Dataflow. The Dataflow templates and the Dataflow page make it easier to define the source, target, transformations, and other logic to apply to the data. You can key in all the connection parameters through the Dataflow page, and with a click, the Dataflow job is triggered to move the data to BigQuery.

BigQuery is a fully managed data warehouse designed for running analytical processing (OLAP) at any scale, with built-in features like machine learning, geospatial analysis, data sharing, log analytics, and business intelligence.

This integration enables customers to move and transform data from MongoDB to BigQuery for aggregation and complex analytics.
They can further take advantage of BigQuery’s built-in ML and AI integrations for predictive analytics, fraud detection, real-time personalization, and other advanced analytics use cases.

This blog describes how retailers can use fully managed MongoDB Atlas and Google Cloud services to build customer 360 profiles, the architecture involved, and the reusable repository that customers can use to implement the reference architecture in their own environments.

As part of this reference architecture, we have considered four key data sources: the user’s browsing behavior, orders, user demographic information, and the product catalog. The diagram below illustrates the data sources that are used for building a single view of the customer, along with some key business outputs that can be driven from this data. The technical architecture diagram below shows how MongoDB and Google Cloud can be leveraged to provide a comprehensive view of the customer journey.

The reference architecture consists of the following processes:

1. Data ingestion

Disparate data sources are brought together in the data ingestion phase. Typically we integrate a wide array of data sources, such as online behavior, purchases (online and in-store), refunds, returns, and other enterprise data sources such as CRM and loyalty platforms. In this example, we have considered four representative data sources:

User profile data through User Profiles
Product Catalog
Transactional data through Orders
Behavioral data through Clickstream Events

User profile data, product catalog, and orders data are ingested from MongoDB, and click-stream events from web server log files are ingested from CSV files stored on Cloud Storage. The data ingestion process should support an initial batch load of historical data and dynamic change processing in near real time. Near real-time changes can be ingested using a combination of MongoDB Change Streams functionality and Google Pub/Sub to ensure a high-throughput, low-latency design.

2. Data processing

The data is converted from MongoDB’s document format to BigQuery’s row-and-column format and loaded into BigQuery using the MongoDB to BigQuery Dataflow templates, while the Cloud Storage Text to BigQuery Dataflow template moves the CSV files into BigQuery. Google Cloud Dataflow templates orchestrate the data processing, and the aggregated data can be used to train ML models and generate business insights. Key analytical insights like product recommendations are brought back to MongoDB to enrich the user data.

3. AI & ML

The reference architecture leverages the advanced capabilities of Google Cloud BigQuery ML and Vertex AI. Once the data is in BigQuery, BigQuery ML lets you create and execute multiple machine learning models; for this reference architecture, we focused on the two models below (a sketch follows step 4 below):

K-means clustering to group data into clusters. In this case, it is used to perform user segmentation.
Matrix factorization to generate recommendations. In this case, it is used to create product affinity scores using historical customer behavior, transactions, and product ratings.

The models are registered to the Vertex AI Model Registry and deployed to an endpoint for real-time prediction.

4. Business insights

Using the content provided in the GitHub repo, we showcase the analytics capabilities of Looker, which is seamlessly integrated with the aggregated data in BigQuery and MongoDB, providing advanced data visualizations that enable business users to slice and dice the data and look for emerging trends.
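
To make step 3 concrete, here is a minimal sketch of what the k-means segmentation model could look like in BigQuery ML. The dataset and column names (`retail.customer_360` and its fields) are hypothetical placeholders for illustration, not names from the reference architecture’s repository:

```sql
-- Hedged sketch: train a k-means model in BigQuery ML for user segmentation.
-- All table and column names below are assumed for illustration.
CREATE OR REPLACE MODEL `retail.user_segments`
OPTIONS (
  model_type = 'kmeans',
  num_clusters = 5            -- number of customer segments to discover
) AS
SELECT
  total_orders,               -- aggregated from the Orders source
  total_spend,
  days_since_last_visit       -- derived from clickstream events
FROM `retail.customer_360`;

-- Assign each customer to a segment.
SELECT customer_id, centroid_id
FROM ML.PREDICT(
  MODEL `retail.user_segments`,
  (SELECT customer_id, total_orders, total_spend, days_since_last_visit
   FROM `retail.customer_360`));
```

A matrix factorization model for the product affinity scores would follow the same CREATE MODEL pattern, with model_type = 'matrix_factorization' and user, item, and rating columns.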
The included dashboards contain insights from MongoDB, insights from BigQuery, and insights from combining the data from both sources.

The detailed implementation steps, sample datasets, and the GitHub repository for this reference architecture are available here.

There are many reasons to run MongoDB Atlas on Google Cloud, and one of the easiest ways to get started is our self-service, pay-as-you-go listing on Google Cloud Marketplace. Please give it a try and let us know what you think. Also, check out this blog to learn how Luckycart handles the large volumes of data and complex computations it requires to deliver ultra-personalized activations for its customers using MongoDB and Google Cloud.

We thank the many Google Cloud and MongoDB team members who contributed to this collaboration, and the team at PeerIslands for their help with developing the reference architecture.
Quelle: Google Cloud Platform

Top 10 reasons to get started with Log Analytics today

Logging is a critical part of the software development lifecycle, enabling developers to debug their apps, DevOps/SRE teams to troubleshoot issues, and security admins to analyze access patterns. Log Analytics is a new set of features in Cloud Logging, available in Preview, that helps you perform powerful analysis on log data. In this post, we’ll cover 10 reasons why you should get started with Log Analytics today. Check out our introductory blog, or join us for a live webinar on Nov 15, 2022, where we will walk attendees through Log Analytics use cases, including a demo. Register here today.

#1: Log Analytics is included in Cloud Logging pricing

If you already use Cloud Logging, Log Analytics is included in the Cloud Logging pricing. There are no additional costs associated with upgrading the log bucket or running queries on the Log Analytics UI. Our standard pricing is based on ingestion, which includes storing logs in the log bucket for 30 days (our default period); you can also set a custom log retention period. Check out the pricing blog to learn how to maximize value with Cloud Logging. If you don’t already use Cloud Logging, you can leverage the free tier of 50 GiB/project/month to explore Cloud Logging, including Log Analytics.

#2: Enable a managed logging pipeline with one click

Log Analytics manages the log pipeline for you, eliminating the need to build and manage your own complex data pipelines, which can add cost and operational overhead. A simple one-click setup allows you to upgrade an existing log bucket or create a new log bucket with Log Analytics. Data is available in real time, allowing users to immediately access their data via either the Logs Explorer or the Log Analytics page.

#3: Log data is available in Cloud Logging and BigQuery

Upgrading a log bucket to Log Analytics means that your logs can be accessed via the Log Analytics page in Cloud Logging. If you also want to access log data from BigQuery, you can enable the checkbox to expose a dataset in BigQuery that is linked to your Log Analytics bucket. Once the log bucket is upgraded, log data can be accessed both from Log Analytics in Cloud Logging and from BigQuery, which eliminates the need to build or manage data pipelines to store log data in BigQuery. Cloud Logging will still manage the log data, including access, immutability, and retention. Additionally, Cloud Logging uses BigQuery’s new native support for semi-structured data, so you don’t need to manage the schema in your logs.

This can be useful when:

You already have other application or business data in BigQuery and want to join it with log data from Cloud Logging.
You want to use Looker Studio or other tools in the BigQuery ecosystem.

There is no cost to create a linked dataset in BigQuery, but the standard BigQuery query cost applies to querying logs via the BigQuery APIs.

#4: Determine root cause faster on high-cardinality logs

Application, infrastructure, and networking logs often contain high-cardinality data such as unique IP addresses, session IDs, and instance IDs. High-cardinality data can be difficult to convert, store, and analyze as metrics. Two common use cases are:

Application and infrastructure troubleshooting
Network troubleshooting

Application and infrastructure troubleshooting

Suppose that you are troubleshooting a problem with your application running on Google Kubernetes Engine and you need to break down the requests by session.
Using Log Analytics, you can easily group and aggregate your request logs by session, gaining insights into request latency and how it changes over time. This insight can help you reduce time spent troubleshooting by executing just one SQL query.

Network troubleshooting

Network telemetry logs on Google Cloud are packed with detailed networking data that is often high in volume and cardinality. With Log Analytics, we can easily run a SQL query on VPC Flow Logs to find the top 10 destination IP addresses by packet count and total bytes. With this information, you can determine whether any of these destination IP addresses represent unusual traffic levels that warrant deeper analysis, either as part of network troubleshooting or routine network analysis.

#5: Gather business insights from log data

Log Analytics reduces the need for multiple tools by reducing data silos. The same log data can be used to gain business insights, which can be useful for business operations teams. Here are a few examples of how you can use Log Analytics:

Determine the top 5 regions from where content is being downloaded.
Determine the top 10 referrers to a URL path.
Convert IP addresses to a city/state/country mapping.
Identify unique IP addresses from a given country accessing a URL.

#6: Simplify audit log analysis for security users

For security analyses, one common pattern is to review all the GCP audit logs for a given user, IP address, or application. This type of analysis requires very broad and scalable search capabilities, since different services may log the IP address in different fields. In Log Analytics, you can use the SEARCH function to comb through all the fields in a log entry across terabytes of logs without worrying about the speed and performance of the database. With the SEARCH function, you can now search across log data in SQL even when you’re not exactly sure which field your specific search term will appear in (see the sketch at the end of this post).

#7: Use visualization for better insights

We have many great enhancements on the roadmap that will make it even easier to generate insights. Charting is one feature that can easily help users make sense of their logs, and it is available in Log Analytics now as a Private Preview (sign-up form). During the Private Preview, we’re working hard to make charting easier to use, with support for additional chart types and a simple chart-type selector.

#8: Cloud Logging provides an enterprise-grade logging platform

While Log Analytics is currently in Preview, the Cloud Logging platform is already GA and provides an enterprise-grade logging solution complete with alerting, logs-based metrics, and advanced log management capabilities. With Cloud Logging, you can help reduce operational expenditure while supporting your security and compliance needs.

#9: Use our sample queries to get started today

We put together common queries in our GitHub repository to make it easy to get started:

Use this SQL query to determine the minimum, maximum, and average number of requests for a service.
Use this query to determine if your load balancer latency was more than 2 seconds.
When actively troubleshooting, you can list the top 50 requests and filter out the HTTP errors with this query.

Check out GitHub for additional sample queries.
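
In the same spirit as those samples, here is a hedged sketch of a Log Analytics query that surfaces the URLs producing the most server errors. The view path and field names are assumptions for illustration; point the query at the `_AllLogs` view of your own upgraded bucket:

```sql
-- Top 50 URLs returning 5xx errors over the last day (illustrative sketch;
-- replace the project, location, and bucket in the view path with your own).
SELECT
  http_request.request_url AS url,
  COUNT(*) AS error_count
FROM `my-project.global._Default._AllLogs`
WHERE http_request.status >= 500
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY url
ORDER BY error_count DESC
LIMIT 50;
```
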
#10: Use our lab to gain hands-on experience with Log Analytics

Using the Log Analytics on Google Cloud lab, you can work through deploying a sample application, managing log buckets, and analyzing log data. This can be a great way to get started, especially if you’re not already using Cloud Logging.

Summary

We’re building Log Analytics for developers, SRE, DevOps, and operations teams to gain insights faster while keeping costs under control. To learn more about how you can use Log Analytics, please join our live webinar on Nov 15th (registration), which will include a live demo. To get started with Log Analytics today, you can use the lab to gain hands-on experience, visit the documentation, or try out the Log Analytics page in the Cloud Console.
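
Finally, here is the kind of broad, field-agnostic search described in #6, sketched with the SEARCH function. The IP address and view path are made-up examples; the backticks inside the search string follow the documented pattern for matching an exact term:

```sql
-- Find every log entry that mentions a given IP address, in any field
-- (an illustrative sketch; point the view path at your own upgraded bucket).
SELECT timestamp, log_name
FROM `my-project.global._Default._AllLogs` AS t
WHERE SEARCH(t, '`203.0.113.7`')
ORDER BY timestamp DESC
LIMIT 100;
```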
Quelle: Google Cloud Platform

Unleashing the power of BigQuery to create personalized customer experiences

Editor’s note: Wunderkind, a leading performance marketing software company, specializes in delivering tailored experiences to individuals at scale. Today, we learn how BigQuery’s high performance drives real-time, actionable decision-making that lets Wunderkind bring large brands closer to their customers.

At Wunderkind, we believe in the power of one. Behind every website visit is a living, breathing person, with unique wants and needs that can be (and should be) met by the brands they trust. When our customers and our customers’ customers get the experience they deserve, it has the potential to transform what’s possible — and deliver impactful revenue results.

Our solutions integrate hyper-personalized content into the customer experience on retailer websites to help them understand and respond accordingly to each individual shopper. In addition, we provide these shoppers with personalized emails and text messages based on their interactions onsite. For example, we’ll alert a shopper with a ‘price drop’ message for an item they browsed, an item they left in their shopping cart, or about new products that we think they’ll love. Ultimately, our best-in-class tech and insight help deliver experiences that fit individual customers, and conversions at off-the-chart rates.

With the billions of one-to-one messages we send monthly, we track a lot of data — trillions of events. Because of this, we want a deep understanding of this data so we can tailor our content specifically to each unique user and ensure it’s as enjoyable and engaging as possible.

Wunderkind’s data journey to BigQuery: how we got here

Back in its start-up days, all of Wunderkind’s analytics relied on a MySQL database. This worked well for our reporting platform, but any sort of ad-hoc inquiry or aggregate insight was a challenge. As an analyst, I had to beg engineers to create new database indexes and tables just to support new types of reporting. As one can imagine, this consumed a lot of time and energy — figuring out how to get complicated queries to run, using SQL tricks to fake indexes, creating temporary tables, and whatever else was necessary to improve performance and execute specific queries. After all, this is a company built on data and insights — so it had to be done right.

To get the most value out of our data, we invested early in the BI platform Looker. Our prior business intelligence efforts for the broader business were also hooked up to a single relational database. This approach was troubling for many reasons, including:

We could only put so much data in a relational database.
We couldn’t index every query pattern that we wanted.
Certain queries would never finish.
We were querying off a replicated database and had no means to create any additional aggregate or derived tables.

Along with our new business intelligence approach, we decided to move to BigQuery. BigQuery is not just a data warehouse. It’s an analytics system that seems to scale infinitely. It gave us a data playground where we could create our own aggregate tables, mine for new insights and KPIs, and successfully run any type of data inquiry we could think up. It simply was a dream. As we were testing, we loaded a single day of event logs into BigQuery, and for a month, it fueled dozens of eye-opening insights about how our products actually work and the precise influence they have on user behavior.
After this single-day test there was no turning back — we needed all of our data in BigQuery.

BigQuery’s serverless architecture provides an incredibly consistent performance profile regardless of the complexity of the queries we throw at it. With relational databases, you can run one query and get a sub-second, exceptionally low-latency response, while another will never finish. I sometimes joke that every single query run against BigQuery takes 30 seconds — no matter how big or small. It’s a beautiful thing knowing that virtually any question you think up can be answered in a very reasonable amount of time.

BigQuery allows our Analytics team to think more about the value of the data for the business and less about the mechanics of how particular queries should run. By combining BigQuery and Looker, I can give teams across our company the flexibility to work with their data in a way that previously only analysts could.

I’ve also found that BigQuery is one of the easiest and best places to learn SQL (see the sketch at the end of this post). It’s well suited for learning for many reasons, including:

It’s very accessible and in-browser, so there’s no complicated setup or install process.
It’s free for up to a terabyte of queries per month.
Its public datasets are vast and relatable, making your first queries more interesting.
Real-time query validation lets us know quickly if something is wrong with our query.
It’s a no-ops environment. No indexes are required. You just query.

Data Journey: How Wunderkind gets (and delivers) the value of data for digital marketing

BigQuery + Looker = Data Love

Our Analytics team has three key groups of stakeholders: our customers and the teams that serve them, our research and development (R&D) team, and our business operations team.

We recognize that every customer is a bit different, and we take pride in being able to answer their unique questions along the dimensions that make the most sense for their business. Customers may want more detail on the performance of our service for different cohorts of users or for certain types of web pages, in ways that require more raw data than we provide in our standard product. BigQuery’s performance lets us respond to customers and offer them greater confidence in our approach. Thanks to Looker, we can roll out new internal insights very quickly to help inform and drive new strategies. Plus, with dashboards and alerts, we can uncover cohorts and segments where our product performs exceptionally, and areas where our strategies need work.

Our R&D team is another important stakeholder group. As we plan new products and features, we work with BigQuery to forecast and simulate the expected performance and incrementality. As our product develops, we use BigQuery and Looker to prototype new KPIs and reporting. It’s helpful to easily stage live data and KPIs to ensure they’re valuable to the customer ahead of productizing them in our reporting platform. BigQuery’s speed means that we can aggregate billions of rows of raw data on the fly as we perfect our stats. Additionally, we save significant engineering time by using Looker as a product development sandbox for reporting and insights.

Our final key stakeholder is our internal business operations team. Business operations typically ask more thought-provoking and challenging ‘what-if’ questions geared toward driving true incremental revenue for our customers and serving them optimally.
For example, they may challenge the accuracy of the industry’s standard “attribution” methods and ask whether we can leverage our data to better understand return on spend and “cannibalization” for our customers. Because these tougher questions tend to involve data spanning product lines and more complicated data relationships, BigQuery’s high performance is essential to making rapid iteration with this team possible.

Unlocking the insights we need to truly ‘get’ our customers

Across these stakeholders, we truly empower Wunderkind with actionable data. BigQuery’s performance is key to enabling real-time, iterative decision-making within our organization and in tandem with our customers. Looker is a powerful front end for securely sharing data in a way that’s meaningful, actionable, and accurate. As much as I love writing SQL, I believe it’s best reserved for new ad-hoc insights, not standardized reporting. Looker is how we enforce consistency and accuracy across our internal reporting.

We’ve found the most powerful insights come out of conversations with our stakeholders. From there, we can use our data expertise and product knowledge to build flexible dashboards that scale across the organization. While this approach can seem a bit restrictive to some stakeholders, it ensures the data they’re getting is always intuitive, consistent, clean, and actionable. We’re not in the business of vanity metrics; we’re in the business of driving impact.

BigQuery is the foundational element that drives our goal of identifying not just our customers’ needs, but the needs that drive their customers to purchase. As a result, we can deliver better outcomes for customers, more rapid evolution of our products, and continuous validation and improvement of our business operations. We aim to maximize performance, experience, and returns for our customers — BigQuery is instrumental in helping derive these insights. Even as Wunderkind has grown, we’ve been able to operate with a proportionally leaner team because BigQuery allows our Analytics team to perform most data tasks without needing engineering resources.
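
As a postscript to the point above about BigQuery being one of the easiest places to learn SQL, here is what a first query against one of its public datasets might look like. It is a minimal sketch that should run as-is in the browser console, well within the free query tier:

```sql
-- Count how many babies were named Ada in the US between 1910 and 2013,
-- using one of BigQuery's free public datasets.
SELECT name, SUM(number) AS total_babies
FROM `bigquery-public-data.usa_names.usa_1910_2013`
WHERE name = 'Ada'
GROUP BY name;
```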
Quelle: Google Cloud Platform

Accenture and Microsoft drive digital transformation with OnePlatform on Microsoft Energy Data Services for OSDU™

This post was co-authored by Sacha Abinader, Managing Director, Accenture and Keith Armstron, Senior Manager, Accenture Microsoft Business Group.

Accelerate decision-making and interoperability with the OSDU Data Platform

The OSDU™ Forum is a cross-industry collaboration to develop a common, standards-based, and open platform for the exploration and production (E&P) community with the goal to liberate and enable greater access and insight into your valuable data. The OSDU Data Platform promise is compelling and offers value beyond what can be achieved with in-house solutions through established industry standards and openness to the larger technology ecosystem. Accenture has partnered with Microsoft and collaborated closely with Schlumberger during the Microsoft Energy Data Services preview and development process. In addition to Accenture’s domain expertise and digital integration acumen, the preview experience has allowed Accenture to develop skills and scale a team specific to this offering to enable an operator’s OSDU Data Platform journey at pace. 

As such, the OSDU Data Platform as an industry solution has been a top priority for Accenture with significant investments in skills, people, assets, and our presence and leadership as part of the OSDU Forum.

“We are thrilled to be a services partner for Microsoft Energy Data Services. The partnership with Microsoft and Schlumberger in enabling an open platform has been wonderful. As part of preview, Accenture has been driving for interoperability across the technology ecosystem to bridge siloed teams and has partnered with Schlumberger and several ISVs to make this a reality. We are excited to enable operators to create additional value through improved and accelerated decision making and the development of new workflows and analytics.”—Emma Wild, Managing Director, Global OSDU Lead.

Accenture has been actively involved in the OSDU Data Platform initiative for several years. In addition to our commitment to the OSDU Forum, we have developed our own vision and strategy as to how we can support OSDU Data Platform integration into E&P workflows and how we can increase operators' business capability and data value.

Accenture understands our clients’ challenges and is their partner for complex transformations

Our partners and clients want access to secure, clean, and curated data. To achieve this, they must liberate and migrate their data to the OSDU Data Platform. Our clients are dealing with large and highly complex data sets that have varying quality and formats. They also need to manage their ongoing business using the current and future capabilities of the OSDU Data Platform with its continual improvements of new data types and features. Accenture has planned a strategy that supports the transition from monolithic apps and data to the OSDU Data Platform at pace. Accenture and Microsoft are partners on this transformational journey as seen in Figure 1.


Our vision, your value

To be successful we believe there is a need to create and support a solution that provides an end-to-end business capability, focusing on business value and time-to-value acceleration.

Our approach will quickly prepare and present data to the user via the OSDU Data Platform irrespective of its current functionality and capability. By integrating and consolidating data in a standard format and enabling the interoperability of the platform across ISVs, operators can unlock the milestones to the right of the diagram and deliver the accelerated value they’ve been promised.

We think of it as supporting a data life cycle and journey to mitigate perceived risks due to the evolving nature of the OSDU Data Platform while continually improving your business workflows.

Why choose Microsoft Energy Data Services?

We recognize the complexity and risks involved in the transition and migration to the OSDU Data Platform. While energy companies have always managed E&P risk and uncertainty, there is generally a much lower appetite when it comes to IT and digital platforms. As a result, the industry is increasingly seeking packaged solutions or out-of-the-box delivery structures. This enables them to realize the visions promised by the OSDU Data Platform yet still focus on the "day job" and running their operations and business. These solutions and structures help de-risk the journey and minimize disruption to business continuity. Recognizing this, Microsoft developed an open-packaged solution to offer the OSDU Data Platform as a PaaS through Microsoft Energy Data Services. 

Microsoft Energy Data Services was designed to support the energy industry’s ambition to accelerate innovation, develop enhanced insights to drive operational efficiency, and inform new ways of working and workflows. Microsoft Energy Data Services can accelerate the journey to a cloud-based OSDU Data Platform and thus, the path to value.

Accenture and Microsoft Energy Data Services collaboration

Accenture has helped deploy and test Microsoft Energy Data Services through the preview stages to provide feedback to Microsoft Engineering. Accenture is focused on connecting data to business value and working with Microsoft to deliver a fully integrated approach using the OSDU Data Platform to accelerate digital transformation. Accenture demonstrated this during the preview by deploying the Microsoft Energy Data Services solution, ingesting data with OSDU core service tools and Accenture proprietary tools, and stitching together a data workflow across multiple ISVs to validate the openness of the platform. During this process, Accenture has built a team that can help deploy and scale on Microsoft Energy Data Services.

Microsoft Energy Data Services differentiates itself by allowing and enabling:

Integration with virtually any energy dataset, application, or cloud service with built-in tools.
Management of compute-intensive workloads at a global scale.
Compliance with the OSDU Technical Standard for open source innovation.
Easier deployment of the OSDU Data Platform, with ongoing platform and management support aligned to OSDU Data Platform deployments.
Rapid data ingestion for analytics and decision-making.
Increased operational efficiency and global scalability with reduced operational costs.
Comprehensive security and compliance.
The ability to easily leverage native Azure and Microsoft solutions.

Microsoft Energy Data Services further builds on and enables the OSDU Data Platform value drivers:

The ability to access clean and curated historical data under a single data platform.
Open access to innovation and a wider set of technology partners (ISVs).
Removal of silos and barriers between disciplines, laying the foundation for digital transformation.

Accenture’s specific capabilities and toolkit

Data on its own is not the answer, and Accenture has been working hard to offer end-to-end services and tools that connect the full enterprise and business. The journey requires delivering clean data to unlock value through data science, deploying and rolling out these solutions across global operations, and, importantly, instilling trust from end users and the business so that the value can be realized.

Accenture is spearheading the industry adoption of the OSDU Data Platform to enable energy companies to accelerate their digital transformation. One such platform we are developing is Accenture OnePlatform, as seen in Figures 3 and 4, a working solution that addresses the current issues and challenges and helps extract the maximum value from the data.

Figure 4: Accenture's OnePlatform Data Workflow.

Accenture OnePlatform is a cloud-agnostic platform and one-stop solution for operationally efficient data extraction, schema mapping, metadata generation, and data ingestion. It makes OSDU Data Platform services available with just one click, without any need for extra plugins or open source installations.

Some of the key highlights of Accenture OnePlatform are outlined below:

Orchestration of the OSDU Data Platform: Provides end-to-end delivery of business workflows via a single interface.
Data extraction: Extracts different data types by using data type converters for formats such as LAS, SegY, or ResQML.
Schema mapping: Maps client data to Accenture OnePlatform–compliant data types by using AI/ML models.
Metadata generation: Generates metadata by using an AI rule-based approach.
Data ingestion: Runs the ingestion workflow as a one-click solution built on Python utilities.
Data validation: Validates records using Python utilities, with support for adding customized rules.
Data quality: Sets up rules intelligently and performs quality checks automatically.
Knowledge graph: Builds an Accenture OnePlatform–based ontology and returns semantic results to the customer.

In addition, Accenture OnePlatform can serve as an orchestration tool across multiple SaaS ISV solutions. We know interoperability is a key value driver for choosing OSDU. Accenture has played a major role in ISV integration, collaborating with various ISVs and Microsoft for the collective purpose of consuming the data available in a single data platform. Accenture is working with several leading ISVs to develop their applications to fetch data according to the schemas from the OSDU Data Platform and Microsoft Energy Data Services, offering best-in-class interoperability and the ability to deliver end-to-end business workflows. With Accenture’s support, Microsoft Energy Data Services has demonstrated the integration of DELFI with multiple ISV applications, such as Interica and Ikon Science, and we were pleased to demonstrate this at the Schlumberger Digital Forum 2022.

Conclusion

In closing, Accenture is committed to being a leading partner to help operators navigate the uncertainties around OSDU Data Platform implementation, manage the risks of deployment, and realize the full value of their data.

We believe Accenture is best placed to deliver on these commitments and enable your value, based on our deep industry expertise, investments in accelerators like Accenture OnePlatform, 14,000+ dedicated and skilled oil and gas practitioners globally with 250+ OSDU™-trained professionals, and our extensive ecosystem relationships. We are confident that our capabilities and our partnership with Microsoft are key to helping operators execute and scale their OSDU Data Platform transformation with Microsoft Energy Data Services and the interoperability of the platform.

How to work with Accenture on Microsoft Energy Data Services

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management—for ingesting, aggregating, storing, searching, and retrieving data. The platform will provide the scale, security, privacy, and compliance expected by our enterprise customers.

Learn more

Get started with Microsoft Energy Data Services today.
Learn more about Accenture’s OSDU Capabilities.

Quelle: Azure

Microsoft named a Leader in 2022 Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services

Gartner® recently published its 2022 Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS) report. For the ninth consecutive year, Microsoft was named a Leader, and for the first time placed furthest on the Completeness of Vision axis.

For years, the industry has trusted Gartner Magic Quadrant reports to provide a holistic review of cloud providers’ capabilities.

Today, we face an uncertain global economy, and as customers consider migrating and modernizing their IT environments, they’re turning to the cloud experts they can trust. Our goal is to be that trusted expert with the most comprehensive cloud platform our customers can rely on to manage their infrastructure and modernize their digital estates, freeing them up to focus on what they do best—create, innovate, and differentiate.

We’re honored by this placement in the Gartner report but know there is more to do, particularly as our customers navigate ongoing uncertainties. As they continue to prioritize cloud investments to build resiliency, we’re committed to making continuous improvements and investments to meet their needs.

From cloud to edge: We help customers innovate anywhere

Our long-standing hybrid and multicloud approach is unique in empowering organizations from any industry, wherever they are in their cloud journey, and for whatever use cases they can dream up, to achieve more with Microsoft Azure.

This approach has long enabled our customers to control and manage their sprawling IT assets, ensure consistency, and meet regulatory and sovereignty requirements. Now, as customers leverage the cloud to build new products and offerings that help them stay agile and competitive, Azure and solutions like Azure Arc help organizations innovate anywhere.

Azure Arc operates as a bridge extending across the Azure platform by allowing applications and services the flexibility to run across datacenters, edge, and multicloud environments. Customers across industries including financial services, retail, consumer goods, and manufacturing are realizing the benefits of Azure Arc to address their unique business needs.

Our investments in Azure Arc continue. At Microsoft Build this year, we announced any Cloud Native Computing Foundation (CNCF)-conformant Kubernetes cluster connected through Azure Arc is now a supported deployment target for Azure application services.

In August, we announced the public preview of Microsoft Dev Box, a managed service that enables developers to create on-demand, high-performance, secure, ready-to-code, project-specific workstations in the cloud so they can work and innovate anywhere. And, more recently at Microsoft Ignite, we announced the availability of Arc-enabled SQL Server and new deployment options for Azure Kubernetes Services enabled by Arc, so customers can run containerized apps regardless of their location.

To help our customers optimize their cloud investments, we offer pricing benefits like Azure Hybrid Benefit, which provides a way to use existing on-premises Windows Server and SQL Server licenses in the cloud at no additional cost. We also understand customers may need additional help to ensure workloads remain secure and protected, with hybrid flexibility, as they move.

Earlier this month, we announced the expansion of Azure Hybrid Benefit to include AKS. Now our customers can deploy the Azure Kubernetes Service on Azure Stack HCI or Windows Server in their own datacenters or edge environments at no additional cost. This ensures a consistent, managed Kubernetes experience from cloud to edge for both Windows and Linux containers.

I am always inspired by the ways our customers use our solutions to do more with less, and at the same time, overcome longstanding security and governance challenges.

Performance, scale, and mission-critical capability for all applications and workloads

We continuously invest to make Azure the best place for customers to run their mission-critical workloads, like SAP. Because of offerings like Azure Center for SAP Solutions, an end-to-end solution to deploy and manage SAP workloads on Azure, we’ve become the platform of choice for SAP apps on the cloud.

We're also making significant investments to support our customers’ largest Windows Server and SQL Server migration and modernization projects, up to 2.5 times more than previous investments1. This will provide even more migration support in two ways: partner assistance with planning and moving workloads, and Azure credits that offset transition costs during the move to Azure Virtual Machines, Azure SQL Managed Instance, and Azure SQL Database.

Global reach and expansion to meet digital sovereignty needs

As the cloud provider with the most datacenter regions—60+ worldwide—we also have a deep commitment to infrastructure expansion for our customers around the world. This year, we launched datacenters in Sweden and Qatar and will launch 10 more regions over the next year.

We also recently launched Microsoft Cloud for Sovereignty for government and public sector customers, designed to meet heightened requirements for data residency, privacy, access control, and operational compliance in cloud and hybrid environments.

Microsoft Cloud and Azure help customers unlock business potential

At Microsoft, we have been through our own digital transformation. We brought products like Microsoft Office to the cloud, and we draw from that experience to empower customers to achieve more through the cloud. We understand the power and promise of technology to help unlock an organization’s potential—for employees, customers, industries, and even society more broadly.

Today Microsoft Azure customers come in all shapes and sizes—from startups to space stations, hybrid to cloud native—and are increasingly capitalizing on the value of the full Microsoft Cloud to enable continuous innovation with integrated solutions.

The National Basketball Association (NBA) is a great example of an organization that chose to migrate its SAP solutions and other resources to Microsoft Azure to improve operations and boost fan engagement. Azure enabled them to spend less time managing technology and focus more on generating fan-centric experiences that bring together business, game, and fan data to enhance the way people can enjoy interacting with the NBA.

Using Azure DevOps and Azure Kubernetes Service, Ernst and Young Global Limited (EY) built more agile practices and shifted into a rolling product-delivery approach for software and services. Now, they’re developing and deploying solutions faster and with more confidence across a wide range of environments.

And global pharmaceutical company Sanofi overcame the limitations of its on-premises infrastructure by adopting a hybrid cloud strategy. They chose Azure as their cloud platform, gaining the speed, agility, and reliability necessary for innovation.

No matter where our customers are in their journey, whether they are migrating, modernizing, or creating new applications in the cloud for their customers, we are here to help them achieve their goals today and empower every organization to build for the future.

Learn more

Read the full complimentary Gartner report.
Learn more about the Azure Migration & Modernization Program (AMMP).
Learn more about Azure Center for SAP.
Read about how organizations can stay resilient by optimizing their cloud investments.
Learn how developers can accelerate innovation on Microsoft Cloud.
Read the latest on how Azure powers your app innovation and modernization with the choice of control and productivity to deploy apps at scale.
Learn more about the Microsoft Dev Box.
Read Jessica Hawk’s blog post about Microsoft being named a Leader in the 2022 Gartner Magic Quadrant for Data Integration Tools.
Get started with a free Azure account.

Gartner, Magic Quadrant for Cloud Infrastructure and Platform Services, 19 October 2022, Raj Bala, Dennis Smith, Kevin Ji, David Wright, and Miguel Angel Borrega.
 
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. GARTNER and Magic Quadrant are registered trademarks and service marks of Gartner, Inc. and its affiliates in the United States and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1 Based on project eligibility through the Azure Migration and Modernization Program.
Source: Azure

Announcing Docker Hub OCI Artifacts Support

We’re excited to announce that Docker Hub can now help you distribute any type of application artifact! You can now keep everything in one place without having to juggle multiple registries.

Before today, you could only use Docker Hub to store and distribute container images — or artifacts usable by container runtimes. This became a limitation of our platform, since container image distribution is just the tip of the application delivery iceberg. Nowadays, modern application delivery requires numerous types of artifacts:

Helm charts
WebAssembly modules
Docker volumes
SBOMs
OPA bundles
…and many other custom artifacts

Developers often share these artifacts with the clients who need them, since they add immense value to each project. And while the OCI working groups finalize the latest OCI Artifact Specification, we still need to package application artifacts as OCI images in the meantime.

Docker Hub acts as an image registry and is perfectly suited for distributing application artifacts. That’s why we’ve added support for any software artifact — packaged as an OCI image — to Docker Hub.

What’s the Open Container Initiative (OCI)?

Back in 2015, we helped establish the Open Container Initiative as an open governance structure to standardize container image formats, container runtimes, and image distribution.

The OCI maintains a few core specifications. These govern the following:

How to package filesystem bundles
How to launch containerized, cross-platform apps
How to make packaged content accessible to remote clients

The Runtime Specification determines how OCI images and runtimes interact. Next, the Image Specification outlines how to create OCI images. Finally, the Distribution Specification defines how to make content distribution interoperable.
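
For reference, here’s a minimal sketch of what an OCI image manifest looks like (the digests and sizes below are placeholders, not real values). Note the config mediaType: that field is how a registry can tell artifact types apart, and it’s the same mechanism Docker Hub uses later in this post:

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 233
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 32654
    }
  ]
}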

The OCI’s overall aim is to boost transparency, runtime predictability, software compatibility, and distribution. We’ve since donated our own container format and runC OCI-compliant runtime to the OCI, plus given the OCI-compliant distribution project to the CNCF.

Why are we adding OCI support? 

Container images are integral to supporting your containerized application builds. We know that images accumulate between projects, making centralized cloud storage essential to efficiently manage resources. Developers shouldn’t have to rely on local storage or wonder if these resources are readily accessible. However, we also know that developers want to store a variety of artifacts within Docker Hub. 

Storing your artifacts in Docker Hub unlocks “anywhere access” while also enabling improved collaboration through Docker Hub’s standard sharing capabilities. This aligns us more closely with the OCI’s content distribution mission by giving users greater control over key pieces of application delivery.

How do I manage different OCI artifacts?

We recommend using dedicated tools to help manage non-container OCI artifacts, like the Helm CLI for Helm charts or the OCI Registry-as-Storage (ORAS) CLI for arbitrary content types.
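
If you don’t already have these tools, both are available from their projects’ GitHub releases; on macOS, for example, you can install them with Homebrew (assuming you use Homebrew as your package manager):

# Install the Helm and ORAS CLIs
$ brew install helm oras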

Let’s walk through a few use cases to showcase OCI support in Docker Hub.

Working with Helm charts

Helm chart support was your most-requested feature, and we’ve officially added it to Docker Hub! So, how do you take advantage? We’ll create a simple Helm chart and push it to Docker Hub. This process will follow Helm’s official guide for storing Helm charts as OCI images in registries.

First, we’ll create a demo Helm chart:

$ helm create demo

This generates the familiar Helm chart boilerplate files, which you can edit:

demo
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

Once we’re done editing, we’ll package the Helm chart into a chart archive, which helm push will then store as an OCI image:

$ helm package demo

Successfully packaged chart and saved it to: /Users/martine/tmp/demo-0.1.0.tgz

Don’t forget to log in to Docker Hub before pushing your Helm chart. We recommend creating a Personal Access Token (PAT) for this. You can export your PAT via an environment variable and log in as follows:

$ echo $REG_PAT | helm registry login registry-1.docker.io -u martine --password-stdin

Pushing your Helm chart

You’re now ready to push your first Helm chart to Docker Hub! But first, make sure you have write access to your Helm chart’s destination namespace. In this example, let’s push to the docker namespace:

$ helm push demo-0.1.0.tgz oci://registry-1.docker.io/docker

Pushed: registry-1.docker.io/docker/demo:0.1.0
Digest: sha256:1e960ad1693c234b66ec1f9ddce80986cbf7159d2bb1e9a6d2c2cd6e89925e54
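
As a quick sanity check, you can pull the chart back down from Docker Hub. Helm 3.8 and later support OCI registries natively, so the following sketch (assuming the same chart name and namespace as above) retrieves the packaged chart into your current directory:

# Pull the chart we just pushed, pinned to its version
$ helm pull oci://registry-1.docker.io/docker/demo --version 0.1.0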

Viewing your Helm chart and using filters

Now, if you log in to Docker Hub and navigate to the demo repository’s detail page, you’ll find your Helm chart in the list of repository tags.

You can navigate to the Helm chart page by clicking on the tag. The page displays useful Helm CLI commands.

Repository content management is now easier. We’ve improved content discoverability by adding a drop-down button that quickly filters the repository list by content type. Simply click the Content drop-down and select Helm from the list.

Working with volumes

Developers use volumes throughout the Docker ecosystem to share arbitrary application data like database files. You can already back up your volumes using the Volume Backup & Share extension that we recently launched. You can now also filter repositories to find those containing volumes using the same drop-down menu.

What if you want to push a volume to Docker Hub without using the Volume Backup & Share extension — say from the command line — and still have Docker Hub recognize it as a volume? The easiest method leverages the ORAS project. Let’s walk through a simple use case that mirrors the examples documented by the ORAS CLI.

First, we’ll create a simple file we want to package as a volume:

$ echo "bar" > foo.txt

For Docker Hub to recognize this volume, we must attach a config file to the OCI image upon creation and mark it with a specific media type. The file can contain arbitrary content, so let’s create one:

$ echo "{"name":"foo","value":"bar"}" > config.json

With this step completed, you’re now ready to push your volume.

Pushing your volume

Here’s where the magic happens. The media type Docker Hub needs to successfully recognize the OCI image as a volume is application/vnd.docker.volume.v1+tar.gz. You can attach the media type to the config file and push it to Docker Hub with the following command (plus its resulting output):

$ oras push registry-1.docker.io/docker/demo:0.0.1 --config config.json:application/vnd.docker.volume.v1+tar.gz foo.txt:text/plain

Uploading b5bb9d8014a0 foo.txt
Uploaded b5bb9d8014a0 foo.txt
Pushed registry-1.docker.io/docker/demo:0.0.1
Digest: sha256:f36eddbab8459d0ad1436b7ca8af6bfc512ec74f45d8136b53c16db87562016e
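
To retrieve the volume’s contents elsewhere, a pull with the ORAS CLI is all you need. Here’s a sketch assuming the tag we just pushed; it downloads foo.txt into the current directory:

# Pull the volume artifact back down
$ oras pull registry-1.docker.io/docker/demo:0.0.1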

We now have two types of content in the demo repository: the Helm chart from earlier and the volume we just pushed.

If you navigate to the content page, you’ll see some basic information that we’ll expand upon in future iterations. This will boost visibility into a volume’s contents.

Handling generic content types

If you don’t use the application/vnd.docker.volume.v1+tar.gz media type when pushing the volume with the ORAS CLI, Docker Hub will mark the artifact as generic to distinguish it from recognized content.

Let’s push the same volume, but this time use the application/vnd.random.volume.v1+tar.gz media type instead of the one known to Docker Hub:

$ oras push registry-1.docker.io/docker/demo:0.1.1 --config config.json:application/vnd.random.volume.v1+tar.gz foo.txt:text/plain

Exists 7d865e959b24 foo.txt
Pushed registry-1.docker.io/docker/demo:0.1.1
Digest: sha256:d2fb2b176ee4e326f1f34ecdaede8db742f2c444cb2c9ceff0f5c8b743281c95

You can see the new content is assigned the generic Other type. You can still view the tagged content’s media type by hovering over the type label; in this case, that’s application/vnd.random.volume.v1+tar.gz.
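
If you ever want to check how a registry will classify an artifact, you can inspect its manifest directly. Newer versions of the ORAS CLI (v0.16 and later) expose a manifest subcommand for this; the config’s mediaType in the output is the field Docker Hub keys off:

# Print the artifact's manifest; look at the config mediaType
$ oras manifest fetch registry-1.docker.io/docker/demo:0.1.1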

If you’d like to filter the repositories that contain both Helm charts and volumes, use the same drop-down menu in the top-right corner.

Working with container images

Finally, you can continue pushing your regular container images to the exact same repository as your other artifacts. Say we re-tag the Redis Docker Official Image and push it to Docker Hub:

$ docker tag redis:3.2-alpine docker/demo:v1.2.2

$ docker push docker/demo:v1.2.2

The push refers to repository [docker.io/docker/demo]
a1892d5d1a6d: Mounted from library/redis
e41876edb6d0: Mounted from library/redis
7119119b7542: Mounted from library/redis
169a281fff0f: Mounted from library/redis
04c8ef03e935: Mounted from library/redis
df64d3292fd6: Mounted from library/redis
v1.2.2: digest: sha256:359cfebb00bef01cda3bc1ca453e6455c770a246a06ad8df499a28118c144eda size: 1570

Viewing your container images

If you now visit the demo repository page on Docker Hub, you’ll see every artifact listed under Tags and scans.

We’ll also introduce more features soon to help you better organize your application content, so stay tuned for more announcements!

Follow along for more updates

All developers can now access and choose from more robust sets of artifacts while building and distributing applications with Docker Hub. Not only does this remove existing roadblocks, but it’ll hopefully encourage you to create and distribute even more exciting applications.

But, our mission doesn’t end here! We’re continually working to bolster our OCI support. While the OCI Artifact Specification is considered a release candidate, full Docker Hub support for OCI Reference Types and the accompanying Referrers API is on the horizon. Stay tuned for upcoming enhancements, improved repo organization, and more.
Source: https://blog.docker.com/feed/

Amazon WorkDocs adds support for Apple Silicon MacBooks

Today, Amazon WorkDocs, a fully managed product for creating, sharing, and enriching digital content, announced the general availability of an Apple Silicon (M1, M2) compatible WorkDocs Drive. Apple Silicon support for WorkDocs simplifies installing and synchronizing WorkDocs files for customers with Apple Silicon MacBooks.
Source: aws.amazon.com

Announcing pause and resume for camera streams on AWS Panorama

AWS Panorama customers can now use AWS Panorama APIs to pause and resume existing camera stream connections within applications deployed on the AWS Panorama Appliance. Because customers deploy Panorama Appliances across multiple sites, they want a scalable way to manage individual camera streams without the management overhead of a full application deployment. With this feature, customers can fail over between redundant camera streams to meet high-availability requirements, and they can dynamically switch between multiple camera streams to meet business needs. For more information, see the AWS Panorama documentation.
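
The announcement above doesn’t name the API, so treat the following as an illustrative sketch only. One plausible shape, using the SignalApplicationInstanceNodeInstances operation (the operation name, IDs, and enum values here are assumptions on our part; consult the AWS Panorama documentation for the authoritative request shapes), would pause a single camera node from the AWS CLI like this:

# Pause one camera stream on a deployed application instance (illustrative sketch)
$ aws panorama signal-application-instance-node-instances \
    --application-instance-id applicationInstance-1234example \
    --node-signals '[{"NodeInstanceId":"camera_node_1","Signal":"PAUSE"}]'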
Source: aws.amazon.com