IAM Identity Center adds session management capabilities for improved user experience and cloud security

With AWS IAM Identity Center (successor to AWS Single Sign-On), you now have more control over managing user sessions. From the console, you can set the session duration (up to 7 days) to match your organization's security requirements and the desired end-user experience. This feature also lets you terminate sessions, so you can manage sessions that are no longer needed or potentially suspicious.
Source: aws.amazon.com

Amazon Braket now supports pulse-level access to explore the performance of today's quantum computers

Amazon Braket, the quantum computing service from AWS, is designed to accelerate quantum computing research and software development. Starting today, we support pulse-level access to superconducting quantum processors from Rigetti Computing and Oxford Quantum Circuits (OQC) using Braket Pulse, a new feature for running quantum programs at the pulse level. With this launch, Braket customers have more choice and can dig deeper into their research, since they can code their quantum programs with gates, pulses, or a combination of both.
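Pulse-level access means specifying the analog control waveforms that drive the qubits directly, rather than working only with abstract gates. As a self-contained illustration of the kind of waveform involved (plain Python, not actual Braket Pulse API code), a Gaussian drive envelope can be sampled like this:

```python
import math

def gaussian_envelope(length_ns: float, sigma_ns: float, amplitude: float, dt_ns: float = 1.0):
    """Sample a Gaussian pulse envelope centered in a window of length_ns nanoseconds."""
    center = length_ns / 2.0
    n = int(length_ns / dt_ns) + 1
    return [amplitude * math.exp(-((i * dt_ns - center) ** 2) / (2.0 * sigma_ns ** 2))
            for i in range(n)]

# 40 ns pulse, 10 ns standard deviation, peak amplitude 0.5 (arbitrary units).
samples = gaussian_envelope(length_ns=40.0, sigma_ns=10.0, amplitude=0.5)
peak = max(samples)  # the envelope peaks at the window center
```

In a real Braket Pulse program, envelopes like this are attached to device frames and played alongside or instead of gate operations.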
Source: aws.amazon.com

Unlocking the power of connected vehicle data and advanced analytics with BigQuery

As software-defined vehicles continue to advance and the number of digital services grows to meet consumer demand, the data required to provide these services continues to grow as well. This leads automotive manufacturers and suppliers to look for capabilities to log and analyze data, update applications, and send commands to in-vehicle software.

The challenges the automotive sector faces can be quantified. A modern vehicle contains upwards of 70 electronic control units (ECUs), most of which are connected to one or more sensors. Not only is it now possible to measure many aspects of vehicle performance exactly, but new options become available. Using LIDAR (light detection and ranging), for example, vehicles are achieving higher levels of autonomy; the data stream from such demanding applications may reach 25 GB per hour. In-vehicle data processing may involve 100 million lines of software code, more than a fighter jet, and this in-vehicle code has to be maintained with updates and new functionality.

Access to this data allows manufacturers to gain valuable insights into the operational details of their vehicles. Using this data can help reduce costs and risks, increase ROI, support ESG initiatives, and provide valuable insights to develop innovative solutions and shorten the time to value for electric vehicle innovations.

Sibros' Deep Connected Platform (DCP) makes it possible for these manufacturers to build and launch new connected vehicle use cases at scale, from production to post-sale, by connecting and managing all software and data throughout every life cycle stage.
A key component of this platform is the Sibros Deep Logger, which provides capabilities like full configurability of what to record, when to record it, and how fast to record it; high-resolution timestamps of all Controller Area Network (CAN) messages; and dynamic application of live log configurations to receive new data points without deploying new software.

For example, properly analyzed engine data enables true predictive maintenance for the first time, which creates the option to repair or replace components before failure happens. Another example would be evaluating data on the use of certain in-car features with the goal of redesigning the interior. Two other components of the DCP are software updates and remote commands to ECUs. The DCP on Google Cloud enables seamless integration with any vehicle architecture and provides OEMs and suppliers with a platform to manage connected vehicle data at rest and in transit in a proven, secure way on a global scale.

OEMs can pull data through APIs provided by Sibros into Google Data Cloud (including BigQuery) to access the rich information data sets provided by the DCP within their environment and blend this data with their first-party data sets to provide valuable insights for their business.
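To make the Deep Logger capabilities above concrete, a dynamically applied live log configuration might look like the following sketch. All field names here are hypothetical, not Sibros' actual schema:

```json
{
  "log_name": "battery_health_v2",
  "signals": ["BMS_PackVoltage", "BMS_PackCurrent", "BMS_CellTempMax"],
  "sample_rate_hz": 10,
  "trigger": {
    "signal": "BMS_CellTempMax",
    "condition": "greater_than",
    "threshold_c": 55
  },
  "timestamp_resolution": "microsecond"
}
```

The point of such a configuration is that it can be pushed to vehicles at runtime, so new data points start flowing without an over-the-air software update.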
Some of the connected vehicle insights that DCP information enables are: damage prevention, improved operation, or development of the next generation of engines, with insights from complex analyses that can consider parameters like model, engine type, mileage, overall speed, temperature, air pressure, load, services, and more; the combination of electric vehicle battery usage data (charging cycles, engine performance, battery age) with contributing factors such as air conditioning use, to determine whether those factors contribute to hazardous battery conditions and to improve battery development; and cross-organization collaboration in R&D, by providing information on all these metrics and more from real-world driving, like engine knock data and even tire pressure.

Google Cloud's unified data cloud offering provides a complete platform for building data-driven applications like those from Sibros, from simplified data ingestion, processing, and storage to powerful analytics, AI, ML, and data sharing capabilities, all integrated with Google Cloud. With a diverse partner ecosystem and support for multi-cloud, open-source tools, and APIs, Google Cloud provides Sibros the portability and extensibility they need to avoid data lock-in.

"Software has an ever-increasing importance in the automotive world, even more so with electric vehicles and new mobility services. Google Cloud is partnering with Sibros to bring their award-winning Deep Connected Platform to deliver high-frequency, low-latency over-the-air software updates, data logging, and diagnostics capabilities to our automotive customers, leveraging the security and scale of Google Cloud.
This is revolutionizing everything from development cycles to business models and customer relationships." — Matthias Breunig, Director, Global Automotive Solutions, Google Cloud

Through Built with BigQuery, Google Cloud is helping tech companies like Sibros build innovative applications on Google's Data Cloud with simplified access to technology, helpful and dedicated engineering support, and joint go-to-market programs.

"Sibros is looking forward to partnering with Google Cloud, which will enable vehicle manufacturers and suppliers to reach the next level in their use of data. Sibros solutions for deep data logging and updating on the Google Data Cloud, combined with Google BigQuery, will help them to mitigate risks, reduce costs, add innovative products, and introduce value-added use cases." — Xiaojian Huang, Chief Digital Officer, Software, Sibros

Sibros and Google Cloud are driving connected mobility transformation to help our customers accelerate R&D innovation, power efficient operations, and unlock software-defined vehicle use cases with a full-stack connected vehicle platform. Click here to learn more about Sibros on Google Cloud.
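Once DCP data is pulled into BigQuery, analyses like the battery-condition example above become ordinary SQL. A sketch, assuming a hypothetical telemetry table and invented column names:

```sql
-- Hypothetical telemetry table; table and column names are illustrative,
-- not Sibros' actual schema.
SELECT
  vehicle_model,
  battery_age_months,
  COUNTIF(cell_temp_max_c > 55) AS hazardous_temp_events,
  AVG(charge_cycle_count) AS avg_charge_cycles,
  AVG(ac_duty_cycle) AS avg_ac_usage
FROM `oem_project.connected_vehicle.battery_telemetry`
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
GROUP BY vehicle_model, battery_age_months
ORDER BY hazardous_temp_events DESC;
```

A query like this correlates hazardous temperature events with charging behavior and air conditioning use across the fleet, which is exactly the kind of cross-signal analysis described above.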
Source: Google Cloud Platform

UKG Ready, People Insights on Google Cloud

Business Problem
UKG Ready primarily operates in the Small and Medium Business (SMB) space, so many customers are inherently forced to operate and make key business decisions with less Workforce Management (WFM) / Human Capital Management (HCM) data. In addition to volume, SMBs lack the variety of data needed to create a dynamic and agile organization. This puts SMBs at a major disadvantage compared to larger segments.

Project Goals
The People Insights module is committed to surfacing insights to customers in the context of their day-to-day duties and aiding in decision making. Given the SMB customer data limitations mentioned above, the goal of this project was to create a global dataset that augments individual customer data to bring to light less obvious, yet important, information.

Challenges
UKG Ready is a highly configurable application that gives customers the opportunity to build solutions on a platform that meets their specific business needs. High configurability gives customers great flexibility in how they use the software, but it makes it nearly impossible to create a global dataset for machine learning and data insights. UKG Ready manages just under 4 million members of the US workforce across some 30,000+ customers. Despite the large overall employee dataset, machine learning models that are specific to individual customers are starved for data because each customer has a relatively small employee population. Does that mean we cannot support our SMB customers' decision making with ML?

Result
Partnering with Google, we developed an approach that allowed us to standardize various domain entities (pay categories, time off codes, job titles, etc.) so that we could build a global dataset to augment SMB customer data. Using machine learning, we were able to build a common vocabulary across our customer base.
This common vocabulary encapsulates the nuances of how our customers manage their business, yet is generalized and standardized so that the data can be aggregated over the variety of customer configurations. This allows us to serve up practical insights to customers through various use cases. Our partnership allowed us to leverage Google Cloud services to meet the needs of our complex machine learning models, distributed data sets, and CI/CD processes.

How
UKG Ready decided to partner with Google for an end-to-end solution for the analytics offering. This allowed us to focus on our core business logic without having to worry about the platform, environment configurations, or the performance and scalability of the entire solution. We make use of various Google Cloud services, such as Cloud Triggers, Cloud Storage, Cloud Functions, Cloud Composer, Cloud Dataflow, BigQuery, Vertex AI, and Cloud Pub/Sub, among others, to host our analytics solution. Jenkins manages the entire CI/CD pipeline, and cloud environments are configured and deployed using Terraform.

The standardization of business entities problem was solved in three distinct steps.

Step 1: Collecting aggregated data
We needed an approach to collect aggregated data from our highly distributed, sharded, multi-tenant data sources. We developed a custom solution that allows us to extract data aggregated at the source (for PII and GDPR considerations) and transfer it to Google Cloud Storage in the fastest manner possible. The data is then transformed and stored in BigQuery. Services used: GCS, Cloud Functions, Dataflow, Cloud Composer, and BigQuery. All processes are orchestrated using Cloud Composer, and detailed logging is available in Cloud Logging (Stackdriver).

Step 2: Applying NLP (Natural Language Processing)
Once we had the variety of customer configurations, or business entities, available, we applied NLP algorithms to categorize and standardize them into buckets.
This approach assumes that customers use natural language for configurations like job titles, pay codes, etc.

String Preparation
The input data for the string preparation process is an entity string, or several strings, that describe one entity object (like a name-description pair or a code-name pair). The output is a set of tokens that can be used to run a classification/clustering model. The string preparation process tokenizes strings, replaces shortcuts, handles abbreviations, translates tokens, and handles grammatical errors and mistypes.

ML Models

Statistical Model
The idea of the model is to use defined target classes (clusters) and assign several tokens (anchors) to each of them; an entity that has any of those tokens is "attracted" to the appropriate class. All other tokens are weighted according to how frequently they occur in entities that contain anchor tokens. Using anchor tokens, we build a Word2Vec-like representation whose dimensionality equals the number of target classes. The higher the value of a specific dimension (cluster), the higher the probability that the entity belongs to that cluster. The final prediction score for an entity's token list for a specific class is the sum of the weights of all the tokens it includes; the predicted cluster is the one with the maximal prediction score.

Lexical Model
We managed to generate a reasonable amount of labeled data during statistical model implementation and testing. That opened up the possibility of building a "classical" NLP model that uses labeled data to train a classification neural network, using pretrained layers to produce token embeddings or even string embeddings. We started experimenting with pre-trained models like GloVe and got good results with single words and bi-grams, but ran into issues handling longer n-grams. Our Google account team came to our rescue and recommended some white papers that helped formulate our strategy.
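The statistical anchor-token model described above can be sketched in plain Python. The classes, anchor tokens, and corpus below are invented for illustration; weights are simple co-occurrence frequencies, and an entity's score per class is the sum of its tokens' weights:

```python
from collections import defaultdict

# Hypothetical target classes seeded with anchor tokens (illustrative only).
ANCHORS = {
    "vacation": {"vacation", "holiday"},
    "sick": {"sick", "illness"},
}

def build_weights(entities):
    """Weight tokens by how often they co-occur with each class's anchors."""
    weights = {cls: defaultdict(float) for cls in ANCHORS}
    counts = {cls: 0 for cls in ANCHORS}
    for tokens in entities:
        for cls, anchors in ANCHORS.items():
            if anchors & set(tokens):          # entity is "attracted" to this class
                counts[cls] += 1
                for tok in tokens:
                    weights[cls][tok] += 1.0
    for cls in ANCHORS:                        # normalize to co-occurrence frequencies
        if counts[cls]:
            for tok in weights[cls]:
                weights[cls][tok] /= counts[cls]
    return weights

def predict(tokens, weights):
    """Score an entity per class as the sum of its tokens' weights; take the argmax."""
    scores = {cls: sum(w.get(tok, 0.0) for tok in tokens) for cls, w in weights.items()}
    return max(scores, key=scores.get)

# Tiny corpus of tokenized pay-code names (invented).
corpus = [
    ["vacation", "paid"],
    ["holiday", "annual", "paid"],
    ["sick", "leave"],
    ["illness", "leave", "unpaid"],
]
weights = build_weights(corpus)
label = predict(["annual", "paid"], weights)  # no anchor token present -> "vacation"
```

The key property, as in the description above, is that entities without any anchor token (like "annual paid") can still be classified through tokens that co-occurred with anchors during weight building.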
We now use the TensorFlow nnlm-en-dim128 model to produce string embeddings; it was trained on a 200-billion-word English Google News corpus and produces a 128-dimensional vector for each input string. On top of that, we use several Dense and Dropout layers to build a classification model.

Ensembling
To perform ensembling, all the model results for each class are cast to probabilities using a softmax transformation with scale normalization. The final predicted probability is the maximal average score of both models across all class scores; the corresponding class is the predicted class. The machine learning models are deployed on Vertex AI and are used in batch predictions. Model performance is captured at every prediction boundary and monitored for quality in production.

Step 3: Making the common vocabulary available
Having the standardized vocabulary, we then needed a mechanism to make the results available in UKG Ready reports and in customer-specific models like Flight Risk and Fatigue. For this we again used Google services for orchestration, data transformation, and data storage. Once the modeling is complete, the customer-specific models leveraging the above architecture are made available in Reports. We utilized our proven existing technology choices in GCP for orchestration, data transformation, and data storage.

Results
We are able to build a common vocabulary of our customers' business entities with good confidence, and to be an expert advisor to our SMB customers in their decision-making using machine learning. With the advice of our Google account team and using Google services, we can add value to our product in a relatively short amount of time. And we are not done! We continue to use this platform for new use cases, complex business problems, and innovative machine learning solutions.

Special thanks to Kanchana Patlolla, AI Specialist at Google, for the collaboration in bringing this to light.
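The ensembling step reduces to a few lines: each model's per-class scores are cast to probabilities with a softmax, the two probability vectors are averaged, and the class with the maximal average wins. A minimal sketch (not UKG's production code):

```python
import math

def softmax(scores):
    """Scale-normalize, then map scores to probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(stat_scores, lexical_scores, classes):
    """Average the two models' softmax probabilities and pick the best class."""
    p1, p2 = softmax(stat_scores), softmax(lexical_scores)
    avg = [(a + b) / 2.0 for a, b in zip(p1, p2)]
    return classes[avg.index(max(avg))], max(avg)

classes = ["vacation", "sick", "overtime"]   # invented class names
cls, prob = ensemble_predict([2.0, 0.5, 0.1], [1.5, 1.0, 0.2], classes)  # -> "vacation"
```

Averaging after the softmax keeps the two models comparable even when their raw score scales differ, which is the point of the scale normalization mentioned above.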
Source: Google Cloud Platform

Document AI adds one-click model training with ML Workbench

Each day, countless documents are created, revised, and shared across organizations. The result is a treasure trove of information, but because the data is primarily unstructured — without rows, columns, or some other predefined organizational schema — it is difficult to interpret, analyze, or use for business processes. That's why we introduced Document AI: so users can extract structured data from documents through machine learning (ML) models to automate business processes and improve decision-making. Over the last two years, our customers have used Document AI to accelerate and enhance document-based workflows in areas like procurement, lending, identity, and contracts — and at Google Cloud Next '22, we expanded these capabilities in a big way with the release of Document AI Workbench, a new feature that makes it fast and easy to apply ML to virtually any document-based workflow. Document AI Workbench lets analysts, developers, and even business users ingest their own data to train models with the click of a button, then extract the fields from documents needed for the business. Relative to traditional development approaches, Document AI Workbench lets organizations build models faster and with less data, accelerating time-to-value for processing and analyzing unstructured data in documents.

"This [Document AI Workbench] is poised to be a game changer, because we can now uptrain various text documents and forms utilizing powerful Google machine learning models to get the desired accuracy, creating greater time and resource efficiencies for our clients," said Daan De Groodt, Managing Director, Deloitte Consulting LLP.

In this blog post, we'll explore Document AI Workbench's capabilities, as well as ways customers are already putting this new feature to work.

Benefits of custom modeling with Document AI Workbench
Our customers use Document AI Workbench to ultimately save time and money. "Document AI Workbench is helping us expand document automation more quickly and effectively.
By using Document AI Workbench, we have been able to train our own document parser models in a fraction of the time and with fewer resources. We feel this will help us realize important operational improvements for our business and help us serve our customers much better," said Daniel Ordaz Palacios, Global Head of Business Process & Operations at financial services company BBVA. Let's break down some of the ways the feature delivers these benefits.

Democratized ML
Data scientists' time is scarce. With Document AI Workbench, developers, analysts, and others with limited ML experience can create ML models by labeling data with a simple interface, then initiating training with the click of a button. By leveraging training data to create a model behind the scenes, Document AI Workbench expands the range of users who can contribute to ML models while preserving data scientists' efforts for the most sophisticated projects.

Many document types
With Document AI Workbench, organizations can bring their own data to create ML models for many document types and attributes, including printed or handwritten text, tables and other nested entities, checkboxes, and more. Customers can process document images whether they were professionally scanned or captured in a quick photo, and they can import data in multiple formats, such as PDFs, common image formats, and JSON document.proto.

Time-to-market
Document AI Workbench significantly reduces customers' time to market compared to building custom ML models, because users simply provide training data, with Document AI handling the rest. Our users don't have to worry about model weights, parameters, anchors, etc.

Less training data
Document AI Workbench helps customers build ML models that achieve accurate results with less training data. This is especially true when "uptraining," in which Document AI Workbench transfers learnings from pre-trained models to produce more accurate results.
We support uptraining for Invoice, Purchase Order, Contracts, W2, 1099-R, and paystub documents. We plan to support more document types in the future, and to make ML model training even easier by continuing to reduce the amount of training data required for accurate output. As an example, Google's DeepMind team recently developed a new method that allows the creation of document parsing ML models for utility bills and purchase orders with 50%-70% less training data than what was previously needed for Document AI. We're working on integrating this method into Document AI Workbench in the coming months.[1]

No-cost training
Instead of having to pay to spin up servers and wait while models are trained, Document AI Workbench lets users create and evaluate ML models for free. Customers simply pay as they go once models are deployed.

With Document AI Workbench, organizations can enjoy all these features and more. And organizations own the data used to train models. Thanks to these benefits, many customers are already seeing impressive results. Muthukumaraswamy B, VP of Data Science at technology firm Searce, said, "We estimate that our time-to-market will reduce by up to ~80% with Document AI Workbench vs. building custom models." Similarly, software company Libeo "uptrained an invoice processor with 1,600 documents and increased testing accuracy from 75.6% (with pretrained models) to 83.9% with uptraining on Document AI Workbench," said CPO & CTO Pierre-Antoine Glandier. "Thanks to uptraining, Document AI results beat the results of a competitor and will help Libeo save ~20% on the overall cost over the long run." Technology firm Zencore is making strides as well. "Document AI Workbench allows us to develop highly accurate document parsing models in a matter of days.
Our customers have completely automated tasks that formerly required significant human labor," said Sean Earley, VP of Delivery Service.

How to use Document AI Workbench
Users can leverage a simple interface in the Google Cloud Console to prepare training data, create and evaluate models, and deploy a model into production, at which point it can be called to extract data from documents.

Import and prepare training data
To get started, users import and label documents to train an ML model. If documents were labeled using other tools, users can simply import the labels with JSON in the doc.proto format. If documents need to be labeled, users can create their document schema and use our simple interface to label them. Optical character recognition (OCR) will automatically detect the content and prepare the training data.

Train a model
With one click, users can train a model. If they are working with a document type similar in layout and schema to an existing document model, they can uptrain the relevant model to get accurate results faster. If there is no relevant, uptrainable model available for the document, they can create a model with Document AI Workbench's Custom Document Extractor.

Evaluate a model and iterate
Once a model is trained, it's time to evaluate it by looking at the performance metrics: F1 score, precision, recall, etc. Users can dive into specific instances where the model predicted incorrectly and provide additional training data to improve future performance.

Going into production
Once a model meets accuracy targets, it's time to deploy it into production, after which the model endpoint can be called to extract structured data from documents. Finally, users can configure human-in-the-loop review workflows to correct predictions whose confidence levels are below the required threshold.
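The evaluation metrics mentioned above (precision, recall, F1) derive directly from counts of correct and incorrect extractions; a minimal sketch of the arithmetic:

```python
def f1_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Example: of 100 extracted fields, 90 were correct; 10 true fields were missed.
p, r, f1 = f1_metrics(tp=90, fp=10, fn=10)  # precision 0.9, recall 0.9, F1 0.9
```

Because F1 is the harmonic mean of precision and recall, a model cannot score well by over-extracting (hurting precision) or under-extracting (hurting recall) alone.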
With human review, organizations can correct or confirm output before they use it in production and leverage the corrected data to train the model and improve the accuracy of future predictions.

"Google's Document AI Workbench offers a flexible, easy-to-use interface with the end-to-end functionality our clients demand. Custom document extractors and classifiers not only reduce our prototyping from months to weeks, but also offer clients added cost reductions compared to their current technologies," said Erik Willsey, CTO at data and analytics modernization firm Pandera.

Getting started with Document AI Workbench
We're thrilled to see so many customers and partners already driving value with Document AI Workbench. To learn more, check out the Document AI Workbench landing page, watch this Next '22 session on Document AI, or peruse our key partners who are ready to help you get started.

[1] DeepMind used thousands of Google's internal documents, such as utility bills and purchase orders from a variety of vendors, to develop and evaluate this method. Performance will vary depending on the evaluation dataset.
Source: Google Cloud Platform

Delivering ongoing value with a deliberate customer success strategy

Google invests in customer experience
Data Capital Management (DCM) was looking to advance the development of AI investment methods by migrating to Google Cloud. DCM needed a cloud provider and support service that was more than just technical support to fix short-term issues. Instead, they wanted a true strategic ally that could provide access to engineering resources to help solve their highly complex needs and drive long-term innovation and business growth. To support customers like DCM, we've invested in our Customer Experience team and organized resources to be closer to our customers. From Technical Account Managers (TAMs) to Customer Success teams and Professional Services consultants, we offer a unified approach to reducing the complexities of cloud migration. This includes guided strategies spanning people, process, and technology to help customers map a successful journey.

The experience that customers have with an enterprise can have a huge influence on their purchase decisions. IDC covers this in its market note "Google Cloud: Delivering Ongoing Customer Value with a Deliberate Customer Success Strategy" by Sudhir Rajagopal, which highlights the significant moves we've made to deliver the right service with the right specialist to our customers. IDC developed this note after spending a day with Google executives, learning about the future direction of our customer experience. We've pulled out a few highlights from the report to help you identify some common challenges and see how Google is adapting to address them.

IDC notes that customer experience is key to buying
"As we move into the age of experiences, the relationship between customers and brands continues to evolve. Customer experience (CX) reigns as the number one influencer of buying decisions.
Indeed, IDC's research shows that 80% of B2B buyers agree that their overall customer experience with a vendor will have a strong influence in purchasing from that vendor." In addition, IDC notes that "next-generation customer experience will require organizations to elevate their customer context for emotionally intelligent engagement; deliver fluid, omni-channel experiences; and deliver value for the customer in the micro-moments and the whole journey, with a focus on empathetic customer outcomes."

Google Cloud is easier to partner with
IDC notes that "global enterprises should find it easier to partner with Google with the release of new products and configurations that address special customer needs at the regional/local level (e.g., sovereign cloud offerings in Europe). Scaled solution delivery and implementation is enabled through the professional services organization (PSO), which is positioned as fundamental to Google Cloud's customer success strategy."

"Contextual, purpose-built cloud solutions that are specific to a customer's sector needs are key enablers in transformation programs.
Google Cloud is making a deliberate effort to understand its customers in the context of their industry with tailored industry solutions and support offerings that address the challenges of the customer's core business."

Context-aware customer engagement
IDC states that "Google Cloud has a go-to-market approach that is contextualized by buyer personas with tailored value outcomes to each persona (e.g., IT leaders, LOB leaders, CISOs, CEOs), product/business function (e.g., data and analytics, security), market segment (e.g., digital native versus traditional companies that may require more complete/E2E transformation), and the engagement life cycle (e.g., early-stage engagement, implementation ramp-up)."

Customer satisfaction
IDC believes that "with customer experience now anchored by two-way conversational engagement, not only do customers want prescriptive experiences, but they also want the brand to know how they feel — the level of satisfaction." IDC notes: "To shore up customer advocacy, this year, Google Cloud is launching four new executive briefing centers in London, Paris, Munich, and Singapore set on Google Cloud campuses. Google Cloud has made efforts to extend customer experience throughout the relationship, from the decision to partner to solution delivery/implementation and relationship expansion, with investments to offer scalability along with quality. IDC believes that with a continued focus on customer experience and value outcomes, Google Cloud should be able to sustain customer momentum in the market."

For all the details, read the full IDC Market Note, "Google Cloud: Delivering Ongoing Customer Value with a Deliberate Customer Success Strategy".

What's next?
To learn how our customer experience offerings have evolved over the past year, read about our offerings.
Source: Google Cloud Platform

Docker Captain Take 5 — Nelson

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. "Docker Captains Take 5" is a regular blog series where we take a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we're interviewing Nelson, one of our newest Captains. He's the founder of Amigoscode and is based in London, United Kingdom. You can follow Nelson on Twitter and Instagram for updates on his programming courses, YouTube videos, and more.

How/when did you first discover Docker?

I discovered Docker back in 2015 while building a PoC that needed a local Postgres database. I was so impressed that with one single command I had a database up and running and was able to connect the backend to it. Ever since then, I’ve learned even more about Docker and use it in all of my projects. It has been a game-changer.
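For reference, that single-command pattern looks much the same today (the image tag and password below are placeholders):

```shell
docker run --name dev-postgres -e POSTGRES_PASSWORD=changeme -p 5432:5432 -d postgres:15
```

This starts a local Postgres instance in the background, ready for a backend to connect to on port 5432.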

What’s your favorite Docker command?

docker exec -it <container-name|id> /bin/bash (or /bin/sh)

What’s your top tip for working with Docker that others may not know?

Whenever building a PoC, you should always use Docker Compose. With this Docker tool, you can spin up an entire set of applications that can talk to each other with a single command. In most cases, you can spin up your entire dev environment using Docker Compose.
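A minimal sketch of that pattern: a backend service plus a Postgres database that can talk to each other, started together with a single `docker compose up`. Service names, images, and credentials here are illustrative:

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: changeme
    ports:
      - "5432:5432"
  backend:
    build: .
    environment:
      DATABASE_URL: postgres://postgres:changeme@db:5432/postgres
    depends_on:
      - db
```

Compose puts both services on a shared network, so the backend can reach the database simply by its service name, `db`.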

What’s the coolest Docker demo you have done/seen?

I’ve built a Golang CLI tool that transcribes all of my videos and translates captions to any language. 

What have you worked on in the past six months that you’re particularly proud of?

A serverless email marketing tool to send emails to my students and subscribers. It's written in Golang and deployed on AWS Lambda, running as a Docker container.

What do you anticipate will be Docker’s biggest announcement this year?

I have no idea, but I know it’ll be cool.

What are some personal goals for the next year with respect to the Docker community?

Provide end-to-end programming content that teaches real-world applications by incorporating tools such as Docker. 

What was your favorite thing about DockerCon 2022?

The diversity of software engineers. There’s always something new to take from the talks/presentations.

Looking to the distant future, what’s the technology that you’re most excited about and you think holds a lot of promise?

I think serverless technology will enable engineering teams to think less about servers and focus more on the business side. 

Rapid fire questions…

What new skill have you mastered during the pandemic?

Cooking

Cats or Dogs?

Cats

Salty, sour, or sweet?

Sweet

Beach or mountains?

Mountains

Your most often-used emoji?

😁
Source: https://blog.docker.com/feed/

Amazon CloudFront adds origin latency and ASN fields to real-time logs for more granular visibility

Amazon CloudFront now offers three additional data fields in CloudFront real-time logs: origin first-byte latency, origin last-byte latency, and Autonomous System Number (ASN). CloudFront real-time logs contain detailed information about the requests served by CloudFront, such as the HTTP status code of the response or whether the response was served from the cache. With the three new data fields, customers can get more granular insight into CloudFront performance when analyzing real-time logs or in the dashboards built from those logs. Origin first-byte latency indicates the time in seconds the origin server takes to respond with the first byte of the response. Origin last-byte latency indicates the time in seconds the origin server takes to respond with the last byte of the response. ASN is a unique number identifying the network, for example the network of an Internet Service Provider (ISP), that provides the viewer IP address.
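As an illustration of consuming these logs, the sketch below parses one tab-delimited real-time log record in Python. The field selection and ordering of real-time logs are configurable per distribution; the six field keys used here are assumptions based on the announced fields, not a guaranteed schema:

```python
# Assumed field list for a distribution configured to emit exactly these fields.
FIELDS = ["timestamp", "c-ip", "sc-status", "origin-fbl", "origin-lbl", "asn"]

def parse_record(line: str) -> dict:
    """Split one tab-delimited real-time log record into a field dict."""
    values = line.rstrip("\n").split("\t")
    record = dict(zip(FIELDS, values))
    # The two origin latency fields are reported in seconds.
    record["origin-fbl"] = float(record["origin-fbl"])
    record["origin-lbl"] = float(record["origin-lbl"])
    return record

sample = "1668816000.123\t198.51.100.7\t200\t0.042\t0.118\t64496"
rec = parse_record(sample)
last_byte_latency = rec["origin-lbl"]   # origin last-byte latency in seconds
viewer_asn = rec["asn"]                 # network that provides the viewer IP
```

In practice, records like this arrive via the Kinesis data stream attached to the distribution's real-time log configuration; the parsing itself stays this simple.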
Source: aws.amazon.com