Electric car: Opel Corsa-e expected to achieve a range of 330 km
Ahead of the unveiling of the Opel Corsa-e on June 4th, the German-French group has already released the first details of the upcoming electric car. (Electric car, Technology)
Source: Golem
The Starlink constellation is meant to make satellite-based internet connections available worldwide. The first prototypes were launched on Friday. Golem explains what the constellation is all about. By Frank Wunderlich-Pfeiffer (SpaceX, Amazon)
Source: Golem
Electric vehicle maker Streetscooter is apparently up for sale. Günther Schuh, the Aachen mechanical engineering professor who co-founded the company, reportedly wants to buy it back from Deutsche Post. (Streetscooter, Technology)
Source: Golem
Skoda has unveiled the Citigo e iV, a new electric car based on the VW Up. Thanks to a larger battery, the vehicle is expected to achieve a WLTP range of 265 km. (Electric car, Technology)
Source: Golem
Weedcraft Inc. and Imperator: Rome demand strategy, Katana Zero demands reflexes, and Steamworld Quest brings tactics to the card game. By Rainer Sigl (Indiegames Rundschau, Dark Souls)
Source: Golem
At Google, our Customer Reliability Engineering (CRE) teams work with customers to help implement Site Reliability Engineering (SRE) practices to continually attain their reliability goals. This work often includes defining objectives and implementing operational best practices like blameless postmortems or analyzing error budget spend.
Following CRE practices is especially important when changes are made in the customer's product. But what about when changes are released within Google Cloud Platform (GCP), where the product runs? We've heard that you want to test your products against future GCP releases to ensure reliability and performance when the underlying cloud service changes. We are happy to announce that preview zones are now available to let you test your own production code against future releases of GCP.
We've been working recently with many of our SaaS company partners, and we're happy to announce that we've expanded our CRE for SaaS program to address these needs. You can see how it works here.
With this expansion, our SaaS partners who have enrolled in the CRE for SaaS program now have the option to run a copy of their production applications in the preview zone. This lets partners detect unanticipated failures of applications running on future releases of GCP services. We put a number of unreleased "Day 0 binaries," our soon-to-be-released code, in this zone. Then partners can test their production applications against that code. This way, we can anticipate and avoid previously unknown failure modes before users hit them, giving both us and our partners a chance to investigate the pending changes and address them.
BrightInsight (a Flex company), this year's winner of the Google Cloud Healthcare Partner award, has been using the preview zone and finds it helpful both in preventing unanticipated failures and in supporting regulatory compliance requirements within the healthcare industry.
To use the preview zone, you'll need to have defined your SLOs so that Google can integrate them with additional test frameworks. If you don't have SLOs defined, we've built SLO Guide, a new tool to help you discover what you should measure based on common architectures and critical user journeys. It will help you quickly create SLOs that measure what your users actually care about. You can request access to the tool here. Finally, if you're not a Google Cloud SaaS partner yet, kick off the process here.
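For readers unfamiliar with error budgets, here is a minimal sketch of the bookkeeping an SLO implies; the function name and the numbers are invented for illustration and are not part of Google's tooling:

    # Error-budget bookkeeping for an availability SLO (illustrative only).
    def error_budget_report(slo_target, total_requests, failed_requests):
        """Compare observed failures against the allowance implied by the SLO."""
        allowed_failures = (1.0 - slo_target) * total_requests  # the error budget
        return {
            "slo_target": slo_target,
            "observed_availability": 1.0 - failed_requests / total_requests,
            "budget_consumed": failed_requests / allowed_failures,
        }

    # A 99.9% availability SLO over 1,000,000 requests allows 1,000 failures,
    # so 250 failures consume 25% of the budget.
    print(error_budget_report(0.999, 1_000_000, 250))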
Source: Google Cloud Platform
Logs are critical for many scenarios in the modern digital world. They are used in tandem with metrics for observability, monitoring, troubleshooting, usage and service level analytics, auditing, security, and much more. Any plan to build an application or IT environment should include a plan for logs.
Logs architecture
There are two main paradigms for logs:
Centralized: All logs are kept in a central repository. In this scenario it is easy to search across resources and cross-correlate logs, but since these repositories grow large and include logs from all kinds of sources, it is hard to maintain access control over them. Some organizations avoid centralized logging entirely for that reason, while others restrict access to a handful of admins, which prevents most of their users from getting value out of the logs.
Siloed: Logs are either stored within a resource or stored centrally but segregated per resource. The repository can be kept secure and access control stays coherent with resource access, but it is hard or impossible to cross-correlate logs. Users who need a broad view across many resources cannot generate insights. In modern applications, problems and insights span resources, which makes the siloed paradigm severely limited in value.
To accommodate the conflicting needs of security and log correlation, many organizations have implemented both paradigms in parallel, resulting in a complex, expensive, and hard-to-maintain environment with gaps in log coverage. This drives down log usage across the organization and leads to decision-making that is not based on data.
New access control options for Azure Monitor Logs
We have recently announced a new set of Azure Monitor Logs capabilities that let customers benefit from the advantages of both paradigms. Customers can now keep their logs centralized while staying seamlessly integrated with Azure and its role-based access control (RBAC) mechanisms. We call this resource-centric logging. It will be added to the existing Azure Monitor Logs experience automatically while maintaining the existing experiences and APIs. Delivering a new logs model is a journey, but you can start using this new experience today. We plan to enhance and complete alignment of all Azure Monitor components over the next few months.
The basic idea behind resource-centric logs is that every log record emitted by an Azure resource is automatically associated with this resource. Logs are sent to a central workspace container that respects scoping and RBAC based on the resources. Users will have two options for accessing the data:
Workspace-centric: Query all data in a specific workspace (the Azure Monitor Logs container). Workspace access permissions apply. This mode will be used by centralized teams that need access to logs regardless of resource permissions. It can also be used for components that don't yet support resource-centric mode and for off-Azure resources, though a new option for them will be available soon.
Resource-centric: Query all logs related to a resource. Resource access permissions apply. Logs are served from all workspaces that contain data for that resource, without the need to specify them, and if the workspace access control mode allows it, there is no need to grant users access to the workspace itself. This mode works for a specific resource, all resources in a specific resource group, or all resources in a specific subscription. Most application and DevOps teams will use this mode to consume their logs (see the sketch below).
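As a rough illustration of the difference between the two scopes, here is how the two modes might look against the Log Analytics v1 REST API; the endpoint shapes, the sample resource ID, and the token handling are assumptions for the sketch, not a definitive reference:

    # Both query scopes against the Log Analytics v1 REST API (shapes assumed).
    import requests

    TOKEN = "<azure-ad-access-token>"   # placeholder bearer token
    HEADERS = {"Authorization": "Bearer " + TOKEN}
    QUERY = {"query": "AzureActivity | take 10"}  # any Kusto query

    # Workspace-centric: scope is the workspace; workspace permissions apply.
    workspace_id = "<workspace-guid>"
    r1 = requests.post(
        "https://api.loganalytics.io/v1/workspaces/" + workspace_id + "/query",
        headers=HEADERS, json=QUERY)

    # Resource-centric: scope is the resource; resource permissions apply, and
    # the query is served from every workspace that holds logs for the resource.
    resource_id = ("/subscriptions/<sub>/resourceGroups/<rg>"
                   "/providers/Microsoft.Compute/virtualMachines/<vm>")
    r2 = requests.post(
        "https://api.loganalytics.io/v1" + resource_id + "/query",
        headers=HEADERS, json=QUERY)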
The Azure Monitor experience automatically picks the right mode based on the scope the user chooses. If the user selects a workspace, queries are sent in workspace-centric mode; if the user selects a resource, resource group, or subscription, resource-centric mode is used. The chosen scope is always shown in the top-left section of the Log Analytics screen.
You can also query all logs of the resources in a specific resource group from the resource group screen.
Soon, Azure Monitor will also be able to scope queries for an entire subscription.
To make logs more prevalent and easier to use, they are now integrated into many Azure resource experiences. When log search is opened from a resource menu, the search is automatically scoped to that resource and resource-centric queries are used. This means that if users have access to a resource, they can access its logs. Workspace owners can block or enable such access using the workspace access control mode.
Another capability we're adding is per-table permissions on the tables that store the logs. By default, users granted access to workspaces or resources can read all of their log types. The new table-level RBAC lets admins use Azure custom roles to define limited access, so users can reach only some of the tables, or be blocked from specific tables. You can use this, for example, if you want the networking team to be able to access only the networking-related table in a workspace or subscription; a sketch of such a role follows.
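Here is a minimal sketch of such a custom role. The role name, table name (AzureNetworkAnalytics_CL), and subscription ID are placeholders; the per-table action string follows the documented Microsoft.OperationalInsights pattern:

    # Custom role limited to one table (names and IDs are placeholders).
    import json

    role = {
        "Name": "Network Logs Reader (example)",
        "IsCustom": True,
        "Description": "Can run queries, but only against the networking table.",
        "Actions": [
            "Microsoft.OperationalInsights/workspaces/read",
            "Microsoft.OperationalInsights/workspaces/query/read",
            # Per-table grant: only this table is readable.
            "Microsoft.OperationalInsights/workspaces/query/AzureNetworkAnalytics_CL/read",
        ],
        "NotActions": [],
        "AssignableScopes": ["/subscriptions/<subscription-id>"],
    }

    with open("network-logs-reader.json", "w") as f:
        json.dump(role, f, indent=2)
    # Apply with: az role definition create --role-definition @network-logs-reader.json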
As a result of these changes, organizations get simpler models with fewer workspaces and more secure access control. Workspaces now assume the role of a manageable container, allowing administrators to better govern their environments. Users are empowered to view logs in their natural Azure context, helping them leverage the power of logs in their day-to-day work.
The improved Azure Monitor Logs access control lets you enjoy both worlds at once without compromising usability or security. Central teams can have full access to all logs, while DevOps teams can access logs only for their own resources. This comes on top of the powerful log analytics, integration, and scalability capabilities already used by tens of thousands of customers.
Next steps
To start using resource-centric logging today, you need to:
Decide which workspaces should be used to store all data. Take into account billing, regulation, and data ownership.
Change your workspace access control mode to "Use resource or workspace permissions" to enable resource-centric access (a sketch of this change follows the list). Workspaces created after March 2019 use this mode by default.
Remove workspace access permissions from your application teams and DevOps.
Let your users become masters of their logs.
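As a sketch of step 2, the access control mode is a feature flag on the workspace ARM resource; the api-version and all identifiers below are placeholders and may differ in your environment (the portal offers the same setting under the workspace's access control mode):

    # Switch a workspace to "Use resource or workspace permissions".
    import requests

    TOKEN = "<azure-ad-access-token>"   # placeholder bearer token
    workspace = ("/subscriptions/<sub>/resourceGroups/<rg>"
                 "/providers/Microsoft.OperationalInsights/workspaces/<name>")
    url = ("https://management.azure.com" + workspace
           + "?api-version=2015-11-01-preview")   # version may differ

    body = {"properties": {"features": {
        # True = resource or workspace permissions; False = workspace only.
        "enableLogAccessUsingOnlyResourcePermissions": True,
    }}}
    resp = requests.patch(url, json=body,
                          headers={"Authorization": "Bearer " + TOKEN})
    resp.raise_for_status()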
Source: Azure
Deutsche Telekom was unwilling to roll out fiber in one municipality. When Karlsdorf-Neuthard laid a network of its own, Telekom overbuilt it with vectoring. The municipality had to stop. (Telekom, Fiber)
Source: Golem
Today, I'm excited to share that Azure public services now meet the US Federal Risk and Authorization Management Program (FedRAMP) High impact level, and that the FedRAMP High Provisional Authorization to Operate (P-ATO) now extends to all of our Azure public regions in the United States. In October, we told customers of our plan to expand public cloud services and regions from FedRAMP Moderate to FedRAMP High impact level; FedRAMP High was previously available only to customers using Azure Government. Additionally, we've increased the number of services available at the High impact level to 90, including powerful services like Azure Policy and Azure Security Center, as we continue to drive toward 100 percent FedRAMP compliance for all Azure services per our published listings and roadmap. Azure continues to support more services at the FedRAMP High impact level than any other cloud provider.
Achieving FedRAMP High means that both Azure public and Azure Government data centers and services meet the demanding requirements of FedRAMP High, making it easier for more federal agencies to benefit from the cost savings and rigorous security of the Microsoft Commercial Cloud.
While FedRAMP High in the Azure public cloud will meet the needs of many US government customers, certain agencies with more stringent requirements will continue to rely on Azure Government, which provides additional safeguards such as the heightened screening of personnel. We announced earlier the availability of new FedRAMP High services available for Azure Government.
FedRAMP was established to provide a standardized approach for assessing, monitoring, and authorizing cloud computing products and services for federal agencies, and to accelerate their adoption of secure cloud solutions. The Office of Management and Budget now requires all executive federal agencies to use FedRAMP to validate the security of cloud services. Cloud service providers demonstrate FedRAMP compliance through an Authority to Operate (ATO) or a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB). FedRAMP authorizations are granted at three impact levels based on NIST guidelines: low, moderate, and high.
Microsoft is working closely with our stakeholders to simplify our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly by reducing the time it takes a service to go from available to certified. Our published FedRAMP services roadmap lists all services currently available in Azure Government within our FedRAMP High boundary, as well as services planned for the current year. We are committed to ensuring that Azure services for government provide the best the cloud has to offer and that all Azure offerings are certified at the highest level of FedRAMP compliance.
New FedRAMP High Azure Government Services include:
Azure DB for MySQL
Azure DB for PostgreSQL
Azure DDoS Protection
Azure File Sync
Azure Lab Services
Azure Migrate
Azure Policy
Azure Security Center
Microsoft Flow
Microsoft PowerApps
We will continue our commitment to provide our customers the broadest compliance in the industry, as Azure now supports 91 compliance offerings, more than any other cloud service provider. For a full listing of our compliance offerings, visit the Microsoft Trust Center.
Source: Azure
Fire Emblem Heroes and Animal Crossing Pocket Camp will soon have a few players fewer: Nintendo is withdrawing both mobile games completely from the market in Belgium. (Loot box, Nintendo)
Source: Golem