Public transit: Montpellier abandons hydrogen buses and opts for electric buses
The hydrogen bus project in Montpellier is history before it even began. The city is now betting on electric buses. (Electric bus, Technology)
Source: Golem
Mercedes has to recall numerous charging cables for safety reasons; they belong to the C-Class, S-Class, GLC, and the EQS. (Mercedes Benz, Technology)
Source: Golem
There is little fiber-optic coverage in Germany. In 2021, investors therefore began pouring in multi-billion-euro packages and attracting new providers. By Achim Sawall (Fiber optics, DSL)
Source: Golem
The Civilization competitor Humankind is getting its first expansion. The add-on is titled Cultures of Africa. (Strategy game, Sega)
Source: Golem
The Rock 5 Model B measures only 100 x 72 mm and integrates the components needed for a wide range of projects, including an ARM SoC with up to 16 GB of RAM. (Single-board computer, Embedded Systems)
Source: Golem
DevOps continues to be a major business accelerator for our customers, and we continually see success from customers applying DevOps Research and Assessment (DORA) principles and findings to their organization. This is why the first annual DevOps Awards is aimed at recognizing customers shaping the future of DevOps with DORA. Share your inspirational story, supported by examples of business transformation and operational excellence, today.

With inputs from over 32,000 professionals worldwide and seven years of research, the Accelerate State of DevOps Report is the largest and longest-running DevOps research of its kind. The different categories of DevOps Awards map closely to the practices and capabilities that drive high performance, as identified by the report. Organizations, irrespective of their size, industry, and region, are able to apply to one or all ten categories. Please find the categories and their descriptions below:

Optimizing for speed without sacrificing stability: This award recognizes one Google Cloud customer that has driven improvements in speed without sacrificing quality.

Embracing easy-to-use tools to improve remote productivity: The research shows that high-performing engineers are 1.5 times more likely to have easy-to-use tools. To be eligible for this award, share how easy-to-use DevOps tools have helped you improve engineer productivity.

Mastering effective disaster recovery: This award recognizes one customer that can demonstrate how a robust, well-tested disaster recovery (DR) plan protects business operations.

Leveraging loosely coupled architecture: This award recognizes one customer that successfully transitioned from a tightly coupled architecture to service-oriented and microservice architectures.

Unleashing the full power of the cloud: This award recognizes a Google Cloud customer leveraging all five capabilities of cloud computing to improve software delivery and organizational performance. Specifically, these five capabilities are on-demand self-service, broad network access, measured service, rapid elasticity, and resource pooling. Read more about the five essential characteristics of cloud computing.

Most improved documentation quality: This award recognizes one customer that has successfully integrated documentation into their DevOps workflow using Google Cloud Platform tools.

Reducing burnout during COVID-19: We will recognize one customer that implemented effective processes to improve work/life balance, foster a healthy DevOps culture, and ultimately prevent burnout.

Utilizing IT operations to drive informed business decisions: This award will go to one customer that employed DevOps best practices to break down silos between development and operations teams.

Driving inclusion and diversity in DevOps: To highlight the importance of a diverse organization, this award honors one Google Cloud customer that either prioritizes diversity and inclusion initiatives to transform and strengthen their business, or creates unique solutions to help build a more diverse, inclusive, and accessible workplace, leading to higher levels of engagement, productivity, and innovation.

Accelerating DevOps with DORA: This award recognizes one customer that has successfully integrated the most DORA practices and capabilities into their workflow using Google Cloud Platform tools.

This is your chance to show your innovation globally and become a role model for the industry.
Winners will receive invitations to roundtables and discussions, press materials, website and social badges, special announcements, and even a trophy. We are excited to see all your great submissions. Applications are open until January 31st, so apply for the category that best suits your company and stay tuned for our awards show in February 2022! For more information on the awards, visit our webpage and check out The Google Cloud DevOps Awards Guidebook.
Source: Google Cloud Platform
In this blog, we'll cover data governance as it relates to managing data in the cloud. We'll discuss the operating model, which is independent of technologies whether on-prem or cloud, the processes that ensure governance, and finally the technologies that are available to ensure data governance in the cloud. This is a two-part blog on data governance. In this first part, we'll discuss the role of data governance, why it's important, and the processes that need to be implemented to run an effective data governance program. In the second part, we'll dive into the tools and technologies that are available to implement data governance processes, e.g. data quality, data discovery, tracking lineage, and security. For an in-depth and comprehensive text on data governance, check Data Governance: People, Processes, and Tools to Operationalize Data Trustworthiness.

What is Data Governance?

Data governance is a function of data management that creates value for the organization by implementing processes to ensure high data quality, and it provides a platform that makes it easier to share data securely across the organization while ensuring compliance with all regulations. The goal of data governance is to maximize the value derived from data, build user trust, and ensure compliance by implementing the required security measures. Data governance needs to be in place from the time a factoid of data is collected or generated until the point in time at which that data is retired. Along the way, over this full lifecycle of the data, data governance focuses on making the data available to all stakeholders in a form that they can readily access and use in a manner that generates the desired business outcomes (insights, analysis) and, if relevant, conforms to regulatory standards. These regulatory standards are often an intersection of industry (e.g. healthcare), government (e.g. privacy), and company (e.g. non-partisan) rules and codes of behavior. See more details here.

Why is Data Governance Important?

In the last decade, the amount of data generated by users of mobile phones, health and fitness devices, IoT devices, retail beacons, and the like has grown exponentially. At the same time, the cloud has made it easier to collect, store, and analyze this data at a lower cost. As the volume of data and the adoption of cloud continue to grow, organizations face a dual mandate: democratize and embed data in all decision making while ensuring that it is secured and protected from unauthorized use. An effective data governance program is needed to implement this dual mandate, making the organization data-driven on the one hand and securing data from unauthorized use on the other. Organizations without an effective data governance program will suffer from compliance violations leading to fines; poor data quality, which leads to lower-quality insights that impact business decisions; challenges in finding data, which result in delayed analysis and missed business opportunities; and poorly trained AI models, which reduce model accuracy and the benefits of using AI. An effective data governance strategy encompasses people, processes, and tools and technologies.
It drives data democratization to embed data in all decision making, builds user trust, increases brand value, and reduces the chances of compliance violations, which can lead to substantial fines and loss of business.

Components of Data Governance

People and Roles in Data Governance

A comprehensive data governance program starts with a data governance council composed of leaders representing each business unit in the organization. This council establishes the high-level governing principles for how data will be used to drive business decisions. The council, with the help of key people in each business function, identifies the data domains, e.g. customer, product, patient, and provider. The council then assigns data ownership and stewardship roles for each data domain. These are senior-level roles, and each owner is held accountable and accordingly rewarded for driving the data goals set by the data governance council. Data owners and stewards are assigned from the business; for example, the customer data owner may come from marketing or sales, the finance data owner from finance, and the HR data owner from HR. The role of IT is that of data custodian. IT ensures the data is acquired, protected, stored, and shared according to the policies specified by data owners. As data custodian, IT does not make decisions on data access or data sharing. IT's role is limited to managing the technology that supports the implementation of the data management policies set by data owners.

Processes in Data Governance

Each organization will establish processes to drive toward the implementation of the goals set by the data governance council. The processes are established by data owners and data stewards for each of their data domains. The processes focus on the following high-level goals:

1. Data meets the specified data quality standards, e.g. 98% completeness, no more than 0.1% duplicate values, 99.99% consistent data across different tables, and what constitutes on-time delivery.
2. Data security policies ensure compliance with internal and external policies:
   - Data is encrypted at rest and on the wire
   - Data access is limited to authorized users only
   - All sensitive data fields are redacted or encrypted and dynamically decrypted only for authorized users
   - Data can be joined for analytics in de-identified form, e.g. using deterministic encryption or hashing (see the sketch after this list)
   - Audits are available for authorized access as well as unauthorized attempts
3. Data sharing with external partners is available securely via APIs.
4. Compliance with industry- and geo-specific regulations, e.g. HIPAA, PCI DSS, GDPR, CCPA, LGPD.
5. Data replication is minimized.
6. Centralized data discovery for data users via data catalogs.
7. Trace data lineage to identify data quality issues and data replication sources, and to help with audits.
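The second goal above calls for joining data in de-identified form via deterministic encryption or hashing. As a minimal, purely illustrative sketch (not a specific Google Cloud feature), keyed hashing with HMAC-SHA256 produces stable pseudonyms that still join across tables, provided the key is held by the data custodian rather than the analysts:

import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Deterministically pseudonymize a sensitive field with HMAC-SHA256.

    The same input always yields the same token, so de-identified tables
    can still be joined on the token, while the raw value cannot be
    recovered from the token without the key.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical example: in practice the key would come from a key
# management system, never from source code or the analytics environment.
key = b"replace-with-a-key-from-your-kms"

customers = [{"email": "ada@example.com", "segment": "smb"}]
orders = [{"email": "ada@example.com", "total": 42.0}]

# Replace the identifier with its token in both tables before sharing.
for row in customers + orders:
    row["email_token"] = pseudonymize(row.pop("email"), key)

# Analysts can now join on email_token without ever seeing raw emails.
joined = [
    {**c, **o}
    for c in customers
    for o in orders
    if c["email_token"] == o["email_token"]
]
print(joined)

Deterministic schemes like this preserve joinability at the cost of being susceptible to frequency analysis, which is why access controls and auditing around the tokenized tables still matter.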
Technology

Implementing the processes specified in the data governance program requires the use of technology. From securing data and retaining and reporting audits to automating monitoring and alerts, multiple technologies are integrated to manage the data life cycle. In Google Cloud, a comprehensive set of tools enables organizations to manage their data securely and drive data democratization. Data Catalog enables users to easily find data from one centralized place across Google Cloud. Data Fusion tracks lineage so data owners can trace data at every point in the data life cycle and fix issues that may be corrupting data. Cloud Audit Logs retain the audits needed for compliance. Dataplex provides intelligent data management, centralized security and governance, automatic data discovery, metadata harvesting, lifecycle management, and data quality with built-in AI-driven intelligence. We will discuss the use of tools and technologies to implement governance in part 2 of this blog.
Source: Google Cloud Platform
We are often asked if the cloud is more secure than on-premise infrastructure. The quick answer is that, in general, it is. The complete answer is more nuanced and is grounded in a series of cloud security "megatrends" that drive technological innovation and improve the overall security posture of cloud providers and customers. An on-prem environment can, with a lot of effort, have the same default level of security as a reputable cloud provider's infrastructure. Conversely, a weak cloud configuration can give rise to many security issues. But in general, the base security of the cloud coupled with a suitably protected customer configuration is stronger than most on-prem environments.

Google Cloud's baseline security architecture adheres to zero-trust principles: the idea that every network, device, person, and service is untrusted until it proves itself. It also relies on defense in depth, with multiple layers of controls and capabilities to protect against the impact of configuration errors and attacks. At Google Cloud, we prioritize security by design and have a team of security engineers who work continuously to deliver secure products and customer controls. Additionally, we take advantage of industry megatrends that increase cloud security further, outpacing the security of on-prem infrastructure.

These eight megatrends compound the security advantages of the cloud compared with on-prem environments (or at least those that are not part of a distributed or trusted partner cloud). IT decision-makers should pay close attention to these megatrends because they're not just transient issues to be ignored once 2023 rolls around; they guide the development of cloud security and technology, and will continue to do so for the foreseeable future. At a high level, these eight megatrends are:

Economy of scale: Decreasing the marginal cost of security raises the baseline level of security.

Shared fate: A flywheel of increasing trust drives more transition to the cloud, which compels even higher security and even more skin in the game from the cloud provider.

Healthy competition: The race by deep-pocketed cloud providers to create and implement leading security technologies is the tip of the spear of innovation.

Cloud as the digital immune system: Every security update the cloud gives the customer is informed by some threat, vulnerability, or new attack technique, often identified through someone else's experience. Enterprise IT leaders use this accelerating feedback loop to get better protection.

Software-defined infrastructure: Cloud is software-defined, so it can be dynamically configured without customers having to manage hardware placement or cope with administrative toil. From a security standpoint, that means specifying security policies as code and continuously monitoring their effectiveness.

Increasing deployment velocity: Because of the cloud's vast scale, providers have had to automate software deployments and updates, usually with automated continuous integration/continuous deployment (CI/CD) systems. That same automation delivers security enhancements, resulting in more frequent security updates.

Simplicity: Cloud becomes an abstraction-generating machine for identifying, creating, and deploying simpler default modes of operating securely and autonomically.

Sovereignty meets sustainability: The cloud's global scale and ability to operate in localized and distributed ways creates three pillars of sovereignty.
This global scale can also be leveraged to improve energy efficiency.

Let's look at these megatrends in more depth.

Economy of scale: Decreasing marginal cost of security

Public clouds are of sufficient scale to implement levels of security and resilience that few organizations have previously constructed. At Google, we run a global network and build our own systems, networks, storage, and software stacks. We equip all of this with a level of default security that has not been seen before, from our Titan security chips, which assure a secure boot, to our pervasive data-in-transit and data-at-rest encryption, and we make available confidential computing nodes that encrypt data even while it's in use. We prioritize security, of course, but prioritizing security becomes easier and cheaper because the cost of an individual control at such scale decreases per unit of deployment. As the scale increases, the unit cost of control goes down. As the unit cost goes down, it becomes cheaper to put those increasing baseline controls everywhere. Finally, where there is necessary incremental cost to support specific configurations, enhanced security features, and services to support customer security operations and updates, even the per-unit cost of that will decrease. It may be chargeable, but it is still a lower cost than on-prem services, whose economics are going in the other direction. Cloud is, therefore, the strategic epitome of raising the security baseline by reducing the cost of control. The measurable level of security can't help but increase.

Shared fate: The flywheel of cloud expansion

The long-standing shared responsibility model is conceptually correct: the cloud provider offers a secure base infrastructure (security of the cloud) and the customer configures their services on that in a secure way (security in the cloud). But if the shared responsibility model is used more to allocate responsibility when incidents occur and less as a means of understanding mutual collective responsibility, then we are not living up to mutual expectations or responsibility. Taking a broader view of a "shared responsibility" model, we should use such a model to create a mutually beneficial shared fate. We're in this together. We know that if our customers are not secure, then we as cloud providers are collectively not successful. This shared fate extends beyond just Google Cloud and our customers; it affects all the clouds, because a trust issue in one impacts the trust in all. If that trust issue makes the cloud "look bad," then current and potential future customers might shy away from the cloud, which ultimately puts them in a less-secure position. This is why our security mission is a triad of Secure the Cloud (not only Google Cloud), Secure the Customer (shared fate), and Secure the Planet (and beyond). Further, "shared fate" goes beyond just the reality of shared consequences. We view it as a philosophy of deeply caring about customer security, which gives rise over time to elements like:

Secure-by-default configurations. Our default configurations ensure security basics have been enabled and that all customers start from a high security baseline, even if some customers change that later.

Secure blueprints. Highly opinionated configurations for assemblies of products and services in secure-by-default ways, with actual configuration code, so customers can more easily bootstrap a secure cloud environment.

Secure policy hierarchies.
Setting policy intent at one level in an application environment should automatically configure down the stack so there are no surprises or additional toil in lower-level security settings.

Consistent availability of advanced security features. Providing advanced features to customers across a product suite, and making them available for new products at launch, is part of the balancing act between faster new launches and the need for security consistency across the platform. We reduce the risks customers face by consistently providing advanced security features.

High-assurance attestation of controls. We provide this through compliance certifications, audit content, regulatory compliance support, and configuration transparency for ratings and insurance coverage from partners such as our Risk Protection Program.

Shared fate drives a flywheel of cloud adoption. Visibility into the presence of strong default controls and transparency into their operation increase customer confidence, which in turn drives more workloads coming onto the cloud. The presence of, and potential for, more sensitive workloads in turn inspires the development of even stronger default protections that benefit customers.

Healthy competition: The race to the top

The pace and extent of security feature enhancement to products is accelerating across the industry. This massive, global-scale competition to keep increasing security in tandem with agility and productivity is a benefit to all. For the first time in history, we have companies with vast resources working hard to deliver better security, as well as more precise and consistent ways of helping customers manage security. While some are ahead of others, perhaps sustainably so, what is consistent is that cloud will always lead on-prem environments, which have less of a competitive impetus to provide progressively better security. On-prem may not ever go away completely, but cloud competition drives security innovation in a way that on-prem hasn't and won't.

Cloud as the digital immune system: Benefit for the many from the needs of the few(er)

Security improvements in the cloud happen for several reasons:

The cloud provider's large number of security researchers and engineers postulate a need for an improvement based on deep theoretical and practical knowledge of attacks.

A cloud provider with significant visibility into the global threat landscape applies knowledge of threat actors and their evolving attack tactics to drive not just specific new countermeasures but also means of defeating whole classes of attacks.

A cloud provider deploys red teams and world-leading vulnerability researchers to constantly probe for weaknesses that are then mitigated across the platform.

The cloud provider's software engineers often incorporate and curate open-source software and often support the community to drive improvements for the benefit of all.

The cloud provider embraces vulnerability discovery and bug bounty programs to attract many of the world's best independent security researchers.

And, perhaps most importantly, the cloud provider partners with many of its customers' security teams, who have a deep understanding of their own security needs, to drive security enhancements and new features across the platform.

This is a vast, global forcing function of security enhancements which, given the other megatrends, is applied relatively quickly and cost-effectively.
If the customer's organization cannot apply this level of resources, and realistically even some of the biggest organizations can't, then an optimal security strategy is to embrace every security feature update the cloud provides to protect networks, systems, and data. It's like tapping into a global digital immune system.

Software-defined infrastructure: Continuous controls monitoring vs. policy intent

One of the sources of the comparative advantage of the cloud over on-prem is that it is a software-defined infrastructure. This is a particular advantage for security, since configuration in the cloud is inherently declarative and programmatically applied. It also means that configuration code can be overlaid with embedded policy intent (policy-as-code and controls-as-code). The customer validates their configuration by analysis, and then can continuously assure that the configuration corresponds to reality. They can model changes and apply them with less operating risk, permitting phased-in changes and experiments. As a result, they can take more aggressive stances and apply tighter controls with less reliability risk. This means they can easily add more controls to their environment and update it continuously. This is another example of where cloud security aligns fully with business and technology agility.

The BeyondProd model and the SLSA framework are prime examples of how our software-defined infrastructure has helped improve cloud security. BeyondProd and the BeyondCorp framework apply zero-trust principles to protecting cloud services. Just as not all users are in the same physical location or using the same devices, developers do not all deploy code to the same environment. BeyondProd enables microservices to run securely with granular controls in public clouds, private clouds, and third-party hosted services. The SLSA framework applies this approach to the complex nature of modern software development and deployment. Developed in collaboration with the Open Source Security Foundation, the SLSA framework formalizes criteria for software supply chain integrity. That's no small hill to climb, given that today's software is made up of code, binaries, networked APIs, and their assorted configuration files. Managing security in a software-defined infrastructure means the customer can intrinsically deliver continuous controls monitoring and constant inventory assurance, and can operate at an "efficient frontier" of a highly secure environment without having to incur significant operating risks.

Increasing deployment velocity

Cloud providers use a continuous integration/continuous deployment model. This is a necessity for enabling innovation through frequent improvements, including security updates supported by a consistent version of products everywhere, as well as for achieving reliability at scale. Cloud security and other mechanisms are API-based and uniform across products, which enables the management of configuration in programmatic ways, also known as configuration-as-code. When configuration-as-code is combined with the overall nature of cloud being a software-defined infrastructure, it enables customers to implement CI/CD approaches for software deployment and configuration to enable consistency in their use of the cloud. This automation and increased velocity decrease the time customers spend waiting for fixes and features to be applied. That includes the speed of deploying security features and updates, and it permits fast roll-back for any reason.
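As a purely illustrative sketch of what policy-as-code can look like in such a pipeline (the resource descriptions and rules below are hypothetical, not any Google Cloud API), a CI job can fail a proposed change whenever the declared configuration violates the stated policy intent:

# Hypothetical policy-as-code check run in CI before a configuration change
# is applied; the resource schema here is illustrative, not a real cloud API.

DECLARED_RESOURCES = [
    {"kind": "bucket", "name": "reports", "public_access": False, "cmek": True},
    {"kind": "bucket", "name": "scratch", "public_access": True, "cmek": False},
]

# Policy intent expressed as data: each rule names the field it constrains
# and the value it requires for every resource of the given kind.
POLICIES = [
    {"kind": "bucket", "field": "public_access", "required": False,
     "reason": "buckets must not be publicly accessible"},
    {"kind": "bucket", "field": "cmek", "required": True,
     "reason": "buckets must use customer-managed encryption keys"},
]

def evaluate(resources, policies):
    """Return human-readable violations of the declared policy intent."""
    violations = []
    for resource in resources:
        for policy in policies:
            if resource["kind"] != policy["kind"]:
                continue
            if resource.get(policy["field"]) != policy["required"]:
                violations.append(f"{resource['name']}: {policy['reason']}")
    return violations

if __name__ == "__main__":
    found = evaluate(DECLARED_RESOURCES, POLICIES)
    for violation in found:
        print("POLICY VIOLATION:", violation)
    # A non-zero exit code blocks the deployment in the CI pipeline.
    raise SystemExit(1 if found else 0)

Because the same check runs on every proposed change, drift between policy intent and deployed configuration is caught before rollout rather than discovered later in an audit.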
Ultimately, this means that the customer can move even faster yet with demonstrably less risk, eating and having your cake, as it were. Overall, we find deployment velocity to be a critical tool for strong security.

Simplicity: Cloud as an abstraction machine

A common concern about moving to the cloud is that it's too complex. Admittedly, starting from scratch and learning all the features the cloud offers may seem daunting. Yet even today's feature-rich cloud offerings are much simpler than prior on-prem environments, which are far less robust. The perception of complexity comes from people being exposed to the scope of the whole platform, despite more abstraction of the underlying platform configuration. In on-prem environments, there are large teams of network engineers, system administrators, system programmers, software developers, security engineering teams, storage admins, and many more roles and teams. Each has their own domain or silo to operate in. That loose-flying collection of technologies, with its myriad configuration options and incompatibilities, required a degree of artisanal engineering that represents more complexity and less security and resilience than customers will encounter in the cloud. Cloud is only going to get simpler, because the market rewards cloud providers for abstraction and autonomic operations. In turn, this permits more scale and more use, creating a relentless hunt for abstraction. Like our digital immune system analogy, the customer should see the cloud as an abstraction-pattern-generating machine: it takes the best operational innovations from tens of thousands of customers and assimilates them for the benefit of everyone. The increased simplicity and abstraction permit more explicit assertion of security policy in more precise and expressive ways, applied in the right context. Simply put, simplicity removes more potential surprise, and security issues are often rooted in surprise.

Sovereignty meets sustainability: Global to local

The cloud's global scale and ability to operate in localized and distributed ways creates three potential pillars of sovereignty, which will be increasingly important in all jurisdictions and sectors. It can intrinsically support the need for national or regional controls and limits on data access, as well as delegation of certain operations and means for greater portability across services. The global footprint of many cloud providers means that cloud can more easily meet national or regional deployment needs. Workloads can also be more easily deployed to more energy-efficient infrastructures. That, coupled with cloud's inherent efficiency due to higher resource utilization, means cloud is more sustainable overall. By engaging with customers and policymakers across these pillars, we can provide solutions that address their requirements while optimizing for additional considerations like functionality, cost, infrastructure consistency, and developer experience.

Data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers think are necessary. Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to grant access to these keys only on the basis of detailed access justifications, and protecting data in use. With these features, the customer is the ultimate arbiter of access to their data.
Operational sovereignty provides customers with assurances that the people working at a cloud provider cannot compromise customer workloads. The customer benefits from the scale of a multi-tenant environment while preserving control similar to a traditional on-prem environment. Examples of these controls include restricting the deployment of new resources to specific provider regions, and limiting support personnel access based on predefined attributes such as citizenship or a particular geographic location.

Software sovereignty provides customers with assurances that they can control the availability of their workloads and run them wherever they want, without being dependent on or locked in to a single cloud provider. This includes the ability to survive events that require them to quickly change where their workloads are deployed and what level of outside connection is allowed. This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risks: first, when customers have access to platforms that embrace open APIs and services; and second, when customers have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-prem, using orchestration tooling. Examples of these controls are platforms that allow customers to manage workloads across providers, and orchestration tooling that allows customers to create a single API that can be backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

This overall approach also provides a means for organizations (and groups of organizations that make up a sector or national critical infrastructure) to manage concentration risks. They can do this either by relying on the increased regional and zonal isolation mechanisms in the cloud, or through improved means of configuring resilient multi-cloud services. This is also why the commitment to open source and open standards is so important.

The bottom line is that cloud computing megatrends will propel security forward faster, for less cost and less effort, than any other security initiative. With the help of these megatrends, the advantage of cloud security over on-prem is inevitable.
Source: Google Cloud Platform
AWS Firewall Manager now lets you centrally deploy AWS Shield Advanced automatic application layer (L7) DDoS protection across all accounts in your organization. AWS Shield Advanced automatic L7 DDoS protection blocks application-layer DDoS events without requiring manual intervention. With this launch, AWS Firewall Manager security administrators can now enable automatic L7 DDoS protection for all accounts using the Firewall Manager security policy for AWS Shield Advanced.
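A rough sketch of what this could look like with boto3 is below. The put_policy call and its top-level fields are standard Firewall Manager API usage, but the ManagedServiceData fields for the automatic L7 response are an assumption here and should be verified against the current AWS Firewall Manager documentation before use:

import json
import boto3

# Sketch: centrally enable Shield Advanced automatic application layer (L7)
# DDoS protection via an AWS Firewall Manager security policy, run from the
# Firewall Manager administrator account.
fms = boto3.client("fms", region_name="us-east-1")

managed_service_data = {
    "type": "SHIELD_ADVANCED",
    # Assumed schema for the automatic L7 DDoS response configuration;
    # confirm field names against the AWS documentation.
    "automaticResponseConfiguration": {
        "automaticResponseStatus": "ENABLED",
        "automaticResponseAction": "BLOCK",
    },
}

fms.put_policy(
    Policy={
        "PolicyName": "shield-advanced-auto-l7",
        "SecurityServicePolicyData": {
            "Type": "SHIELD_ADVANCED",
            "ManagedServiceData": json.dumps(managed_service_data),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
    }
)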
Source: aws.amazon.com
Amazon EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in R, Python, Scala, and PySpark. With EMR Studio, you get fully managed notebooks based on JupyterLab, the next-generation web-based user interface for the open-source Jupyter project. Today we are pleased to announce that EMR Studio has been updated to JupyterLab v3.1.4, bringing an improved user experience and new usability features.
Source: aws.amazon.com