Industrial strategy: EU plans alliance for clean hydrogen
The EU Commission's new industrial strategy is intended to make Europe greener and more digital. The energy and IT industries approve. (Hydrogen, Fuel cell)
Source: Golem
At Microsoft Ignite, we announced new Microsoft Azure Migrate assessment capabilities that further simplify migration planning. In this post, I will talk about how you can plan the migration of physical servers. Using this feature, you can also plan the migration of virtual machines running on any hypervisor or cloud. You can get started right away with these features by creating an Azure Migrate project or using an existing project.
Previously, Azure Migrate: Server Assessment only supported VMware and Hyper-V virtual machine assessments for migration to Azure. At Ignite 2019, we added physical server support for assessment features like Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis. You can now plan at scale, assessing up to 35K physical servers in one Azure Migrate project. If you use VMware or Hyper-V as well, you can discover and assess both physical and virtual servers in the same project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.
While this feature is in preview, it is covered by customer support and can be used for production workloads. Let us look at how the assessment helps you plan migration.
Azure suitability analysis
The assessment checks Azure support for each server discovered and determines whether the server can be migrated as-is to Azure. If incompatibilities are found, remediation guidance is automatically provided. You can customize your assessment by changing its properties and recomputing the assessment. Among other customizations, you can choose a specific virtual machine series and specify the uptime of the workloads you will run in Azure.
Cost estimation and sizing
Assessment also provides detailed cost estimates. Performance-based rightsizing uses the performance data of your on-premises servers to recommend a suitable Azure virtual machine and disk SKU. This helps you optimize cost and right-size as you migrate servers that might be over-provisioned in your on-premises data center. You can apply subscription offers and Reserved Instance pricing to the cost estimates.
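To make the rightsizing idea concrete, here is a minimal Python sketch, not Server Assessment's actual algorithm: it picks the cheapest VM size from a small illustrative SKU table whose capacity covers a server's observed 95th-percentile usage plus some headroom. The SKU names, prices, and comfort factor are assumptions for illustration only.

```python
# Hypothetical illustration of performance-based rightsizing.
# The SKU list, prices, and comfort factor are illustrative, not Azure's actual catalog or logic.
from dataclasses import dataclass

@dataclass
class VmSku:
    name: str
    vcpus: int
    memory_gib: float
    monthly_usd: float  # illustrative pay-as-you-go price

SKUS = [
    VmSku("Standard_D2s_v3", 2, 8, 90.0),
    VmSku("Standard_D4s_v3", 4, 16, 180.0),
    VmSku("Standard_D8s_v3", 8, 32, 360.0),
]

def rightsize(p95_cpu_cores: float, p95_memory_gib: float, comfort: float = 1.3) -> VmSku:
    """Return the cheapest SKU whose capacity covers the observed p95 usage plus headroom."""
    needed_cores = p95_cpu_cores * comfort
    needed_mem = p95_memory_gib * comfort
    candidates = [s for s in SKUS if s.vcpus >= needed_cores and s.memory_gib >= needed_mem]
    if not candidates:
        raise ValueError("No SKU in the table is large enough; extend the catalog.")
    return min(candidates, key=lambda s: s.monthly_usd)

# Example: a server provisioned with 8 cores / 32 GiB but only using ~1.5 cores / 6 GiB at p95
print(rightsize(p95_cpu_cores=1.5, p95_memory_gib=6.0).name)  # -> Standard_D2s_v3
```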
Dependency analysis
Once you have established cost estimates and migration readiness, you can plan your migration phases. Using the dependency analysis feature, you can understand which workloads are interdependent and need to be migrated together. This also helps ensure you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration by reviewing the dependencies.
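As a rough illustration of how exported dependency data can drive grouping, the following sketch builds connected components over a list of server-to-server connections, so servers that talk to each other directly or indirectly land in the same migration group. The connection pairs and server names are hypothetical and do not reflect the actual export schema.

```python
# Hypothetical sketch: group servers into migration waves by connected dependencies.
# The connection pairs below stand in for dependency data exported in tabular form.
from collections import defaultdict

connections = [
    ("web-01", "app-01"),
    ("app-01", "sql-01"),
    ("web-02", "app-02"),
    ("app-02", "sql-01"),
    ("report-01", "sql-02"),
]

# Build an undirected adjacency list.
graph = defaultdict(set)
for src, dst in connections:
    graph[src].add(dst)
    graph[dst].add(src)

def connected_components(adjacency):
    """Yield sets of servers that communicate with each other directly or indirectly."""
    seen = set()
    for node in adjacency:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        yield component

for i, group in enumerate(connected_components(graph), start=1):
    print(f"Migration group {i}: {sorted(group)}")
```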
Assess your physical servers in four simple steps
1. Create an Azure Migrate project and add the Server Assessment solution to the project.
2. Set up the Azure Migrate appliance and start discovery of your servers. To set up discovery, the server names or IP addresses are required. Each appliance supports discovery of 250 servers; you can set up more than one appliance if required.
3. Once you have successfully set up discovery, create assessments and review the assessment reports.
4. Use the application dependency analysis features to create and refine server groups to phase your migration.
When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You can read more about migrating physical servers here. In the coming months, we will add support for application discovery and agentless dependency analysis on physical servers as well.
Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Brazil, Canada, Europe, France, India, Japan, Korea, United Kingdom, and United States geographies.
Get started right away by creating an Azure Migrate project. In the upcoming blogs, we will talk about import-based assessments, application discovery, and agentless dependency analysis.
Resources to get started
Tutorial on how to assess physical servers using Azure Migrate: Server Assessment.
Prerequisites for assessment of physical servers
Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 250 servers. You can discover more servers by adding more appliances.
Tutorial on how to migrate physical servers using Azure Migrate: Server Migration.
Source: Azure
With Load Value Injection, the attacker injects faulty values into the security enclave. (Meltdown, Processor)
Source: Golem
Cisco is launching equipment for 5G networks that starts where the radio link ends. It is designed to handle the high data volumes. (Cisco, Network)
Source: Golem
The increased use of renewables, resiliency challenges, and sustainability concerns are all disrupting the energy industry today. New technologies are accelerating the way we source, store, and distribute energy. With IoT, we can gain new insights about the physical world that enable us to optimize and create more efficient processes, reduce energy waste, and track specific consumption. This is a great opportunity for IoT to support power and utilities (P&U) companies across grid assets, electric vehicles, energy optimization, load balancing, and emissions monitoring.
We've recently published a new IoT Signals report focused on the P&U industry. The report provides an industry pulse on the state of IoT adoption to help inform us how to better serve our partners and customers, as well as help energy companies develop their own IoT strategies. We surveyed global decision-makers in P&U organizations to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.
The study found that while IoT is almost universally adopted in P&U, it comes with complexity. Companies are commonly deploying IoT to improve the efficiency of operations and employee productivity, but can be challenged by skills and knowledge shortages, privacy and security concerns, and timing and deployment issues. To summarize the findings:
Top priorities and use cases for IoT in power and utilities
Optimizing processes through automation is critical for P&U IoT use. Top IoT use cases in P&U include automation-heavy processes such as smart grid automation, energy optimization and load balancing, smart metering, and predictive load forecasting. In support of this, artificial intelligence (AI) is often a component of energy IoT solutions, and they are often budgeted together. Almost all adopters have either already integrated AI into an IoT solution or are considering integration.
Using IoT to improve both data security and employee safety is a top priority. Almost half of decision-makers we talked to use IoT to make their IT practices more secure. Another third are implementing IoT to make their workplaces safer, as well as improve the safety of their employees.
P&U companies also leverage IoT to secure their physical assets. Many P&U companies use IoT to secure various aspects of their operations through equipment management and infrastructure maintenance.
The future is bright: IoT adoption continues to focus on automation, with growth in use cases related to optimizing energy and creating more efficient maintenance systems.
Today, customers around the world are telling us they are heavily investing in four common use cases for IoT in the energy sector:
Grid asset maintenance
Visualize your grid’s topology, gather data from grid assets, and define rules to trigger alerts. Use these insights to predict maintenance and provide more safety oversight. Prevent failures and avoid critical downtime by monitoring the performance and condition of your equipment.
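As a simplified sketch of the "rules that trigger alerts" pattern described above, the snippet below evaluates a few threshold rules against a single grid-asset telemetry reading. The telemetry fields, thresholds, and asset names are made-up examples; a production solution would run such rules in a managed IoT service rather than an in-process loop.

```python
# Illustrative sketch: evaluate simple condition rules over grid-asset telemetry
# and raise alerts. Telemetry fields, thresholds, and asset names are hypothetical.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool]]

RULES: list[Rule] = [
    ("Transformer overheating", lambda r: r["oil_temp_c"] > 95),
    ("Load above rated capacity", lambda r: r["load_pct"] > 100),
    ("Abnormal vibration on switchgear", lambda r: r["vibration_mm_s"] > 7.1),
]

def evaluate(reading: dict) -> list[str]:
    """Return the names of all rules triggered by a single telemetry reading."""
    return [name for name, predicate in RULES if predicate(reading)]

reading = {"asset_id": "transformer-17", "oil_temp_c": 98.2, "load_pct": 87.0, "vibration_mm_s": 2.3}
for alert in evaluate(reading):
    print(f"ALERT for {reading['asset_id']}: {alert}")
```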
Energy optimization and load balancing
Balance energy supply and demand to alleviate pressure on the grid and prevent serious power outages. Avoid costly infrastructure upgrades and gain flexibility by using distributed energy resources to drive energy optimization.
Emissions monitoring and reduction
Monitor emissions in near real-time and make your emissions data more readily available. Work towards sustainability targets and clean energy adoption by enabling greenhouse gas and carbon accounting and reporting.
E-mobility
Remotely maintain and service electric vehicle (EV) charging points that support various charging speeds and vehicle types. Make it easier to own and operate electric vehicles by incentivizing ownership and creating new visibility into energy usage.
Learn more about IoT for energy
Read about the real-world customers doing incredible things with IoT for energy, where you can learn about market leaders like Schneider Electric making remote asset management easier using predictive analytics.
"Traditionally, machine learning is something that has only run in the cloud … Now, we have the flexibility to run it in the cloud or at the edge—wherever we need it to be." Matt Boujonnier, Analytics Application Architect, Schneider Electric.
Read the blog where we announced Microsoft will be carbon negative by 2030 and discussed our partner Vattenfall delivering a new, highly transparent 24/7 energy matching solution, a first-of-its-kind approach that gives customers the ability to choose the green energy they want and ensure their consumption matches that goal using Azure IoT.
We are committed to helping P&U customers bring their vision to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better outcomes for energy providers, energy users, and the planet. We are heavily investing in this space, committing $5 billion in IoT and intelligent edge innovation by 2022, and growing our IoT and intelligent edge partner ecosystem.
When IoT is foundational to a transformation strategy, it can have a significantly positive impact on the bottom line, customer experiences, and products. We are invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success. Read the full IoT Signals energy report and learn how we're helping power and utilities companies embrace the future and unlock new opportunities with IoT.
Source: Azure
I remember the first time one of my co-workers told me about Docker. There is a longer story behind it, but it ended with “it was so easy and saved me so much time.” That compelled me to install Docker and try it for myself. Yup, she was right. Easy, simple, efficient. Sometime later, at a conference, while catching up with some friends who are developers, I asked them “how are things going?” The conversation eventually led to the topic of where things are going in the container space. I asked, “what’s the biggest issue you are having right now?” I expected the response to be something Kubernetes related. I was surprised the answer was “managing all the tech that gets my code deployed and running.”
The above sentiment is echoed by our CEO, Scott Johnston, in this post. Millions of you use Docker today (check out the Docker Index for the latest usage stats), and we are so very thankful for the vibrant Docker Community. We heard from you that easily going from code to cloud is a problem, and Scott outlined the complexities. There are many choices across the inner loop, packaging, registry, CI, security, CD, and public cloud runtimes. Those choices exist at almost every step, and once you make those choices, you have to stitch them together and manage them yourself. Things are a little easier if you are “all-in” on a particular public cloud provider.
However, what if you are a developer in a small team at a startup, and need something easy, fast, and efficient? Or, if you are a developer who is part of a team in a large organization that uses multiple clouds? Not so straightforward.
This is where Docker will be spending our effort to help: building on the foundational Docker tools, Docker Desktop and Docker Hub, to help you, the developer, get your work from SCM to public cloud runtime in the easiest, most efficient, and cloud-agnostic way.
How are we going to do this? By focusing on developer experience through Docker Desktop, partnering with the ecosystem, and making Docker Hub the nexus for all the integrations, configuration, and management of the application components which constitute your apps and microservices.
First, we will be expanding on the tooling and experiences in Docker Desktop to (a) accelerate the onboarding of new developers to development team processes and workflow, (b) help new developers onboard to developing with containers, and (c) provide features that help improve team collaboration and communication.
We believe a key way to help here is providing more features for the Docker CLI and Docker Desktop UI delivered from Docker Hub. We want to help you accomplish as much as possible in your local development environment without having to jump around interfaces. We also want you to be able to access and interact with services upstream (registry, CI, deployment to runtime) without having to leave the CLI. More to come here.
In addition, we will expand Docker Hub to help you manage all the application components you generate as part of development and deployment. Containers, serverless functions, <insert YAML here>, and all the lineage and metadata which these components generate. Docker Hub will be more than just a registry.
Speaking of “more than just a registry”, we will make Docker Hub the central point for the ecosystem of tools that partner with us in delivering you a great experience. Docker Hub will provide a range of pipeline options, from highly abstracted, opinionated options to ones you construct and stitch together yourself. We’ve already begun talking with some great partners in the industry and are excited to bring to you what we’ve been thinking here. The overall goal is to provide solutions that match your level of maturity or desired level of abstraction, all in a multi-cloud and vendor-neutral way.
Across all of the above, open source will be at the center. Compose, Engine, and Notary will continue to be big contributors to our products, especially Docker Desktop. We will continue to build on these projects with the community, and you will see us contributing to other projects as well.
We will deliver all of this through a monthly SaaS subscription model. We want you to be able to consume on your terms.
Finally, we very much want your participation in how we think about helping you deliver the best products to your customers. Today, for the first time at Docker, we are launching a public roadmap. You can find it here. We invite you to participate by adding new feature ideas in the issues, up-voting other feature ideas you think are great (and down-voting ones you think are not), and helping us with prioritization. We are here for you and want to make sure we are as transparent as possible, while constantly listening to your feedback.
We look forward to working with you to help Docker help you and your customers. If you would like to engage with us, please do so!
I’ll be doing an AMA about this strategy during our #myDockerBday Live Show on March 26, 2020. RSVP with your Docker ID here or on meetup.com here. I’ll be speaking at the Docker Las Vegas Meetup on March 19th, 2020. Sign up here. Save the date for our virtual conference DockerCon Live on May 28, 2020. Sign up for updates here. Find me on GitHub through our public roadmap!
Thank you! Onward.
The post Helping You and Your Development Team Build and Ship Faster appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
After many rumors, Sony has confirmed that the previously PS4-exclusive Horizon Zero Dawn will be ported to PC in summer 2020. (Horizon Zero Dawn, Sony)
Source: Golem
Mercedes is equipping its electric van, the E-Vito, with a larger battery. The digital interior rear-view mirror is also a first. (Electric car, Technology)
Source: Golem
At Google Cloud, our Dataproc team has been working to improve the integration of open source processing engines with Kubernetes. Dataproc is our managed service for running many of these open source engines, including Apache Spark and Apache Hadoop. Our goal is to make the underlying infrastructure for data processing engines more reliable and secure, so your company can trust open source to run your business, no matter where the data lives. With this in mind, we’re announcing the availability of the open source Apache Flink for Kubernetes operator in the Google Cloud Marketplace. With this offering, Google’s experience and best practices for running Apache Flink are captured and automated in a Kubernetes Operator, and made easily deployable in your own cluster within minutes. Anthos customers can also deploy this new operator in their on-prem environments by following these prerequisites.
This follows up on several releases designed to make it easier to run open source data processing on Kubernetes. This means you can reduce stack dependencies and have your jobs run across multi-cloud and hybrid environments. We started by releasing an open source Kubernetes operator for Apache Spark and followed up by integrating the Spark operator with the Dataproc Jobs API. This gives you a single place to securely manage containerized Spark workloads across various types of deployments, all with the support and SLAs that Dataproc provides.
Make open source easier with Google Cloud
Open source has always been a core pillar of Google Cloud’s data and analytics strategy. Starting with the MapReduce paper in 2004, to more recent open source releases of Tensorflow for ML, Apache Beam for data processing, and even Kubernetes itself, we’ve built communities around open source technology and across company boundaries. To accompany these popular open source technologies, Google Cloud offers managed versions of the most popular open source software applications. Dataproc is one, and Anthos is an open hybrid and multi-cloud application platform that enables you to modernize your existing applications, build new ones, and run them anywhere in a secure manner. Anthos is also built on open source technologies pioneered by Google, including Kubernetes, Istio, and Knative.
Why should you run Apache Flink on Kubernetes?
Recently, our Dataproc team has been exploring how customers use open source data processing technologies like Apache Flink, and we’ve heard several pain points related to library and version dependencies that break systems, and balancing that with having to isolate environments and have resources that sit idle. These are challenges that Kubernetes and Anthos are well-positioned to address.
Kubernetes can improve the reliability of your infrastructure. This is very important for Apache Flink, since many Apache Flink jobs are streaming applications that need to stay up 24/7 and be resistant to failure. By combining the features of Kubernetes with Apache Flink, operators have much more control over their architecture and can keep streaming jobs up and running while still performing updates, patches, and upgrades of their system. By using containerization, you can even have different Flink jobs with conflicting versions and different dependencies all sharing the same Kubernetes cluster. The Apache Flink Runner for Apache Beam also makes Beam pipelines portable to nearly any public or private cloud environment.
We hear that developers and data engineers love Google Cloud’s Dataflow for streaming pipelines because it offers a way to run Apache Beam data processing pipelines in the cloud with fully automated provisioning and management of resources. However, many companies have either technical or compliance constraints on what data can be taken to the cloud. Using the Kubernetes operator for Apache Flink makes it easy to deploy Flink jobs, including ones authored with the Beam SDK that target the Flink runner. This enables Flink users to run Beam pipelines in the cloud using a service like GKE, while still making it easy to run jobs on-prem in Anthos.
Apache Beam fills an important gap for Flink users who prefer a mature Python API. For example, if you are a machine learning engineer using TFX on-prem for your end-to-end machine learning lifecycle, you can author your pipeline using the Beam and TFX libraries, then run them on the Flink runner.
You can get started with the Flink Operator in Kubernetes by deploying it from the Google Cloud Marketplace today. For those interested in contributing to the project, find us on GitHub. Learn more in this video about the Flink on Kubernetes operator and take a look at the operations it provides.
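As a concrete example of the Beam-on-Flink workflow described above, here is a minimal Apache Beam word-count pipeline in Python that targets the Flink runner via pipeline options. The Flink master address and environment settings are assumptions that depend on how the operator exposes the JobManager in your cluster.

```python
# Minimal Apache Beam (Python) pipeline submitted to a Flink cluster.
# The flink_master address is an assumption; point it at your JobManager REST endpoint.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=flink-jobmanager:8081",
    "--environment_type=LOOPBACK",  # simplest setting for local testing; adjust for a real cluster
])

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create lines" >> beam.Create(["flink on kubernetes", "beam on flink"])
        | "Split words" >> beam.FlatMap(str.split)
        | "Pair with 1" >> beam.Map(lambda word: (word, 1))
        | "Count per word" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```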
Source: Google Cloud Platform
Google Cloud, at its core, is about helping organizations drive efficiencies that enable innovation and broad digital transformation. Such is the case with the Chicago Department of Transportation (CDOT), a longtime customer that partnered with SADA to develop the dotMaps application. The application uses Google Cloud in coordination with Google Maps to ingest multiple data sources to power CDOT’s new ChiStreetWork website. This public website helps inform Chicagoans on everything from when and where special events are taking place, to how road repairs and construction projects are impacting traffic patterns for their daily commutes.
Chicago has become one of the most densely populated areas in the United States, with almost a quarter of Illinois residents living within its city limits. The CDOT’s job is to manage traffic in and around the construction on thousands of miles of alleys and street surfaces. Adding to the street congestion are annual festivals like Lollapalooza and the St. Patrick’s Day Parade, Chicago’s famous block parties, and many more events.
The CDOT needed a solution that would put all of its transportation data at the fingertips of Chicago’s residents, while at the same time increasing the accountability and transparency of the city’s public-facing work across all of its departments. The solution is ChiStreetWork, a future-looking forecast website that predicts traffic patterns and disruptions in the same way a customized weather model predicts hailstorms and big temperature shifts.
ChiStreetWork integrates upcoming road and infrastructure projects, along with other street closure information, so citizens can find event dates and locations and pinpoint CDOT permit details, including lane closures and work hours. The site also provides bus routes, potential parking impacts, modified bike routes, viaduct heights, and red light camera locations.
Using a familiar Google Maps interface, residents and visitors can subscribe to a targeted area, like their neighborhoods or workplaces, and define what public works and event information they’d like to receive (and at what frequency). This gives residents a new level of transparency on street work happening in their neighborhoods.
With ChiStreetWork, the CDOT has been able to cut down on calls made to the office, freeing up resources and, at the same time, providing much greater transparency to its citizens.
“I would listen to the calls coming into the office, and citizens just weren’t aware of what was going on. Someone would have a block party and then water management would arrive to dig up the street,” said CDOT Deputy Commissioner Michael Simon. “ChiStreetWork is more user-friendly and coordinates all events and work projects. The new subscription feature makes it easy for residents to get the information they need, and it provides an unprecedented level of visibility.”
The website was revamped in collaboration with Google Cloud, Collins Engineers, Inc., and longtime partner SADA. As one of Google Cloud’s longest-standing partners, SADA’s expertise is built on years of experience in building customer solutions that are tailored for their unique needs.
“We evaluated different companies, and what we liked best was the flexibility, scalability, and speed of Google Cloud,” said Simon. “Working for a city can be very bureaucratic sometimes. Working with Google Cloud, however, allowed us to implement new tools and processes quickly, helping us to get citizens the right information in real time. With Google Cloud, we’ve been able to move forward with an eye on future technologies, including AI.”
In working with agencies like the CDOT, Google Cloud is thrilled to help public sector organizations transform their departments to better serve citizens. We look forward to continuing to partner with state and local customers to identify ways our technologies can positively impact communities and give residents access to critical information. Learn more about our work with the public sector here.
Source: Google Cloud Platform