Quellcode: Ericsson chief views tests of network technology with skepticism
Ericsson is not in favor of testing 5G updates at length, arguing that this would risk security vulnerabilities; reviews, it says, only capture a snapshot. (Ericsson, Nokia)
Source: Golem
Streaming a full-fledged Windows PC over the internet – can that work? Golem.de tests Blade Shadow and the Shadow Ghost console and comes away torn: under the right circumstances the service works very well, but it also fails to deliver on some of its promises. (Streaming, Virtualization)
Source: Golem
Samsung has started series production of embedded MRAM (eMRAM). The manufacturer, otherwise quite open when it comes to capacities, is keeping details under wraps, but the advantages of MRAM over flash memory are said to be enormous. (Samsung, PC hardware)
Source: Golem
Chinese EV startup Nio presented a new, smaller electric car in the form of an SUV just last December and announced a new factory in Shanghai. The factory is now not happening after all; instead, the company has reported a loss in the billions. (Electric car, Technology)
Source: Golem
The Harley-Davidson LiveWire will achieve a greater range than previously stated. The manufacturer has also published further technical details of the roughly 30,000-US-dollar motorcycle. (Harley-Davidson, Technology)
Source: Golem
On Thursday, Huawei filed its lawsuit in Texas against being condemned without a trial. At the same time, the equipment vendor criticizes the USA for falling behind technologically on 5G because free competition is being obstructed. (5G, Huawei)
Source: Golem
Just to recap: so far in our ongoing series about the Istio service mesh, we've talked about the benefits of using a service mesh, using Istio for application deployments and traffic management, and how Istio helps you achieve your security goals. In today's installment, we're going to dig further into monitoring, tracing, and service-level objectives. The goal of this post is to demonstrate how you can use Istio to level up your own Site Reliability Engineering (SRE) practices for workloads running in Kubernetes. You can follow along with the step-by-step tutorial here.

The pillars of SRE

At Google, we literally wrote the book on SRE, and it has now become an industry term; but let's quickly review what the term really means to us at Google. The goal of SRE is to improve service reliability and performance and, in turn, the end-user experience. Conceptually, that means proactively managing and incorporating three main components: service level objectives (SLOs), service level agreements (SLAs), and service level indicators (SLIs). We can summarize these as follows:

SLOs: targets you set for overall service health
SLAs: promises you make about your service's health (so they often include specific SLOs)
SLIs: metrics that you use to define the SLO targets

How do we take these ideas from conceptual to practical? To provide guarantees about your service (SLAs), you need to set targets (SLOs) that incorporate several key service metrics (SLIs). That's where Istio and Stackdriver come in.

Surfacing application metrics with Stackdriver Monitoring

In our second post, we talked about how Google Kubernetes Engine (GKE), Istio, and Stackdriver are integrated right out of the box. This means that Stackdriver Monitoring lets you monitor a dozen Istio-specific metrics without any special configuration or setup, including metrics for bytes sent and received, request counts, and roundtrip latencies, for both clients and servers. Once you create a Stackdriver Workspace, you can immediately head to the Metrics Explorer and start visualizing those metrics from Istio. Without any manual instrumentation, Istio provides a significant amount of telemetry information for your workloads, enough to begin thinking about which of those metrics (Istio-provided or GCP-provided) could make for useful SLIs.

Which SLIs make the most sense will depend on your application and deployments, but for Istio-enabled workloads we typically recommend creating Dashboards that include some combination of GKE cluster resource monitoring (node availability, CPU, RAM) along with service request counts and service request/response latency, broken out by Kubernetes Namespaces and/or Pods. The example Dashboard below provides a combined overview of cluster and service health (see the tutorial for steps to set up your own Dashboard).
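As a quick illustration, these Istio-provided metrics can also be pulled programmatically with the Stackdriver Monitoring client library. The minimal sketch below assumes the pre-2.0 google-cloud-monitoring Python package (later releases switched to a request-object style) and uses 'my-gcp-project' as a placeholder project ID.

```python
# Minimal sketch: reading an Istio-provided SLI metric from Stackdriver
# Monitoring with the Python client library (google-cloud-monitoring < 2.0;
# newer versions take request objects instead of positional arguments).
# "my-gcp-project" is a placeholder project ID.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = client.project_path('my-gcp-project')

# Look at the last hour of data.
interval = monitoring_v3.types.TimeInterval()
now = time.time()
interval.end_time.seconds = int(now)
interval.start_time.seconds = int(now - 3600)

# istio.io/service/server/request_count is one of the metrics that the
# Istio-on-GKE Stackdriver integration reports without extra instrumentation.
results = client.list_time_series(
    project_name,
    'metric.type = "istio.io/service/server/request_count"',
    interval,
    monitoring_v3.enums.ListTimeSeriesRequest.TimeSeriesView.FULL)

for series in results:
    # Each time series carries resource labels identifying the workload,
    # which is what makes these metrics usable as per-service SLIs.
    print(series.resource.labels, len(series.points))
```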
After identifying the appropriate SLIs for your deployment, the next step is to create alerting policies that notify you or your team about any problems in your deployment. Alerting policies in Stackdriver are driven by metrics-based conditions that you define as part of the policy; you can also combine multiple metrics-based conditions to trigger alerts when any or all of the conditions are met.

With a working metrics dashboard and alerting policies in place, you're now at a point where you can keep track of the health of each of your services. But what happens when you see an alert? What if it turns out that one of your services has a server response latency that's much higher than expected, and that it's happening on a pretty regular basis? The good news is that now you know there's a problem; the challenge is tracking it down.

Digging into requests using Stackdriver Trace

So far we've been talking about monitoring, but Istio's telemetry support also includes the ability to capture distributed tracing spans directly from individual services. Distributed tracing allows you to track the progression of a single user-driven request and follow along as it is handled by other services in your deployment. Once the Stackdriver Trace API is enabled in your GCP project, Istio's telemetry capture components start sending trace data to Stackdriver, where you can view it in the trace viewer. Without instrumenting any of your services or workloads, Istio captures basic span information, like HTTP requests or RPCs.

This is a good start, but to truly diagnose our example (higher than expected server response latency) we'll need more than just the time it takes to execute a single service call. To get that next level of information, you need to instrument your individual services so that Istio (and by extension, Stackdriver) can show you the complete code path taken by the service being called. Using OpenCensus tracing libraries, you can add tracing statements to your application code. We recommend instrumenting tracing for critical code paths that could affect latency, for example calls to databases, caches, or internal/external services. The following is a Python example of tracing within a Flask application:
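(This is a minimal sketch rather than a complete sample: it assumes the opencensus, opencensus-ext-flask, and opencensus-ext-stackdriver packages, whose import paths and class names vary across OpenCensus releases, and the helper functions called inside the handler are hypothetical placeholders for your own database and downstream-service calls.)

```python
# Minimal sketch: OpenCensus tracing in a Flask service, exporting spans to
# Stackdriver Trace. "my-gcp-project" and the helpers called in the handler
# are placeholders.
from flask import Flask
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.ext.stackdriver.trace_exporter import StackdriverExporter
from opencensus.trace import execution_context
from opencensus.trace.samplers import AlwaysOnSampler

app = Flask(__name__)

# Creates a span for every incoming HTTP request and ships finished spans
# to Stackdriver Trace.
FlaskMiddleware(
    app,
    exporter=StackdriverExporter(project_id='my-gcp-project'),
    sampler=AlwaysOnSampler())  # sample every request; fine for a demo


@app.route('/checkout')
def checkout():
    # Wrap the critical code paths (database lookups, downstream calls) in
    # child spans so a slow request shows where the time is actually spent.
    tracer = execution_context.get_opencensus_tracer()
    with tracer.span(name='load-cart'):
        cart = load_cart()              # hypothetical database call
    with tracer.span(name='charge-payment'):
        receipt = charge_payment(cart)  # hypothetical downstream service call
    return receipt


def load_cart():
    return 'cart'


def charge_payment(cart):
    return 'ok'


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
```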
We instrumented our sample microservices demo using OpenCensus libraries. Once you've deployed that app and the built-in load generator has had a chance to generate some requests, you can head over to Stackdriver Trace to examine one of the higher-latency service calls. As you can see in the diagram above, Stackdriver Trace lets you examine the complete code path and determine the root of the high-latency call.

Examining application output using Stackdriver Logging

The final telemetry component that Istio provides is the ability to direct logs to Stackdriver Logging. By themselves, logs are useful for examining application status or debugging individual functions and processes. And with Istio's telemetry components sending metrics, trace data, and logging output to Stackdriver, you can tie all of your application's events together. Istio's Stackdriver integration allows you to quickly navigate between monitoring dashboards, request traces, and application logs. Taken together, this information gives you a more complete picture of what your app is doing at all times, which is especially useful when an incident or policy violation occurs.

Stackdriver Logging's integration comes full circle with Stackdriver Monitoring by giving you the ability to create metrics based on structured log messages. That means you can create specific log-based metrics, then add them to your monitoring dashboards right alongside your other application monitoring metrics. Stackdriver Logging also provides additional integrations with other parts of Google Cloud: the ability to automatically export logs to Cloud Storage or BigQuery for retention and follow-on ad hoc analysis, respectively, as well as integration with Cloud Pub/Sub, where each output log entry is exported as an individual Pub/Sub message that can then be analyzed in real time using Cloud Dataflow or Cloud Dataproc.

Coming soon: SLOs and service monitoring using Stackdriver

So far we've reviewed the various mechanisms Stackdriver provides to assess your application's SLIs. Now available for early access, Stackdriver will also provide native support for setting SLOs against your specific service metrics. That means you will be able to set specific SLO targets for the metrics you care about, and Stackdriver will automatically generate SLI graphs and track your target compliance over time. If any part of your workload violates your SLOs, you are immediately alerted so you can take action. Interested in learning more? Take a deep dive into Stackdriver Service Monitoring and sign up for early access.

SRE isn't about tools; it's a lifestyle

Think of SRE as a set of practices, not as a specific set of tools or processes. It's a principled approach to managing software reliability and availability through constant awareness of key metrics (SLIs) and how those metrics measure up against your own targets (SLOs), which you might in turn use to provide guarantees to your customers (via SLAs). When you combine the power of Istio and Stackdriver and apply it to your own Kubernetes-based workloads, you end up with an in-depth view of your services and the ability to diagnose and debug problems before they become outages. As you can see, Istio provides a number of telemetry features for your deployments, and when combined with deep Stackdriver integration, you can develop and implement your own SRE practices.

What's next

We haven't even begun to scratch the surface on defining SRE and these terms, so we'd recommend taking a look at SRE fundamentals: SLIs, SLAs, and SLOs as well as SLOs, SLIs, SLAs, oh my – CRE life lessons for more background. To try out the Istio and Stackdriver integration features we discussed here, check out the tutorial. In our next post in the Service Mesh era series, we'll take a deep dive into Istio from an IT perspective and talk about some practical operator scenarios, like maintenance, upgrades, and debugging Istio itself.

Learn more:
Istio and Stackdriver tutorial
Advanced application deployments and traffic management with Istio on GKE
SRE fundamentals: SLIs, SLAs, and SLOs
Drilling down into Stackdriver Service Monitoring
Source: Google Cloud Platform
APIs have become commonplace: they are the de facto standard for connecting apps, data, and services, and in the larger picture they are driving digital transformation in organizations.
With the strategic value of APIs, a continuous integration (CI) and continuous deployment (CD) pipeline has become an important aspect of API development. It allows organizations to automate deployment of API changes without error-prone manual steps, detect issues earlier, and ultimately deliver value to end users faster.
This blog walks you through a conceptual framework for implementing a CI/CD pipeline for deploying changes to APIs published with Azure API Management.
The problem
Organizations today normally have multiple deployment environments (e.g., Development, Testing, Production) and use separate API Management instances for each environment. Some of these instances are shared by multiple development teams, who are responsible for different APIs with different release cadences.
As a result, customers often come to us with the following challenges:
How to automate deployment of APIs into API Management?
How to migrate configurations from one environment to another?
How to avoid interference between different development teams who share the same API Management instance?
We believe the approach described below will address all these challenges.
CI/CD with API Management
The proposed approach is illustrated in the above picture. In this example, there are two deployment environments: Development and Production. Each has its own API Management instance. The Production instance is managed by a designated team, called API publishers. API developers only have access to the Development instance.
The key to this proposed approach is to keep all configurations in Azure Resource Manager templates. These templates should be kept in a source control system; we will use Git as an example. As illustrated in the picture, there is a Publisher repository that contains all configurations of the Production API Management instance in a collection of templates:
Service template: Contains all service-level configurations (e.g., pricing tier and custom domains).
Shared templates: Contain shared resources used throughout an API Management instance (e.g., groups, products, and identity providers).
API templates: Include configurations of APIs and their sub-resources (e.g., operations and policies).
Master template: Ties everything together by linking to all of the other templates.
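As a rough illustration of how the master template ties the others together, here is a minimal sketch that builds a master template as a Python dictionary with two linked deployments, one for the shared resources and one for a single API. The names, API version, and URLs are illustrative placeholders and not the exact layout produced by the tools described later in this post.

```python
# Illustrative sketch only: a master template, expressed as a Python dict,
# that links to a shared template and to one API template. Names, API
# versions, and URLs are placeholders; dependsOn ordering between the
# linked deployments is omitted for brevity.
import json

TEMPLATE_BASE_URL = 'https://example.com/templates'  # hypothetical location


def linked_deployment(name, relative_path):
    """Return a Microsoft.Resources/deployments resource that pulls in
    another Resource Manager template by URI."""
    return {
        'type': 'Microsoft.Resources/deployments',
        'apiVersion': '2019-05-01',
        'name': name,
        'properties': {
            'mode': 'Incremental',
            'templateLink': {'uri': f'{TEMPLATE_BASE_URL}/{relative_path}'},
        },
    }


master_template = {
    '$schema': 'https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#',
    'contentVersion': '1.0.0.0',
    'resources': [
        # Shared resources (groups, products, identity providers, ...).
        linked_deployment('sharedTemplate', 'shared.template.json'),
        # One linked deployment per API; developers typically touch only these.
        linked_deployment('httpbinApiTemplate', 'apis/httpbin.template.json'),
    ],
}

if __name__ == '__main__':
    # Write the template out so it can be checked into the Publisher repository.
    with open('master.template.json', 'w') as f:
        json.dump(master_template, f, indent=2)
```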
API developers will fork and clone the Publisher repository. In most cases, they will focus on API templates for their APIs and should not change the shared or service templates.
When working with Resource Manager templates, we realize there are two challenges for API developers:
First, API developers often work with OpenAPI specifications and may not be familiar with Resource Manager schemas. To simplify this, we created a utility tool that automates the creation of API templates from OpenAPI specifications.
Second, for customers who have already been using API Management, another challenge is how to extract existing configurations into Resource Manager templates. We created another tool to generate templates based on existing configurations.
Once developers have finished developing and testing an API, and have generated the API template, they will submit a pull request to the Publisher repository. API publishers can validate the pull request and make sure the changes are safe and compliant. Most of the validations can be automated as part of the CI/CD pipeline. When the changes are approved and merged successfully, API publishers will deploy them to the Production instance. The deployment can also be easily automated with Azure Pipelines.
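The deployment step itself can be scripted in several ways. As one hedged example, the sketch below uses the azure-mgmt-resource Python SDK to deploy a linked master template into the resource group of the Production instance; it could run from an Azure Pipelines job after the pull request is merged. The resource group, deployment name, template URL, parameter, and environment-variable names are placeholders, and the create_or_update signature shown here matches older SDK releases and differs in newer ones.

```python
# Minimal sketch of the automated deployment step (e.g., run from an Azure
# Pipelines job). Uses the azure-mgmt-resource SDK; all names, URLs, and
# parameters below are placeholders.
import os

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import DeploymentMode

credentials = ServicePrincipalCredentials(
    client_id=os.environ['AZURE_CLIENT_ID'],
    secret=os.environ['AZURE_CLIENT_SECRET'],
    tenant=os.environ['AZURE_TENANT_ID'])

client = ResourceManagementClient(credentials, os.environ['AZURE_SUBSCRIPTION_ID'])

deployment_properties = {
    'mode': DeploymentMode.incremental,
    # Deploy the master template, which links to the shared and API templates.
    'template_link': {'uri': 'https://example.com/templates/master.template.json'},
    'parameters': {'apimServiceName': {'value': 'contoso-prod-apim'}},
}

# Kick off the deployment against the Production resource group and block
# until it finishes.
poller = client.deployments.create_or_update(
    'apim-prod-rg',           # resource group of the Production instance
    'apim-api-deployment',    # deployment name
    deployment_properties)
poller.wait()
print(poller.result().properties.provisioning_state)
```

The same kind of script, pointed at a different resource group and template location, can be used to promote the identical templates from one environment to the next.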
With this approach, the deployment of API changes into API Management instances can be automated, and it is easy to promote changes from one environment to another. Because different API development teams work on different sets of API templates, the approach also reduces the chance of interference between teams.
Next steps
You can find the guidance, examples, and tools in this GitHub repository. Please give it a try and let us know your feedback and questions.
We realize our customers bring a wide range of engineering cultures and existing automation solutions. The approach and tools provided here are not meant to be a one-size-fits-all solution. That's why we published and open-sourced everything on GitHub, so that you can extend and customize the solution.
Source: Azure
As announced, Vodafone has improved LTE coverage at the largest German airports. More than 100 LTE base stations are now in operation there. (Vodafone, Long Term Evolution)
Source: Golem
Ahead of the 5G auction, the German federal government no longer dares to enshrine local roaming in the Telecommunications Act. It does not want to put the March 19 auction at any further risk. (5G, Bundesnetzagentur)
Source: Golem