Adobe: M1 version of Photoshop with Super Resolution released
The Apple Silicon version of Photoshop is here. The new Super Resolution feature upscales images using machine learning. (Photoshop, graphics software)
Source: Golem
Extremely fast and wonderfully easy to service: rarely have we had as much fun with a PC as with Lenovo's Threadripper workstation. A review by Marc Sauter (Lenovo, processor)
Source: Golem
Government websites are reportedly also affected by the cloud outage. OVH intends to provide reserve infrastructure quickly. (Internet, cloud computing)
Source: Golem
A first image of the Toyota X Prologue reveals considerably more detail thanks to a small trick. (Toyota, technology)
Source: Golem
Among 2020’s biggest buzzwords—unprecedented, pivotal, virtual—one emerged as a defining factor of Red Hat’s success amidst the chaos: resilience. Earlier this month, our President and CEO Paul Cormier outlined our path for 2021, which is built upon the resilience of our people and technology.
Source: CloudForms
Songkick is a U.K.-based concert discovery service and live music platform, owned by Warner Music Group, that connects music fans to live events. Annually, we help over 175 million music fans around the world track their favorite artists, discover concerts and live streams, and buy tickets with confidence through our website and mobile apps. We have about 15 developers across four teams, all based in London, and my role is to support those teams by helping them make technical decisions and architect solutions. After migrating to Google Cloud, we wanted a fully managed caching solution that would integrate well with the other Google tools we'd come to love and free our developers to work on innovative, customer-delighting products. Memorystore, Google's scalable, secure, and highly available in-memory service for Redis, helped us meet those goals.

Fully managed Memorystore removed hassles

Our original caching infrastructure was built solely with on-premises Memcached, which we found simple to use at the time. Eventually, we turned to Redis to leverage advanced features like dictionaries and increments, so in our service-oriented architecture we had both of these open source data stores working for us. We had two Redis clusters: one for persistent data, and one as a straightforward caching layer between our front end and our services.

When we were deciding how to use Google Cloud, we realized there was no real advantage to running two caching technologies (Memcached and Redis), so we settled on Redis alone: it could handle everything we used Memcached for, and we would no longer need expertise in both databases. We knew Redis can be more complex to use and manage, but that wasn't a big concern, because with Memorystore it would be completely managed by Google Cloud.
With Memorystore automating complex Redis tasks like high availability, failover, patching, and monitoring, we could redirect that time toward new engineering opportunities. We considered the hours we had spent fixing broken Redis clusters and debugging network problems. Our team's experience leans much more toward development than infrastructure management, so problems with Redis had proven distracting and time-consuming, and with a self-managed tool there would potentially be some user-facing downtime. Memorystore was a secure, fully managed, cost-effective option that promised to spare us those hassles: the benefits of Redis without the cost of managing it. Choosing it was a no-brainer.

How Memorystore works for us

Let's look at a couple of our use cases for Memorystore. We have two levels of caching on Memorystore: the front end caches results from API calls to our services, and some services cache database results. Usually, the caching key for the front-end services is the URL plus any primitive values that get passed. With the URL and the query parameters, the front end checks whether it already has a result or needs to talk to the service. A few services also have a caching layer within the service itself, which talks to Redis before deciding whether to invoke our business logic and query the databases; that cache sits in front of the service and operates on the same principle as the front-end caching.

We also use Fastly as a caching layer in front of our front ends. On an individual page level, a whole page may be heavily cached in Fastly, such as a page showing a leaderboard of the top artists on the platform. Memorystore comes in for user-level content, for example an event page that pulls some information about the artist, some information about the event, and maybe some recommendations for the artists.
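The cache-aside lookup described above (URL plus primitive query parameters as the key, short TTLs) can be sketched roughly as follows; the function names are illustrative, not Songkick's actual code, and a real deployment would pass a `redis.Redis` client pointed at the Memorystore endpoint:

```python
import hashlib
import json


def cache_key(url, params):
    """Build a deterministic cache key from the request URL and its
    primitive query parameters, as the article describes."""
    canonical = json.dumps(params, sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return f"cache:{url}:{digest}"


def fetch_with_cache(redis_client, url, params, fetch_fn, ttl_seconds=600):
    """Cache-aside: return the cached result if present, otherwise call
    the backing service and store its response under a short TTL
    (ten minutes here, matching the article's example)."""
    key = cache_key(url, params)
    cached = redis_client.get(key)
    if cached is not None:
        return json.loads(cached)
    result = fetch_fn(url, params)
    redis_client.setex(key, ttl_seconds, json.dumps(result))
    return result
```

With a hot artist called 100,000 times a day, even this ten-minute TTL collapses most of those calls into cache hits.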
If the Fastly cache on the artist page has expired, the request goes to the front end, which knows to talk to the various services to display all of the requested information on the page. In this case, there might be three separate bits of data sitting in our Redis cache. Our artist pages have components that are not cached in Fastly, so there we rely much more heavily on Redis.

Our Redis cache TTL (time-to-live) tends to be quite low; sometimes we have just a ten-minute cache. Other times, with very static data, we can cache it in Redis for a few hours. We determine a reasonable caching time for each data item and set the TTL accordingly. A particular artist might be requested 100,000 times a day, so even a ten-minute cache makes a huge difference in how many calls per day reach our service. For this use case, we have one highly available Memorystore cluster with about 4 GB of memory and a cache eviction policy of allkeys-lru (least recently used). Right now, that cluster peaks at about 400 requests per second. That's an average day's busy period; it spikes much higher in certain circumstances.

We had two different Redis clusters in our old infrastructure. The first was as just described; the second was persistent Redis. When planning the migration to Google Cloud, we decided to use Redis only for what it really excels at, and to simplify by re-architecting the four or five features that used persistent Redis onto either Cloud SQL for MySQL or BigQuery. We sometimes used Redis to aggregate data; now that we're on Google Cloud, we can just use BigQuery, which gives us far better analysis options than aggregating in Redis.

We also use Memorystore as a distributed mutex.
There are certain actions in our system that we don't want happening concurrently, for example a migration of data for a particular event, where two admins might try to pick up the same piece of work at the same time. If that data migration ran concurrently, it could damage our system. So we use Redis as a mutex lock between different processes, ensuring they happen consecutively instead of concurrently.

Memorystore and Redis work for us in peaceful harmony

We have not seen any problems with Redis since the migration. We also love the monitoring capabilities you get out of the box with Memorystore. When we deploy a new feature, we can easily check whether it suddenly fills the cache, or whether a really low hit ratio indicates we've made an error in our implementation. Another benefit: the Memorystore interface works exactly as if you were talking to Redis directly. We use ordinary Redis in a Docker container in our development environments, so when we're running locally, it's seamless to check that our caching code is doing exactly what it's meant to. We have production and staging environments, both Virtual Private Clouds, each with its own Memorystore cluster. We have unit tests, which never really touch Redis; integration tests, which talk to a local MySQL in Docker and a Redis in Docker; and acceptance tests, browser automation tests that run in the staging environment and talk to Cloud SQL and Memorystore.

Planning encores with Memorystore

For a potential future use case for Memorystore, we're almost certainly going to be adding Pub/Sub to our infrastructure, and we'll use Redis to deduplicate some messages coming from Pub/Sub, such as when we don't want to send the same email twice in quick succession. We're looking forward to Pub/Sub's fully managed service as well, since we're currently running RabbitMQ, which too often requires debugging.
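The distributed mutex described above is commonly built on Redis's atomic SET with the NX (only set if absent) and EX (expiry) options, so a crashed holder can't wedge the lock forever. This is a minimal sketch of that pattern, not Songkick's actual implementation, and it assumes a client configured with decode_responses=True so string comparison works:

```python
import uuid


def acquire_lock(redis_client, name, ttl_seconds=60):
    """Try to take a distributed lock. SET NX succeeds only if the key
    does not already exist, so exactly one process wins; EX makes the
    lock self-expire if the holder crashes. Returns a token on success,
    None if another process holds the lock."""
    token = uuid.uuid4().hex
    ok = redis_client.set(f"lock:{name}", token, nx=True, ex=ttl_seconds)
    return token if ok else None


def release_lock(redis_client, name, token):
    """Release the lock only if we still hold it (our token matches),
    so we never delete a lock a later process has since acquired."""
    key = f"lock:{name}"
    if redis_client.get(key) == token:
        redis_client.delete(key)
        return True
    return False
```

The second admin's acquire_lock call returns None, so their process waits or retries instead of running the same event migration concurrently.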
We performed an experiment using Pub/Sub for that deduplication use case, and it worked really well, so it made for another easy decision.

Memorystore is just one of the Google data cloud solutions we use every day. Others include Cloud SQL, BigQuery, and Dataflow, which cover an ETL pipeline, data warehousing, and our analytics products. There, we aggregate data that artists are interested in, feed it back into MySQL, and surface it in our artist products. Once we add Pub/Sub, we'll be using virtually every Google Cloud database type. That's evidence of how we feel about Google Cloud's tools.

Learn more about the services and products making music at Songkick. Curious to learn more about Memorystore? Check out the Google Cloud blog for a look at performance tuning best practices for Memorystore for Redis.
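The planned message deduplication can be sketched with the same atomic SET NX EX primitive: the first delivery of a message id within the window wins, and duplicates are dropped. This is a hypothetical sketch of the idea, not a description of Songkick's eventual implementation; key and function names are illustrative:

```python
def should_process(redis_client, message_id, window_seconds=3600):
    """Atomic check-and-mark for deduplication: SET NX succeeds only
    for the first caller to see this message id, and EX lets the
    marker expire so the key space does not grow without bound."""
    return bool(redis_client.set(f"dedup:{message_id}", "1",
                                 nx=True, ex=window_seconds))
```

A Pub/Sub subscriber would call should_process before sending an email, and simply ack and skip any redelivered message for which it returns False.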
Source: Google Cloud Platform
In Google Kubernetes Engine (GKE), application owners can now define multiple autoscaling behaviors for a workload using a single Kubernetes resource: the Multidimensional Pod Autoscaler (MPA).

The challenges of scaling Pods horizontally and vertically

The success of Kubernetes as a widely adopted platform is grounded in its support for a variety of workloads and their many requirements. One area that has continuously improved over time is workload autoscaling. Dating back to the early days of Kubernetes, the Horizontal Pod Autoscaler (HPA) was the primary mechanism for autoscaling Pods. As its name suggests, it gave users the ability to add Pod replicas when a user-defined threshold of a given metric was crossed. Early on this was typically CPU or memory usage, though there is now support for custom and external metrics.

A bit further down the line, the Vertical Pod Autoscaler (VPA) added a new dimension to workload autoscaling. Much like its name suggests, VPA can recommend the best amount of CPU or memory that Pods should request, based on usage patterns. Users can then either review those recommendations and decide whether to apply them, or entrust VPA to apply them automatically on their behalf.

Naturally, Kubernetes users have sought the benefits of both forms of scaling. While these autoscalers work well independently of one another, running both at the same time can produce unexpected results. Picture this example:

- HPA adjusts the number of replicas for a Pod to maintain a target 50% CPU utilization.
- VPA, when configured to automatically apply recommendations, could fall into a loop of continuously shrinking CPU requests, a direct result of HPA maintaining its relatively low target for CPU utilization.

Part of the challenge here is that when configured to act autonomously, VPA applies changes for both CPU and memory.
Thus, the contention is difficult to avoid as long as VPA is automatically applying changes. Users have since accepted compromises in one of two ways:

- Using HPA to scale on CPU or memory and VPA only for recommendations, building their own automation to review and actually apply those recommendations.
- Using VPA to automatically apply changes to CPU and memory, while using HPA based on custom or external metrics.

While these workarounds suit a handful of use cases, there are still workloads that would benefit from autoscaling across both the CPU and memory dimensions. For example, a web application may require horizontal autoscaling on CPU when it is CPU bound, but may also want vertical autoscaling on memory for reliability, in case misconfigured memory requests result in OOMKilled events for the container.

Multidimensional Pod Autoscaler

The first feature available in MPA lets users scale Pods horizontally based on CPU utilization and vertically based on memory. It is available in GKE clusters running version 1.19.4-gke.1700 or newer. In the MPA schema, there are two critical constructs that let users configure the desired behavior: goals and constraints.

Goals allow users to define targets for metrics. The first supported metric is target CPU utilization, similar to how users define target CPU utilization in an HPA resource. The MPA will attempt to ensure these goals are met by distributing load across additional replicas of a given Pod. Constraints, on the other hand, are a bit more stringent. They take precedence over goals and can apply either to global targets (think min and max replicas of a given Pod) or to specific resources. In the case of vertical autoscaling, this is where users (a) specify that memory is controlled by MPA and (b) define the upper and lower boundaries for a Pod's memory requests, should they need to.

Let's test this out! We'll use Cloud Shell as our workstation and create a GKE cluster with a version that supports MPA. We'll use the standard php-apache example Pods from the Kubernetes documentation on HPA. These manifests will create three Kubernetes objects: a Deployment, a Service, and a Multidimensional Pod Autoscaler. The Deployment consists of a php-apache Pod, is exposed via a Service of type LoadBalancer, and is managed by the MPA. The Pod template in the Deployment requests 100 millicores of CPU and 50 mebibytes of memory. The MPA is configured to aim for 60% CPU utilization and to adjust Pod memory requests based on usage.

Once the resources are deployed, grab the external IP address of the php-apache Service:

kubectl get svc

We then use the hey utility to send artificial traffic to our php-apache Pods via the Service's external IP address, triggering action from the MPA:

hey -z 1000s -c 1000 http://<your-service-external-ip>

The MPA will scale the Deployment horizontally, adding Pod replicas to handle the incoming traffic:

kubectl get pods -w

We can also observe the amount of CPU and memory each Pod replica is using:

kubectl top pods

In the output from the previous command, Pods should be using well over the memory requests we specified in the Deployment. Digging into the MPA object, we can see that the MPA notices that as well, recommending an increase in memory requests:

kubectl describe mpa

Eventually, we should see the MPA actuate these recommendations and scale the Pods vertically.
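For reference, the MPA resource driving this walkthrough (which was shortened out of the original post) might look something like the following sketch. Treat the exact apiVersion and field names as assumptions based on the v1beta1 API surface, and the replica and memory bounds as illustrative values:

```yaml
apiVersion: autoscaling.gke.io/v1beta1
kind: MultidimPodAutoscaler
metadata:
  name: php-apache-mpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  # Goals: horizontal scaling toward 60% CPU utilization.
  goals:
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  # Constraints take precedence over goals: global replica bounds,
  # plus vertical control of memory with per-container bounds.
  constraints:
    global:
      minReplicas: 1
      maxReplicas: 5
    containerControlledResources: [memory]
    container:
    - name: '*'
      requests:
        minAllowed:
          memory: 50Mi
        maxAllowed:
          memory: 500Mi
  policy:
    updateMode: Auto
```

Applying this with kubectl apply alongside the php-apache Deployment and Service produces the behavior described above: replicas are added against the CPU goal, while memory requests are adjusted within the stated bounds.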
We will know this is complete when we observe an annotation on the Pod denoting that action was taken by the MPA, along with new memory requests adjusted to reflect that action:

kubectl describe pod $POD_NAME

Conclusion

The Multidimensional Pod Autoscaler solves a challenge many GKE users have faced, exposing a new way to control horizontal and vertical autoscaling via a single resource. Try it in GKE versions 1.19.6-gke.600+, currently available in the GKE Rapid Channel, and stay tuned for additional MPA functionality!

A special thanks to Mark Mirchandani, Jerzy Foryciarz, Marcin Wielgus, and Tomek Weksej for their contributions to this blog post.
Source: Google Cloud Platform
The large tape reels annoyed him; I benefited from that. The inventor of the compact cassette has died. By Juliane Gunardono (Philips, Sony)
Source: Golem
Fill up the car and drop off a parcel at the Packstation: a partnership between Jet filling stations and DHL makes it possible. (Packstation, business)
Source: Golem
The students' data in the Anton app was apparently accessible in a very simple way. (schools, data protection)
Source: Golem