Trade war: China wants to use only its own PCs
Government agencies and state-owned enterprises are to switch to hardware and software from China. The transition will take two years and hits Dell and HP. (Dell, HP)
Source: Golem
Herbert W. Franke is an author, scientist, and artist. Golem.de asked the 95-year-old for his view on current developments. An interview by Martin Wolf (Interview, Computer)
Source: Golem
A small Amiga and a large handheld console: the week in video. (Golem-Wochenrückblick, Steam)
Source: Golem
Customers who run BI on large data-warehouse datasets used to have to choose low latency at the cost of data freshness. With BigQuery BI Engine, they can accelerate the dashboards and reports that connect to BigQuery without sacrificing freshness, and acting on the latest insights helps them make better decisions for the business. BI Engine delivers "Formula One" performance for queries across all BI tools that connect to BigQuery, helping customers leverage their existing investments.

Last year, we launched a preview of BigQuery BI Engine, a fast in-memory analysis service that provides sub-second query performance for dashboards and reports that connect to BigQuery. BI Engine works with any BI or custom dashboarding tool. It was designed to help analysts identify trends faster, reduce risk, match the pace of customer demand, and improve operational efficiency in an ever-changing business climate. With this launch, customers were able to build fast, interactive dashboards using popular tools such as Looker, Tableau, Sheets, Power BI, Qlik, or any custom application.

And our customers have realized this value quickly. "We have seen significant performance improvements within BigQuery after implementing BI Engine. Our views and materialized views have been especially improved after implementing BI Engine," says Yell McGuyer, Data Architect at Keller Williams Realty.

Today, we are very excited to announce the general availability of BigQuery BI Engine for all BI and custom applications that work with BigQuery! BI Engine acceleration works seamlessly with BigQuery.

Native integration with the BigQuery API. BI Engine integrates natively with the BigQuery API, which means that if your dashboards use standard interfaces such as SQL, the BigQuery APIs, or the JDBC/ODBC drivers to connect to BigQuery, BI Engine is supported automatically. No changes are required to applications or dashboards to get sub-second, scalable dashboards up and running: if you run a query in BigQuery and it can be accelerated, BI Engine accelerates it.

Intelligent scaling. Customers do not have to worry about using the memory reservation efficiently; BI Engine does that for you based on access patterns. BI Engine leverages advanced techniques such as vectorized processing, advanced data encodings, and adaptive caching to maximize performance while optimizing memory usage. It can also intelligently create replicas of the same data to enable concurrent access.

Simple configuration. The only configuration BI Engine needs is a memory reservation, set in fine-grained increments of 1 GB.

Full visibility. Monitoring and logging are critical for running applications in the cloud and for gaining insight into performance and opportunities for optimization. BI Engine integrates with familiar tools such as INFORMATION_SCHEMA for job details (e.g., aggregate refresh time, cache hit ratios, query latency) and Stackdriver for monitoring of usage.

Getting started with BI Engine. BI Engine is now available in all regions where BigQuery is available. You can sign up for a BigQuery sandbox here and enable BI Engine for your project. Feel free to read through the documentation and quick-start guides for popular BI tools. You can also watch the demo from the Data Cloud Summit to see how BI Engine works with BI tools such as Looker, Data Studio, and Tableau. If you're a partner with an integration to BigQuery, consider joining the Google Cloud Ready – BigQuery initiative. You can find more details about the program here.
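As a minimal sketch of the "full visibility" point above, the snippet below builds a monitoring query against BigQuery's INFORMATION_SCHEMA to see which recent jobs BI Engine accelerated. The `bi_engine_statistics` column and the region qualifier are assumptions based on the BigQuery documentation; verify them against your project before relying on this.

```python
# Build a monitoring query against BigQuery's INFORMATION_SCHEMA to inspect
# BI Engine acceleration for recent jobs. Column and table names follow the
# BigQuery docs as best understood here; treat them as assumptions.
REGION = "region-us"  # region qualifier for INFORMATION_SCHEMA (assumption)

monitoring_sql = f"""
SELECT
  job_id,
  total_bytes_processed,
  bi_engine_statistics.bi_engine_mode AS acceleration_mode
FROM `{REGION}`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY creation_time DESC
LIMIT 20
"""

# With credentials configured, the query could be run through the official
# client library, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(monitoring_sql).result()
print(monitoring_sql)
```

Because BI Engine requires no application changes, this kind of query is the main way to confirm, after the fact, which dashboards actually benefited from acceleration.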
Source: Google Cloud Platform
As the volume of data that people and businesses produce continues to grow exponentially, it goes without saying that data-driven approaches are critical for tech companies and startups across all industries. But our conversations with customers, as well as numerous industry commentaries, reiterate that managing data and extracting value from it remains difficult, especially at scale. Numerous factors underpin the challenges, including access to and storage of data, inconsistent tools, new and evolving data sources and formats, compliance concerns, and security considerations. To help you identify and solve these challenges, we've created a new whitepaper, "The future of data will be unified, flexible, and accessible," which explores many of the most common reasons our customers tell us they're choosing Google Cloud to get the most out of their data.

For example, you might need to combine data in legacy systems with new technologies. Does this mean moving all your data to the cloud? Should it be in one cloud or distributed across several? How do you extract real value from all of this data without creating more silos? You might also be limited to analyzing your data in batch instead of processing it in real time, adding complexity to your architecture and necessitating expensive maintenance to combat latency. Or you might be struggling with unstructured data, with no scalable way to analyze and manage it. Again, the factors are numerous, but many of them trace back to inadequate access to data, often exacerbated by silos, and insufficient ability to process and understand it. The modern tech stack should be a streaming stack that scales with your data, provides real-time analytics, incorporates and understands different types of data, and lets you use AI/ML to predictively derive insights and operationalize processes.

These requirements mean that to leverage your data assets effectively:
- Data should be unified across your entire company, and even across suppliers, partners, and platforms, eliminating organizational and technology silos.
- Unstructured data should be unlocked and leveraged in your analytics strategy.
- The technology stack should be unified and flexible enough to support use cases ranging from analysis of offline data to real-time streaming and the application of ML, without maintaining multiple bespoke tech stacks.
- The technology stack should be accessible on demand, with support for different platforms, programming languages, tools, and open standards compatible with your employees' existing skill sets.

With these requirements met, you'll be equipped to maximize your data, whether that means discerning and adapting to changing customer expectations or understanding and optimizing how your data engineers and data scientists spend their time. In the coming weeks, we'll explore aspects of the whitepaper in additional blog posts. But if you're ready to dive in now, and to steer your tech company or startup toward success by making your data work better for you, click here to download your copy, free of charge.

Related article: Celebrating our tech and startup customers. Tech companies and startups are choosing Google Cloud so they can focus on innovation, not infrastructure. See what they're up to! Read article.
Source: Google Cloud Platform
As enterprises accelerate their migration to the cloud, they experience notable mid- and late-phase migration challenges. Specifically, 41% face challenges when optimizing apps in the cloud post-migration, and 38% struggle with performance issues on workloads migrated to the cloud. Further, organizations have also increased their reliance on outside consultants and other service providers, from early-stage cloud migration tasks to ongoing management post-implementation.1

To help customers through these challenges with a simple, quick path to a successful cloud migration, Google Cloud created our comprehensive Rapid Assessment & Migration Program (RAMP). And we've got some exciting developments to share with our customers and partners.

Expanded focus on post-migration TCO/ROI. Given the complex nature of cloud migrations, we are committed to meeting our customers where they are in their cloud journeys and partnering with them to achieve their business goals, be it building customer value through innovation, driving cost efficiencies, or increasing competitive differentiation and productivity. RAMP is a holistic framework, based on tangible customer TCO and ROI analyses, that supports our customers' journeys all the way through: from assessing their digital landscapes across multiple sources, including on-prem and other clouds, and identifying prioritized target workloads, to building a comprehensive migration and modernization plan.

Accelerate positive outcomes with expert partners. Customers can also now expect a more streamlined migration experience through our ecosystem of partners who have completed their cloud migration specialization. Last week, we announced industry-leading updates to our partner funding programs with new assessment and consumption packages that simplify and accelerate our customers' journey to Google Cloud, at little to no cost. These packages offer prescriptive pathways for infrastructure and application modernization initiatives, empowering our partners to support our customers at every stage, from discovery and planning to migration and modernization. Through our partner ecosystem, our customers can expect:
- Distinct funding packages for assessment, planning, and migration
- Faster approval processes for accelerated deployments
- More partners eligible to participate in RAMP and access these new funding packages

Sustainability through migration. Another major focus area for RAMP is helping enterprises optimize their migration planning and maximize their ROI by including their business and technical considerations early in the process, along with any sustainability goals they may have. To aid with their sustainability efforts, we are excited to share that customers can now receive a Digital Sustainability Report along with their IT assessments, enabling sustainability to be built into their migration strategies. The report provides actionable insights to measure and reduce their environmental impact, and is based on some of Google Cloud's own best practices, having been carbon-neutral for decades and looking to run on carbon-free energy by 2030.

We are committed to solving complex problems for our customers and partners, and these updates are a reflection of the feedback we receive. Simplify your cloud migration strategy today by requesting your free assessment, finding a partner to work with, or talking to your existing partner to get started.

1. Forrester Consulting, State Of Public Cloud Migration; a study commissioned by Google, 2022

Related article: Is a cloud migration on your to-do list? Our top stories from 2021 can help. Thinking about migrating to Google Cloud in 2022? Here's what to catch up on from 2021! Read article.
Source: Google Cloud Platform
Today, Amazon Kinesis Video Streams announces new APIs and SDKs that let you extract images from your video streams. This fully managed capability allows customers to request images via API calls or to configure automatic image generation based on metadata tags in ingested videos.
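As a sketch of the on-demand path, the snippet below assembles the parameters for a GetImages request over a five-minute window. The parameter names follow the Kinesis Video Streams GetImages API as best understood here, and the stream name is hypothetical; treat both as assumptions to verify against the API reference.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a GetImages request for Kinesis Video Streams: one JPEG every
# 3 seconds from the last five minutes of a (hypothetical) camera stream.
end = datetime.now(timezone.utc)
get_images_params = {
    "StreamName": "my-camera-stream",           # hypothetical stream name
    "ImageSelectorType": "PRODUCER_TIMESTAMP",  # select frames by producer time
    "StartTimestamp": end - timedelta(minutes=5),
    "EndTimestamp": end,
    "SamplingInterval": 3000,                   # milliseconds between images
    "Format": "JPEG",
}

# With boto3 and credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("kinesis-video-archived-media")
#   response = client.get_images(**get_images_params)
print(sorted(get_images_params))
```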
Source: aws.amazon.com
You can now use the AWS Serverless Application Model (AWS SAM) CLI to enable AWS X-Ray tracing automatically in your AWS SAM templates, without having to author the templates by hand. This makes it easier to manage AWS X-Ray tracing centrally across the Lambda functions in your serverless application.
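For context, the template fragment below shows what centrally enabled tracing looks like in a SAM template: `Tracing: Active` under `Globals` applies X-Ray tracing to every function in the application. This is a sketch based on standard SAM template syntax, and the function name and properties are hypothetical.

```yaml
# Fragment of an AWS SAM template (template.yaml) with X-Ray tracing enabled.
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Tracing: Active   # enable AWS X-Ray tracing for all functions centrally

Resources:
  HelloWorldFunction:              # hypothetical example function
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: ./src
```

Setting the property once under `Globals`, rather than per function, is what makes the central management described above possible.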
Source: aws.amazon.com
Amazon Relational Database Service (Amazon RDS) for PostgreSQL announces support for PostgreSQL 14 with three levels of cascaded read replicas at 5 replicas per instance, supporting up to 155 read replicas per source instance. You can now create cascaded Single-AZ or Multi-AZ read-replica DB instances in the same region, or in any other region, from another read-replica instance, allowing you to build a more robust disaster-recovery architecture.
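The replica ceiling above follows from the cascade arithmetic: with a fan-out of 5 replicas per instance over three levels, the totals per level are 5, 25, and 125, summing to 155. A quick check, assuming those announced limits:

```python
# Cascaded read replicas: fan-out of 5 per instance across 3 levels
# (limits taken from the announcement above).
fan_out, levels = 5, 3

per_level = [fan_out ** level for level in range(1, levels + 1)]
total = sum(per_level)  # 5 + 25 + 125

print(per_level, total)
```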
Source: aws.amazon.com
AWS RoboMaker, a service that makes it easy to simulate robotics applications at scale, is now available in the AWS GovCloud (US-West) Region. Support in the GovCloud Region allows US government agencies and contractors to run sensitive simulation workloads in AWS RoboMaker while meeting their specific regulatory and compliance requirements.
Source: aws.amazon.com