Because of the Twitter deal: Elon Musk sells billions of dollars' worth of Tesla shares
The Tesla CEO is preparing for the "hopefully unlikely" case that he actually has to buy Twitter. (Tesla, stock market)
Source: Golem
Should Unity let itself be acquired for 17.5 billion US dollars, or itself take over a controversial company for 4.4 billion US dollars? (Unity, online advertising)
Source: Golem
For over seven years, Functions-as-a-Service has changed how developers create solutions and move toward a programmable cloud. Functions made it easy for developers to build highly scalable, easy-to-understand, loosely coupled services. But as these services evolved, developers faced challenges such as cold starts, latency, connecting disparate sources, and managing costs. In response, we are evolving Cloud Functions to meet these demands, with a new generation of the service that offers increased compute power, granular controls, more event sources, and an improved developer experience.

Today, we are announcing the general availability of the 2nd generation of Cloud Functions, enabling a greater variety of workloads with more control than ever before. Since the initial public preview, we've equipped Cloud Functions 2nd gen with more powerful and efficient compute options, granular controls for faster rollbacks, and new triggers from over 125 Google and third-party SaaS event sources using Eventarc. Best of all, you can start to use 2nd gen Cloud Functions for new workloads while continuing to use your 1st gen Cloud Functions. Let's take a closer look at what you'll find in Cloud Functions 2nd gen.

Increased compute with granular controls

Organizations are choosing Cloud Functions for increasingly demanding and sophisticated workloads that require increased compute power and more granular controls. Functions built on Cloud Functions 2nd gen have the following features and characteristics:

- Instance concurrency: Process up to 1,000 concurrent requests with a single instance. Concurrency can drastically reduce cold starts, improve latency, and lower cost.
- Fast rollbacks, gradual rollouts: Quickly and safely roll back your function to any prior deployment, or configure how traffic is routed across revisions. A new revision is created every time you deploy your function.
- 6x longer request processing: Run your 2nd gen HTTP-triggered Cloud Functions for up to one hour.
This makes it easier to run longer request workloads such as processing large streams of data from Cloud Storage or BigQuery.
- 4x larger instances: Leverage up to 16GB of RAM and 4 vCPUs on 2nd gen Cloud Functions, allowing larger in-memory, compute-intensive, and more parallel workloads. 32GB / 8 vCPU instances are in preview.
- Pre-warmed instances: Configure a minimum number of instances that will always be ready to go, to cut your cold starts and make sure your application's bootstrap time doesn't impact its performance.
- More regions: 2nd gen Cloud Functions will be available in all 1st gen regions plus new regions, including Finland (europe-north1) and the Netherlands (europe-west4).
- Extensibility and portability: By harnessing the power of Cloud Run's scalable container platform, 2nd gen Cloud Functions lets you move your function to Cloud Run or even to Kubernetes if your needs change.

Lots more event sources

As more workloads move to the cloud, you need to connect more event sources together. Using Eventarc, 2nd gen Cloud Functions supports 14x more event sources than 1st gen, supporting business-critical event-driven workloads. Here are some highlights of events in 2nd gen Cloud Functions:

- 125+ event sources: 2nd gen Cloud Functions can be triggered from a growing set of Google and third-party SaaS event sources (through Eventarc) and events from custom sources (by publishing to Pub/Sub directly).
- Standards-based event schema for a consistent developer experience: These event-driven functions can make use of the industry-standard CloudEvents format. Having a common standards-based event schema for publishing and consuming events can dramatically simplify your event-handling code.
- CMEK support: Eventarc supports customer-managed encryption keys, allowing you to encrypt your events using your own managed encryption keys that only you can access.

As Eventarc adds new event providers, they become available in 2nd gen Cloud Functions as well.
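To make the standards-based schema concrete: a CloudEvent is just a small, fixed envelope of context attributes around an arbitrary payload. The minimal stdlib-only sketch below is illustrative, not Google's SDK; real handlers would normally use the `cloudevents` library or the Functions Framework, which also validate attribute types and handle binary content mode. The example event's payload is hypothetical.

```python
import json

# Required context attributes defined by the CloudEvents 1.0 spec.
REQUIRED = ("id", "source", "specversion", "type")

def parse_cloudevent(raw: str) -> dict:
    """Parse a JSON-encoded CloudEvent and check required attributes."""
    event = json.loads(raw)
    missing = [attr for attr in REQUIRED if attr not in event]
    if missing:
        raise ValueError(f"not a valid CloudEvent, missing: {missing}")
    return event

# Example: a hypothetical Cloud Storage "object finalized" event.
raw = json.dumps({
    "specversion": "1.0",
    "id": "1234",
    "source": "//storage.googleapis.com/projects/_/buckets/my-bucket",
    "type": "google.cloud.storage.object.v1.finalized",
    "data": {"bucket": "my-bucket", "name": "report.csv"},
})
event = parse_cloudevent(raw)
print(event["type"])  # -> google.cloud.storage.object.v1.finalized
```

Because every source publishes the same envelope, a single dispatcher keyed on the `type` attribute can route events from any of the 125+ providers.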
Recently, Eventarc added Firebase Realtime Database, DataDog, Check Point CloudGuard, LaceWork, and ForgeRock, as well as the Firebase Stripe / RevenueCat extensions, as event sources.

Improved developer experience

You can use the same UI and gcloud commands for your 2nd gen functions as for 1st gen, helping you get started quickly from one place. That's not to say we didn't make some big improvements to the UI:

- Eventarc subtask: Allows you to easily discover and configure how your function is triggered during creation.
- Deployment tracker: Enables you to view the status of your deployment and spot any errors quickly if they occur during deployment.
- Improved testing tab: Simplifies calling your function with sample payloads.
- Customizable dashboard: Gives you important metrics at a glance, and accessibility updates improve the experience for screen readers.

As with 1st gen, you can drastically speed up development time by using our open source Functions Framework to develop your functions locally.

Tying it together

2nd gen Cloud Functions allows developers to connect anything from anywhere to get important work done. This example shows an end-to-end architecture for an event-driven solution that uses new features in 2nd gen Cloud Functions and Eventarc. It starts with identifying the data sources to which you want to programmatically respond. These can be any of the 125+ Google Cloud or third-party sources supported by Eventarc. Then you configure the trigger and code the function, specifying instance size, concurrency, and processing time based on your workload. Your function can process and store the data using Google Cloud's AI and data platforms to transform data into actionable insights.

Get started with the new Cloud Functions

We built Cloud Functions to be the future of how organizations build enterprise applications.
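Pulling the knobs above together, a 2nd gen deployment from the gcloud CLI might look like the following sketch. The function name, region, runtime, and flag values are placeholders chosen for illustration, not recommendations; tune concurrency, memory, and minimum instances to your workload.

```shell
# Deploy a 2nd gen HTTP-triggered function with request concurrency,
# one pre-warmed instance, a larger memory allocation, and the
# maximum one-hour request timeout.
gcloud functions deploy my-function \
  --gen2 \
  --region=europe-west4 \
  --runtime=python310 \
  --source=. \
  --entry-point=handler \
  --trigger-http \
  --concurrency=100 \
  --min-instances=1 \
  --memory=4GiB \
  --timeout=3600s
```

Redeploying the same function creates a new revision, which is what makes the fast-rollback and gradual-rollout features described above possible.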
Our 2nd generation incorporates feedback we've received from customers to meet their needs for more compute, control, and event sources with an improved developer experience. We're excited to see what you build with 2nd gen functions. You can learn more about Cloud Functions in the documentation and get started using the Cloud Functions quickstarts.
Source: Google Cloud Platform
In today's hybrid office environments, it can be difficult to know where your most valuable, sensitive content is, who's accessing it, and how people are using it. That's why Egnyte focuses on making it simple for IT teams to manage and control a full spectrum of content risks, from accidental data deletion to privacy compliance. I used to be an Egnyte customer before joining the team, so I've experienced first-hand the transformative effects that Egnyte can have on a company. Because data is fundamental to a company's success, we take the trust of our 16,000 clients very seriously. There is no room for error with a cloud governance platform, which means that the technology providers we work with can't fail either. That's why we work with Google Cloud.

Since Egnyte was founded in 2007, we have delivered our services to clients 24/7. We do this by running our own data centers: two in the USA and one in Europe. But as the company continued its steady growth, owning and managing these data centers became unsustainable. There's a tremendous amount of work that goes into managing everything that we need from a data center. Not only were we constantly building, maintaining, and paying for all this infrastructure, but we'd have to constantly expand our data centers to accommodate our business growth. This caused a never-ending pipeline issue because we had to predict how many businesses we were going to win over the next 12 to 18 months. What if we planned to grow the business by 20%, and ended up growing by 25% instead?

We knew that being limited to our own data centers was going to negatively impact our business, so we looked for alternatives. To gain scalability and introduce another layer of reliability to our business, we decided to collaborate with a reputable cloud provider who could reliably back up our data.
We examined the offerings of every cloud provider, and found that in every category that we analyzed, Google was hands-down the winner.

One of these categories is the reach of the network. With its own transoceanic fiber and points of presence in all markets where we're currently doing business, as well as markets where we intend to do business one day, Google's network is second to none. Another important criterion for us was flexibility in the product offering, so we could better consider the financial risks of this large-scale data migration. For a while, we needed to pay for both our new cloud infrastructure and our old on-premises one while they overlapped during the migration, but Google Cloud made it easier for us to plan for this.

By December 2021, we had completed our full migration to Google Cloud. This significant migration was completed gradually and without disrupting our services at any point. Our close collaboration with the Google Cloud team is one of the big reasons we completed this so successfully. Google Cloud was able to anticipate some of the problems we'd likely be facing and helped us overcome them along the way.

We were able to shut off our last data center in February 2022, and the beneficial changes to the business are already obvious. Capacity planning, which used to be our biggest challenge on-premises, is now a problem of the past. The ability to spin up new resources on Google Cloud means we no longer need to buy additional resources a year in advance and wait for them to be shipped. Using Google Cloud also means that we no longer rely on aging infrastructure, which is a very limiting factor when you're developing and engineering a platform as complex as Egnyte.
Our entire platform is now always operating on the latest storage, processing, network, and services available on Google Cloud. Additionally, we have services embedded in our infrastructure such as Cloud SQL, Cloud Bigtable, BigQuery, Dataflow, Pub/Sub, and Memorystore for Redis, which means we no longer need to build services from scratch, or buy, install, and integrate them into the product and company workflow. There's a long list of Google Cloud services that have significantly simplified our processes and that now support our flagship products, Egnyte Collaborate and Secure and Govern.

Looking ahead, we'll continue to take advantage of what Google Cloud has to offer. Our migration has impacted not only our business but also our clients. We can offer even higher reliability and faster scalability to our clients whenever they need our platform to protect and manage critical content on any cloud or any app, anywhere in the world. We look forward to seeing what's next.
Source: Google Cloud Platform
Google is one of the largest identity providers on the Internet. Users rely on our identity systems to log into Google's own offerings, as well as third-party apps and services. For our business customers, we provide administratively managed Google accounts that can be used to access Google Workspace, Google Cloud, and BeyondCorp Enterprise. Today we're announcing that these organizational accounts support single sign-on (SSO) from multiple third-party identity providers (IdPs), available in general availability immediately. This allows customers to more easily access Google's services using their existing identity systems.

Google has long provided customers with a choice of digital identity providers. For over a decade, we have supported SSO via the SAML protocol. Currently, Google Cloud customers can enable a single identity provider for their users with the SAML 2.0 protocol. This release significantly enhances our SSO capabilities by supporting multiple SAML-based identity providers instead of just one.

Business cases for supporting multiple identity providers

There are many reasons for customers to federate identity to multiple third-party identity providers. Often, organizations have multiple identity providers resulting from mergers and acquisitions, or due to differing IT strategies across corporate divisions and subsidiaries. Supporting multiple identity providers allows the users from these different organizations to all use Google Cloud without time-consuming and costly migrations.

Another increasingly common use case is data sovereignty. Companies that need to store the data of their employees in specific jurisdictional locations may need to use different identity providers. Migrations are yet another common use case for supporting multiple identity providers.
Organizations transitioning to new identity providers can now keep their old system active alongside the new one during the transition phase.

"The City of Los Angeles is launching a unified directory containing all of the city's workforce. Known as 'One Digital City,' the directory provides L.A. city systems with better security and a single source for authentication, authorization, and directory information," said Nima Asgari, Google Team Manager for the City of Los Angeles. "As the second largest city in the United States, this directory comes at a critical time for hybrid teleworkers, allowing a standard collaboration platform based on Google Docs, Sheets, Slides, Forms, and Sites. From our experience, Google Cloud's support of multiple identity providers has saved us from having to create a number of custom solutions that would require valuable staff time and infrastructure costs."

How it works

To use these new identity federation capabilities, Google Cloud administrators must first configure one or more identity provider profiles in the Google Cloud Admin console; we support up to 100 profiles. These profiles require information from your identity provider, including a sign-in URL and an X.509 certificate. Once these profiles have been created, they can then be assigned to the root level of your organization or to any organizational unit (OU). In addition, profiles can be assigned to a Group as an override for the OU. It is also possible to configure an organizational unit or group to sign in with Google usernames and passwords instead of a third-party IdP. For detailed information on configuring SSO with third-party IdPs, see the documentation.

OIDC Support, Coming Soon

Currently, SSO supports the popular SAML 2.0 protocol. Later this year, we plan on adding support for OIDC. OIDC is becoming increasingly popular for both consumer and corporate SSO. By supporting OIDC, Google Cloud customers can choose which protocol is best for the needs of their organization.
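The assignment model described above (a root-level profile, OU assignments, and Group overrides) amounts to a simple precedence lookup: check the user's groups for an override, otherwise walk the OU hierarchy upward to the nearest assigned profile, otherwise use the root profile. The toy model below only illustrates that precedence; it is not Google's implementation, and all names in it are hypothetical.

```python
def resolve_idp_profile(user, group_overrides, ou_profiles, ou_parent, root_profile):
    """Return the SSO profile that applies to a user.

    user: dict with "groups" (list of group names) and "ou" (OU path).
    group_overrides: group name -> profile ("google" means Google
        usernames and passwords instead of a third-party IdP).
    ou_profiles: OU path -> profile assigned at that OU.
    ou_parent: OU path -> parent OU path (None at the top).
    root_profile: profile assigned at the organization root.
    """
    # 1. A Group assignment overrides the OU assignment.
    for group in user["groups"]:
        if group in group_overrides:
            return group_overrides[group]
    # 2. Otherwise walk up the OU tree to the nearest assigned profile.
    ou = user["ou"]
    while ou is not None:
        if ou in ou_profiles:
            return ou_profiles[ou]
        ou = ou_parent.get(ou)
    # 3. Fall back to the organization-level profile.
    return root_profile

# Hypothetical org: a subsidiary OU uses a second SAML IdP, and one
# pilot group has been switched back to Google passwords mid-migration.
ou_parent = {"/eng": None, "/eng/subsidiary": "/eng"}
ou_profiles = {"/eng/subsidiary": "saml-profile-okta"}
overrides = {"migration-pilot": "google"}

alice = {"groups": [], "ou": "/eng/subsidiary"}
print(resolve_idp_profile(alice, overrides, ou_profiles, ou_parent,
                          "saml-profile-adfs"))  # -> saml-profile-okta
```

A user in `/eng` with no matching group would fall through both steps and get the root-level profile.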
OIDC works alongside the multi-IdP support being released now, so administrators can configure IdPs using both SAML and OIDC.
Source: Google Cloud Platform
Digital tools offered by cloud computing are fueling transformation around the world, including in Asia Pacific. In fact, IDC expects that total spending on cloud services in Asia Pacific (excluding Japan) will reach 282 billion USD by 2025.1 To meet growing demand for cloud services in Asia Pacific, we are excited to announce our plans to bring three new Google Cloud regions to Malaysia, Thailand, and New Zealand, on top of six other regions that we previously announced are coming to Berlin, Dammam, Doha, Mexico, Tel Aviv, and Turin.

When they launch, these new regions will join our 34 cloud regions currently in operation around the world, 11 of which are located in Asia Pacific, delivering high-performance services running on the cleanest cloud in the industry. Enterprises across industries, startups, and public sector organizations across Asia Pacific will benefit from key controls that enable them to maintain low latency and the highest security, data residency, and compliance standards, including specific data storage requirements.

"The new Google Cloud regions will help to address organizations' increasing needs in the area of digital sovereignty and enable more opportunities for digital transformation and innovation in Asia Pacific. With this announcement, Google Cloud is providing customers with more choices in accessing capabilities from local cloud regions while aiding their journeys to hybrid and multi-cloud environments," said Daphne Chung, Research Director, Cloud Services and Software Research, IDC Asia/Pacific.

What customers and partners are saying

From retail and media & entertainment to financial services and public sector, leading organizations come to Google Cloud as their trusted innovation partner. The new Google Cloud regions in Malaysia, Thailand, and New Zealand will help our customers continue to enable growth and solve their most critical business problems.
We will work with our customers to ensure the cloud region fits their evolving needs.

"Kami was born out of the digital native era, where in order to scale globally we needed a partner like Google Cloud who could support us on our ongoing innovation journey. We have since delivered an engaging and dependable experience for millions of teachers and students around the world, so it's incredibly exciting to hear about the new region coming to New Zealand. This investment from Google Cloud will enable us to deliver services with lower latency to our Kiwi users, which will further elevate and optimize our free premium offering to all New Zealand schools." – Jordan Thoms, Chief Technology Officer, Kami

"Our customers are at the heart of our business, and helping Kiwis find what they are looking for, faster than ever before, is our key priority. Our collaboration with Google Cloud has been pivotal in ensuring the stability and resilience of our infrastructure, allowing us to deliver world-class experiences to the 650,000 Kiwis that visit our site every day. We welcome Google Cloud's investment in New Zealand, and are looking forward to more opportunities to partner closely on our technology transformation journey." – Anders Skoe, CEO, Trade Me

"Digital transformation plays a key role in helping Vodafone deliver better customer experiences and connect all Kiwis. We welcome Google Cloud's investment in New Zealand and look forward to working together to offer more enriched experiences for local businesses, and the communities we serve." – Jason Paris, CEO, Vodafone New Zealand

"Our journey with Google Cloud spans almost half a decade, with our most recent partnership and co-innovation initiatives paving the way for AirAsia and Capital A to disrupt the digital platform arena in the same vein as we did airlines.
The announcement of a new cloud region that's coming to Malaysia, and Thailand too if I may add, showcases Google Cloud's continuous desire to expand its in-region capabilities to complement and support our aspiration of establishing the airasia Super App at the center of our e-commerce, logistics and fintech ecosystem, while enriching the local community and giving all 700 million people in Asean inclusivity, accessibility, and value. I couldn't be more excited about this massive milestone and the new possibilities that Google Cloud's growing network of cloud regions will create for us, our peers, and the common man." – Tony Fernandes, CEO, Capital A

"Google Cloud's world-class cloud-based analytics and artificial intelligence (AI) tools have enabled Media Prima to embed a digital DNA across our organization, deliver trusted and real-time news updates during peak periods when people need them the most, and implement whole new engagement models like content commerce, thereby allowing us to diversify our revenue streams and remain at the forefront of an industry in transition. By allowing us to place our digital infrastructure and applications even closer to our audiences, this cloud region will supercharge data-driven content production and distribution, and our ability to enrich the lives of Malaysians by informing, entertaining, and engaging them through new and innovative mediums." – Rafiq Razali, Group Managing Director, Media Prima

"Google Cloud's global network has been playing an integral role in Krungthai Bank's adoption of advanced data analytics, cybersecurity, AI, and open banking capabilities to earn and retain the trust of the 40 million Thais who use our digital services to meet their daily financing needs.
This new cloud region is a fundamentally important milestone that will help accelerate our continuous digital reinvention and sustainable growth strategy within the local regulatory framework, thereby allowing us to reach and serve Thais at all levels, including unbanked consumers and small business owners, no matter where they may be." – Payong Srivanich, CEO, Krungthai Bank

"Having migrated our operations and applications onto Google Cloud's superior data cloud infrastructure, we are already delivering more personalized services and experiences to small business owners, delivery riders, and consumers than ever before, and in a more cost-efficient and sustainable way. With the new cloud region, we will be physically closer to the computing resources that Google Cloud has to offer, and able to access cloud technologies in a faster and even more complete way. This will help strengthen our mission: to build a homegrown 'super app' that assists smaller players and revitalizes the grassroots economy." – Thana Thienachariya, Chairman of the Board, Purple Ventures Co., Ltd. (Robinhood)

Delivering a global network

These new cloud regions represent our ongoing commitment to supporting digital transformation across Asia Pacific. We continue to invest in expanding connectivity throughout the region by working with partners in the telecommunications industry to establish subsea cables, including Apricot, Echo, JGA South, INDIGO, and Topaz, and points of presence in major cities. Learn more about our global cloud infrastructure, including new and upcoming regions.

1. Source: Asia/Pacific (Excluding Japan) Whole Cloud Forecast, 2020-2025, Doc # AP47756122, February 2022
Source: Google Cloud Platform
Almost two years ago, the National Defense Science Board invited me to participate in the Summer Study 2020 Panel, “Protecting the Global Information Infrastructure.” They requested that I brief them on the evolution of the global communications infrastructure connecting all nations. The U.S., like other nations, both cooperates and competes in the commercial telecom market, while prioritizing national security.
This study group was interested in the implementation of 5G and its evolution to 6G. They understood that softwarization of the core communication technologies and the inclusion of edge and cloud computing as core infrastructure components of telecommunications services is inevitable. Because of my expertise in these areas, they invited me to share my thoughts on how we might secure and protect the emerging networks and systems of the future. I prepared for the meeting by looking at how Microsoft, as a major cloud vendor, had worked to secure our global networks.
My conclusion was simple. It is clear that attacks on the national communications infrastructure will occur with much greater sophistication than ever before. Because of this, we continue to develop our networks and systems with security as our first principle and we stay constantly vigilant. To these ends, Microsoft has adopted a zero-trust security architecture in all our platforms, services, and network functions.
Specialized hardware replaced by disaggregated software
One challenge for the panel was to understand precisely what the emerging connectivity infrastructure will be, and what security attributes must be assured with respect to that infrastructure.
Classical networks (the ones before the recent 5G networks) were deployed in a hub-and-spoke architecture. Packets came to a specialized hardware-software package developed by a single vendor. From there, they were sent to the Internet. But 5G (and beyond) networks are different. In many ways, the specialized hardware has been "busted open."
Functionality is now disaggregated into multi-vendor software components that run on different interconnected servers. As a result, the attack surface area has increased dramatically. Network architects have to protect each of these components along their interconnects—both independently and together. Furthermore, packets are now processed by multiple servers, any of which could be compromised. 5G brings the promise of a significant number of connected Internet-of-Things (IoT) devices that, once compromised, could also be turned into an army of attackers.
The power of cloud lies in its scale
In a word, Microsoft Azure is big: 62 regions in 140 countries worldwide host millions of networked servers, with regions connected by over 180,000 miles of fiber. Some of our brightest and most experienced engineers have used their knowledge to make this infrastructure safe and secure for customers, which includes companies and people working in healthcare, government services, finance, energy, manufacturing, retail, and more.
As of today, Microsoft tracks more than 250 unique nation-states, cybercriminals, and other threat actors. Our cloud processes and analyzes more than 43 trillion security signals every single day. Nearly 600,000 organizations worldwide use our security offering. With all this, Microsoft’s infrastructure is secure, and we have earned the trust of our customers. Many of the world’s largest companies with vital and complex security needs have offloaded much of their network and compute workloads to Azure. Microsoft Azure has become part of their critical infrastructure.
Securing Open RAN architecture
The cloud's massive and unprecedented scale is unique, and precisely what makes the large investments in sophisticated defense and security economically possible. Microsoft Azure's ground-up design includes strict security measures to withstand any type of attack imaginable. By contrast, smaller-scale, on-premises systems cannot economically or practically achieve the scale required to defend against sophisticated threats.
The report, “Why 5G requires new approaches to cybersecurity”1 articulates several good reasons why we need to think about how to protect our infrastructure. Many of us in research and engineering have also been thinking about these issues, as evidenced by Microsoft’s recently published white paper, Bringing Cloud Security to the Open RAN, which describes how we can defend and mitigate against malicious attacks against O-RANs, beginning with security as the first principle.
With respect to O-RAN and Azure for Operators Distributed Services (AODS), we explain how they inherit and benefit from the cloud’s robust security principles applied in the development of the far-edge and the near-edge. The inherently modular nature of Open RAN, alongside recent advancements in Software Defined Networking (SDN) and network functions virtualization (NFV), enables Microsoft to deploy security capabilities and features at scale across the O-RAN ecosystem.
We encapsulate code into secure containers and enable more granular control of sensitive data and workloads than prior generations of networking technologies. Additionally, our computing framework makes it easy to add sophisticated security features in real-time, including AI/ML and advanced cloud security capabilities to promptly detect and actively mitigate malicious activities.
Microsoft is actively working on delivering the most resilient platform in the industry, backed by our proven security capabilities, trustworthy guarantees, and a well-established secure development lifecycle. This platform is being integrated with Microsoft security defense services to prevent, detect, and respond to attacks. It includes AI/ML technologies to allow creation of logic to automate and create actionable intelligence to improve security, fault analyses, and operational efficiency.
We are also leveraging Azure services such as Active Directory, Azure Container Registry, Azure Arc, and Azure Network Function Manager to provide a foundation for the secure and verifiable deployment of RAN components. On top of these, we provide secure RAN deployment and management processes, which eliminate the significant upfront cost otherwise incurred by RAN vendors building these technologies themselves.
It is noteworthy that across the entire project lifecycle—from planning to sunsetting—we integrate security practices. All software deliverables are developed in a “secure by default” manner, going through a pipeline that leverages Microsoft Azure’s security analysis tools that perform static analysis, credential scanning, regression, and functionality testing.
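Microsoft's pipeline tooling is internal, but to make "credential scanning" concrete, a toy scanner in the same spirit runs a set of regular expressions for well-known secret shapes over source text before it can be merged. The patterns and names below are illustrative inventions, not the real tool; production scanners ship hundreds of provider-specific patterns plus entropy checks.

```python
import re

# A few illustrative secret patterns (not exhaustive).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]{4,}['\"]"),
}

def scan_for_credentials(text):
    """Return (pattern name, line number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

sample = 'db_password = "hunter22"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_for_credentials(sample))
# -> [('password_assignment', 1), ('aws_access_key', 2)]
```

In a "secure by default" pipeline, any non-empty finding list would fail the build, alongside the static analysis, regression, and functionality tests mentioned above.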
We are taking steps to integrate our RAN analytics engine with Microsoft Sentinel. This enables telecom operators to manage vulnerability and security issues, and to deploy secure capabilities for their data and assets. We expect Microsoft Sentinel, Azure Monitor, and other Azure services will incorporate our RAN analytics to support telecommunications customers. With this, we will deliver intelligent security analytics and threat intelligence for alert detection, threat visibility, proactive hunting, and threat response. We also expect that the Azure AI Gallery will host sophisticated third-party ML models for RAN optimization and threat detection, running on the data streams we collect.
Mitigating the impact of compromised systems
We have built many great tools to keep the “bad guys” out, but building secure telecommunication platforms requires dealing with the unfortunate reality that sometimes systems can still be compromised. As a result, we are aggressively conducting research and building technologies, including fast detection and recovery from compromised systems.
Take the case of ransomware. Traditional ransomware attacks encrypt a victim’s data and ask for a ransom in exchange for decrypting it. However, modern ransomware attacks do not limit themselves to encrypting data. Instead, they remove the enterprise’s ability to control its platforms and critical infrastructure. The RAN constitutes critical infrastructure and can suffer from ransomware attacks.
Specifically, we have developed technology that prepares us for the unfortunate time when systems may be compromised. Our latest technology makes it easier to recover as quickly as possible, and with minimal manual effort. This is especially important in telco far-edge scenarios, where the large number of sites makes it prohibitively expensive to send technicians into the field for recovery. Our solution, which leverages a concept called trusted beacons, automatically recovers a far-edge node from a compromise or failure. When trusted beacons are absent, the platform automatically reboots and re-installs an original, unmodified, and uncompromised software image.
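The article doesn't publish the trusted-beacon protocol, so the sketch below is only a guess at the shape of the control loop: a node that has not observed a valid beacon within its deadline stops trusting its local state and re-images from a known-good source. All names and thresholds are invented, and the real mechanism would involve attestation against a hardware root of trust rather than a simple timestamp.

```python
import time

class FarEdgeNode:
    """Toy model of beacon-based automatic recovery for a far-edge node."""

    def __init__(self, beacon_timeout_s=300):
        self.beacon_timeout_s = beacon_timeout_s
        self.last_beacon = time.monotonic()
        self.recoveries = 0

    def on_beacon(self, valid: bool):
        # Only a cryptographically valid beacon resets the deadline.
        if valid:
            self.last_beacon = time.monotonic()

    def watchdog_tick(self, now=None) -> str:
        """Called periodically; triggers recovery if the beacon is overdue."""
        now = time.monotonic() if now is None else now
        if now - self.last_beacon > self.beacon_timeout_s:
            return self._recover()
        return "healthy"

    def _recover(self) -> str:
        # Reboot and re-install the original, unmodified software image;
        # no technician truck roll required.
        self.recoveries += 1
        self.last_beacon = time.monotonic()
        return "reimaged"

node = FarEdgeNode(beacon_timeout_s=300)
node.on_beacon(valid=True)
print(node.watchdog_tick())                            # -> healthy
print(node.watchdog_tick(now=node.last_beacon + 301))  # -> reimaged
```

The point of the design is that recovery is a local, automatic decision: an attacker who compromises the node cannot forge the beacon, so the compromised state simply times out.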
Looking into the future
We have developed mechanisms for monitoring and analyzing data as we look for threats. Our best-in-class verification technology checks every configuration before lighting it up. Our researchers are constantly adding new AI techniques that use the compute power of the cloud to protect our infrastructure better than ever before. Our end-to-end zero-trust solutions spanning identity, security, compliance, and device management, across cloud, edge, and all connected platforms will protect the telecommunications infrastructure. We continue to invest billions to improve cybersecurity outcomes.
Microsoft will continue to update you on developments that impact the security of our network, including many of the technologies noted in this article. Microsoft knows that while we need to remain vigilant, the telecommunications industry ultimately benefits from making Microsoft Azure part of its critical infrastructure.
1 Tom Wheeler and David Simpson, “Why 5G requires new approaches to cybersecurity.” The Brookings Institution.
Quelle: Azure
With Azure Cognitive Services for Speech, customers can build voice-enabled apps confidently and quickly with the Speech SDK. We make it easy for customers to transcribe speech to text (STT) with high accuracy, produce natural-sounding text-to-speech (TTS) voices, and translate spoken audio. In the past few years, we have been inspired by the AI innovations coming out of the gaming industry.
Why AI for gaming? AI in gaming allows for flexible and reactive video game experiences. As technology continues to change and evolve, AI innovation has led to pioneering and tremendous advances in the gaming industry. Here are three popular use cases:
Use Cases for AI Gaming
Game dialogue prototyping with text to speech: Reduce the time and money spent on production to get the game to market sooner. Designers and producers can rapidly swap lines of dialogue using different emotional voices and listen to variations in real time to ensure accuracy.
Greater accessibility with transcription, translation, and text to speech: Make gaming more accessible and add functionality through a single interface. Spoken gameplay instructions make games accessible to players who cannot read the on-screen text or language, and narrated storylines serve visually impaired gamers or younger players who have not yet learned to read.
Scalable non-playable character voices and interaction with text to speech: Easily produce voice characters that stay on-brand with consistent quality and speaking styles. Game developers can add emotions, accents, nuances, laughter, and other paralinguistic sounds and expressions to game avatars and NPCs (non-playable characters) that can initiate or participate in a conversation in-game.
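The emotional voices and speaking styles mentioned above are requested through SSML, the markup that Azure neural TTS accepts. The helper below builds such a payload as a plain string; the specific voice name and style values are examples only, and the supported lists should be checked against the service documentation.

```python
def build_ssml(text: str, voice: str = "en-US-JennyNeural",
               style: str = "cheerful") -> str:
    """Build an SSML payload asking Azure neural TTS to speak a line of
    game dialogue in a given emotional style.

    The voice and style values here are illustrative examples; consult the
    Azure Speech documentation for the voices and styles actually available.
    """
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
        "</voice></speak>"
    )
```

Swapping the `style` argument (for example from "cheerful" to "angry") is how a designer can audition emotional variations of the same NPC line without re-recording anything.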
Featured Customers for AI Gaming
Flight Simulator: Our first-party game developers are using AI for speech to improve end-user experiences. Flight Simulator is the longest-running franchise in Microsoft history, and the latest critically acclaimed release not only builds on that legacy but also pushes the boundaries as the most technologically advanced simulator ever made. By adding authentic air traffic controller voices, Flight Simulator found a small-but-powerful way to elevate the experience. Recording audio to replicate air traffic controllers from every airport on Earth would have been a huge task; TTS is a great solution that can handle the dynamic content and serve the air traffic controller voices as a low-latency, highly available, secure, and scalable service. Check out the video of the newly released Flight Simulator experience with Custom Neural Voice implemented for real-time air traffic controller voices.
Undead Labs: Undead Labs studio is on a mission to take gaming in bold new directions. They are the makers of the State of Decay franchise and use Azure Neural TTS during game development.
Double Fine: Double Fine is the producer of many popular games, including Psychonauts. They are utilizing our neural TTS to prototype future game projects.
You can check out our use case presentation at Microsoft’s Game Developers Conference 2022 for more details.
Speech Services and Responsible AI
We are excited about the future of Azure Speech with human-like, diverse, and delightful quality under the high-level architecture of XYZ-code AI framework. Our technology advancements are also guided by Microsoft’s Responsible AI process, and our principles of fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. We put these ethical standards into practice through the Office of Responsible AI (ORA)—which sets our rules and governance processes, the AI Ethics and Effects in Engineering and Research (Aether) Committee—which advises our leadership on the challenges and opportunities presented by AI innovations, and Responsible AI Strategy in Engineering (RAISE)—a team that enables the implementation of Microsoft Responsible AI rules across engineering groups.
Get started
Start building new customer experiences with Azure Neural TTS and STT. In addition, the Custom Neural Voice capability enables organizations to create a unique brand voice in multiple languages and styles.
Resources
Get started with text to speech
Get started with speech to text
Get started with Custom Neural Voice
Get started with speech translation
Quelle: Azure
Gartner has recognized Microsoft as a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud AI Developer Services, with Microsoft placed furthest in “Completeness of Vision”.
Gartner defines the market as “cloud-hosted or containerized services that enable development teams and business users who are not data science experts to use AI models via APIs, software development kits (SDKs), or applications.”
We are proud to be recognized for our Azure AI Platform. In this post, we’ll dig into the Gartner evaluation, what it means for developers, and provide access to the full reprint of the Gartner Magic Quadrant to learn more.
Scale intelligent apps with production-ready AI
“Although ModelOps practices are maturing, most software engineering teams still need AI capabilities that do not demand advanced machine learning skills. For this reason, cloud AI developer services (CAIDS) are essential tools for software engineering teams.”—Gartner
A staggering 87 percent of AI projects never make it into production.¹ Beyond the complexity of data preprocessing and building AI models, organizations wrestle with scalability, security, governance, and more to make their models production-ready. That’s why over 85 percent of Fortune 100 companies use Azure AI today, spanning industries and use cases.
More and more, we see developers accelerate time to value by using pre-built and customizable AI models as building blocks for intelligent solutions. Microsoft Research has made significant breakthroughs in AI over the years, being the first to achieve human parity across speech, vision, and language capabilities. Today, we’re pushing the boundaries of language model capabilities with large models like Turing, GPT-3, and Codex (the model powering GitHub Copilot) to help developers be more productive. Azure AI packages these innovations into production-ready general-purpose models (Azure Cognitive Services) and use-case-specific models (Azure Applied AI Services) that developers can integrate via an API or SDK and then fine-tune for greater accuracy.
For developers and data scientists looking to build production-ready machine learning models at scale, we support automated machine learning, also known as AutoML. AutoML in Azure Machine Learning is based on breakthrough Microsoft research focused on automating the time-consuming, iterative tasks of machine learning model development. This frees up data scientists, analysts, and developers to focus on value-add tasks and accelerates their time to production.
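The core idea AutoML automates can be sketched in a few lines: train several candidate models and keep the one with the lowest validation error. This stdlib-only sketch shows only that selection loop; real AutoML in Azure Machine Learning also automates featurization, hyperparameter sweeps, and much more, and none of the function names below are Azure APIs.

```python
# Minimal illustration of automated model selection: fit each candidate on
# training data, score it on held-out validation data, keep the best.

def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Simple one-variable least-squares linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def auto_select(train, val, candidates=(fit_mean, fit_linear)):
    """Return (name, model) of the candidate with lowest validation MSE."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)

    best = None
    for fit in candidates:
        model = fit([x for x, _ in train], [y for _, y in train])
        err = mse(model, val)
        if best is None or err < best[0]:
            best = (err, fit.__name__, model)
    return best[1], best[2]
```

On data with a clear linear trend, the loop picks the linear model automatically; the point is that the practitioner specifies the data and the goal, not the model.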
Enable productivity for AI teams across the organization
“As more developers use CAIDS to build machine learning models, the collaboration between developers and data scientists will become increasingly important.”—Gartner
As AI becomes more mainstream across organizations, it’s essential that employees have the tools they need to collaborate, build, manage, and deploy AI solutions effectively and responsibly. As Microsoft Chairman and CEO Satya Nadella shared at Microsoft Build, Microsoft is "building models as platforms in Azure" so that developers with different skills can take advantage of breakthrough AI research and embed them into their own applications. This ranges from professional developers building intelligent apps with APIs and SDKs to citizen developers using pre-built models via Microsoft Power Platform.
Azure AI empowers developers to build apps in their preferred language and deploy in the cloud, on-premises, or at the edge using containers. Recently we also announced the capability to use any Kubernetes cluster and extend machine learning to run close to where your data lives. These resources can be run through a single pane with the management, consistency, and reliability provided by Azure Arc.
Operationalize Responsible AI practices
“Vendors and customers alike are seeking more than just performance and accuracy from machine learning model. When selecting AutoML services, they should prioritize vendors that excel at providing explainable, transparent models with built-in bias detection and compensatory mechanisms.”—Gartner
At Microsoft, we apply our Responsible AI Standard to our product strategy and development lifecycle, and we’ve made it a priority to help customers do the same. We also provide tools and resources to help customers understand, protect, and control their AI solutions, including a Responsible AI Dashboard, bot development guidelines, and built-in tools to help them explain model behavior, test for fairness, and more. Providing a consistent toolset to your data science team not only supports responsible AI implementation but also helps provide greater transparency and enables more consistent, efficient model deployments.
Microsoft is proud to be recognized as a Leader in Cloud AI Developer Services, and we are excited by innovations happening at Microsoft and across the industry that empower developers to tackle real-world challenges with AI. You can read and learn from the complete Gartner Magic Quadrant now.
Learn more
Explore other analyst reports for Azure AI.
Read the latest announcements from Azure AI on the Azure blog.
References
¹Why do 87 percent of data science projects never make it into production? Venture Beat.
Gartner Inc.: “Magic Quadrant for Cloud AI Developer Services,” Van Baker, Svetlana Sicular, Erick Brethenoux, Arun Batchu, Mike Fang, May 23, 2022.
Gartner and Magic Quadrant are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Quelle: Azure
Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports PostgreSQL minor versions 14.3, 13.7, 12.11, 11.16, and 10.21. We recommend that customers upgrade to the latest minor versions to fix known security vulnerabilities in prior PostgreSQL versions and to benefit from the bug fixes, performance improvements, and new features added by the PostgreSQL community. For more information about these releases, see the PostgreSQL community announcement.
Quelle: aws.amazon.com