Bing Notizbuch: Microsoft bringt KI-Chat mit extralangen Prompts
Maximal 18.000 Zeichen können User im Bing-Notizbuch eingeben. Die Software wurde zudem auf GPT-4 aktualisiert und sollte präziser arbeiten. (Bing, Microsoft)
Quelle: Golem
Maximal 18.000 Zeichen können User im Bing-Notizbuch eingeben. Die Software wurde zudem auf GPT-4 aktualisiert und sollte präziser arbeiten. (Bing, Microsoft)
Quelle: Golem
Risk management is a systematic—and necessary—process designed to identify, assess, prioritize, and minimize the impact of uncertain events on an organization. Worldwide end-user spending on security and risk management is projected to total USD215 billion in 2024, an increase of 14.3 percent from 2023, according to new forecast from Gartner, Inc. who also states that in 2023, global security and risk management end-user spending is estimated to reach USD188.1 billion.
Companies around the world are using AI to understand potential threats, make informed decisions, and take actions to avoid or reduce risk.
For example:
Identification: Recognize both internal and external factors that could affect your objectives.
Assessment: Determine which risks are most critical to allow your business to better prioritize.
Mitigation: Implement safety procedures, implement security measures, and develop contingency plans.
Compliance and regulation: Comply with regulations to avoid legal penalties and reputational damage.
Business continuity: Withstand unexpected disruptions and recover more quickly when they occur.
Financial stability: Protect investments, reduce the likelihood of financial crises, and maintain stakeholder confidence.
Strategic decision-making: Make informed choices and navigate uncertainties in rapidly changing business environments.
By analyzing historical data and using machine learning algorithms, businesses can anticipate future risks and their potential impact. This allows for the development of risk mitigation strategies that are both data-driven and forward-looking. Microsoft uses sophisticated data analytics and AI algorithms to better understand and protect against digital threats and cybercriminal activity. In 2021 Microsoft blocked more than 70 billion email and identity threat attacks.
Azure OpenAI Service also aids in operational risk management. It can be employed to monitor and analyze data from IoT devices and sensors, helping companies identify potential operational disruptions or equipment failures before they occur. This proactive approach can prevent costly downtime and production losses, enhancing overall business continuity.
Following, learn how Azure OpenAI Service is helping mitigate risk for diverse businesses around the globe.
Azure OpenAI Service contributes to improved risk management
Azure OpenAI’s natural language processing (NLP) algorithms can analyze vast amounts of unstructured data from various sources, including news articles, social media, and financial reports, to identify emerging risks and trends. This real-time analysis enables businesses to stay proactive in identifying potential threats, such as market fluctuations, regulatory changes, or emerging competitive challenges. By staying ahead of these risks, companies can develop proactive strategies to mitigate or exploit them, thereby enhancing their resilience.
Orca Security
A front-runner in agentless cloud security, Orca Security delivers comprehensive risk management to global enterprises. Impressed with Azure’s stronger privacy and compliance protocols, and Orca Security believed that Microsoft could provide better support. It also guaranteed a 99.9 percent uptime. By integrating OpenAI’s GPT API, they empowered clients to swiftly respond to security alerts with AI-guided solutions. With Azure OpenAI, customers can choose where to store their data depending on the regulations they want to adhere to. Azure also secures data at rest (data stored on physical or virtual disk drives or other media) and in transit (when it’s actively being transferred over a network).
NTTAirports need a reliable network connection for various processes and applications, especially at the bridging point between Wi-Fi networks and public networks. NTT partnered with Microsoft to deliver a smart airport solution, helping the airport digitally transform their operations, including baggage handling, passenger screening, and data transfer. The airport is building a completely private 5G network with NTT across 1000 hectares, allowing them to transform critical business processes like luggage handling and border control. The private network enables digital solutions to optimize the movement of people, baggage, and equipment safely and in real-time across the airport, without the risk of congestion on public networks.
Intapp
“Knowledge-based industries have special needs and require out-of-the-box AI capabilities that deliver specific use cases,” says Lavinia Calvert: Vice President and Legal Industry Principal at Intapp. General software solutions don’t adequately address the needs of financial and professional services firms. With their complex relationships, partner-led operations, and compliance and regulatory mandate, they require purpose-built cloud solutions. Azure underpins all of Intapp’s solutions and AI initiatives, including managing end-to-end risk, compliance, and confidentiality, business development, driving profitability, and effective collaboration. Robust compliance capabilities ensure conformity with leading risk and compliance management practices by using technology designed to meet industry-specific mandates and regulatory requirements. These benefits help Intapp deliver an out-of-the-box industry cloud experience designed for the evolving needs and demanding use cases of financial and professional services firms.
A fundamental aspect of any successful business strategy
Azure OpenAI plays a pivotal role in helping businesses achieve better risk management. Its NLP and machine learning capabilities enable companies to analyze vast amounts of data, identify emerging risks, and make data-driven decisions. By leveraging Azure OpenAI, businesses can enhance their risk resilience, seize opportunities, and navigate the ever-changing business landscape with confidence.
Our commitment to responsible AI
With Responsible AI tools in Azure, Microsoft is empowering organizations to build the next generation of AI apps safely and responsibly. Microsoft has announced the general availability of Azure AI Content Safety, a state-of-the art AI system that helps organizations keep AI-generated content safe and create better online experiences for everyone. Customers—from startup to enterprise—are applying the capabilities of Azure AI Content Safety to social media, education and employee engagement scenarios to help construct AI systems that operationalize fairness, privacy, security, and other responsible AI principles.
Get started with Azure OpenAI Service
Apply for access to Azure OpenAI Service by completing this form.
Learn about Azure OpenAI Service and the latest enhancements.
Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn.
Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service.
Learn how to use the new Chat Completions API (in preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.
Learn more about Azure AI Content Safety.
The post Future proof—Navigating risk management with Azure OpenAI Service appeared first on Azure Blog.
Quelle: Azure
We are honored to be recognized by Gartner® as a Leader in the recently published 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS). In the report, Gartner placed Microsoft furthest in Completeness of Vision.
For years, we’ve understood that the industry trusts Gartner Magic Quadrant reports to provide a holistic review of cloud providers’ capabilities. We’re pleased by this placement in the Gartner report as we continue to prioritize investments to make Azure the global cloud computing platform powering transformation and growth—enabling new possibilities for organizations to embrace the latest technologies and to advance at a rapid pace. With highly secure, state-of-the-art Azure datacenters designed with data residency in mind, Azure hosts one of the most advanced supercomputers in the world.
The Gartner report validates our commitment to empowering customers to reach new heights. We are proud of our purpose-built cloud infrastructure for the era of AI, one that is adaptive across on-prem, multicloud and edge environments, for our complete development platform with advanced tools to accelerate developer productivity, for our AI services and tools to empower innovation, and for our extensive partnerships across a wide range of industry leaders to give customers the choices they desire.
We are honored for this recognition and will continue to build the future together with our customers, no matter where they are in the cloud journey.
Purpose-built cloud infrastructure for the era of AI
We continue to build our AI infrastructure in close collaboration with silicon providers and industry leaders, incorporating the latest innovations in software, power, models, and silicon. Azure works closely with NVIDIA to provide NVIDIA H100 Tensor Core (GPU) graphics processing unit-based virtual machines (VMs) for mid to large-scale AI workloads. We’ve also expanded our partnership with AMD, enabling our customers with choices to meet their unique business needs. These investments have allowed Azure to pioneer performance for AI supercomputing in the cloud and have consistently ranked us as the number one cloud in the top 500 of the world’s supercomputers.
With these additions to the Azure infrastructure hardware portfolio, our platform enables us to deliver the best performance and efficiency across all workloads.
An adaptive cloud across on-prem, multicloud and edge environments
The cloud is evolving to support customer workloads wherever they’re needed. We realize cloud migration is not a one-size-fits-all approach, and that’s why we’re committed to meeting customers where they are in their cloud journey. With Azure you have an adaptive cloud that enables you to thrive in dynamic environments by unifying siloed teams, distributed sites, and sprawling systems into single operations, application, and data model in Azure.
Azure Arc helps customers implement their adaptive cloud strategies, providing a bridge that extends the Azure platform and enables them to build applications and services across datacenters, at the edge, and in multicloud environments. Through a portfolio of services, tools, and infrastructure, organizations can take advantage of Azure services within a single control plane. And with the recent general availability of VMware vSphere enabled by Azure Arc that brings together Azure and the VMware vSphere infrastructure, VMware administrators can empower their developers to use Azure technologies with their existing server-based workloads and new Kubernetes workloads all from Azure.
Every day, cloud administrators and IT professionals are being asked to do more. We consistently hear from customers they’re tasked with a wider range of operations; they are required to collaborate with more users, and support more complex needs to deliver on increasing customer demand—all while integrating more workloads into their cloud environment. To support our customers, we recently launched the public preview of Microsoft Copilot for Azure, a new solution built into Azure that will help simplify how they design, operate, or troubleshoot apps and infrastructure from cloud to edge.
One location to build, test, and deploy AI innovations securely
We’re only just starting to understand the potential of generative AI and how it will transform the way we live and work. Developers are at the heart of this new wave of innovation, pushing the boundaries of what’s possible. With a cloud-first approach developers spend less time on maintaining apps and infrastructure and more time on innovating and ideating, reducing the time to market. What’s more, developers can build with confidence, knowing that Azure has built-in tools and technologies to help ensure a secure and responsible approach from development to deployment.
The public preview of Azure AI Studio gives developers everything they need to build, test, and deploy AI innovations in one convenient location: cutting-edge models, data integration for retrieval augmented generation (RAG), intelligent search capabilities, full-lifecycle model management, and content safety. Azure AI Content Safety is available in Azure AI Studio so developers can easily evaluate model responses all in one unified development platform to quickly and efficiently detect offensive or inappropriate content in text and images, Customers like Heineken, Thread, Moveworks, Manulife , and so many more are putting Azure AI technologies to work for their businesses and their own customers and employees.
Integrated, AI-based tools to help developers innovate efficiently
The integration of AI-based tools in the development cycle is not just accelerating innovation, but also enabling developers to spend more time on strategic, meaningful work, and less time on tasks like debugging and infrastructure management. With Microsoft DevBox, developers can streamline development with secure, ready-to-code workstations in the cloud, leveraging self-service access to high-performance, cloud-based workstations preconfigured and ready-to-code for specific projects. GitHub Copilot uses AI technology to suggest code in the editor maximizing time spent on business logic over boilerplate, with developers reporting they can complete tasks up to 55% faster and feel up to 88% more productive.
With tools that are designed to work seamlessly together, Microsoft’s complete development platform stack empowers developers with flexible solutions, so they can build next-gen apps productively and securely, where they want. GitHub integrates with Azure to provide a continuous integration and deployment (CI/CD) pipeline for developers, and with GitHub Enterprise they can be more efficient with up to 75% improvement in time spent managing tools and code infrastructure. Customers like GM are collaborating with Microsoft to help speed up innovation within their organization, test and learn, and create agile environments using Microsoft development platforms such as GitHub, Visual Studio and Microsoft DevBox.
Create differentiated AI experiences with cloud-native apps
Azure’s cloud-native platform is the best place to run and scale applications while seamlessly embedding Azure’s native AI services. Azure gives developers the choice between control and flexibility, with complete focus on productivity regardless of what option is chosen. Azure App Service allows developers to host .NET, Java, Node.js, and Python web apps and APIs in a fully managed Azure service. Azure takes care of all the infrastructure management like high availability, load balancing, and autoscaling enabling developers to accelerate app development to production up to 50 percent with fully-managed Azure App Service. Developers can further streamline the development process for faster time to market with cloud-based tools and services including Azure Kubernetes Service (AKS), GitHub Enterprise and Advanced Security, Azure Cosmos DB, and Azure Cognitive Services. And with Microsoft Copilot for Azure, developers have an AI companion to help them design, operate, optimize, and troubleshoot everyday tasks with AKS and Kubernetes. Customers such as Sapiens have leveraged the synergy between cloud-native technologies and AI to accelerate their digital transformation and deliver more value to their end users with intelligent apps.
Building the future together
We are dedicated to empowering our customers with technology that unlocks limitless innovation, helping them wherever they are on their technology journey. We use the decades of experience we have in migrating Microsoft’s on-premises workloads to the cloud, to inform how we make it easier for customers and partners to use the cloud—from how we build products, to the real-world migration guidance we provide. It’s why 95 percent of Fortune 500 companies trust Azure with their business on Azure. Customers like AT&T rely on Azure AI for Enterprise Chat GPT and better knowledge mining and the World Bank who is using Azure cloud-based solution to centralized monitoring, performance, resource consumption, and security management across clouds, all in a single package.
And we are not doing this alone. We have a vast global partner network and a growing number of technology partnerships across a wide range of industry leaders such as, Databricks, Netapp, NVIDIA, Oracle, OpenAI, SAP, Snowflake, VMware, and others. We recently announced a partnership to bring Oracle Database Services into Azure to help maximize efficiency and resiliency for our mutual customers’ businesses. Our investments with SAP continue to grow, enhancing performance and resilience for mission critical workloads with new powerful infrastructure options for our SAP customers such as the Azure M-series Mv3 family, the next generation of memory optimized virtual machines (VMs). As we expand partnerships with OpenAI, Meta and Hugging Face, we create more opportunities for organizations and developers to build generative AI experiences, offering the most comprehensive selection of frontier, open and commercial models.
As Microsoft continues to innovate at the speed of AI, Azure is at the foundation of all our innovation, powering all aspects of the Microsoft Cloud and our copilots. Azure makes it possible for organizations to securely embrace the latest technologies and leverage them to create new ones. We endlessly optimize our infrastructure to bring faster, secure, reliable, and more sustainable computing power, so that our customers and partners can build with confidence. Linked by one of the largest interconnected networks on the planet, we’re providing unprecedented scalability, low latency, data residency, and high availability to our customers around the world. Azure provides the cloud platform wherever you are with highly secure, state-of the art Azure datacenters, offering 60+ regions—more than any other cloud provider.
With Azure, customers can trust they are on a secure and well-managed foundation to utilize the latest advancements in AI and cloud-native services, safely and responsibly, to create today’s solutions and tomorrow’s breakthroughs. We are dedicated to the success of our customers and partners, and continue to invest in ways that ensure Azure is the leading choice for customers, big and small around the globe.
Discover resources for your cloud journey
Learn more about Azure Migrate and Modernize and Azure Innovate and how they can help you from migration to AI innovation.
Check out the new and free Azure Migrate application and code assessment feature to save on application migrations.
Find out how to take your AI ambitions from ideation to reality with Azure.
For the latest Azure innovations watch our Ignite 2023 sessions on demand.
Disclaimer:
Gartner, Magic Quadrant for Strategic Cloud Platform Services, David Wright, Dennis Smith, and 4 more, 4 December 2023.
The report was previously known as Magic Quadrant for Cloud Infrastructure and Platform Services (2020-2022) and Magic Quadrant for Cloud Infrastructure as a Service till 2019.
Gartner is a registered trademark and service mark and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request here.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The post Microsoft named a Leader in 2023 Gartner® Magic Quadrant™ for Strategic Cloud Platform Services (SCPS) appeared first on Azure Blog.
Quelle: Azure
This is the third blog in our series on LLMOps for business leaders. Read the first and second articles to learn more about LLMOps on Azure AI.
As we embrace advancements in generative AI, it’s crucial to acknowledge the challenges and potential harms associated with these technologies. Common concerns include data security and privacy, low quality or ungrounded outputs, misuse of and overreliance on AI, generation of harmful content, and AI systems that are susceptible to adversarial attacks, such as jailbreaks. These risks are critical to identify, measure, mitigate, and monitor when building a generative AI application.
Note that some of the challenges around building generative AI applications are not unique to AI applications; they are essentially traditional software challenges that might apply to any number of applications. Common best practices to address these concerns include role-based access (RBAC), network isolation and monitoring, data encryption, and application monitoring and logging for security. Microsoft provides numerous tools and controls to help IT and development teams address these challenges, which you can think of as being deterministic in nature. In this blog, I’ll focus on the challenges unique to building generative AI applications—challenges that address the probabilistic nature of AI.
First, let’s acknowledge that putting responsible AI principles like transparency and safety into practice in a production application is a major effort. Few companies have the research, policy, and engineering resources to operationalize responsible AI without pre-built tools and controls. That’s why Microsoft takes the best in cutting edge ideas from research, combines that with thinking about policy and customer feedback, and then builds and integrates practical responsible AI tools and methodologies directly into our AI portfolio. In this post, we’ll focus on capabilities in Azure AI Studio, including the model catalog, prompt flow, and Azure AI Content Safety. We’re dedicated to documenting and sharing our learnings and best practices with the developer community so they can make responsible AI implementation practical for their organizations.
Mapping mitigations and evaluations to the LLMOps lifecycle
We find that mitigating potential harms presented by generative AI models requires an iterative, layered approach that includes experimentation and measurement. In most production applications, that includes four layers of technical mitigations: (1) the model, (2) safety system, (3) metaprompt and grounding, and (4) user experience layers. The model and safety system layers are typically platform layers, where built-in mitigations would be common across many applications. The next two layers depend on the application’s purpose and design, meaning the implementation of mitigations can vary a lot from one application to the next. Below, we’ll see how these mitigation layers map to the large language model operations (LLMOps) lifecycle we explored in a previous article.
Fig 1. Enterprise LLMOps development lifecycle.
Ideating and exploring loop: Add model layer and safety system mitigations
The first iterative loop in LLMOps typically involves a single developer exploring and evaluating models in a model catalog to see if it’s a good fit for their use case. From a responsible AI perspective, it’s crucial to understand each model’s capabilities and limitations when it comes to potential harms. To investigate this, developers can read model cards provided by the model developer and work data and prompts to stress-test the model.
Model
The Azure AI model catalog offers a wide selection of models from providers like OpenAI, Meta, Hugging Face, Cohere, NVIDIA, and Azure OpenAI Service, all categorized by collection and task. Model cards provide detailed descriptions and offer the option for sample inferences or testing with custom data. Some model providers build safety mitigations directly into their model through fine-tuning. You can learn about these mitigations in the model cards, which provide detailed descriptions and offer the option for sample inferences or testing with custom data. At Microsoft Ignite 2023, we also announced the model benchmark feature in Azure AI Studio, which provides helpful metrics to evaluate and compare the performance of various models in the catalog.
Safety system
For most applications, it’s not enough to rely on the safety fine-tuning built into the model itself. large language models can make mistakes and are susceptible to attacks like jailbreaks. In many applications at Microsoft, we use another AI-based safety system, Azure AI Content Safety, to provide an independent layer of protection to block the output of harmful content. Customers like South Australia’s Department of Education and Shell are demonstrating how Azure AI Content Safety helps protect users from the classroom to the chatroom.
This safety runs both the prompt and completion for your model through classification models aimed at detecting and preventing the output of harmful content across a range of categories (hate, sexual, violence, and self-harm) and configurable severity levels (safe, low, medium, and high). At Ignite, we also announced the public preview of jailbreak risk detection and protected material detection in Azure AI Content Safety. When you deploy your model through the Azure AI Studio model catalog or deploy your large language model applications to an endpoint, you can use Azure AI Content Safety.
Building and augmenting loop: Add metaprompt and grounding mitigations
Once a developer identifies and evaluates the core capabilities of their preferred large language model, they advance to the next loop, which focuses on guiding and enhancing the large language model to better meet their specific needs. This is where organizations can differentiate their applications.
Metaprompt and grounding
Proper grounding and metaprompt design are crucial for every generative AI application. Retrieval augmented generation (RAG), or the process of grounding your model on relevant context, can significantly improve overall accuracy and relevance of model outputs. With Azure AI Studio, you can quickly and securely ground models on your structured, unstructured, and real-time data, including data within Microsoft Fabric.
Once you have the right data flowing into your application, the next step is building a metaprompt. A metaprompt, or system message, is a set of natural language instructions used to guide an AI system’s behavior (do this, not that). Ideally, a metaprompt will enable a model to use the grounding data effectively and enforce rules that mitigate harmful content generation or user manipulations like jailbreaks or prompt injections. We continually update our prompt engineering guidance and metaprompt templates with the latest best practices from the industry and Microsoft research to help you get started. Customers like Siemens, Gunnebo, and PwC are building custom experiences using generative AI and their own data on Azure.
Fig 2. Summary of responsible AI best practices for a metaprompt.
Evaluate your mitigations
It’s not enough to adopt the best practice mitigations. To know that they are working effectively for your application, you will need to test them before deploying an application in production. Prompt flow offers a comprehensive evaluation experience, where developers can use pre-built or custom evaluation flows to assess their applications using performance metrics like accuracy as well as safety metrics like groundedness. A developer can even build and compare different variations of their metaprompts to assess which may result in the higher quality outputs aligned to their business goals and responsible AI principles.
Fig 3. Summary of evaluation results for a prompt flow built in Azure AI Studio.
Fig 4. Details for evaluation results for a prompt flow built in Azure AI Studio.
Operationalizing loop: Add monitoring and UX design mitigations
The third loop captures the transition from development to production. This loop primarily involves deployment, monitoring, and integrating with continuous integration and continuous deployment (CI/CD) processes. It also requires collaboration with the user experience (UX) design team to help ensure human-AI interactions are safe and responsible.
User experience
In this layer, the focus shifts to how end users interact with large language model applications. You’ll want to create an interface that helps users understand and effectively use AI technology while avoiding common pitfalls. We document and share best practices in the HAX Toolkit and Azure AI documentation, including examples of how to reinforce user responsibility, highlight the limitations of AI to mitigate overreliance, and to ensure users are aware that they are interacting with AI as appropriate.
Monitor your application
Continuous model monitoring is a pivotal step of LLMOps to prevent AI systems from becoming outdated due to changes in societal behaviors and data over time. Azure AI offers robust tools to monitor the safety and quality of your application in production. You can quickly set up monitoring for pre-built metrics like groundedness, relevance, coherence, fluency, and similarity, or build your own metrics.
Looking ahead with Azure AI
Microsoft’s infusion of responsible AI tools and practices into LLMOps is a testament to our belief that technological innovation and governance are not just compatible, but mutually reinforcing. Azure AI integrates years of AI policy, research, and engineering expertise from Microsoft so your teams can build safe, secure, and reliable AI solutions from the start, and leverage enterprise controls for data privacy, compliance, and security on infrastructure that is built for AI at scale. We look forward to innovating on behalf of our customers, to help every organization realize the short- and long-term benefits of building applications built on trust.
Learn more
Explore Azure AI Studio.
Watch the 45-minute breakout session on “Evaluating and designing Responsible AI Systems for the Real World” and “End-to-End AI App Development: Prompt Engineering to LLMOps” from Microsoft Ignite 2023.
Take the 45-minute Introduction to Azure AI Studio course on Microsoft Learn.
The post Infuse responsible AI tools and practices in your LLMOps appeared first on Azure Blog.
Quelle: Azure
Since launching Microsoft Azure Space, we’ve been focused on three main goals:
Connect anyone, anywhere, at any security level, back to the full power and potential of the Microsoft Cloud. This includes working with exciting space start-ups like Muon Space and True Anomaly as well as government agencies like the United States Space Force.
Enable real-time analysis across petabytes of data gathered on orbit, so that our customers can take immediate action that delivers on their mission.
Empower developers to develop, deploy, and run their applications on orbit.
As customers and partners have adopted and experimented with the Azure Space portfolio, new and interesting use cases are emerging that illustrate what’s possible. Today, we are excited to share some of those customer stories, along with updates for Azure Orbital Ground Station, Azure Orbital’s software development kit, and Microsoft Planetary Computer. While it is still early days, these stories offer a glimpse at understanding how an accessible space layer can transform the way organizations across the public and private sectors serve their missions.
Satellite operators are using Azure Orbital Ground Station for spacecraft communications
Delivering space data to Earth requires a secure, robust ground network with low latency and high throughput—presenting various challenges for the operator. Opportunities for satellite contacts are limited by ground station coverage, and it can be difficult and expensive to achieve sufficient capacity.
Azure Space is enabling partner-powered, space-to-cloud transmissions with end-to-end support for space data downlink, processing, storage, analytics, and dissemination. Azure Orbital Ground Station provides easy, secure access to communication products and services required to support all phases of satellite missions—from launch to operations and decommissioning. Mission operations are seamless with self-service scheduling of contacts in Microsoft Azure with a managed data path.
Muon Space collaborates with Microsoft for its first two launches
Learn More
Accelerating the pace of innovation with Azure Space and our partners chevron_right
As previously announced, Muon Space selected Microsoft to support its first-ever launch, using Azure Orbital Ground Station as the sole ground provider for their MuSat-1 mission. Muon Space is ramping up for the launch of its second satellite, MuSat-2, in early 2024—again leveraging Azure Orbital Ground Station to bring down data gathered by a prototype microwave sensor. Muon Space will provide space weather and ionospheric data to the United States Space Force.
“Launch and early operations is always a very stressful period for satellite operators. With Azure Orbital, we achieved contact with MuSat-1 within six minutes of separation from the launch vehicle. This early success, along with our continuous on-orbit operations, gives us confidence to use Azure Orbital for future missions.”
Paige Holland, Operations Automation Lead, Muon Space
Azure Orbital Ground Station for government customers
Learn More
Azure Space technologies advance digital transformation across government agencies chevron_right
We’ve seen increasing momentum with commercial customers adopting Azure Orbital Ground Station. Azure Orbital Ground Station is now available in preview within the Microsoft Azure Government region. Introducing Azure Orbital Ground Station into Azure Government enables government customers to fully leverage a global partner ecosystem of ground stations, cloud modems, self-service scheduling, and a managed data path.
True Anomaly and Viasat are leveraging Azure Orbital Ground Station in Azure Government for space domain awareness
True Anomaly selected Microsoft and Viasat to provide ground support for its upcoming launch of two Jackal spacecrafts—autonomous orbital vehicles for rendezvous and proximity operations. True Anomaly will schedule satellite contacts at Viasat Real Time Earth (RTE) sites using Azure Orbital Ground Station in Azure Government.
“Azure Orbital Ground Station’s managed data path makes it easy to connect to a global ground network. With one click of a button on Azure, we gain access to all Viasat Real Time Earth sites and simply indicate where the data from our spacecraft should land, while Microsoft handles the orchestration and connectivity. Working within Azure Government lets us meet our customers where they are.”
Jared Kirkpatrick, Jackal Block 1 Project Manager, True Anomaly
Provisioning fiber to Viasat sites
To provide customers with their data as quickly and securely as possible, Microsoft is provisioning high-speed, real-time cloud connectivity to select Viasat RTE sites, allowing customers to stream multi-gigabit per second downlinks.
“Viasat is collaborating with Microsoft to enable low-touch access to space communication solutions for our customers like True Anomaly. Azure Orbital Ground Station offers a common data plane and API to access our global antenna network that includes very high throughput data downlinks over Ka-band.”
Aaron Hawkins, Real Time Earth Director for Strategic Partnerships, Viasat
Watch this video to learn more about how our customers are using Azure Orbital Ground Support in support of their missions.
Gaining insights from space data
As the volume and value of space data continues to grow, having easy and affordable access to ground infrastructure will play a central role in serving customers’ mission-critical operations. So too will be the ability of customers to access and analyze near real-time data gathered from space.
The future of the cloud will incorporate space solutions such as satellite connectivity and Earth observational data. Space-based sensors observing Earth and satellite data will increasingly be used to improve our data insights on the ground.
The latest episode in the Microsoft Future of the Cloud Webinar series explores the role of space data in creating “a planetary computer for a sustainable future.”
Watch the series to learn about:
Leveraging the potential of the cloud and space to enable data-driven decision making for your organization and missions.
How Microsoft Planetary Computer supports global efforts of environmental sustainability and Earth science by enabling developers to build tools for measuring, monitoring, modeling, and managing healthy ecosystems.
The potential opportunities that a new Azure Space data solution built on the Microsoft Planetary Computer will create for Microsoft’s customers to unlock the full potential of their Earth observation data.
Empowering developers to build, deploy, and operate on-orbit
Empowering any developer to build and deploy applications into space will be critical to lowering the barrier to entry for participating in the space industry. Azure Orbital’s software development kit provides satellite operators with the tools and capabilities to unlock new business models and enable mission requirements.
Loft Orbital customer onboarding for virtual missions on YAM-6 is now open
Over the past two years, Microsoft and Loft Orbital have been collaborating to lower the barriers to entry for space. A key pillar in this collaboration has been the enablement of “virtual missions,” making it easier for developers to access space capabilities without having to develop or launch their own hardware in space, and instead by simply writing software applications.
YAM-6 is the first satellite fully dedicated to offering this capability. Last week, we announced that YAM-6 is now publicly accepting customers for virtual missions for 2024. General availability is planned for April 2024.
“Our joint product offering leverages Loft’s space infrastructure and Microsoft’s cloud and ground infrastructure to make it simple for anyone to deploy AI applications in space at scale. YAM-6 is supported by the Azure Orbital product portfolio, including Azure Orbital Ground Station, Azure Orbital space edge on-orbit application framework.”
Pierre Damien Voujour, Cofounder and Chief Executive Officer, Loft Orbital
Space Compass leveraging virtual missions to prove out concepts quickly
Space Compass—a joint venture company between NTT, Japanese Information and Communications Technology (ICT) leader, and SKY Perfect JSAT Corporation, Asia’s largest satellite operator—is on a multi-year mission to deploy space-edge computing capabilities together with an ultra-speed optical data relay network. This will allow space data users to utilize real-time data much more efficiently in the cloud environment (see Figure 1).
Over the past three months, Space Compass has been working with Microsoft to explore use cases in an effort to better understand and demonstrate the value of on-orbit processing, and how to shape their future space infrastructure to support it.
“We are very excited to closely collaborate with the Microsoft team to develop a cutting-edge space computing solution. This is one of our key initiatives to realize the Space Integrated Computing Network.”
Shigehiro Hori, Co-Chief Executive Officer, Space Compass
Space Compass will be running a virtual mission on YAM-6 to demonstrate AI-based ship detection. This demonstration paves the way and de-risks future missions that will be flown on Space Compasses’ own satellites. Learn more about the Space Compass mission.
Figure 1: Ship detection program.
Both Microsoft and Space Compass believe in the power of on-orbit processing, bringing AI to the edge in space with high-speed connectivity to the cloud.
Learn More
These exciting use cases are just the beginning. Learn more about the ways that Microsoft Azure Space can transform how you deliver on your mission:
Sign up for news and updates on how Azure Space data can advance your organization and missions, or complete this form to get in touch with the Azure Space team.
Visit the Azure Orbital Ground Station website and documentation page.
The post Create new ways to serve your mission with Microsoft Azure Space appeared first on Azure Blog.
Quelle: Azure
Many AI systems are designed for collaboration: Copilot is one of them. Copilot—powered by Microsoft Azure OpenAI Service—allows you to simplify how you design, operate, optimize, and troubleshoot apps and infrastructure from cloud to edge. It utilizes language models, the Azure control plane, and insights about your Azure and Arc-enabled assets. All of this is carried out within the framework of Azure’s steadfast commitment to safeguarding data security and privacy.
A brief history of AI collaboration with copilots
In aviation terms a copilot is responsible for assisting the pilot in command, sharing control of the airplane, and handling various navigational and operational tasks. Having a copilot ensures that there is a second trained professional who can take over controls if the main pilot is unable to perform their duties, thereby enhancing safety.
Microsoft originally introduced the concept of a copilot two years ago as an AI pair programmer in GitHub to assist developers in generating code, catching errors, and suggesting improvements. Today, Azure OpenAI Service powers more than just GitHub Copilot. Microsoft 365 Copilot performs as a digital companion for your whole life creating a single Copilot user experience across Bing, Edge, Microsoft 365, and Windows.
AI at the service of others
Microsoft Copilot represents a profound shift in how AI-powered software can support the user experience, the architecture, the services that it uses, and how we think about safety and security.
“We now have machines that are so fluent in human language. Every place that you interact with a machine ought to be much more fluent in human natural language and I think we’ll start to see that change coming in a lot of different places as well and it will really redefine the interfaces that we’re used to.”—Eric Boyd, head of AI at Microsoft.
Copilots powered by Azure OpenAI Service can be trained on a specific set of data to adapt the model to a specific domain. We’re seeing developments across a variety of sectors. For example:
Language translation: Language translation models can help bridge communication gaps between people who speak different languages. This can be particularly useful in situations such as emergency response, disaster relief, and international diplomacy.
Educational support: Educational chatbots that can help students with homework, provide personalized tutoring, and answer questions related to different subjects.Crime investigation: Financial crimes such as money laundering and fraud are linked to human trafficking, child exploitation, terrorism, theft, and wildlife trafficking. SymphonyAI’s new Sensa Copilot acts as a sophisticated AI assistant to a financial crime investigator by automatically collecting, collating, and summarizing financial and third-party information.
Learn More
Microsoft Azure AI Fundamentals: Generative AI chevron_right
Medical reporting: Generative AI has the potential to increase the power and accessibility of self-service reporting, making it easier for healthcare organizations and their providers to identify operational improvements, including ways to reduce costs and to find answers to questions both locally and within a broader context.
Climate change: Azure OpenAI Service can be used to generate educational materials or assist in research on topics related to climate change, including natural disasters, global warming, and environmental conservation.
Inclusive and diverse avatars: DeepBrain AI includes a library of photo-realistic and virtual avatars that businesses can use for training videos, news broadcasts, marketing videos, and more. An integral part of the digital world, avatars foster a sense of inclusivity and diversity by allowing people to choose representations that reflect their individuality, regardless of physical appearance or other limitations.
Industrial advances: ABB is partnering with Microsoft to integrate Azure OpenAI Service into its ABB Ability™ Genix Industrial Analytics and AI suite with the goal of boosting real-time insights and asset longevity by 20% and reducing unplanned downtimes by 60%. Additionally, it will aid in monitoring and optimizing industrial emissions and energy usage, contributing to sustainability goals.
Prioritizing human agency
The Copilot System powered by Azure OpenAI Service builds on our existing commitments to data security and privacy in the enterprise. Copilot automatically inherits your organization’s security, compliance, and privacy policies for Microsoft 365. Data is managed in line with our current commitments. Copilot prioritizes human agency and puts the user in control. This includes noting limitations, providing links to sources, and prompting users to review, fact-check, and fine-tune content based on their own knowledge and judgment.
AI systems can analyze and learn from copious amounts of data and help employees make decisions based on that data. They can be programmed for specific tasks such as image recognition and natural language processing.
While technology has the potential to generate both favorable and adverse consequences, technological developments such as Copilot are proving far more likely to help society steer a straight and humane course toward a future that benefits us all.
Learn, connect, and explore with the latest technologies announced at Microsoft Ignite.
Our commitment to responsible AI
Explore
Empowering responsible AI practices chevron_right
With Responsible AI tools in Azure, Microsoft is empowering organizations to build the next generation of AI apps safely and responsibly. Microsoft has announced the general availability of Azure AI Content Safety, a state-of-the art AI system that helps organizations keep AI-generated content safe and create better online experiences for everyone. Customers—from startup to enterprise—are applying the capabilities of Azure AI Content Safety to social media, education, and employee engagement scenarios to help construct AI systems that operationalize fairness, privacy, security, and other responsible AI principles.
Get started with Azure OpenAI Service
Apply for access to Azure OpenAI Service by completing this form.
Learn about Azure OpenAI Service and the latest enhancements.
Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn.
Read our partner announcement blog, empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service.
Learn how to use the new Chat Completions API (preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.
Learn more about Azure AI Content Safety.
The post Azure OpenAI Service powers the Microsoft Copilot ecosystem appeared first on Azure Blog.
Quelle: Azure
Amazon SageMaker Canvas unterstützt jetzt umfassende Datenaufbereitungsfunktionen, die von Amazon SageMaker Data Wrangler unterstützt werden. Sie können jetzt Tabellen-, Zeitreihen-, Bild- und Textdaten aus über 50 Datenquellen importieren, Berichte zu Datenqualität und Erkenntnissen erstellen und Daten mithilfe von über 300 integrierten Operatoren transformieren, um Machine Learning (ML)-Modelle zu erstellen und zu verwenden, ohne Code schreiben zu müssen. Durch diese Integration können Sie die Datenaufbereitung für ML mithilfe von SageMaker Canvas von Wochen auf Minuten verkürzen.
Quelle: aws.amazon.com
AWS Compute Optimizer unterstützt jetzt in den Regionen AWS GovCloud (USA) die Möglichkeit, Ihre Empfehlungen zur Größenanpassung nach Tags zu filtern. Dazu gehören Tag-Schlüssel, Tag-Schlüssel- und -Wert-Paare oder Kombinationen aus beiden. Tag-Filterung ist auf folgenden Seiten mit Empfehlungen zur Größenanpassung verfügbar: Amazon Elastic Compute Cloud (EC2)-Instance-Typen, Amazon Elastic Block Store (EBS)-Volumes, AWS Lambda-Funktionen und Amazon Elastic Container Service (ECS)-Services auf AWS Fargate.
Quelle: aws.amazon.com
Kunden von Amazon FinSpace mit Managed kdb Insights können jetzt KDB-Cluster für allgemeine Zwecke erstellen, die Unterstützung für eine größere Anzahl von KDB-Funktionen und Datenspeicherkonfigurationen innerhalb eines einzigen KDB-Prozesses bieten. Dadurch kann ein breiteres Spektrum von Kundenanwendungen und Prozessen direkt in Managed kdb Insights ausgeführt werden.
Quelle: aws.amazon.com
Netflix hat seinen ersten Engagement Report veröffentlicht. Was sich aus einem Datensatz mit mehr als 18.000 Positionen herauslesen lässt. Von Peter Osteried (Streaming, Netflix)
Quelle: Golem