The seven pillars of modern AI development: Leaning into the era of custom copilots

In an era where technology is rapidly advancing and information consumption is exponentially growing, there are many new opportunities for businesses to manage, retrieve, and utilize knowledge. The integration of generative AI (content creation by AI) and knowledge retrieval mechanisms is revolutionizing knowledge management, making it more dynamic and readily available. Generative AI offers businesses more efficient ways to capture and retrieve institutional knowledge, improving user productivity by reducing time spent searching for information.

This business transformation is enabled by copilots. Azure AI Studio is the place for AI developers to build custom copilot experiences.

Copilots infuse data with large language models (LLMs) to improve the response generation process. This process can be described as follows: the system receives a query (e.g., a question); before responding, it fetches pertinent information related to the query from a designated data source; it then uses the combined content and query to guide the language model in formulating an appropriate response.
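The retrieve-then-generate loop just described can be sketched in a few lines. The in-memory document list, the word-overlap `retrieve` function, and `build_prompt` below are hypothetical stand-ins for a real retrieval service (such as Azure AI Search) and a real LLM call:

```python
# Minimal sketch of the copilot retrieval pattern described above.
# The document store and scoring are toy stand-ins for a real
# retrieval service such as Azure AI Search.

DOCUMENTS = [
    "Contoso's return policy allows returns within 30 days of purchase.",
    "Contoso ships to all US states; delivery takes 3-5 business days.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Fetch the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Combine retrieved content and the user query to guide the model."""
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

query = "What is the return policy?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
print(prompt)
```

The prompt that reaches the model now carries the grounding passage alongside the user's question, which is the essence of the pattern.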

The power of copilots is in their adaptability, particularly their unparalleled ability to seamlessly and securely tap into both internal and external data sources. This dynamic, always-updated integration doesn’t just increase the accessibility and usability of enterprise knowledge, it improves the efficiency and responsiveness of businesses to ever-evolving demands.

Although there is much excitement for copilot pattern-based solutions, it’s important for businesses to carefully consider the design elements needed to build a durable, adaptable, and effective approach. How can AI developers ensure their solutions do not just capture attention, but also enhance customer engagement? Here are seven pillars to think through when building your custom copilot.

Retrieval: Data ingestion at scale

Data connectors are vital for businesses aiming to harness the depth and breadth of their data across multiple expert systems using a copilot. These connectors serve as the gateways between disparate data silos, connecting valuable information and making it accessible and actionable in a unified search experience. Developers can ground models on their enterprise data and seamlessly integrate structured, unstructured, and real-time data using Microsoft Fabric.

For copilots, data connectors are no longer just tools. They are indispensable assets that make real-time, holistic knowledge management a tangible reality for enterprises.

Enrichment: Metadata and role-based authentication

Enrichment is the process of enhancing, refining, and adding value to raw data. In the context of LLMs, enrichment often revolves around adding layers of context, refining data for more precise AI interactions, and ensuring data integrity. This helps transform raw data into a valuable resource.

When building custom copilots, enrichment helps data become more discoverable and precise across applications. By enriching the data, generative AI applications can deliver context-aware interactions. 

LLM-driven features often rely on specific, proprietary data. Simplifying data ingestion from multiple sources is critical to create a smooth and effective model. To make enrichment even more dynamic, introducing templating can be beneficial. Templating means crafting a foundational prompt structure that can be filled in real time with the necessary data, safeguarding and tailoring AI interactions.
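Templating as described above can be sketched with the standard library. The field names, persona text, and facts below are illustrative, not an Azure API:

```python
from string import Template

# A foundational prompt structure, filled at run time with retrieved data.
# Company name, facts, and question are illustrative placeholders.
PROMPT_TEMPLATE = Template(
    "You are a helpful assistant for $company.\n"
    "Only answer using the facts below; if unsure, say you don't know.\n"
    "Facts:\n$facts\n"
    "Question: $question"
)

prompt = PROMPT_TEMPLATE.substitute(
    company="Contoso",
    facts="- Support hours are 9am-5pm PT, Monday through Friday.",
    question="When can I call support?",
)
print(prompt)
```

Because the guardrail text ("only answer using the facts below") lives in the template rather than in each call site, every interaction inherits it automatically.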

The combined strength of data enrichment and chunking leads to AI quality improvements, especially when handling large datasets. Using enriched data, retrieval mechanisms can grasp cultural, linguistic, and domain-specific nuances. This results in more accurate, diverse, and adaptable responses, bridging the gap between machine understanding and human-like interactions.
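Chunking can be sketched as a sliding window over the text. Real pipelines usually split by tokens and respect document structure, but a word-based sketch with illustrative sizes shows the idea:

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from either neighboring chunk.
    """
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

# Tiny illustrative sizes so the windows are visible at a glance.
chunks = chunk_text("one two three four five six", chunk_size=4, overlap=2)
print(chunks)  # ['one two three four', 'three four five six', 'five six']
```

Each chunk is then enriched with metadata (source, section, access role) before embedding and indexing.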

Search: Navigating the data maze 

Advanced embedding models are changing the way we understand search. By transforming words or documents into vectors, these models capture the intrinsic meaning and relationships between them. Azure AI Search, enhanced with vector search capabilities, is a leader in this transformation. Using Azure AI Search with the power of semantic reranking gives users contextually pertinent results, regardless of their exact search keywords.
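The idea of capturing meaning as vectors can be illustrated with cosine similarity. The three-dimensional "embeddings" below are made up for illustration; real embedding models produce vectors with hundreds or thousands of dimensions:

```python
import math

# Toy 3-dimensional "embeddings"; real models produce vectors with
# hundreds or thousands of dimensions.
EMBEDDINGS = {
    "car":   [0.90, 0.10, 0.00],
    "auto":  [0.85, 0.15, 0.05],
    "apple": [0.00, 0.20, 0.90],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based similarity: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "car" and "auto" point in nearly the same direction; "apple" does not.
print(cosine_similarity(EMBEDDINGS["car"], EMBEDDINGS["auto"]))
print(cosine_similarity(EMBEDDINGS["car"], EMBEDDINGS["apple"]))
```

This is why vector search finds "auto" when a user types "car," even though the keywords never match.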

With copilots, search processes can leverage both internal and external resources, absorbing new information without extensive model training. By continuously incorporating the latest available knowledge, responses are not just accurate but also deeply contextual, setting the stage for a competitive edge in search solutions.

The basis of search is an expansive data ingestion pipeline: source document retrieval, data segmentation, embedding generation, vectorization, and index loading. When a user inputs a query, the query is vectorized before heading to Azure AI Search, which retrieves the most relevant results, ensuring they align closely with the user’s intent.

Continuous innovation to refine search capabilities has led to a new concept of hybrid search. This innovative approach melds the familiarity of keyword-based search with the precision of vector search techniques. The blend of keyword, vector, and semantic ranking further improves the search experience, delivering more insightful and accurate results for end users.
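One common way to blend keyword and vector result lists is Reciprocal Rank Fusion (RRF), the merging strategy Azure AI Search documents for hybrid queries. A minimal sketch with made-up document IDs:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked result lists: each document scores sum(1 / (k + rank))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from a keyword query and a vector query.
keyword_results = ["doc_a", "doc_b", "doc_c"]
vector_results = ["doc_b", "doc_d", "doc_a"]
fused = reciprocal_rank_fusion([keyword_results, vector_results])
print(fused)  # ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

A document that ranks well in both lists (here `doc_b`) rises to the top, which is exactly the behavior hybrid search is after.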

Prompts: Crafting efficient and responsible interactions

In the world of AI, prompt engineering provides specific instructions to guide the LLM’s behavior and generate desired outputs. Crafting the right prompt is crucial to get not just accurate, but safe and relevant responses that meet user expectations. 

Prompt efficiency requires clarity and context. To maximize the relevance of AI responses, it is important to be explicit with instructions. For instance, if concise data is needed, specify that you want a short answer. Context also plays a central role. Instead of just asking about market trends, specify current digital marketing trends in e-commerce. It can even be helpful to provide the model with examples that demonstrate the intended behavior.
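The market-trends advice above can be made concrete. The strings below are illustrative, contrasting a vague prompt with one that gives explicit instructions, narrowed context, and a short example of the intended behavior (a few-shot prompt):

```python
# Illustrative prompts only; all text is made up for the example.
vague_prompt = "Tell me about market trends."

specific_prompt = """Summarize current digital marketing trends in e-commerce.
Instructions: answer in at most two sentences.
Example of the expected style:
Q: Summarize current trends in mobile payments.
A: Wallet apps and one-tap checkout are growing fastest among younger shoppers.
Q: Summarize current digital marketing trends in e-commerce.
A:"""

print(specific_prompt)
```

The second prompt constrains length, scope, and format, so the model has far less room to drift off target.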

Azure AI prompt flow enables users to add content safety filters that detect and mitigate harmful content, like jailbreaks or violent language, in inputs and outputs when using open source models. Or, users can opt to use models offered through Azure OpenAI Service, which have content filters built-in. By combining these safety systems with prompt engineering and data retrieval, customers can improve the accuracy, relevance, and safety of their application. 

Learn More

Get started with prompt flow

Achieving quality AI responses often involves a mix of tools and tactics. Regularly evaluating and updating prompts helps align responses with business trends. Intentionally crafting prompts for critical decisions, generating multiple AI responses to a single prompt, and then selecting the best response for the use case is a prudent strategy. Using a multi-faceted approach helps AI to become a reliable and efficient tool for users, driving informed decisions and strategies.
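The "generate several responses, pick the best" tactic above can be sketched as follows. Here `drafts` stands in for multiple LLM generations of one prompt, and the length-based `score` is a toy proxy for a real evaluator such as a relevance metric, a reward model, or human review:

```python
def score(response: str) -> float:
    """Toy evaluator: prefer more detailed answers (a real system would
    use a relevance metric, a reward model, or human review)."""
    return float(len(response.split()))

def best_of_n(drafts: list[str]) -> str:
    """Select the highest-scoring of several generated responses."""
    return max(drafts, key=score)

# Stand-ins for three generations sampled from the same prompt.
drafts = [
    "Sales went up.",
    "Revenue rose 4% last quarter, driven by the new e-commerce channel.",
    "Revenue rose 4% last quarter.",
]
print(best_of_n(drafts))
```

Swapping in a better `score` function (or a human in the loop) upgrades the selection without touching the generation side.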

User Interface (UI): The bridge between AI and users 

An effective UI offers meaningful interactions to guide users through their experience. In the ever-evolving landscape of copilots, providing accurate and relevant results is always the goal. However, there can be instances when the AI system might generate responses that are irrelevant, inaccurate, or ungrounded. A UX team should implement human-computer interaction best practices to mitigate these potential harms, for example by providing output citations, putting guardrails on the structure of inputs and outputs, and by providing ample documentation on an application’s capabilities and limitations. 

To mitigate potential issues like harmful content generation, various tools should be considered. For example, classifiers can be employed to detect and flag possibly harmful content, guiding the system’s subsequent actions, whether that’s changing the topic or reverting to a conventional search. Azure AI Content Safety is a great tool for this.
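A classifier gate in front of the UI can be sketched like this. The keyword matcher is a toy stand-in for a real classification service such as Azure AI Content Safety, and the fallback message is illustrative:

```python
# Toy stand-in for a content classifier such as Azure AI Content Safety.
BLOCKED_TERMS = {"violence", "weapon"}

def is_harmful(text: str) -> bool:
    """Flag text containing a blocked term (a real classifier would
    score categories like hate, violence, and self-harm instead)."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def respond(ai_output: str) -> str:
    """Gate the model output before it reaches the user interface."""
    if is_harmful(ai_output):
        # Fall back instead of showing the flagged output.
        return "I can't help with that. Let's talk about something else."
    return ai_output

print(respond("Here are the store's opening hours: 9am-6pm."))
```

The same gate can route flagged turns to a conventional search result rather than a refusal, depending on the product's needs.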

A core principle for Retrieval Augmented Generation (RAG)-based search experiences is user-centric design, emphasizing an intuitive and responsible user experience. The journey for first-time users should be structured to ensure they comprehend the system’s capabilities, understand its AI-driven nature, and are aware of any limitations. Features like chat suggestions, clear explanations of constraints, feedback mechanisms, and easily accessible references enhance the user experience, fostering trust and minimizing over-reliance on the AI system.

Continuous improvement: The heartbeat of AI evolution 

The true potential of an AI model is realized through continuous evaluation and improvement. It is not enough to deploy a model; it needs ongoing feedback, regular iterations, and consistent monitoring to ensure it meets evolving needs. AI developers need powerful tools to support the complete lifecycle of LLMs, including continuously reviewing and improving AI quality. This not only brings the idea of continuous improvement to life, but also ensures that it is a practical, efficient process for developers. 

Identifying and addressing areas of improvement is a fundamental step to continuously refine AI solutions. It involves analyzing the system’s outputs, such as ensuring the right documents are retrieved, and going through all the details of prompts and model parameters. This level of analysis helps identify potential gaps and areas for refinement to optimize the solution.

Prompt flow in Azure AI Studio is tailored for LLMs and transforms the LLM development lifecycle. Features like visualizing LLM workflows and the ability to test and compare the performance of various prompt versions empower developers with agility and clarity. As a result, the journey from conceptualizing an AI application to deploying it becomes more coherent and efficient, ensuring robust, enterprise-ready solutions.

Unified development

The future of AI is not just about algorithms and data. It’s about how we retrieve and enrich data, create robust search mechanisms, articulate prompts, infuse responsible AI best practices, interact with our systems, and continuously refine them.

AI developers need to integrate pre-built services and models, prompt orchestration and evaluation, content safety, and responsible AI tools for privacy, security, and compliance. Azure AI Studio offers a comprehensive model catalog, including the latest multimodal models like GPT-4 Turbo with Vision coming soon to Azure OpenAI Service and open models like Falcon, Stable Diffusion, and the Llama 2 managed APIs. Azure AI Studio is a unified platform for AI developers. It ushers in a new era of generative AI development, empowering developers to explore, build, test, and deploy their AI innovations at scale. VS Code, GitHub Codespaces, Semantic Kernel, and LangChain integrations support a code-centric experience.

Whether creating custom copilots, enhancing search, delivering call center solutions, developing bots and bespoke applications, or a combination of these, Azure AI Studio provides the necessary support.

Learn more about the power of Azure AI Studio

As AI continues to evolve, it is essential to keep these seven pillars in mind to help build systems that are efficient, responsible, and always at the cutting-edge of innovation.

Are you eager to tap into the immense capabilities of AI for your enterprise? Start your journey today with Azure AI Studio! 

We’ve pulled together two GitHub repos to help you get building quickly. The Prompt Flow Sample showcases prompt orchestration for LLMOps—using Azure AI Search and Cosmos DB for grounding. Prompt flow streamlines prototyping, experimenting, iterating, and deploying AI applications. The Contoso Website repository houses the eye-catching website featured at Microsoft Ignite, featuring content and image generation capabilities, along with vector search. These two repos can be used together to help build end-to-end custom copilot experiences.

Learn more

Build with Azure AI Studio

Join our SMEs during the upcoming Azure AI Studio AMA session – December 14th, 9-10am PT

Azure AI SDK

Azure AI Studio documentation

Introduction to Azure AI Studio (learn module) 

The post The seven pillars of modern AI development: Leaning into the era of custom copilots appeared first on Azure Blog.
Source: Azure

Optimize your Azure cloud journey with skilling tools from Microsoft

Optimization is a crucial strategy for businesses seeking to extract maximum value from their Azure cloud investment, minimize unnecessary expenses, and ultimately drive better return on investment (ROI). At Microsoft, we’re dedicated to helping you optimize your Azure environments with resources, tools, and guidance that promote continuous development of your cloud architectures and workloads, both in new and existing projects. That’s why we’re proud to offer a wide array of optimization skilling opportunities to help you confidently achieve your cloud goals, resulting in increased efficiency and productivity through a deeper understanding of successful cloud operations.

With Azure optimization skilling, we aim to be your guide in achieving these business goals. By engaging with our curated learning paths, modules, and gamified cloud skills challenges, you’ll quickly begin the process of planning, deploying, and managing your cloud investments. Training topics include Cloud Adoption Framework (CAF), Well-Architected Framework (WAF), FinOps, security, and much more to help you drive continuous improvement and business innovation.

Level up on optimization with our 30 Days to Learn It challenge

Microsoft “30 Days to Learn It” challenges are dynamic and immersive learning experiences designed to empower individuals with the skills and knowledge needed to excel in their chosen tech career path. These gamified, interactive challenges offer a blend of hands-on exercises, tutorials, and assessments to ensure a well-rounded learning experience.

Within the accelerated timeframe of 30 days, the structured framework engages participants in friendly competitions to see who can top the leaderboard on their way to mastering any number of Microsoft tools or concepts.

The challenge is open to IT professionals and developers of all skill levels and is designed to provide a flexible and accessible way to learn new skills and advance their careers. To participate, individuals simply need to sign up for the challenge on the Microsoft Learn platform and begin completing the available learning modules.

This month, we’ll be launching a new Azure Optimization 30 Days to Learn It challenge loaded with resources, tools, and guidance to help you optimize your Azure workloads. Learn to optimize your cloud architecture and workloads effectively so that you can invest in projects that drive ongoing growth and innovation. In about 16 hours, you’ll master how to drive continuous improvement of your architecture and workloads while managing and optimizing cloud costs.

Tailor your skilling experience with the Azure Optimization Collection

Explore

Azure Optimization Collection

Whether you’re in the process of migrating to the cloud or have already established Azure workloads, we have assembled a handpicked collection of training and resources to help you on your journey. The collection is tailored to support the ongoing enhancement of your architecture and workloads, all while effectively managing and optimizing your cloud expenses.

Purchase Azure savings plan for compute: By the end of this module, you’ll be able to describe the characteristics and benefits of Azure savings plan for compute and identify scenarios most suitable for its usage.

Save money with Azure Reserved Instances: Learn how to analyze and buy reserved instances, optimize against underused resources, and understand the benefits provided through compute purchases.

Get started with Azure Advisor: With Azure Advisor, you can analyze your cloud environment to determine whether your workloads are following documented best practices for cost, security, reliability, performance, and operational excellence.

Getting started with the Microsoft Cloud Adoption Framework for Azure: Discover how a range of getting-started resources in the Cloud Adoption Framework can accelerate results across your cloud-adoption efforts.

Address tangible risks with the Govern methodology of the Cloud Adoption Framework for Azure: Without proper governance, it can be difficult and laborious to maintain consistent control across a portfolio of workloads. Fortunately, cloud-native tools like Azure Policy and Azure Blueprints provide convenient means to establish those controls.

Ensure stable operations and optimization across all supported workloads deployed to the cloud: As workloads are deployed to the cloud, operations are critical to success. In this learn module, you learn how to deploy an operations baseline to manage workloads in your environment.

Choose the best Azure landing zone to support your requirements for cloud operations: Azure landing zones can accelerate configuration of your cloud environment. This module will help you choose and get started with the best landing zone option for your needs.

Introduction to the Microsoft Azure Well-Architected Framework: You want to build great things on Azure, but you’re not sure exactly what that means. Using key principles throughout your architecture, regardless of technology choice, can help you design, build, and continuously improve your architecture.

Microsoft Azure Well-Architected Framework: Operational excellence: In this module, you learn about the operational excellence pillar of the Azure Well-Architected Framework, which allows you to answer these types of questions and improve the operations of your Azure cloud deployments.

Microsoft Azure Well-Architected Framework: Cost optimization: Learn about the cost optimization pillar of the Azure Well-Architected Framework to identify cost optimization opportunities to maximize cloud efficiency and visibility.

Microsoft Azure Well-Architected Framework: Performance efficiency: Scaling your system to handle load, identifying network bottlenecks, and optimizing your storage performance are important to ensure your users have the best experience. Learn how to make your application perform at its best.

Microsoft Azure Well-Architected Framework: Security: Learn how to incorporate security into your architecture design and discover the tools that Azure provides to help you create a secure environment through all the layers of your architecture.

Microsoft Azure Well-Architected Framework: Reliability: Your business relies on access to its systems and data. Each moment that a customer or internal team can’t access what they need can result in a loss of revenue. It’s your job to prevent that by designing and implementing reliable systems.

Describe cost management in Azure: In this module, you’ll be introduced to factors that impact costs in Azure and tools to help you both predict potential costs and monitor and control costs.

Discover more in the Azure Optimization Collection, including e-books and further reading, at the Microsoft Learn site.

Watch optimization tips and tricks from Azure experts

In our Azure Enablement Show video series, hear about the latest resources on how to accelerate your cloud journey and optimize your solutions in Azure. These expert-led videos share technical advice, tips, and best practices to help you do all that and more.

Our newest video on Azure optimization skilling will walk you through the newest training resources, guidance, tools, and skilling that you need to foster continuous development of your cloud architectures and workloads. Get an in-depth understanding of how successful cloud operations increase efficiency and productivity to help you confidently achieve your cloud goals.

In addition, go deeper into optimization with these two-video series on cloud frameworks that provide a comprehensive approach to cloud adoption and continuous improvement:

Cloud Adoption Framework (CAF) series: Address common blockers in your cloud adoption journey using best practices, tools, and templates featured in CAF and shared by Microsoft experts. This series covers scenarios such as enabling your landing zones, assessing your cloud environments, and applying an Azure savings plan.

Well-Architected Framework (WAF) series: Engage with technical guidance for your cloud adoption journey at the workload level across the five pillars of WAF: cost optimization, security, reliability, performance efficiency, and operational excellence.

Get started today with Azure optimization skilling

The journey to cloud optimization is not a destination, but an ongoing pursuit that can transform your organization’s digital landscape. Engaging with learning paths on Microsoft Learn isn’t just about gaining knowledge—it’s about investing in your organization’s future success. Our comprehensive skilling resources provide you with the tools, insights, and skills you need to unlock the full potential of Azure’s cloud optimization capabilities.

Take the first step today toward a more efficient, cost-effective, and competitive cloud environment by exploring Microsoft Learn’s cloud optimization learning paths in this Collection. Whether you’re an IT professional, a developer, or a decision-maker, there’s a tailored learning path waiting for you. Start your journey now and empower your organization to thrive in the cloud-first world.

Attendees to Microsoft Ignite 2023 were given the chance to learn more about leveling up their Azure skills through live keynotes, breakout sessions, and expert workshops. View recorded sessions, including the “Optimize your Azure investment through FinOps” discussion session, to learn how you can facilitate a culture of continuous improvement in your organization.

Lastly, game on! Be sure to register for our Azure Optimization 30 Days to Learn It Challenge to compete against your peers from around the globe as you master optimizing your cloud architecture and workloads.
The post Optimize your Azure cloud journey with skilling tools from Microsoft appeared first on Azure Blog.
Source: Azure

Announcing the Docker AI/ML Hackathon 2023 Winners

The week of DockerCon 2023 in Los Angeles, we announced the kick-off of the Docker AI/ML Hackathon. The hackathon ran as a virtual event from October 3 to November 7 with support from partners including DataStax, Livecycle, Navan.ai, Neo4j, and OctoML. Leading up to the submission deadline, we ran a series of webinars on topics ranging from getting started with Docker Hub to setting up computer vision AI models on Docker, and more. You can watch the collection of webinars on YouTube.

The Docker AI/ML Hackathon encouraged participants to build solutions that were innovative, applicable in real life, use Docker technology, and have an impact on developer productivity. We made a lot of announcements at DockerCon, including the new GenAI Stack, and we couldn’t wait to see how developers would put this to work in their projects.  

Participants competed for US$ 20,000 in cash prizes and exclusive Docker swag. Judging was based on criteria such as applicability, innovativeness, incorporation of Docker tooling, and impact on the developer experience and productivity. Read on to learn who took home the top prizes.

The winners

1st place

Signal0ne — This project automates insights from failed containers and anomalous resource usage through anomaly detection algorithms and a Docker Desktop extension. Developed using Python and Angular, the Signal0ne tool provides rapid, accurate log analysis, even enabling self-debugging. The project’s key achievements include quick issue resolution for experienced engineers and enhanced debugging capabilities for less experienced ones.

2nd place

SeamlessML: Docker-Powered Serverless Model Orchestration — SeamlessML addresses the AI model deployment bottleneck by providing a simplified, scalable, and cost-effective solution. Leveraging Docker and serverless technologies, it enables easy deployment of machine learning models as scalable API endpoints, abstracting away complexities like server management and load balancing. The team successfully reduced deployment time from hours to minutes and created a local testing setup for confident cloud-like deployments.

3rd place

Dionysus — Dionysus is a developer collaboration platform that streamlines teamwork through automatic code documentation, efficient codebase search, and AI-powered meeting transcription. Built with a microservice architecture using NextJS for the frontend and a Python backend API, Docker containerization, and integration with GitHub, Dionysus simplifies development workflows. The team overcame challenges in integrating AI effectively, ensuring real-time updates and creating a user-friendly interface, resulting in a tool that automates code documentation, facilitates contextual code search, and provides real-time AI-driven meeting transcription.

Honorable mentions

The following winners took home swag prizes. We received so many fantastic submissions that we awarded honorable mentions to four more teams than originally planned!

Chiral AI — Chat with PRDs and Create Tickets in Record Time

Code Explorer

Containerized Online Bandit Experimentation (COBE) Platform

Dataficial

DockerImageAnalyzer

DockerPulse

Docker Genius

Docker Log Sentiment Analyzer

Docker ML Studio

Gait Analyzer

GitChats AI

Local LLM Messenger (lollmm)

P8Hub — Private AI Hub

ReadmeAI

SolPredict

Techdocs

What’s next?

Check out all project submissions on the Docker AI/ML Hackathon gallery page. Also, check out and contribute to the GenAI Stack project on GitHub and sign up to join the Docker AI Early Access program. We can’t wait to see what projects you create.

We had so much fun seeing the creativity that came from this hackathon. Stay tuned until the next one!

Learn more

Docker AI/ML Hackathon gallery page

Docker 2023 AI/ML Hackathon on YouTube

GenAI Stack project on GitHub

Docker AI Early Access

DockerCon announcements

Source: https://blog.docker.com/feed/

Announcing AWS B2B Data Interchange

Today, AWS announces the general availability of AWS B2B Data Interchange, a fully managed service for automating the transformation of Electronic Data Interchange (EDI) documents into common data representations such as JSON and XML, at scale and with pay-as-you-go pricing. Customers in industries such as manufacturing, retail, healthcare, and others can now reduce the time, complexity, and cost associated with preparing EDI data and integrating it into their business applications and purpose-built data lakes.
Source: aws.amazon.com

Amazon Redshift adds support for incremental refresh of materialized views on data lake tables (preview)

Amazon Redshift now supports incremental refresh for materialized views on Apache Iceberg and standard AWS Glue tables, eliminating the need for full refreshes that re-run the underlying select statements and rewrite the data into the materialized view.
Source: aws.amazon.com

Amazon Braket launches Braket Direct, a program for diving deeper into quantum computing

Amazon Braket introduces Braket Direct, a new program that expands the ways you can explore quantum computers on AWS, accelerating research and innovation. You can now reserve dedicated capacity on various quantum devices, connect directly with quantum computing specialists, and get early access to next-generation features, including Forte, IonQ’s latest trapped-ion device, publicly available on AWS for the first time.
Source: aws.amazon.com

Announcing Amazon Redshift Serverless with AI-driven scaling and optimizations (preview)

Today, Amazon Redshift Serverless introduces a preview of the next generation of artificial intelligence (AI)-driven scaling and optimization in cloud data warehousing. Amazon Redshift Serverless uses AI techniques to scale automatically as workloads change across all key dimensions (such as changes in data volume, concurrent users, and query complexity) to meet and maintain your price-performance targets. Internal tests show that with these optimizations, you can achieve up to 10x better price-performance for variable workloads, with no manual intervention required.
Source: aws.amazon.com