Best practices for developing a generative AI copilot for business

Since the launch of ChatGPT, I can’t remember a meeting with a prospect or customer where they didn’t ask me how they could use generative AI for their business. From internal efficiency and productivity to external products and services, companies are racing to deploy generative AI technologies across every sector of the economy.

While GenAI is still in its early days, its capabilities are rapidly expanding — from vertical search, to photo editing, to writing assistants, the common thread is using conversational interfaces to make software more accessible and powerful. Chatbots, now rebranded as “copilots” and “assistants,” are in the spotlight once again, and while a set of best practices is beginning to emerge, step one in developing a chatbot is to scope the problem and start small.

A copilot is an orchestrator that helps a user complete many different tasks through a free-text interface. There is an infinite number of possible input prompts, and all must be handled gracefully and safely. Rather than planning to solve every task from day one, and running the risk of falling short of users’ expectations, developers should start by solving one task very well and learn along the way.
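In code, that orchestrator pattern can be sketched as a simple intent router: classify the free-text prompt into one of a small set of supported tasks, and refuse gracefully when the prompt falls outside scope. The task names and the keyword-based classifier below are illustrative assumptions, not any particular product’s implementation:

```python
# Minimal sketch of a copilot orchestrator: route a free-text prompt
# to one of a small set of supported task handlers, with a safe fallback.
# In production the classifier would be an LLM or a trained model;
# keyword matching keeps this sketch self-contained.

from typing import Callable

def summarize_earnings_call(prompt: str) -> str:
    # Placeholder for a real task pipeline.
    return "summary of the requested earnings call"

def fallback(prompt: str) -> str:
    # Anything outside the supported scope gets a graceful refusal,
    # not a half-baked answer.
    return "Sorry, I can't help with that yet."

TASKS: dict[str, Callable[[str], str]] = {
    "summarize": summarize_earnings_call,
}

def route(prompt: str) -> str:
    for keyword, handler in TASKS.items():
        if keyword in prompt.lower():
            return handler(prompt)
    return fallback(prompt)
```

Starting with one well-bounded entry in `TASKS` is exactly the “start small” discipline: new tasks are added only once the copilot handles the existing ones reliably.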

At AlphaSense, for example, we focused on earnings call summarization as our first task — a well-scoped but high-value task for our customer base that also maps well to existing product workflows. Along the way, we gained insights into LLM development, model selection, training data generation, retrieval-augmented generation and user experience design that enabled us to expand into open-ended chat.
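Retrieval-augmented generation, one of the techniques mentioned above, grounds the model’s answer in retrieved source text — for example, passages from an earnings call transcript. The sketch below is a simplified assumption of that loop (a word-overlap retriever stands in for a real embedding index, and the prompt format is illustrative):

```python
# Simplified retrieval-augmented generation (RAG) loop: score documents
# against the query, then build a prompt that grounds the LLM in the
# top-scoring passages instead of asking it to answer from memory.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Concatenate the retrieved passages as context for the LLM call.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The grounded prompt is then sent to whichever model the team has selected, which is where the open-versus-closed question below comes in.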

LLM progression: Choosing between open and closed models

In early 2023, the leaderboard for LLM performance was clear: OpenAI was ahead with GPT-4, but well-capitalized competitors such as Anthropic and Google were determined to catch up. Open source showed sparks of promise, but its performance on language tasks couldn’t compete with closed models.

To create a high-performance LLM, commit to creating the best dataset in the world for the task at hand.

My experience with AI over the past decade led me to believe that open source would make a fierce comeback, and that’s exactly what happened. The open source community has improved performance while lowering cost and latency. LLaMA, Mistral and other models offer strong foundations for innovation, and major cloud providers such as Amazon, Google and Microsoft have largely adopted a multi-vendor approach, including support for and expansion of open source.

While open source hasn’t caught up with closed models on published performance benchmarks, it clearly leapfrogs them in the set of trade-offs any developer must make when bringing a product into the real world. The 5 S’s of Model Selection help developers decide which type of model is right for them:
