Different large language models perform better at different tasks. You get access to the highest-performing models, including OpenAI's GPT-4 for search relevance, Google's BERT for intent classification, and Meta's RoBERTa for entity recognition.
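The idea of assigning each language task to the model best suited for it can be sketched as a simple registry. This is an illustrative sketch only; the registry structure and function names are hypothetical, not Moveworks' actual API, and the task-to-model pairings come from the text above.

```python
# Hypothetical per-task model routing: each NLU task maps to the
# model family the text above names for it. Illustrative only.

TASK_MODEL_REGISTRY = {
    "search_relevance": "gpt-4",          # OpenAI GPT-4
    "intent_classification": "bert-base", # Google BERT
    "entity_recognition": "roberta-base", # Meta RoBERTa
}

def select_model(task: str) -> str:
    """Return the model assigned to a task, or raise if the task is unknown."""
    try:
        return TASK_MODEL_REGISTRY[task]
    except KeyError:
        raise ValueError(f"No model registered for task: {task}")
```

A registry like this also makes it easy to swap in a newer model for one task without touching the others.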
Broadcom is resolving more than 57% of IT issues and has seen a 40% reduction in incidents with Moveworks.
New models come out every day, often outperforming older ones at specific tasks. Moveworks is deeply engaged with the latest research and gets access to models before they're publicly released, meaning we can swap them in automatically to maximize performance and accuracy.
LLMs generally can't be fine-tuned on your company data because most of them are too large to host on your own infrastructure. To outperform general-purpose LLMs at specific enterprise tasks, Moveworks fine-tunes smaller models on the language of work. This also means lower latency for the end-user experience.
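A back-of-envelope calculation shows why the small-model approach is practical to self-host while a frontier LLM is not. The parameter counts below are illustrative round numbers for a BERT-large-class model and a GPT-3-class model, not Moveworks figures, and fp16 weights (2 bytes per parameter) are assumed.

```python
# Rough memory footprint of model weights alone, assuming fp16
# (2 bytes per parameter). Illustrative numbers, not vendor figures.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """GB of memory needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

small_model = weight_memory_gb(350e6)  # ~350M params: fits on one GPU
large_model = weight_memory_gb(175e9)  # ~175B params: hundreds of GB
```

The small model's weights fit comfortably on a single commodity GPU; the large one needs a multi-GPU cluster before serving a single request, which is why fine-tuning and hosting it yourself is rarely feasible.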
To get the most out of a model, you need a team that constantly evaluates its performance. At Moveworks, we run a robust annotation operation in which trained linguists securely review bot conversations and provide real-time feedback to the models.
Calling large language model APIs for every natural language task is costly and introduces noticeable latency. Moveworks has built proprietary model infrastructure to host smaller, fine-tuned models, delivering better task performance with minimal latency.
Users speak naturally, and Moveworks understands. Whether the conversation is in another language, the user switches topics, or the user doesn't know exactly what they're looking for, Moveworks handles it in a delightful way.
Search and action reasoning
Businesses have a sea of resources for answering questions: knowledge articles, forms, structured information in backend systems, workflows, and more. Moveworks leverages LLMs to decide which answer, or combination of answers, is best to provide.
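The arbitration step above, choosing the best answer from candidates drawn from different resource types, can be sketched as follows. In the described approach an LLM performs this reasoning; here a simple confidence comparison stands in for it, and all names and scores are hypothetical.

```python
# Illustrative answer arbitration: rank candidate answers pulled from
# different resource types and return the strongest one. A plain score
# comparison stands in for the LLM reasoning described above.

from dataclasses import dataclass

@dataclass
class Candidate:
    source: str       # e.g. "knowledge_article", "form", "workflow"
    answer: str
    confidence: float  # hypothetical relevance score in [0, 1]

def best_answer(candidates: list[Candidate]) -> Candidate:
    """Return the highest-confidence candidate; raise if there are none."""
    if not candidates:
        raise ValueError("no candidate answers to rank")
    return max(candidates, key=lambda c: c.confidence)
```

In practice the reasoning step can also combine answers (for example, a knowledge article plus a form link) rather than picking exactly one, but the selection shape is the same.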