Trusted by the best: over 100 Fortune 500 companies choose Moveworks

Multi-LLM Strategy

Providing access to multiple LLMs

Different large language models perform better at different tasks. You get access to the highest-performing model for each task, including OpenAI’s GPT-4 for search relevance, Google’s BERT for intent classification, and Meta’s RoBERTa for entity recognition.
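
As a simplified illustration, a task-to-model router might look like the sketch below. The model wrappers, names, and mapping here are placeholder assumptions for illustration, not the actual Moveworks implementation.

```python
# Illustrative sketch: route each natural-language task to the model best
# suited for it. The model functions are placeholders, not real API calls.
from typing import Callable, Dict

def gpt4_search_relevance(text: str) -> str:
    # Placeholder: a hosted GPT-4 endpoint would score search relevance here.
    return f"[search relevance scored for: {text}]"

def bert_intent_classifier(text: str) -> str:
    # Placeholder: a fine-tuned BERT classifier would run here.
    return f"[intent classified for: {text}]"

def roberta_entity_recognizer(text: str) -> str:
    # Placeholder: a RoBERTa-based entity recognizer would run here.
    return f"[entities extracted from: {text}]"

# Map each task type to the highest-performing model for that task.
TASK_ROUTER: Dict[str, Callable[[str], str]] = {
    "search_relevance": gpt4_search_relevance,
    "intent_classification": bert_intent_classifier,
    "entity_recognition": roberta_entity_recognizer,
}

def run_task(task: str, text: str) -> str:
    return TASK_ROUTER[task](text)

print(run_task("intent_classification", "My laptop won't connect to the VPN"))
```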

Enterprise Data

Training models collectively

Most off-the-shelf LLMs are trained on public data, not your company’s data. Moveworks incorporates secure training data from 500M+ support tickets, 10B+ bot conversations, and 100M+ enterprise resources to ensure the language models understand the language of work.

Entity Grounding

Teaching LLMs your business language

To maximize factuality and minimize hallucinations from LLMs, you need to automatically send the LLM prompts that are grounded in the content of your business. Moveworks automatically mines company-specific entities (identity maps, distribution lists, conference room names, the software catalog, etc.) and provides that information in every prompt.
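
A simplified sketch of that idea is below: mined company entities are serialized and prepended to every prompt so the model answers in your organization’s vocabulary. The entity catalog and prompt template are illustrative assumptions, not the actual Moveworks pipeline.

```python
# Illustrative sketch of entity grounding: inject company-specific entities
# into the prompt before it is sent to the LLM.
COMPANY_ENTITIES = {
    "conference_rooms": ["Baker Beach", "Crissy Field"],
    "software_catalog": ["Okta", "Zoom", "Tableau"],
    "distribution_lists": ["it-helpdesk@acme.com"],
}

def build_grounded_prompt(user_request: str) -> str:
    # Serialize the mined entities and prepend them to every prompt,
    # anchoring the model in the organization's own terminology.
    entity_context = "\n".join(
        f"{kind}: {', '.join(values)}" for kind, values in COMPANY_ENTITIES.items()
    )
    return (
        "Use only the company entities listed below when answering.\n"
        f"{entity_context}\n\n"
        f"Employee request: {user_request}"
    )

print(build_grounded_prompt("Book the usual room for the Tableau training"))
```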

Broadcom is resolving 57%+ of IT issues while seeing a 40% reduction in incidents with Moveworks

Model evaluation

Constantly swapping out models

New models come out every day and often outperform older models at specific tasks. Moveworks is deeply engaged in the latest research and has access to models before they’re publicly released, so we can automatically swap in better models to maximize performance and accuracy.
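
Conceptually, that swap is driven by evaluation: score each candidate model on a task-specific benchmark and promote whichever performs best. The candidate models, scoring rule, and benchmark below are toy assumptions used only to make the idea concrete.

```python
# Illustrative sketch of a model-evaluation harness for deciding when to
# swap the deployed model for a newer candidate.
from typing import Callable, Dict, List, Tuple

Benchmark = List[Tuple[str, str]]  # (prompt, expected output) pairs

def accuracy(model: Callable[[str], str], benchmark: Benchmark) -> float:
    # Fraction of benchmark examples the model answers exactly right.
    correct = sum(1 for prompt, expected in benchmark if model(prompt) == expected)
    return correct / len(benchmark)

def pick_best_model(candidates: Dict[str, Callable[[str], str]],
                    benchmark: Benchmark) -> str:
    # Evaluate every candidate and return the top scorer, which can then
    # replace the currently deployed model for this task.
    scores = {name: accuracy(fn, benchmark) for name, fn in candidates.items()}
    return max(scores, key=scores.get)

# Toy candidates standing in for real hosted models.
candidates = {
    "current_model": lambda prompt: "reset_password",
    "new_model": lambda prompt: "reset_password" if "password" in prompt else "other",
}
benchmark = [("I forgot my password", "reset_password"), ("Order a new badge", "other")]
print(pick_best_model(candidates, benchmark))  # -> "new_model"
```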

Model fine-tuning

Providing better model understanding

Most off-the-shelf LLMs can’t be fine-tuned with your company data because they’re too large to host on your own infrastructure. To outperform general-purpose LLMs at specific enterprise tasks, Moveworks fine-tunes smaller models on the language of work. This also results in lower latency for the end-user experience.

Model annotation

Improving model performance automatically

To get the most out of a model, you need a team constantly evaluating its performance. At Moveworks, we run a robust annotation operation in which trained linguists securely review bot conversations and provide real-time feedback to the models.

Model infrastructure

Hosting models for optimal performance & speed

Calling large language model APIs for every natural language task can be costly and introduce high latency. Moveworks has built proprietary model infrastructure to host smaller, fine-tuned models, allowing for better performance and minimal latency.

Conversation experience

Handles fluid and dynamic conversation

Users speak naturally, and Moveworks understands. Whether the request arrives in any language, the user switches topics mid-conversation, or the user doesn’t know exactly what they’re looking for, Moveworks handles the conversation in a delightful way.

Search and action reasoning

Provides users exactly what they need

Businesses have a sea of resources to answer questions: knowledge articles, forms, structured information in backend systems, workflows, and more. Moveworks leverages LLMs to decide the best answer or combination of answers to provide.
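
A highly simplified sketch of that selection step is below: score each candidate resource against the request and return the ones worth surfacing. The resources, the keyword-overlap scorer (standing in for an LLM relevance call), and the threshold are illustrative assumptions, not the actual Moveworks reasoning pipeline.

```python
# Illustrative sketch: choose the best answer, or combination of answers,
# from a mix of knowledge articles, forms, and workflows.
RESOURCES = [
    {"type": "knowledge_article", "title": "How to submit an expense report"},
    {"type": "form", "title": "New expense reimbursement form"},
    {"type": "workflow", "title": "Approve pending expenses"},
]

def score_relevance(request: str, resource: dict) -> float:
    # Placeholder: an LLM would judge how well the resource answers the
    # request; naive keyword overlap keeps this example runnable.
    request_words = set(request.lower().split())
    title_words = set(resource["title"].lower().split())
    return len(request_words & title_words) / max(len(title_words), 1)

def select_answers(request: str, threshold: float = 0.2) -> list:
    scored = [(score_relevance(request, r), r) for r in RESOURCES]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [resource for score, resource in scored if score >= threshold]

print(select_answers("How do I submit an expense report?"))
```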


Web

Our LLM stack is just one part of the Moveworks platform. See what else makes Moveworks an enterprise-grade platform.

Learn more

Guide

Read this guide for a comprehensive primer on how to get the most out of the latest breakthroughs in AI for IT.

Read the guide

Blog

Dive deep into large language models and their applications for the enterprise.

Read the blog

Learn how the Moveworks platform drives ROI, reduces busy work, and improves the employee experience.

Request a demo