Blog / July 18, 2023

What is data annotation? And how can it help build better AI?

Natasha Klein-Atlas, Principal Researcher, Data Annotation

Kate Lubrano, Annotation Manager


For a long time, chatbots suffered from a negative reputation, primarily due to their frequent misunderstandings of user requests. Many users, subject to the limitations of primitive, script-bound chatbots, longed to speak to live agents who could comprehend their needs the first time around.

Recognizing this frustration, artificial intelligence (AI) developers sought to harness the technology's distinguishing capacity to learn and evolve continually, which sets it apart from static, code-dependent software. By tapping into AI's potential, researchers aim to build dynamic communication tools and leave rigid chatbots behind.

This adaptability depends on high-quality annotated data, a crucial ingredient for developing representative, successful, and unbiased AI models.

Data annotation, the often unsung hero of AI, is the key to reaching excellence. It plays an indispensable role in advancing intelligent conversational AI and fostering chatbots that respond to human language naturally and intuitively.

In this blog post, we aim to illuminate the intriguing world of data annotation and highlight its significance in teaching chatbots to interact seamlessly with users.

We’ll cover:

  • What data annotation is and why it’s important
  • The role annotators play
  • The main data annotation techniques
  • Data annotation applications, including in the enterprise
  • The limitations of data annotation
  • Best practices for data annotation
  • What role AI annotation will play in the future

What is data annotation?

Data annotation assigns labels to raw data, providing the context and categorization machine learning models need to extract valuable insights. In this process, a taxonomy — a system of classification — is applied to systematically organize and classify the data.

Data annotation is the backbone of modern AI applications. Its primary function is to help machines comprehend and interpret various forms of data such as text, video, images, or audio. Thanks to this methodical annotation, AI systems can process different types of content effectively.

More specifically, text annotation can be broken down into various tasks, including but not limited to:

  • Semantic annotation: Associates meanings with specific portions of a text, facilitating natural language understanding (NLU)
  • Intent annotation: Identifies the ultimate goal or user need within user input for improved conversational AI
  • Sentiment annotation: Categorizes emotions expressed in the text, enabling sentiment analysis for chatbots
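To make this concrete, annotated text is often stored as simple labeled records. The sketch below shows hypothetical utterances tagged with intent and sentiment labels; the label names are invented for illustration and are not any real product's taxonomy.

```python
# Hypothetical annotated utterances: each record pairs raw text with
# labels drawn from a small, illustrative taxonomy.
annotated_utterances = [
    {"text": "I forgot my password again, can someone help?",
     "intent": "reset_password", "sentiment": "negative"},
    {"text": "Thanks, the new laptop arrived today!",
     "intent": "hardware_request", "sentiment": "positive"},
    {"text": "How many vacation days do I have left?",
     "intent": "pto_balance", "sentiment": "neutral"},
]

# A model trained on records like these learns to map new text to labels.
intents = {record["intent"] for record in annotated_utterances}
```

Each record carries both an intent label (what the user wants) and a sentiment label (how they feel), illustrating how multiple annotation tasks can apply to the same text.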

As mentioned, annotation encompasses more than just textual formats. For instance, image or video annotation may include classification, which entails categorizing images according to their content; object recognition, which involves identifying and locating specific objects within images or video frames; image segmentation, the process of dividing an image into regions representing distinct objects or areas of interest; and boundary recognition, to further refine object identification.

In this blog, we'll primarily concentrate on text annotation, as it aligns with Moveworks' objective of comprehending and interacting with enterprise language. However, please note that annotation is crucial to the advancement of all AI, particularly with the ongoing development of large multimodal models that are able to engage with images, audio, and more.

Why is data annotation important?

Before getting into the importance of data annotation, let's first acknowledge the inherent challenges posed by the ambiguity of human language. 

People articulate their needs in vastly diverse ways — concise or lengthy, jargon-filled or formal. And on top of that, a user’s goals carry more specificity than any taxonomy you apply to them. With seemingly infinite possibilities to convey a message or pose a question, humans can still effortlessly communicate, as they are naturally adept at comprehending linguistic nuances.

But for an untrained AI system, deciphering the essence of such communications can be an arduous task. To illustrate this challenge, consider a colleague who shares a meandering story about their vacation and how they could not access the company portal due to poor Wi-Fi service. Despite various HR-related keywords such as “vacation” and “time off”, a human reader or listener would quickly infer that their issue was an IT problem, not an HR problem. 

An untrained bot, in contrast, might struggle to prioritize the most relevant keywords. This is precisely where data annotation steps in. Training AI models on high-quality, annotated data allows them to grasp the complexity and diversity of natural language, separate the signal from the noise, and focus on the most critical aspects of user input.

This becomes particularly important when attempting to predict users' needs based on a chosen taxonomy. By maintaining a manageable level of granularity in the annotation process, we can improve the decision-making skills of our AI. This method contrasts with approaches that assign a single intent to each piece of content (say, a knowledge base article), which can lead to a proliferation of intents and, in turn, reduced productivity and clarity in understanding users' needs.

In turn, AI systems and chatbots can accurately respond to a wide range of human communication with minimal effort. Data annotation empowers AI to comprehend the nuanced symptoms users describe and connect them with solutions, cutting through linguistic complexities and delivering elegant solutions. 

To sum up, data annotation is an essential component in creating AI systems capable of providing meaningful user experiences. The impact of data annotation spans industries and use cases, significantly enhancing the capabilities and practicality of AI-powered solutions across the board.

What is the role of annotators?

An AI annotator's role is to systematically review and label different data types, translating human language and inputs into machine-understandable formats. You would likely recognize automatically that entering your password incorrectly too many times means your account may need to be unlocked, but machines require annotated data to effectively learn patterns, make sense of data, and adapt their responses.

Annotated data is particularly crucial for training large language models (LLMs), as their key purpose is to digest, interpret, and generate human-like conversation. Focusing on intent annotation, we define “intent” as the ultimate goal or user need. Intent annotation acts as a bridge between human language and machine language. Annotators review real messages sent to chatbots (utterances) and user-submitted tickets. We do this review while protecting personal and sensitive information, incorporating the principles of privacy by design and by default. 

Annotators label these user inputs using a taxonomy that the bot understands and can map to its corresponding actions. For instance, a password resetting issue might be labeled with "Reset Authentication," which triggers the bot's specific skill. Through intent annotation, annotators teach the AI system to recognize patterns in seemingly disparate phrasings and accurately respond to meet users’ needs.
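The mapping from taxonomy labels to bot actions can be pictured as a simple dispatch table. The label and skill names below are hypothetical stand-ins, not Moveworks' actual taxonomy or API:

```python
# Hypothetical mapping from annotated intent labels to bot skills.
SKILLS = {
    "Reset Authentication": lambda user: f"Sent a password-reset link to {user}.",
    "Unlock Account":       lambda user: f"Unlocked the account for {user}.",
}

def handle(intent_label: str, user: str) -> str:
    """Dispatch an annotated intent label to its corresponding skill."""
    skill = SKILLS.get(intent_label)
    if skill is None:
        return "Sorry, I couldn't map that request to a skill."
    return skill(user)
```

Once a model reliably predicts the right label for disparate phrasings ("forgot my password", "can't log in", "need to reset my creds"), every one of them triggers the same skill.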

What are the main data annotation techniques?

A variety of techniques and approaches are available for annotating your data. In this section, we will discuss the most popular methods — in-house, outsourced, crowdsourced, and AI-driven annotation — and examine the strengths and weaknesses of each.

In-house annotation

In-house annotation involves using your own team of dedicated annotators to process and tag the data, ensuring a high level of control over quality and consistency of the annotations. 


Pros

  • Direct control over the annotation process and quality
  • Stronger protection of sensitive or proprietary data
  • Fosters a deep understanding of domain-specific data

Cons

  • May require hiring or training employees, increasing costs
  • Limited scalability, as the team's size can constrict the volume of data to be processed

Outsourced annotation

Outsourced annotation refers to delegating the data annotation tasks to a third-party provider that specializes in the field, such as a technology consulting firm or a managed services provider. 


Pros

  • Access to a professional, experienced workforce
  • Cost-effective by offloading data annotation expenses
  • Relatively scalable, as third-party providers can accommodate increased workloads

Cons

  • Limited control over the task and potential inconsistency in quality
  • Higher risks for sensitive or classified data, introducing privacy concerns

Crowdsourced annotation

Crowdsourced annotation — a particularly popular approach in the research community — relies on platforms that gather a large pool of contributors from around the world to annotate data, usually on a pay-per-task model. Amazon Mechanical Turk is one example of a crowdsourced annotation platform.


Pros

  • Access to a diverse range of annotators and perspectives
  • Extremely scalable to handle vast quantities of data
  • Cost-effective due to the gig economy model

Cons

  • Quality control can be challenging, as contributors have varying expertise levels
  • May not be a fit for specialized or highly technical data
  • Privacy concerns for sensitive or confidential information

AI-driven annotation

AI-driven annotation employs machine learning algorithms to automatically label the data. Over time, iteratively refining and improving the AI model can achieve higher annotation accuracy. 


Pros

  • Rapid annotation of large data volumes
  • Cost-effective in the long term

Cons

  • May require an initial investment in developing an AI model
  • Less adaptability for unanticipated or complex data
  • Ethical concerns, including biases in training data that may lead to biased outcomes
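One common pattern for AI-driven annotation is confidence-based routing: the model auto-labels examples it is confident about and queues the rest for human annotators. A minimal sketch, with an invented toy scoring function standing in for a real trained model:

```python
def auto_annotate(items, predict, threshold=0.9):
    """Route items: model-labeled if confident, else queued for humans.

    `predict` returns a (label, confidence) pair for a piece of text.
    """
    auto_labeled, needs_review = [], []
    for text in items:
        label, confidence = predict(text)
        if confidence >= threshold:
            auto_labeled.append((text, label))
        else:
            needs_review.append(text)
    return auto_labeled, needs_review

# Stand-in "model": confident only about password-related requests.
def toy_predict(text):
    if "password" in text.lower():
        return ("reset_password", 0.95)
    return ("other", 0.50)

done, queue = auto_annotate(
    ["My password expired", "Laptop won't boot"], toy_predict)
```

Tuning the threshold trades annotation speed against accuracy: a higher threshold sends more items to humans but admits fewer machine errors into the training set.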

Analyzing the strengths and weaknesses of each technique empowers you to make informed decisions that best suit your project requirements, budget, and time constraints.

What are the applications of data annotation?

Data annotation has far-reaching implications across various industries. In teaching AI models to proficiently engage with user interactions, many businesses can leverage the power of annotated data to drive innovation, improve customer experience, and optimize processes. Below, we delve into some of the most notable applications of data annotation, spanning sectors such as medical, retail, finance, legal, automotive, industrial, and employee support. 

Medical: In the medical industry, annotated data enables AI systems to analyze medical images, electronic health records, and diagnostic data. Examples of applications include detecting diseases in radiology scans, predicting patient outcomes, and creating personalized treatment plans. There are also significant and long-standing applications of annotation in medical research, such as the extraction of information from published research papers. In this line of work, annotators are typically Ph.D. scientists.

Retail: Data annotation helps retail businesses better understand customer preferences, improve inventory management, and optimize store layouts. Annotated data can also aid in creating AI-driven conversational assistants, which help customers with inquiries and product recommendations.

Finance: Financial institutions harness annotated data to develop AI models that detect fraudulent activities, analyze market trends, and improve customer service. Data annotation also is the key to creating chatbots that both answer customer questions and offer personalized investment recommendations.

Legal: In the legal sector, annotated data is invaluable for developing AI models that can process and analyze vast amounts of legal documents, identifying relevant precedents, streamlining contract review, and helping with e-discovery. Data annotation also contributes to creating AI-driven tools that facilitate legal research, predict case outcomes, and assist in automating routine tasks — reducing workload for legal professionals.

Automotive: In the realm of autonomous vehicles, data annotation is instrumental in training AI systems to recognize traffic signs, pedestrians, bicycles, and other vehicles. Annotated data forms the foundation for the development of advanced driver-assistance systems (ADAS), which significantly enhance road safety.

Industrial: Data annotation supports AI system adoption in the industrial sector, enabling predictive maintenance, real-time monitoring of equipment, and quality control. Annotated data trains AI models to detect anomalies, optimize production processes, and improve overall productivity. 

Employee Support: Annotated data is also crucial in realizing intelligent employee support systems. By training AI models to accurately understand user requests, machine learning-driven platforms can offer seamless assistance with IT support, HR issues, and other workplace tasks, enhancing the overall employee experience. 

As illustrated by these applications, data annotation is at the core of AI-driven innovations, empowering industries to leverage machine learning and usher in a new era of smart solutions. The right investment in data annotation can pave the way to unparalleled growth, revolutionizing businesses across diverse sectors.

Why is data annotation important in an enterprise context?

In an enterprise setting, AI systems are expected to perform at their best, adapting to specific use cases and delivering fluid experiences for customers and employees alike. 

For AI to thrive in this context, it must be proficient in dealing with complex situations, recognizing domain-specific terminology, and making accurate inferences based on user input. Data annotation is deeply involved in achieving these high performance standards, as it is necessary for AI models to smoothly adapt to unique enterprise use cases.

Effective data annotation assists AI systems in better understanding various contexts within the enterprise realm, such as: 

  1. Industry-specific terminology: Enterprises often use domain-specific jargon and abbreviations unique to their industry or organization. Annotating data relevant to these terms teaches AI systems to develop a comprehensive repository of the language typically used in the enterprise environment. 
  2. Ambiguous requests: AI-driven customer support and chatbots need to interpret user requests accurately in real time while dealing with a wide array of inquiries. Data annotation ensures these systems understand the nuances of human language and are capable of providing appropriate responses to various customer needs. 
  3. Data security and compliance: Enterprises that handle sensitive data, such as financial or health information, require AI solutions that comply with regulatory requirements. Annotating data to train AI models in detecting, protecting, and handling such information is crucial in adhering to privacy and security standards. 
  4. Domain-specific AI applications: AI applications designed for specialized enterprise use cases rely on annotated data tailored to each specific domain for optimal performance. Examples of these use cases include fraud detection in finance, predictive maintenance in manufacturing, or document analysis in legal contexts.

Clearly, data annotation is a vital aspect of any enterprise AI endeavor. The process of annotating data allows AI systems to adapt to organization-specific demands and complexities, delivering custom solutions that cater to unique use cases. Investing time and resources into data annotation facilitates successful AI adoption, transforming operations and customer interactions aligned with the ever-evolving digital landscape.

Data annotation limitations

Data annotation can be hindered by challenges such as cost, accuracy, and ambiguity. Here’s a quick run-through of these limitations and the ways annotators and enterprises can work together to overcome these obstacles.


Cost

One of the primary challenges encountered in data annotation is the financial cost associated with hiring and training annotators. Investing in annotators ensures high-quality annotations that lead to improved AI system performance. Striking a balance between securing adequate resources for annotators and managing organizational constraints is essential for maintaining an efficient annotation process.


Accuracy

Ensuring the accuracy of annotated data is crucial for the efficacy of AI models. Mislabeled datasets can negatively impact AI performance, leading to undesirable outcomes, such as incorrect predictions or responses. Annotators and enterprises must continuously monitor and maintain the quality of annotations to ensure AI models are trained on the most accurate, current, and relevant data.


Ambiguity

One of the greatest challenges of annotation is dealing with ambiguity. Words, acronyms, and abbreviations can have multiple meanings. Annotators are able to consider the organization and the industry that the utterance or ticket they are evaluating originated from, which can help narrow things down. The same acronym could refer to a particular form at one organization but an internal software tool at another.

In some cases, annotators are able to reference more specific customer language resource documents in order to classify a resource type of a potentially ambiguous term with more accuracy. The type of company a user works for can also be important to consider when determining intent. 

For example, at most organizations served by Moveworks, utterances discussing health insurance are likely to fall under our "Benefits" resource label. But if we know the organization is a health insurance provider, we may need to read deeper into the utterance to determine whether it is about insurance provided to an external client.

Contextual awareness

Further complicating annotation is the fact that each utterance is often reviewed individually, without the context of the conversation history. Annotators must assess and annotate single utterances without this context, which can occasionally make it difficult to infer the intended meaning.

Ambiguity is added when messages are cut off by a user hitting Enter too soon, or in cases where a user is referencing something previously discussed with vague terms such as “it” instead of naming the actual resource. In these cases, annotation can feel like an exercise in mind reading, but expert annotators are typically able to make educated guesses based on their experience reviewing similar utterances.

Disagreement resolution and quality control

Disagreement resolution is the process of addressing both annotator errors and valid differences in interpretation, ensuring data consistency and refining guidelines. By analyzing differing interpretations of user inputs, annotators can improve annotation quality.

Historically, collecting multiple opinions and resolving disagreements has helped us correct annotator errors, identify ambiguous inputs, and refine project documentation. This process has also led to better taxonomy definitions, minimizing biases while enabling the AI to ask clarifying questions when needed. 

In pursuit of efficiency, the Moveworks team has reduced time spent on disagreement resolution, instead aiming for an 85% organic agreement rate. Continuous improvements have resulted in consistently reaching or even exceeding this goal, achieving high-quality data to train AI models effectively.
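An organic agreement rate like the 85% target mentioned above is simply the fraction of items on which two annotators independently chose the same label. A minimal sketch:

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of items labeled identically by two annotators."""
    if len(labels_a) != len(labels_b):
        raise ValueError("Annotators must label the same set of items.")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Two annotators agree on 3 of 4 items -> 0.75 agreement.
rate = agreement_rate(
    ["reset", "unlock", "benefits", "reset"],
    ["reset", "unlock", "benefits", "unlock"])
```

Raw agreement can overstate consistency when a few labels dominate; chance-corrected measures such as Cohen's kappa are commonly used alongside it.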

What are some best practices for data annotation?

Data annotation is a critical process for creating high-quality labeled datasets used in many machine learning applications. To ensure the accuracy and consistency of annotations, it is important to establish clear guidelines for annotators to follow. This will lead to increased efficiency while helping maintain a robust quality control process. The following is a list of key best practices that you should consider when starting the annotation process.

Prepare detailed and easy-to-read instructions: Provide articulate guidelines for your annotators to follow, including examples and edge cases. This will help avoid confusion and ensure that annotators understand how to handle various scenarios, leading to more accurate annotations.

Support humans with machines: Leverage machine learning techniques, such as automated pre-annotations, to provide a starting point for human annotators. Machine-generated suggestions can save time and help achieve consistent results, allowing annotators to focus on refining the outcome and handling more complex cases. 
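A pre-annotation workflow can be as simple as seeding each item with a machine suggestion that the human annotator confirms or corrects. A sketch of that merge step, with hypothetical item IDs and labels:

```python
def review(suggestions, corrections):
    """Merge machine pre-annotations with human corrections.

    `suggestions` maps item id -> machine-suggested label;
    `corrections` maps item id -> human override, present only for
    items the annotator changed. Human labels always win.
    """
    return {item: corrections.get(item, label)
            for item, label in suggestions.items()}

final = review(
    suggestions={"t1": "reset_password", "t2": "benefits"},
    corrections={"t2": "payroll"})  # annotator fixed one suggestion
```

The annotator only has to touch items the machine got wrong, which is where most of the time savings come from.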

Focus on quality: Ensure you have a robust quality control process in place throughout the data annotation project. This may involve periodic reviews, inter-annotator agreement measurements, and addressing discrepancies, which can improve the accuracy and consistency of the final dataset. 

Stay compliant: Be mindful of privacy policies, regulations, and ethical considerations when annotating data. This includes proper data anonymization and following guidelines relevant to specific domains such as finance, healthcare, or education. 

Iterate and update guidelines: Data annotation is an ongoing process. As you gather more data and feedback, update the guidelines accordingly to address new challenges and scenarios. Keep your guidelines current and ensure that annotators are informed of any changes, maintaining consistency and adaptability over time. 

Select the right annotation tools and techniques: Choose the best tools and methodologies for your specific data annotation project; capable, precise tools make the process annotator-friendly. The right choice may vary depending on the type of data, the project's scope, and other factors. Having the right tools can streamline the annotation process and improve efficiency.

Encourage communication and collaboration: Facilitate open communication between annotators and project managers. Address questions and provide regular feedback, helping minimize errors, and fostering a collaborative environment for annotators to learn from each other and make improvements. 

Diversify your annotator team: Having a diverse range of perspectives among your annotators can reduce bias in your dataset. Ensure that your training data is annotated by people with different backgrounds, experiences, and skill sets to increase overall quality and impact. 

Set realistic goals and timelines: Establish specific, measurable, and achievable goals for your data annotation project. Ensure that project deadlines are realistic and considerate of the resources available. This will help manage expectations, keep the team motivated, and deliver a high-quality annotated dataset in a timely manner.

The impact of AI annotation in the future

As AI technology continues to rapidly evolve, AI-powered data annotation is poised to play a significant role in reshaping various industries and transforming the way data is processed, organized, and utilized. 

Here's an overview of the potential impact of AI annotation in the future: 

Scalability: AI-driven annotation can help scale the data annotation process exponentially, reducing the reliance on human annotators and decreasing the time required for annotation tasks. This will enable organizations to process larger volumes of data at unprecedented speeds, ultimately fueling faster AI system development and deployment. 

Increased annotation efficiency and accuracy: Advanced machine learning algorithms will be able to both automate and enhance the quality of annotations, minimizing errors and inconsistencies. As AI systems become more intelligent, the gap between human and machine-generated annotations will narrow, with AI models increasingly handling complex tasks with ease.

Personalized AI models: With more annotated data, AI systems will be able to learn from diverse user experiences and preferences, paving the way for highly personalized models. Tailoring AI outputs to individual users will greatly benefit industries such as healthcare, education, marketing, and customer service, promoting a more engaging and customized user experience. 

Greater accessibility to AI technologies: The democratization of AI-powered data annotation will lower barriers to entry for organizations looking to harness AI capabilities. With faster and more cost-effective annotation options, even smaller entities and startups can access and utilize advanced AI technologies across various domains. 

Ethical AI with reduced bias: As AI-driven annotation processes evolve, creating unbiased and ethically sound AI models will be of paramount importance. Diversifying training data can help achieve more representative and unbiased systems that consider a wide range of perspectives and serve a broader audience. However, this approach is not without risks. It’s also possible that depending exclusively on AI annotation will only perpetuate biases on a larger scale.

Data annotation will only get more important with the rise of AI

Data annotation is integral to the development of advanced AI systems and chatbots that interact seamlessly with users. By understanding the intricacies of data annotation, we can empower AI to comprehend and empathize with users, cutting through linguistic complexities and delivering optimal solutions across diverse industries. 

With the right investment in data annotation, we can also establish a foundation for unparalleled growth, revolutionizing businesses across the board. To leverage the full potential of data annotation, we encourage readers to explore further resources on improving annotation, reducing biases, and staying compliant. Keep an eye on the future of AI annotation, as it will continue to transform and elevate the landscape of AI-assisted communication.

Request a demo to see how your business can use conversational AI.
