
data labeling

Data labeling, in the context of machine learning (ML), is the process of detecting and tagging data samples. The process can be manual, but it is usually performed or assisted by software.

What is data labeling used for?

Data labeling is an important part of data preprocessing for ML, particularly for supervised learning, in which both input and output data are labeled for classification to provide a learning basis for future data processing.

A system training to identify animals in images, for example, might be provided with multiple images of various types of animals from which it would learn the common features of each, enabling it to correctly identify the animals in unlabeled images.
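The pairing of inputs with human-assigned labels is what gives a supervised learner its training signal. Below is a minimal Python sketch of the idea using scikit-learn; the feature vectors, class names and nearest-neighbor model are illustrative assumptions, not a real animal-image dataset.

    # Minimal sketch: human-assigned labels drive supervised learning.
    # Feature vectors and class names below are illustrative placeholders.
    from sklearn.neighbors import KNeighborsClassifier

    # Each sample pairs extracted image features (input) with a human-assigned
    # label (output) -- this pairing is what data labeling produces.
    features = [
        [0.9, 0.1, 0.3],   # e.g., scores for fur texture, ear shape, snout length
        [0.8, 0.2, 0.4],
        [0.1, 0.9, 0.7],
        [0.2, 0.8, 0.6],
    ]
    labels = ["dog", "dog", "cat", "cat"]

    model = KNeighborsClassifier(n_neighbors=1)
    model.fit(features, labels)

    # The trained model can now assign a label to an unlabeled sample.
    print(model.predict([[0.85, 0.15, 0.35]]))  # expected: ['dog']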

Data labeling is also used when constructing ML algorithms for autonomous vehicles. Autonomous vehicles such as self-driving cars must distinguish the objects in their path so that they can process the external world and drive safely. Data labeling enables the car's artificial intelligence (AI) to tell a person from the street, another car or the sky by tagging the key features of those objects or data points and looking for similarities among them.
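How those labels are stored varies by tool and dataset, but a labeled driving-scene frame is often represented as an image reference plus a list of tagged regions. The Python sketch below shows one hypothetical schema; the keys, class names and coordinates are assumptions for illustration, not a specific dataset format.

    # Hypothetical representation of one labeled frame from a driving dataset.
    labeled_frame = {
        "image": "frame_000123.png",
        "objects": [
            {"label": "person", "bbox": [412, 180, 460, 310]},   # x1, y1, x2, y2 in pixels
            {"label": "car",    "bbox": [80, 220, 300, 380]},
            {"label": "street", "polygon": [[0, 400], [1280, 400], [1280, 720], [0, 720]]},
        ],
    }

    # A detection model trains on many such frames, learning to associate
    # pixel regions with the labels humans assigned to them.
    for obj in labeled_frame["objects"]:
        print(obj["label"], obj.get("bbox") or obj.get("polygon"))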

How does data labeling work?

ML and deep learning systems often require massive amounts of data to establish a foundation for reliable learning patterns. The data they learn from must be labeled or annotated based on data features that help the model organize the data into patterns that produce a desired answer.

The labels used to identify data features must be informative, discriminating and independent to produce a quality algorithm. A properly labeled dataset provides a ground truth that the ML model uses to check its predictions for accuracy and to continue refining its algorithm.

A quality training dataset is high in both accuracy and quality. Accuracy refers to how close individual labels are to ground truth. Quality refers to how consistently accurate labeling is across the entire dataset.
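The distinction can be made concrete with a small calculation. In the Python sketch below, two hypothetical annotators reach the same per-label accuracy against ground truth yet often disagree with each other, which signals lower quality (consistency) across the dataset; all values are made up for illustration.

    # Illustrative accuracy vs. consistency check; the label lists are made up.
    ground_truth = ["cat", "dog", "dog", "cat", "dog"]

    # Labels produced by two hypothetical annotators for the same samples.
    annotator_a = ["cat", "dog", "dog", "cat", "cat"]
    annotator_b = ["cat", "dog", "cat", "cat", "dog"]

    def accuracy(labels, truth):
        """Fraction of labels that match ground truth (per-label accuracy)."""
        return sum(l == t for l, t in zip(labels, truth)) / len(truth)

    def agreement(a, b):
        """Fraction of samples on which two annotators agree -- a crude proxy
        for how consistent labeling is across the dataset."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    print(accuracy(annotator_a, ground_truth))   # 0.8
    print(accuracy(annotator_b, ground_truth))   # 0.8
    print(agreement(annotator_a, annotator_b))   # 0.6 -- same accuracy, lower consistency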

Errors in data labeling impair the quality of the training dataset and the performance of any predictive models trained on it. To mitigate this, many organizations take a human-in-the-loop (HITL) approach, keeping people involved in training and testing data models throughout their iterative development.
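One common way to implement HITL labeling is to let the model label the easy cases and route low-confidence predictions to people. The sketch below assumes a trained scikit-learn-style classifier that exposes predict_proba; the routing function and the 0.9 threshold are illustrative assumptions, not a prescribed workflow.

    # Minimal human-in-the-loop sketch: confident predictions are accepted
    # automatically, uncertain ones are queued for human annotators.
    def route_samples(model, samples, threshold=0.9):
        auto_labeled, needs_review = [], []
        for sample in samples:
            probs = model.predict_proba([sample])[0]   # class probabilities
            confidence = probs.max()
            label = model.classes_[probs.argmax()]
            if confidence >= threshold:
                auto_labeled.append((sample, label))
            else:
                needs_review.append(sample)            # send to a human annotator
        return auto_labeled, needs_review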

Methods of data labeling

An enterprise can use several methods to structure and label its data, ranging from in-house staff to crowdsourcing and outsourced data labeling services. These methods include the following:

  • Crowdsourcing. A third-party platform gives an enterprise access to many workers at once.
  • Contractors. An enterprise can hire temporary freelance workers to process and label data.
  • Managed teams. An enterprise can enlist a managed team to process data. Managed teams are trained, evaluated and managed by a third-party organization.
  • In-house staff. An enterprise can use its existing employees to process data.

There is no one optimal method for labeling data. Enterprises should use the method or combination of methods that best suits their needs. Some criteria to consider when choosing a data labeling method are as follows:

  • the size of the enterprise;
  • the size of the dataset that requires labeling;
  • the skill level of employees on staff;
  • the financial constraints of the enterprise; and
  • the purpose of the ML model being supplemented with labeled data.

A good data labeling team should ideally have domain knowledge of the industry the enterprise serves; data labelers with that context to guide them produce more accurate labels. They should also be flexible and agile, because data labeling and ML are iterative processes that keep changing and evolving as more information comes in.

Importance of data labeling

A recent report from AI research and advisory firm Cognilytica found that over 80% of the time enterprises spend on AI projects goes toward preparing, cleaning and labeling data. Manual data labeling is the most time-consuming and expensive method, but it may be warranted for important applications.

Critics of AI speculate that automation will put low-skill jobs such as call center work, truck driving and Uber driving at risk, because rote tasks are becoming easier for machines to perform. However, some experts believe data labeling may present a new low-skill job opportunity to replace those eliminated by automation, because there is an ever-growing surplus of data that machines must process to perform the tasks required for advanced ML and AI.

This was last updated in August 2019
