Data deduplication

Terms related to data deduplication, including definitions about single-instance storage and words and phrases about eliminating redundant data.

DAT - TAR

  • data archiving - Data archiving migrates infrequently used data to low-cost, high-capacity archive storage for long-term retention.
  • data at rest - Data at rest is data held in computer storage, as distinct from data that is traversing a network or temporarily residing in computer memory to be read or updated.
  • data center - A data center (or datacenter) is a facility composed of networked computers and storage that businesses or other organizations use to organize, process, store and disseminate large amounts of data.
  • data deduplication - Data deduplication -- often called intelligent compression or single-instance storage -- is a process that eliminates redundant copies of data and reduces storage overhead (see the sketch after this list).
  • data deduplication hardware - Data deduplication hardware is a storage product that eliminates redundant copies of data, retaining only a single instance to be stored.
  • data deduplication ratio - The data deduplication ratio measures how effectively deduplication shrinks a backup: divide the capacity of the backed-up data before duplicates are removed by the actual capacity used once the backup is complete (a worked example follows this list).
  • data federation software - Data federation software is programming that provides an organization with the ability to collect data from disparate sources and aggregate it in a virtual database where it can be used for business intelligence (BI) or other analysis.
  • data life cycle management (DLM) - Data life cycle management (DLM) is a policy-based approach to managing the flow of an information system's data throughout its life cycle: from creation and initial storage to the time when it becomes obsolete and is deleted.
  • data management - Data management is the process of ingesting, storing, organizing and maintaining the data created and collected by an organization.
  • data reduction - Data reduction is the process of reducing the capacity required to store data.
  • data reduction in primary storage (DRIPS) - Data reduction in primary storage (DRIPS) is the application of capacity optimization techniques for data that is in active use.
  • data scrubbing (data cleansing) - Data scrubbing, also called data cleansing, is the process of cleaning up data in a database that is incorrect, incomplete, or duplicated.
  • global deduplication - Global deduplication is a method of preventing redundant data when backing up data to multiple deduplication devices.
  • inline deduplication - Inline deduplication is the removal of redundancies from data before or as it is being written to a backup device.
  • KSM (kernel samepage merging) - KSM (kernel samepage merging) is a Linux kernel feature that allows the KVM hypervisor to share identical memory pages among different processes or virtual machines on the same server.
  • memory mirroring - Memory mirroring is the division of memory on a server into two channels.
  • post-processing deduplication (PPD) - Post-processing deduplication (PPD), also known as asynchronous de-duplication, is the analysis and removal of redundant data after a backup is complete and data has been written to storage.
  • source deduplication - Source deduplication is the removal of redundancies from data before transmission to the backup target.
  • target deduplication - Target deduplication is the removal of redundancies from a backup transmission as it passes through an appliance sitting between the source and the backup target.
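
To make the single-instance idea concrete, here is a minimal sketch of chunk-level deduplication. It assumes fixed-size 4 KB chunks and SHA-256 fingerprints; real products typically use variable-size chunking and vendor-specific fingerprinting, and the `dedupe` and `restore` helpers below are illustrative only.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunks

def dedupe(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Store each unique chunk once; return the chunk store plus the
    ordered list of fingerprints needed to rebuild the original stream."""
    store: dict[str, bytes] = {}  # fingerprint -> unique chunk (single-instance storage)
    recipe: list[str] = []        # ordered fingerprints that reconstruct the input
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)  # a redundant copy is never stored twice
        recipe.append(fp)
    return store, recipe

def restore(store: dict[str, bytes], recipe: list[str]) -> bytes:
    """Rebuild the original stream from the chunk store and the recipe."""
    return b"".join(store[fp] for fp in recipe)

if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 8192  # highly redundant input
    store, recipe = dedupe(data)
    assert restore(store, recipe) == data
    stored = sum(len(chunk) for chunk in store.values())
    print(f"logical bytes: {len(data)}, stored bytes: {stored}")  # 20480 vs. 8192
```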
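Following the ratio definition above, the arithmetic is simply before-deduplication capacity divided by after-deduplication capacity; the 500 GB and 100 GB figures below are made-up round numbers.

```python
backed_up_gb = 500  # capacity of the backed-up data before duplicates are removed
stored_gb = 100     # actual capacity used once the backup is complete
print(f"deduplication ratio: {backed_up_gb / stored_gb:.0f}:1")  # -> 5:1
```

Running the chunking sketch above on its toy input gives a ratio of 20480 / 8192, or 2.5:1.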
