
streaming data architecture

Contributor(s): Kostas Tzoumas

A streaming data architecture is an information technology framework that focuses on processing data in motion and treats extract-transform-load (ETL) batch processing as just one more event in a continuous stream of events. This type of architecture has three basic components -- an aggregator that gathers event streams and batch files from a variety of data sources, a broker that makes the data available for consumption, and an analytics engine that analyzes the data, correlates values and blends streams together.
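To make the three roles concrete, here is a minimal, purely illustrative Python sketch in which a simple in-process queue stands in for the broker, one function plays the aggregator and another plays the analytics engine. All names and event shapes are invented for the example; a production system would use a real broker such as Apache Kafka or Amazon Kinesis and an engine such as Apache Flink or Spark.

    import queue
    import time

    broker = queue.Queue()  # stand-in for a message broker topic

    def aggregator(sources):
        """Gather events from several sources and publish them to the broker."""
        for source in sources:
            for event in source:
                broker.put(event)

    def analytics_engine():
        """Consume events from the broker and compute a running aggregate."""
        totals = {}
        while not broker.empty():
            event = broker.get()
            totals[event["key"]] = totals.get(event["key"], 0) + event["value"]
        return totals

    # Two "sources": a live event stream and a batch file treated as more events.
    stream_events = [{"key": "clicks", "value": 1, "ts": time.time()}]
    batch_events = [{"key": "clicks", "value": 5, "ts": time.time()}]

    aggregator([stream_events, batch_events])
    print(analytics_engine())  # {'clicks': 6}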

The system that receives and sends data streams and executes the application and real-time analytics logic is called the stream processor. Because a streaming data architecture supports the concept of event sourcing, it reduces the need for developers to create and maintain shared databases. Instead, all changes to an application's state are stored as a sequence of events that can be replayed or queried to reconstruct that state when necessary. Upon receiving an event, the stream processor reacts in real time or near-real time and triggers an action, such as remembering the event for future reference.
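The event-sourcing idea can be sketched in a few lines: instead of overwriting a value in a shared database, every change is appended to an event log, and the current state is rebuilt by replaying that log. The account example below is hypothetical and only meant to illustrate the pattern.

    # Append-only log of state changes; in practice this would live in the
    # stream processor or in a durable log such as Kafka.
    event_log = []

    def append(event):
        event_log.append(event)

    def rebuild_balance(account_id):
        """Reconstruct an account balance by replaying its events in order."""
        balance = 0
        for event in event_log:
            if event["account"] != account_id:
                continue
            if event["type"] == "deposited":
                balance += event["amount"]
            elif event["type"] == "withdrawn":
                balance -= event["amount"]
        return balance

    append({"account": "a-1", "type": "deposited", "amount": 100})
    append({"account": "a-1", "type": "withdrawn", "amount": 30})
    print(rebuild_balance("a-1"))  # 70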

The growing popularity of streaming data architectures reflects a shift in the development of services and products from a monolithic architecture to a decentralized one built with microservices. This type of architecture is usually more flexible and scalable than a classic database-centric application architecture because it co-locates data processing with storage to lower application response times (latency) and improve throughput. Another advantage of a streaming data architecture is that it takes the time an event occurs into account, which makes it easier for an application's state and processing to be partitioned and distributed across many instances.
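The sketch below, again with invented names and arbitrary numbers, shows why carrying the event time makes partitioning straightforward: each event is routed to a partition by key, and each partition keeps its own per-window state, so partitions can be spread across many instances.

    from collections import defaultdict

    NUM_PARTITIONS = 4   # arbitrary for the example
    WINDOW_SECONDS = 60  # size of the event-time window

    # state[partition][(key, window_start)] -> event count
    state = defaultdict(lambda: defaultdict(int))

    def process(event):
        partition = hash(event["key"]) % NUM_PARTITIONS
        window_start = int(event["event_time"] // WINDOW_SECONDS) * WINDOW_SECONDS
        state[partition][(event["key"], window_start)] += 1

    for e in [{"key": "user-1", "event_time": 0.0},
              {"key": "user-1", "event_time": 10.0},
              {"key": "user-2", "event_time": 70.0}]:
        process(e)

    print({p: dict(windows) for p, windows in state.items()})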

Streaming data architectures let developers build applications that use both bounded and unbounded data in new ways. For example, Alibaba's search infrastructure team uses a streaming data architecture powered by Apache Flink to update product detail and inventory information in real time. Netflix also uses Flink to support its recommendation engines, and ING, the global bank based in the Netherlands, uses the architecture to prevent identity theft and provide better fraud protection. Other platforms that can accommodate both stream and batch processing include Apache Spark, Apache Storm, Google Cloud Dataflow and Amazon Kinesis.
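As a rough illustration of the bounded-versus-unbounded point, the same per-event logic below runs unchanged over a finite batch (a list) and over a potentially endless stream (a generator). The error-counting logic is made up for the example, and the infinite stream is truncated only so the demo terminates.

    import itertools

    def count_errors(events):
        """Per-event logic that works the same on batches and on streams."""
        total = 0
        for event in events:
            if event.get("level") == "ERROR":
                total += 1
        return total

    # Bounded input: a batch file already loaded into memory.
    batch = [{"level": "INFO"}, {"level": "ERROR"}]
    print(count_errors(batch))  # 1

    # Unbounded input: an endless stream, truncated here only for the demo.
    def endless_stream():
        while True:
            yield {"level": "ERROR"}

    print(count_errors(itertools.islice(endless_stream(), 3)))  # 3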

This was last updated in October 2018

