D2TS is a TypeScript implementation of differential dataflow - a powerful data-parallel programming framework that enables incremental computations over changing input data.
You can use D2TS to build data pipelines that execute incrementally: data is processed as it arrives, and only the parts that have changed are recomputed. This could be as simple as remapping data, or as complex as a full join combining two data sources where one is a computed aggregate.
D2TS can be used in conjunction with ElectricSQL to build data pipelines on top of ShapeStreams that can be executed incrementally.
A D2TS pipeline is also fully type-safe: types are inferred at each step of the pipeline, with autocompletion in your IDE.
- Incremental Processing: Efficiently process changes to input data without recomputing everything
- Rich Operators: Supports common operations with a pipeline API:
  - `buffer`: Buffer and emit versions when they are complete
  - `concat`: Concatenate two streams
  - `consolidate`: Consolidates the elements in the stream at each version
  - `count`: Count elements by key
  - `distinct`: Remove duplicates
  - `filter`: Filter elements based on predicates
  - `iterate`: Perform iterative computations
  - `join`: Join two streams
  - `keyBy`: Key a stream by a property
  - `map`: Transform elements in a stream
  - `reduce`: Aggregate values by key
  - `rekey`: Change the key of a keyed stream
  - `unkey`: Remove keys from a keyed stream
  - `output`: Output the messages of the stream
  - `pipe`: Build a pipeline of operators, enabling reuse of combinations of operators
- SQLite Integration: Optional SQLite backend for persisting operator state allowing for larger datasets and resumable pipelines
- Type Safety: Full TypeScript type safety and inference through the pipeline API
npm install @electric-sql/d2ts
Here's a simple example that demonstrates the core concepts:
import { D2, map, filter, debug, MultiSet } from '@electric-sql/d2ts'
// Create a new D2 graph with initial frontier
// The initial frontier is the lower bound of the version of the data that may
// come in future.
const graph = new D2({ initialFrontier: 0 })
// Create an input stream
// We can specify the type of the input stream, here we are using number.
const input = graph.newInput<number>()
// Build a simple pipeline that:
// 1. Takes numbers as input
// 2. Adds 5 to each number
// 3. Filters to keep only even numbers
// Pipelines can have multiple inputs and outputs.
const output = input.pipe(
map((x) => x + 5),
filter((x) => x % 2 === 0),
debug('output'),
)
// Finalize the pipeline; after this point we can no longer add operators or
// inputs
graph.finalize()
// Send some data
// Data is sent as a MultiSet, which is a map of values to their multiplicity
// Here we are sending 3 numbers (1-3), each with a multiplicity of 1
// When you send data, you set the version number, here we are using 0
// The key thing to understand is that the MultiSet represents a *change* to
// the data, not the data itself. "Inserts" and "Deletes" are represented as
// an element with a multiplicity of 1 or -1 respectively.
input.sendData(
0, // The version of the data
new MultiSet([
[1, 1],
[2, 1],
[3, 1],
]),
)
// Set the frontier to version 1
// The "frontier" is the lower bound of the version of the data that may come in future.
// By sending a frontier, you indicate that you are done sending data for any version less than the frontier, and therefore D2TS operators that require complete versions can process that data and output the results.
input.sendFrontier(1)
// Process the data
graph.run()
// Output will show:
// 6 (from 1 + 5)
// 8 (from 3 + 5)
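Because a MultiSet represents a change rather than the data itself, later rounds of input are processed incrementally. As a sketch continuing the example above, we can retract a value and insert a new one at version 1:
// Retract the 3 and insert a 9 at version 1
input.sendData(
  1,
  new MultiSet([
    [3, -1], // delete: multiplicity -1
    [9, 1], // insert: multiplicity 1
  ]),
)
input.sendFrontier(2)
graph.run()
// The debug output now reports only the change: 8 is retracted
// (3 + 5 = 8, multiplicity -1) and 14 is added (9 + 5 = 14, multiplicity 1)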
As mentioned above, D2TS can be used in conjunction with ElectricSQL to build incremental data pipelines on top of ShapeStreams.
Here's an example of how to use D2TS with ElectricSQL:
import { D2, map, filter, output } from '@electric-sql/d2ts'
import { electricStreamToD2Input } from '@electric-sql/d2ts/electric'
import { ShapeStream } from '@electric-sql/client'
// Create D2 graph
const graph = new D2({ initialFrontier: 0 })
// Create D2 input
const input = graph.newInput<any>()
// Configure the pipeline
input
.pipe(
map(([key, data]) => data.value),
filter(value => value > 10),
// ... any other processing / joining
output((msg) => doSomething(msg))
)
// Finalize graph
graph.finalize()
// Create Electric stream (example)
const electricStream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'items',
replica: 'full', // <-- IMPORTANT!
}
})
// Connect Electric stream to D2 input
electricStreamToD2Input(electricStream, input)
There is a complete example in the ./examples/electric directory.
There are a number of examples in the ./examples directory, covering:
- Basic usage (map and filter)
- "Fruit processed" (reduce and consolidate)
- Joins between two streams
- Iterative computations
- Modeling "includes" using joins
- ElectricSQL example (using D2TS with ElectricSQL)
const graph = new D2({ initialFrontier: 0 })
The `D2` constructor takes an optional options object with the following properties:
- `initialFrontier`: The initial frontier of the graph; defaults to `0`
An instance of a D2 graph is used to build a dataflow graph, and has the following main methods:
- `newInput<T>(): IStreamBuilder<T>`: Create a new input stream
- `finalize(): void`: Finalize the graph; after this point no more operators or inputs can be added
- `run(): void`: Process all pending versions of the dataflow graph
Input streams are created using the `newInput<T>()` method, and have the following methods:
- `sendData(version: Version | number | number[], data: MultiSet<T>): void`: Send data to the input stream
- `sendFrontier(version: Antichain | Version | number | number[]): void`: Send a frontier to the input stream
Versions are used to represent the version of the data, and form a lattice of integers. For most use cases you will only need to provide a single integer version, and all APIs that take a version accept a single integer. More advanced use cases may require the use of the lattice to track multidimensional versions.
Frontiers are used to represent the lower bound of the version of the data that may come in future, and are an antichain of versions. Again, in most cases you can just use a single integer version to represent the frontier.
There is a `Version` class that represents a version. The preferred way to create a version is the `v` helper function, as this ensures that you reuse the same object for the same version, making equality checks and comparisons more efficient:
const version = v(1)
Multidimensional versions are also supported, and are created using the same `v` helper function:
const version = v([1, 2])
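Multidimensional versions are partially ordered coordinate-wise, which is what makes an antichain of mutually incomparable versions (a frontier) meaningful:
// v([1, 2]) precedes v([2, 3]) because both coordinates advance, while
// v([1, 3]) and v([2, 1]) are incomparable: neither dominates the other
const earlier = v([1, 2])
const later = v([2, 3])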
In most cases you will only need to use a single integer version, which can be passed directly to the `sendData` and `sendFrontier` methods:
input.sendData(1, new MultiSet([[1, 1]]))
An `Antichain` is a set of versions that are mutually incomparable; it is used to represent the frontier of the data. An antichain can be created using the `Antichain` constructor:
const frontier = new Antichain([v(1), v([2])])
In most cases you will only need a single integer version to represent the frontier, which can be passed directly to the `sendFrontier` method:
input.sendFrontier(1)
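Since `sendFrontier` accepts an `Antichain`, a `Version`, or a plain integer, the following calls are equivalent (a sketch; it assumes `Antichain` is exported from the package root alongside `v`):
import { Antichain, v } from '@electric-sql/d2ts'
// Three equivalent ways of advancing the frontier to version 1; the
// integer form is shorthand for the full antichain
input.sendFrontier(1)
input.sendFrontier(v(1))
input.sendFrontier(new Antichain([v(1)]))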
A `MultiSet` is a map of values to their multiplicities. It is used to represent the changes to a collection.
A `MultiSet` is created by passing an array of `[value, multiplicity]` pairs. Here we are creating a MultiSet with the values 1, 2, and 3, each with a multiplicity of 1:
const multiSet = new MultiSet([
[1, 1],
[2, 1],
[3, 1],
])
MultiSets can contain values of any type:
// Here we have a MultiSet of new "comments" with the interface `Comment`
const multiSet = new MultiSet<Comment>([
[{ id: '1', text: 'Hello, world!', userId: '321' }, 1],
[{ id: '2', text: 'Hello, world!', userId: '123' }, 1],
])
An important principle of D2TS is "keyed" MultiSets, where the value is a tuple of `[key, value]`.
// Here we have a MultiSet of new "comments" but we have keyed them by the
// `userId`
const multiSet = new MultiSet<[string, Comment]>([
[['321', { id: '1', text: 'Hello, world!', userId: '321' }], 1],
[['123', { id: '2', text: 'Hello, world!', userId: '123' }], 1],
])
Inserts and deletes are represented as an element with a multiplicity of 1 or -1 respectively.
// Here we are inserting one new comment and deleting another
const multiSet = new MultiSet<[string, Comment]>([
[['321', { id: '1', text: 'Hello, world!', userId: '321' }], 1],
[['123', { id: '2', text: 'Hello, world!', userId: '123' }], -1],
])
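An update to an existing value is therefore expressed as a delete of the old value paired with an insert of the new one in the same change set:
// Here we are updating comment '1' by retracting the old row and
// inserting the edited row
const multiSet = new MultiSet<[string, Comment]>([
  [['321', { id: '1', text: 'Hello, world!', userId: '321' }], -1],
  [['321', { id: '1', text: 'Hello, world! (edited)', userId: '321' }], 1],
])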
buffer()
Buffers the elements of the stream, emitting a version when the buffer is complete.
const output = input.pipe(buffer())
concat(other: IStreamBuilder<T>)
Concatenates two input streams - the output stream will contain the elements of both streams.
const output = input.pipe(concat(other))
consolidate()
Consolidates the elements in the stream at each version; essentially, it ensures the output stream is at the latest known complete version.
const output = input.pipe(consolidate())
count()
Counts the number of elements in the stream by key.
const output = input.pipe(
map((data) => [data.somethingToKeyOn, data]),
count(),
)
debug(name: string)
Logs the messages of the stream to the console; the name is used to identify the stream in the logs.
const output = input.pipe(debug('output'))
distinct()
Removes duplicate values from the stream by key.
const output = input.pipe(distinct())
filter(predicate: (data: T) => boolean)
Filters the stream based on a predicate.
const output = input.pipe(filter((x) => x % 2 === 0))
iterate<T>(f: (stream: IStreamBuilder<T>) => IStreamBuilder<T>)
Performs iterative computations on a stream by creating a feedback loop. This allows you to repeatedly process data until it reaches a fixed point or meets specific conditions.
The `iterate` operator takes a function that defines the iteration step. Inside this function, you can apply any series of transformations to the stream, and the results will be fed back into the loop for further iterations.
// This example repeatedly doubles numbers and includes previous values,
// filtering out any values > 50
const output = input.pipe(
iterate((stream) =>
stream.pipe(
map((x) => x * 2), // Double each value
concat(stream), // Include original values
filter((x) => x <= 50), // Keep only values <= 50
map((x) => [x, []]), // Convert to keyable format
distinct(), // Remove duplicates
map((x) => x[0]), // Convert back to simple values
consolidate(), // Ensure consistent version updates
)
),
debug('results')
)
In this example:
- The `iterate` function creates a feedback loop on the input stream
- Each value is doubled, then combined with all previous values
- Values greater than 50 are filtered out
- The remaining values are deduplicated and consolidated before the next iteration
The iteration will continue until no new values are produced (reaching a fixed point) or until the frontier advances beyond the iteration scope.
Common use cases for the `iterate` operator include:
- Computing transitive closures in graph algorithms
- Propagating values until convergence
- Implementing fixed-point algorithms
- Simulating recursive processes with bounded results
This powerful operator enables complex recursive computations while maintaining the incremental nature of differential dataflow.
join(other: IStreamBuilder<T>, joinType?: JoinType)
Joins two keyed streams based on matching keys. The `joinType` parameter controls how the join behaves:
- `'inner'` (default): Returns only records that have matching keys in both streams
- `'left'`: Returns all records from the left stream, plus matching records from the right (with nulls for non-matches)
- `'right'`: Returns all records from the right stream, plus matching records from the left (with nulls for non-matches)
- `'full'`: Returns all records from both streams, with nulls for non-matches on either side
const input = graph.newInput<[key: string, value: number]>()
const other = graph.newInput<[key: string, value: string]>()
// Inner join - only matching keys
const innerJoin = input.pipe(join(other, 'inner'))
// Left join - all records from input, matching from other
const leftJoin = input.pipe(join(other, 'left'))
// Right join - all records from other, matching from input
const rightJoin = input.pipe(join(other, 'right'))
// Full join - all records from both streams
const fullJoin = input.pipe(join(other, 'full'))
The join operation is type-safe, with appropriate nullable types for the different join types:
// The two streams are initially keyed by the userId and commentId respectively
const comments = graph.newInput<[commentId: string, comment: Comment]>()
const users = graph.newInput<[userId: string, user: User]>()
// Map the comments to be "keyed" by the user id
const commentsByUser = comments.pipe(
map(([commentId, comment]) => [comment.userId, comment] as [string, Comment]),
)
// Left join - keeps all comments, even those without matching users
const output = commentsByUser.pipe(
join(users, 'left'),
map(([userId, [comment, user]]) => {
// user can be null in a left join if there's no matching user
return [
comment.id,
{
...comment,
userName: user?.name ?? 'Unknown User',
},
]
}),
)
When using SQLite persistence, you can supply the database as an additional parameter:
// Using SQLite persistence
const db = new BetterSQLite3Wrapper(sqlite)
const persistedJoin = input.pipe(join(other, 'inner', db))
map<U>(f: (data: T) => U)
Transforms the elements of the stream using a function.
const output = input.pipe(map((x) => x + 5))
output(messageHandler: (message: Message<T>) => void)
Outputs the messages of the stream.
input.pipe(
output((message) => {
if (message.type === MessageType.DATA) {
console.log('Data message', message.data)
} else if (message.type === MessageType.FRONTIER) {
console.log('Frontier message', message.data)
}
}),
)
The message is a `Message<T>` object, with the structure:
type Message<T> =
  | {
      type: typeof MessageType.DATA
      data: DataMessage<T>
    }
  | {
      type: typeof MessageType.FRONTIER
      data: FrontierMessage
    }
A data message represents a change to the output data, and has the following data payload:
type DataMessage<T> = {
version: Version
collection: MultiSet<T>
}
A frontier message represents a new frontier, and has the following data payload:
type FrontierMessage = Version | Antichain
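Putting the two message types together, a typical output handler applies data messages to a downstream store and treats frontier messages as a completion signal. A sketch, where `applyChanges` and `commitUpTo` are hypothetical application functions:
input.pipe(
  output((message) => {
    if (message.type === MessageType.DATA) {
      const { version, collection } = message.data
      // `collection` is a MultiSet<T> of changes at `version`;
      // `applyChanges` is a hypothetical function in your application
      applyChanges(version, collection)
    } else if (message.type === MessageType.FRONTIER) {
      // All versions below the frontier are now complete;
      // `commitUpTo` is also hypothetical
      commitUpTo(message.data)
    }
  }),
)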
pipe(operator: (stream: IStreamBuilder<T>) => IStreamBuilder<T>)
Pipes the stream through a series of operators.
// You can specify the input and output types for the pipeline.
// Here we are specifying the input type as number and the output type as
// string.
const composedPipeline = pipe<number, string>(
map((x) => x + 5),
filter((x) => x % 2 === 0),
map((x) => x.toString()),
debug('output'),
)
const output = input.pipe(
map((x) => x + 1),
composedPipeline,
)
// Or as a function
const myPipe = (a: number, b: number) =>
pipe<number, number>(
map((x) => x + a),
filter((x) => x % b === 0),
debug('output'),
)
const output = input.pipe(myPipe(5, 2))
reduce(f: (values: [T, multiplicity: number][]) => [R, multiplicity: number][])
Performs a reduce operation on the stream grouped by key.
The function f
takes an array of values and their multiplicities and returns an array of the result and their multiplicities.
// Sum values by key from the input stream
const output = input.pipe(
map((data) => [data.somethingToKeyOn, data.aValueToSum]),
reduce((values) => {
// `values` is an array of [value, multiplicity] pairs for a specific key
let sum = 0
for (const [value, multiplicity] of values) {
sum += value * multiplicity
}
return [[sum, 1]]
}),
output((message) => {
if (message.type === MessageType.DATA) {
// `message.data` is a MultiSet representing the changes to the output
// data
// In this example, the output stream will contain the change to the
// sum of the values for each key.
console.log(message.data)
}
}),
)
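As another sketch of how `reduce` works, a per-key count can be expressed by summing only the multiplicities, which is essentially what the built-in `count` operator does:
// Count elements per key by summing multiplicities
const counts = input.pipe(
  map((data) => [data.somethingToKeyOn, data]),
  reduce((values) => {
    let count = 0
    for (const [, multiplicity] of values) {
      count += multiplicity
    }
    return [[count, 1]]
  }),
)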
D2TS provides a set of operators for working with keyed streams, which are useful for operations like joins and grouping.
Keys a stream by a property of each element. This is useful for preparing data for joins or grouping operations.
const keyedStream = input.pipe(keyBy(item => item.id))
Removes the keys from a keyed stream, returning just the values.
const unkeyedStream = keyedStream.pipe(unkey())
Changes the key of a keyed stream to a new key based on a property of the value.
const rekeyedStream = keyedStream.pipe(rekey(item => item.newKey))
Example usage with joins:
// Transform comments into [issue_id, comment] pairs for joining
const commentsByIssue = inputComments.pipe(
rekey(comment => comment.issue_id)
)
// Join comments with issues
const issuesWithComments = issuesForProject.pipe(
join(commentsByIssue)
)
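The three helpers compose; as a sketch, where `inputUsers` and `inputComments` are hypothetical unkeyed streams of `User` and `Comment` values:
// Key both streams, join them on the user id, then drop the keys again
const usersById = inputUsers.pipe(keyBy((user) => user.id))
const commentsByUserId = inputComments.pipe(keyBy((comment) => comment.userId))
const joined = commentsByUserId.pipe(
  join(usersById),
  unkey(), // a stream of [Comment, User] pairs, no longer keyed
)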
For persistence and larger datasets, a number of operators are provided that persist to SQLite:
- `consolidate()`: Consolidates data into a single version
- `count()`: Counts the number of elements in a collection
- `distinct()`: Removes duplicates from a collection
- `join()`: Joins two collections
- `map()`: Transforms elements
- `reduce()`: Aggregates values by key
Each takes a SQLite database as the final argument, for example:
// Using better-sqlite3 (the wrapper import path is an assumption; check the
// package for the exact export)
import Database from 'better-sqlite3'
import { BetterSQLite3Wrapper } from '@electric-sql/d2ts/sqlite'

const sqlite = new Database('./my_database.db')
const db = new BetterSQLite3Wrapper(sqlite)
const output = input.pipe(consolidate(db))
The operators will automatically create the necessary tables and indexes to store the state of the operators. It is advised to use the same database for all operators to ensure that the state is stored in a single location.
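Following that advice, a sketch of sharing one database across several stateful operators (`keyedInput` and `other` are assumed to be keyed streams, as in the join examples above):
// Both stateful operators persist their state to the same SQLite database
const joined = keyedInput.pipe(
  join(other, 'inner', db),
  consolidate(db),
)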
The `BetterSQLite3Wrapper` is a wrapper around the `better-sqlite3` library that provides a unified interface for the operators. Other SQLite database drivers can be supported by implementing the `SQLiteDb` interface.
This implementation started out as a TypeScript port of the Materialize blog post, but has diverged quite a bit, adopting a pipeline API pattern, SQLite persistence, and a few other changes to improve the DX.
- Core data structures:
  - `MultiSet`: Represents collections with multiplicities
  - `Version`: Handles partially ordered versions
  - `Antichain`: Manages frontiers
  - `Index`: Stores versioned operator state
- Operators:
  - Base operator classes in `src/operators/`
  - SQLite variants in `src/sqlite/operators/`
- Graph execution:
  - Dataflow graph management in `src/graph.ts` and `src/D2.ts`
  - Message passing between operators
  - Frontier tracking and advancement
- Differential Dataflow
- Differential Dataflow from Scratch
- Python Implementation
- DBSP (very similar to Differential Dataflow)