
Automatic Dependency Tracking

Expert · 22 min read

The Magic Question: How Does It Know?

When you write this code:

const firstName = signal('Alice');
const lastName = signal('Smith');

const fullName = computed(() => `${firstName.value} ${lastName.value}`);

How does fullName know it depends on firstName and lastName? You never declared those dependencies. You didn't pass an array like React's useEffect([firstName, lastName]). The system just... knows.

This isn't magic. It's a clever pattern called automatic dependency tracking, and once you understand it, every signal system becomes transparent.

The Mental Model


Imagine a detective following a suspect. The detective (tracking context) tails the suspect (your computation function) as they go about their business. Every time the suspect visits a location (reads a signal), the detective writes it down in a notebook. When the suspect finishes their route, the detective has a complete list of every signal they visited. That list becomes the dependency set.

Next time any of those signals change, the detective knows to re-tail the suspect, because the suspect's route might change -- maybe they skip one signal and visit a new one. So the dependency list is rebuilt on every execution.

The Tracking Context Pattern

The core mechanism is a module-scoped variable (often called activeComputation, activeEffect, or currentSubscriber) that points to whatever computation is currently being evaluated.

let activeComputation = null;

// Detach a computation from everything it subscribed to last run
function cleanup(computation) {
  for (const dep of computation.dependencies) {
    dep.subscribers.delete(computation);
  }
  computation.dependencies.clear();
}

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();

  return {
    subscribers,  // exposed so cleanup() can detach computations
    get value() {
      if (activeComputation) {
        subscribers.add(activeComputation);
        activeComputation.dependencies.add(this);
      }
      return value;
    },
    set value(newValue) {
      if (!Object.is(value, newValue)) {
        value = newValue;
        for (const sub of [...subscribers]) {
          sub.execute();
        }
      }
    }
  };
}

function computed(fn) {
  let cachedValue;
  let dirty = true;
  const subscribers = new Set();

  const computation = {
    dependencies: new Set(),
    subscribers,
    execute() {
      if (!dirty) {
        dirty = true;
        // Propagate invalidation to anything that depends on this computed
        for (const sub of [...subscribers]) sub.execute();
      }
    },
    get value() {
      if (dirty) {
        cleanup(computation);
        const prevComputation = activeComputation;
        activeComputation = computation;
        try {
          cachedValue = fn();
        } finally {
          activeComputation = prevComputation;
        }
        dirty = false;
      }
      if (activeComputation) {
        // This computed is being read inside another tracking context:
        // register it as a dependency, exactly like a signal read
        subscribers.add(activeComputation);
        activeComputation.dependencies.add(computation);
      }
      return cachedValue;
    }
  };

  return computation;
}

The key lines are:

  1. activeComputation = computation -- before running the derivation, we set ourselves as the active context
  2. if (activeComputation) subscribers.add(activeComputation) -- when a signal is read, it checks if anyone is tracking
  3. activeComputation = prevComputation -- after running, we restore the previous context (supports nesting)

This is, in effect, a stack: computeds can nest inside other computeds, and the prevComputation save/restore hands each finished level back to its parent.
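A stripped-down sketch makes the save/restore discipline visible. The names here (track, active, trace) are illustrative, not any framework's API; the point is only that nested runs unwind correctly:

```javascript
// Toy save/restore tracker: `active` always points at the innermost
// computation, and finishing a level restores its parent.
let active = null;
const trace = [];

function track(name, fn) {
  const prev = active;          // save the outer context
  active = name;
  trace.push(`enter ${name}`);
  try {
    fn();
  } finally {
    active = prev;              // restore it, even if fn throws
    trace.push(`exit ${name}, active=${active}`);
  }
}

track('outer', () => {
  track('inner', () => {});
});

console.log(trace);
// ['enter outer', 'enter inner', 'exit inner, active=outer', 'exit outer, active=null']
```

Because the restore happens in a finally block, an exception inside a nested computation can never leave the tracking context pointing at the wrong level.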

Quiz
Why does the tracking context pattern use a save/restore approach (prevComputation) instead of a simple global variable?

The Subscription Graph

After all computeds and effects have run once, the system has built a complete subscription graph -- a directed graph where edges represent "depends on" relationships.

const a = signal(1);
const b = signal(2);
const c = computed(() => a.value + b.value);
const d = computed(() => c.value * 2);
const e = computed(() => a.value * 10);

effect(() => console.log(d.value, e.value));

The resulting graph:

a ──→ c ──→ d ──→ effect
 \         ↗
  └──→ e ─┘
b ──→ c

When a changes:

  • c is dirty (depends on a)
  • d is dirty (depends on c)
  • e is dirty (depends on a)
  • The effect re-executes (depends on d and e)

When b changes:

  • c is dirty (depends on b)
  • d is dirty (depends on c)
  • e is NOT dirty (doesn't depend on b)
  • The effect re-executes, but e returns its cached value

This is automatic. You didn't declare any of these relationships. The tracking context discovered them by running your code.
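To watch that discovery happen end to end, here's a deliberately minimal, eager sketch -- no caching, no cleanup, illustrative names only -- that still builds its subscriptions purely by running the function:

```javascript
// Minimal eager tracking: dependencies are discovered by running fn,
// never declared. (No caching or cleanup; purely illustrative.)
let active = null;

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  return {
    get value() {
      if (active) subscribers.add(active);  // discovery happens here
      return value;
    },
    set value(next) {
      value = next;
      for (const run of [...subscribers]) run();
    }
  };
}

function effect(fn) {
  const run = () => {
    const prev = active;
    active = run;
    try { fn(); } finally { active = prev; }
  };
  run();
}

const a = signal(1);
const b = signal(2);
const log = [];
effect(() => log.push(a.value + b.value));  // reads register the effect

a.value = 10;  // effect re-runs: pushes 12
b.value = 20;  // effect re-runs: pushes 30
console.log(log);  // [3, 12, 30]
```

The effect never names a or b as dependencies; reading them while it runs is the declaration.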

Execution Trace

  1. Initial run -- effect() executes and sets activeComputation = effect; tracking begins.
  2. Read d.value -- d is dirty, so it evaluates with activeComputation = d; d becomes the active tracker.
  3. Read c.value (inside d) -- c is dirty, so it evaluates with activeComputation = c; c becomes the active tracker (d is saved).
  4. Read a.value (inside c) -- c is added to a.subscribers.
  5. Read b.value (inside c) -- c is added to b.subscribers.
  6. c done -- c = 3; restore activeComputation = d, and d's tracking resumes.
  7. The read of c registers d -- c.subscribers now contains d.
  8. d done -- d = 6; restore activeComputation = effect, and the effect's tracking resumes.
  9. Read e.value -- e evaluates and tracks a; a.subscribers now contains c AND e.
  10. Effect done -- logs 6, 10; the full dependency graph is established.

Cleanup on Re-execution

Here's a subtlety that most tutorials skip: dependencies can change between executions.

const showDetails = signal(false);
const name = signal('Alice');
const bio = signal('Engineer at Google');

const display = computed(() => {
  if (showDetails.value) {
    return `${name.value}: ${bio.value}`;
  }
  return name.value;
});

When showDetails is false, display depends on showDetails and name. When showDetails becomes true, display now also depends on bio.

But here's the critical part: if showDetails was true and becomes false again, display should stop depending on bio. Otherwise, changes to bio would trigger unnecessary recalculations.

The solution: clear all dependencies before each re-execution.

function reexecute(computation) {
  // Remove this computation from all signals' subscriber lists
  for (const dep of computation.dependencies) {
    dep.subscribers.delete(computation);
  }
  computation.dependencies.clear();

  // Re-execute: dependencies will be re-tracked from scratch
  const prevComputation = activeComputation;
  activeComputation = computation;
  try {
    computation.fn();
  } finally {
    activeComputation = prevComputation;
  }
}

Every execution starts with a clean slate. The computation "forgets" all its old dependencies and builds a fresh set by running the function again. This means the dependency graph is always accurate -- it reflects what the computation actually read, not what it might read.
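A runnable sketch shows the cleanup paying off. The minimal signal/effect here are hypothetical stand-ins (not a framework API), but the toggle behavior is exactly the scenario above: bio is tracked only while the detailed branch runs.

```javascript
// Each re-run first detaches from every old dependency, then re-tracks.
let active = null;

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  const s = {
    subscribers,
    get value() {
      if (active) {
        subscribers.add(active);
        active.dependencies.add(s);
      }
      return value;
    },
    set value(next) {
      value = next;
      for (const run of [...subscribers]) run();
    }
  };
  return s;
}

function effect(fn) {
  const run = () => {
    // Clean slate: forget last run's dependencies before re-tracking
    for (const dep of run.dependencies) dep.subscribers.delete(run);
    run.dependencies.clear();
    const prev = active;
    active = run;
    try { fn(); } finally { active = prev; }
  };
  run.dependencies = new Set();
  run();
  return run;
}

const showDetails = signal(false);
const name = signal('Alice');
const bio = signal('Engineer');

let executions = 0;
effect(() => {
  executions++;
  void (showDetails.value ? `${name.value}: ${bio.value}` : name.value);
});

bio.value = 'Designer';    // bio untracked: executions stays 1
showDetails.value = true;  // re-run, bio now tracked: executions 2
bio.value = 'Manager';     // re-run: executions 3
showDetails.value = false; // re-run, bio dropped again: executions 4
bio.value = 'Artist';      // no re-run: executions stays 4
console.log(executions);   // 4
```

Without the two cleanup lines at the top of run(), the last write to bio would trigger a fifth, wasted execution.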

Common Trap

This cleanup-and-retrack pattern means that dependency tracking has a cost proportional to the number of signals read per execution. If your computed reads 1000 signals, it creates and tears down 1000 subscriptions on every re-execution. In practice, this is fast (Set operations are O(1) amortized), but it's worth knowing that the dependency graph isn't free. Solid and Vue have both optimized this with techniques like double-buffering the subscription sets to avoid unnecessary Set allocations.

Quiz
If a computed reads signal A when condition is true and signal B when condition is false, what happens to the dependency set when the condition toggles?

The Diamond Dependency Problem

The diamond problem is the most important consistency challenge in reactive systems. Consider:

const source = signal(1);
const left = computed(() => source.value * 2);
const right = computed(() => source.value + 10);
const bottom = computed(() => left.value + right.value);

effect(() => console.log(bottom.value));

The dependency graph forms a diamond:

     source
    /      \
  left    right
    \      /
     bottom
       |
     effect

When source changes, both left and right are dirty. bottom depends on both. If the system naively propagates notifications:

  1. source changes → notifies left and right
  2. left recalculates → notifies bottom; bottom recalculates using old right
  3. right recalculates → notifies bottom; bottom recalculates again with correct values

That first calculation of bottom at step 2 is a glitch -- it used stale data. And bottom computed twice when it should have computed once.

The solution: topological evaluation

Well-designed signal systems don't eagerly propagate values. Instead:

  1. Mark all downstream nodes as dirty (push notification)
  2. Collect all dirty nodes
  3. Sort by topological order (depth in the graph)
  4. Evaluate in order

Execution Trace

  1. source.value = 5 -- mark left (depth 1) and right (depth 1) as dirty. Dirty flags are pushed; nothing evaluates yet.
  2. Propagate dirty -- mark bottom (depth 2) as dirty: its dependencies are dirty, so it is dirty too.
  3. Propagate dirty -- mark the effect (depth 3) as scheduled: an effect with dirty dependencies needs to run.
  4. Evaluate depth 1 -- left = 5 * 2 = 10, right = 5 + 10 = 15. Both depth-1 nodes evaluate first.
  5. Evaluate depth 2 -- bottom = 10 + 15 = 25. Both inputs are fresh when bottom evaluates.
  6. Execute effect -- console.log(25). Correct value, zero glitches, each node computed exactly once.
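A toy version of that schedule fits in one file. Depth is passed in by hand here (real systems derive it from the graph), and derive/read are illustrative names, but the mark-then-sort-then-evaluate shape is the technique described above:

```javascript
// Mark-dirty first, then evaluate dirty nodes shallowest-first, so every
// node sees fresh inputs and runs exactly once per change.
let active = null;

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  return {
    get value() {
      if (active) subscribers.add(active);
      return value;
    },
    set value(next) {
      value = next;
      const dirty = new Set();
      const mark = (node) => {
        if (!dirty.has(node)) {
          dirty.add(node);
          node.subscribers.forEach(mark);  // push dirty flags only
        }
      };
      subscribers.forEach(mark);
      // Topological evaluation: shallow nodes before deep ones
      [...dirty].sort((m, n) => m.depth - n.depth).forEach((n) => n.run());
    }
  };
}

function derive(fn, depth) {
  const node = {
    subscribers: new Set(),
    depth,
    value: undefined,
    run() {
      const prev = active;
      active = node;
      try { node.value = fn(); } finally { active = prev; }
    },
    read() {
      if (active) node.subscribers.add(active);
      return node.value;
    }
  };
  node.run();
  return node;
}

const source = signal(1);
const left = derive(() => source.value * 2, 1);
const right = derive(() => source.value + 10, 1);
let bottomRuns = 0;
const bottom = derive(() => {
  bottomRuns++;
  return left.read() + right.read();
}, 2);

source.value = 5;
console.log(bottom.value, bottomRuns);  // 25 2  (one initial run + one update)
```

bottom runs exactly once per change to source, and both of its inputs are fresh when it does.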
Quiz
In the diamond dependency graph, how many times does bottom's derivation function execute when source changes?

Batching Updates

What happens when multiple signals change at once?

const x = signal(1);
const y = signal(2);
const sum = computed(() => x.value + y.value);

effect(() => console.log(sum.value));

// Without batching:
x.value = 10;  // effect runs: logs 12
y.value = 20;  // effect runs: logs 30

// With batching:
batch(() => {
  x.value = 10;
  y.value = 20;
});
// effect runs once: logs 30

Without batching, changing x triggers the effect, which sees x=10, y=2 (intermediate state). Then changing y triggers the effect again with x=10, y=20 (final state). Two executions, one of which showed an intermediate state the user never intended.

Batching defers all notifications until the batch completes:

let batchDepth = 0;  // tracks nesting; only the outermost batch flushes

function batch(fn) {
  batchDepth++;
  try {
    fn();
  } finally {
    batchDepth--;
    if (batchDepth === 0) {
      flushPendingNotifications();  // drain every queued subscriber once
    }
  }
}

During a batch, signal writes still update the stored value, but subscriber notifications are queued instead of executed. When the batch ends, all queued notifications flush at once, and the topological evaluation runs on the final state.
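A self-contained sketch of that queue (subscribe-callback style, illustrative names only) shows why the batched case runs the subscriber exactly once -- the pending Set deduplicates it:

```javascript
// Writes during a batch queue their subscribers in a Set (deduplicated);
// the queue flushes once when the outermost batch ends.
let batchDepth = 0;
const pending = new Set();

function notify(subscribers) {
  if (batchDepth > 0) {
    subscribers.forEach((s) => pending.add(s));  // queue, don't run
  } else {
    [...subscribers].forEach((s) => s());
  }
}

function batch(fn) {
  batchDepth++;
  try { fn(); } finally {
    batchDepth--;
    if (batchDepth === 0) {
      const queued = [...pending];
      pending.clear();
      queued.forEach((s) => s());  // flush on the final state
    }
  }
}

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  return {
    subscribe(fn) { subscribers.add(fn); },
    get value() { return value; },
    set value(next) {
      if (!Object.is(value, next)) { value = next; notify(subscribers); }
    }
  };
}

const x = signal(1);
const y = signal(2);
const log = [];
const rerun = () => log.push(x.value + y.value);
x.subscribe(rerun);
y.subscribe(rerun);

x.value = 10; y.value = 20;                      // unbatched: two runs
batch(() => { x.value = 100; y.value = 200; });  // batched: one run
console.log(log);  // [12, 30, 300]
```

The unbatched writes produce the intermediate 12; inside the batch, both writes land before rerun executes once against the final state.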

Info

Vue and Angular batch automatically -- any synchronous state changes in the same execution context are batched. Solid and Preact require explicit batch() calls for certain patterns, though DOM updates are always batched. The TC39 proposal's Watcher API provides batching through its notify() callback.

Advanced: Dynamic Dependency Graphs

The dependency graph isn't static. It changes on every execution based on code paths taken:

const loggedIn = signal(false);
const userName = signal('');
const guestCount = signal(0);

const greeting = computed(() => {
  if (loggedIn.value) {
    return `Welcome back, ${userName.value}!`;
  }
  return `Guest #${guestCount.value}`;
});

When loggedIn is false:

  • greeting depends on loggedIn and guestCount
  • Changes to userName do NOT trigger a recalculation

When loggedIn becomes true:

  • greeting now depends on loggedIn and userName
  • Changes to guestCount do NOT trigger a recalculation

This is fundamentally different from React's useEffect dependency array, which is static. You declare all possible dependencies upfront. Signals discover actual dependencies at runtime. This means:

  • No stale closure bugs (signals always read the current value)
  • No unnecessary re-executions from listed-but-unread dependencies
  • No "exhaustive deps" lint rule needed

The tracking context stack in real frameworks

Real frameworks use a stack rather than a single variable, because effects can trigger synchronous computeds which can trigger more computeds:

const stack = [];

function pushTracking(computation) {
  stack.push(computation);
  activeComputation = computation;
}

function popTracking() {
  stack.pop();
  activeComputation = stack[stack.length - 1] ?? null;
}

Solid's reactive runtime (@solidjs/signals in 2.0) uses a flat array stack with a depth counter. Vue's @vue/reactivity package uses activeEffect with a parent pointer (linked list stack). Angular's signals use a similar global tracking context. The pattern is universal; the implementation details differ.

One edge case every framework handles: reading a signal inside an async callback within an effect. The tracking context is synchronous -- by the time your await resumes, the tracking is gone. This is why all signal frameworks warn that async code inside effects doesn't track properly. The workaround is to read all signals synchronously, then do async work with the values.
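The timing is easy to see in a toy runner (hypothetical names, not a framework API): the runner restores the context synchronously, but an async body returns at its first await and resumes later, after the restore has already happened.

```javascript
// Why reads after `await` escape tracking: restore runs synchronously,
// before the awaited continuation ever resumes.
let active = null;

function signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  return {
    subscribers,
    get value() {
      if (active) subscribers.add(active);
      return value;
    }
  };
}

function effect(fn) {
  const tracker = { name: 'effect' };
  const prev = active;
  active = tracker;
  try {
    fn();               // an async fn returns at its first await
  } finally {
    active = prev;      // restore happens before the await resumes
  }
  return tracker;
}

const a = signal(1);
const b = signal(2);

const tracker = effect(async () => {
  a.value;                   // synchronous read: tracked
  await Promise.resolve();   // tracking context unwinds here
  b.value;                   // resumes with active === null: untracked
});

setTimeout(() => {
  console.log(a.subscribers.has(tracker)); // true
  console.log(b.subscribers.has(tracker)); // false
}, 0);
```

This is the mechanical reason for the workaround above: read every signal before the first await, then do the async work with plain values.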

Key Rules
  1. Dependencies are discovered at runtime by tracking which signals are read during computation execution
  2. Dependencies are cleared and rebuilt on every re-execution, keeping the graph accurate for conditional code paths
  3. Glitch-free execution requires topological sorting: evaluate shallow nodes before deep ones
  4. Batching defers notifications until all synchronous state changes complete, preventing intermediate states
  5. Async code inside effects breaks tracking -- read all signals synchronously before awaiting
What developers do -- and what they should do instead:

  • Reading a signal outside a tracking context and expecting reactivity. Without an active tracking context, the signal has no subscriber to notify; the read works, but it is not reactive. Instead, always read signals inside computed, effect, or a framework's rendering context.

  • Putting async/await inside a computed derivation. The tracking context is synchronous: after an await, activeComputation is null, so any signal reads after the await are not tracked. Instead, use the framework's async primitive (createResource in Solid, asyncComputed in Vue).

  • Assuming the dependency graph is static. Dependencies change on every execution based on which code branches run. This dynamic tracking is a feature, not a bug: conditional dependencies work correctly without manual declaration.