MH-FLOCKE


Spiking World Model & Dream Engine

The world model predicts the next sensor state from the current sensor state and motor command. The prediction error drives learning via the Free Energy Principle (Friston 2010).

Predictive Coding

The model is a small spiking neural network (SNN) that learns via reward-modulated STDP (R-STDP), with the negative prediction error (PE) as the reward signal: less error means more reward, which strengthens the synapses that produced the prediction.

predicted = WorldModel.predict(sensor_t, motor_t)
PE = mean((predicted - sensor_{t+1})²)
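A minimal runnable sketch of this prediction-error step, with a toy linear map standing in for the SNN (the linear model, the sensor/motor dimensions, and the array shapes are illustrative assumptions, not the real implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SNN world model: a linear map from
# the concatenated [sensor_t, motor_t] to the next sensor state.
W = rng.normal(scale=0.1, size=(4, 6))  # 4 sensors, 2 motor channels (assumed)

def predict(sensor_t, motor_t):
    x = np.concatenate([sensor_t, motor_t])
    return 1.0 / (1.0 + np.exp(-W @ x))  # sigmoid readout

sensor_t = rng.random(4)
motor_t = rng.random(2)
sensor_next = rng.random(4)  # observed next sensor state

predicted = predict(sensor_t, motor_t)
PE = float(np.mean((predicted - sensor_next) ** 2))  # mean squared error
reward = -PE  # negative PE: smaller error, larger reward
```

The negated PE is what R-STDP then uses as its reward signal.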

Dream Engine

During dream phases (every 100 steps), two mechanisms consolidate learning offline:

  • Replay (70%) — Re-train on stored experiences (hippocampal replay)
  • Hallucination (30%) — World model generates imagined states from random inputs
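The replay/hallucination split can be sketched as a loop that draws a coin flip per dream step (the buffer contents and the commented-out training calls are placeholders, not the real API):

```python
import random

random.seed(1)

# Placeholder replay buffer of (sensor, action, reward) experiences.
replay_buffer = [("sensor_%d" % i, "action_%d" % i, 0.0) for i in range(50)]

def dream(n_steps=100, replay_ratio=0.7):
    counts = {"n_replay_steps": 0, "n_hallucination_steps": 0}
    for _ in range(n_steps):
        if random.random() < replay_ratio and replay_buffer:
            experience = random.choice(replay_buffer)  # hippocampal-style replay
            # world_model.train_step(*experience)      # re-train on stored experience
            counts["n_replay_steps"] += 1
        else:
            # imagined = world_model.predict(random_sensor, random_motor)
            counts["n_hallucination_steps"] += 1
    return counts

stats = dream()  # roughly 70% replay, 30% hallucination over 100 steps
```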

References

  • Rao & Ballard (1999). Predictive coding in visual cortex. Nature Neuroscience
  • Friston (2010). The free-energy principle. Nature Reviews Neuroscience
  • O’Neill et al. (2010). Reactivation of waking experience and memory. Trends in Neurosciences

API Reference

SpikingWorldModel(config: WorldModelConfig)

predict(sensor_input, motor_command) → Tensor

Predicts the next sensor state with the internal SNN. The output is read from membrane potentials through a sigmoid.

train_step(sensor_input, motor_command, actual_next) → float

Trains one step and returns the PE (MSE). Applies R-STDP with reward = −PE × learning_rate × 10.
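A worked example of that reward scaling with the default learning_rate of 0.02 (the helper name is mine, not part of the API):

```python
learning_rate = 0.02  # default from WorldModelConfig

def rstdp_reward(pe, lr=learning_rate):
    # reward = -PE * learning_rate * 10, per the train_step description
    return -pe * lr * 10

# A PE of 0.05 yields -0.05 * 0.02 * 10 = -0.01;
# a perfect prediction (PE = 0) yields zero reward.
r = rstdp_reward(0.05)
```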

get_state() → dict

Returns n_neurons, n_input, n_hidden, n_output, mean_prediction_error.

DreamEngine(world_model, creature_snn)

record_experience(sensor, action, reward)

Store in replay buffer (max 10,000).
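A bounded buffer like this is commonly implemented with a fixed-length deque, which evicts the oldest entry automatically; a sketch under that assumption (field names taken from the signature above):

```python
from collections import deque

MAX_EXPERIENCES = 10_000  # buffer cap from the API description

replay_buffer = deque(maxlen=MAX_EXPERIENCES)

def record_experience(sensor, action, reward):
    # Once the cap is hit, appending drops the oldest experience.
    replay_buffer.append((sensor, action, reward))

# Record 12,000 experiences; only the most recent 10,000 survive.
for step in range(12_000):
    record_experience([0.0, 1.0], step % 4, 0.0)
```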

dream(n_steps=100, replay_ratio=0.7) → dict

Runs offline learning. Returns n_replay_steps and n_hallucination_steps.

WorldModelConfig

  • n_hidden: 200
  • tau_base: 30.0 ms
  • learning_rate: 0.02
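Putting the pieces together, an online-learning loop over the documented interface might look like this (the class here is a minimal stub matching the signatures above, not the real SpikingWorldModel):

```python
import random

random.seed(0)

class SpikingWorldModel:
    """Stub matching the documented interface; the real model is an SNN."""
    def __init__(self, n_hidden=200, tau_base=30.0, learning_rate=0.02):
        self.learning_rate = learning_rate
        self.errors = []

    def predict(self, sensor_input, motor_command):
        # Placeholder for the sigmoid readout of membrane potentials.
        return [0.5 for _ in sensor_input]

    def train_step(self, sensor_input, motor_command, actual_next):
        predicted = self.predict(sensor_input, motor_command)
        pe = sum((p - a) ** 2 for p, a in zip(predicted, actual_next)) / len(actual_next)
        self.errors.append(pe)  # reward -pe * lr * 10 would drive R-STDP here
        return pe

model = SpikingWorldModel()

# Online phase: train on consecutive sensor states.
for _ in range(100):
    sensor = [random.random() for _ in range(4)]
    motor = [random.random() for _ in range(2)]
    next_sensor = [random.random() for _ in range(4)]
    pe = model.train_step(sensor, motor, next_sensor)

mean_pe = sum(model.errors) / len(model.errors)  # cf. mean_prediction_error in get_state()
```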