


Chapter 15: Action Plan and Next Steps

You have now covered the full landscape of point process theory and simulation — from the memoryless Poisson process to self-exciting Hawkes cascades, doubly stochastic Cox processes, and marked ETAS models. This final chapter gives you a decision guide for choosing the right model, pointers to the research frontier, and a reading list for going deeper.


15.1 Model Decision Guide

The following questions help narrow down which model to use:

Step 1: Is the rate constant?

  • Yes → HPP. Check the Fano factor and inter-arrival KS test.
  • No → Proceed to Step 2.

Step 2: Is the non-stationarity deterministic (known driver) or random?

  • Deterministic (time of day, known covariates) → NHPP with a parametric λ(t).
  • Random (unobserved environment) → Cox process. Proceed to Step 3.

Step 3: Is the process overdispersed but without self-excitation?

  • Overdispersion (F > 1) + no clustering in ACF → Cox/LGCP.
  • Clustering + self-excitation (ACF shows short-range positive correlation) → Hawkes.

Step 4: Do events have meaningful attributes (size, type)?

  • No → ground process models (HPP, Hawkes, etc.).
  • Yes → Marked point process. If marks affect future rates → ETAS or marked Hawkes.

Step 5: Are there multiple interacting streams?

  • No → Univariate model.
  • Yes → Multivariate Hawkes. Check cross-excitation via the kernel matrix.
The decision steps are summarized in the table below:

  Observation                              Recommended Model
  --------------------------------------   ---------------------
  Constant rate, no memory                 HPP
  Time-varying rate, no memory             NHPP
  Refractory period, no clustering         Renewal
  Bursts triggered by events               Hawkes
  Multiple interacting streams             Multivariate Hawkes
  Overdispersion from latent environment   Cox / LGCP
  Magnitude-dependent cascades             ETAS / Marked Hawkes
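The first branch points above (constant rate in Step 1, overdispersion in Step 3) can be checked numerically with a Fano factor of binned counts. A minimal stdlib sketch; the bin count `n_bins` is a free choice that affects the estimate:

```python
def fano_factor(event_times, t_end, n_bins=50):
    """Fano factor F = Var[N] / E[N] of counts in equal-width bins.
    F ~ 1 suggests Poisson; F > 1 overdispersion (Cox/Hawkes); F < 1 regularity."""
    dt = t_end / n_bins
    counts = [0] * n_bins
    for t in event_times:
        counts[min(int(t / dt), n_bins - 1)] += 1  # clamp boundary event into last bin
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / n_bins
    return var / mean

# Perfectly regular arrivals are strongly underdispersed (F << 1)
regular = [i * 0.1 for i in range(1, 1000)]
print(fano_factor(regular, 100.0))
```

An F well above 1 on real data sends you down the Cox/Hawkes branch of the guide; the ACF of counts then separates the two.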

15.2 Key Formulas to Remember

  Formula                                         Meaning
  ---------------------------------------------   --------------------------------------------
  λ*(t) = λ                                       HPP
  λ*(t) = μ + Σᵢ α·exp(−β(t − tᵢ))                Hawkes with exponential kernel
  n* = α/β < 1                                    Stationarity condition (exp-kernel Hawkes)
  E[λ*] = μ/(1 − n*)                              Mean Hawkes rate
  ℓ = Σᵢ log λ*(tᵢ) − ∫₀ᵀ λ*(t) dt                Log-likelihood (universal)
  τᵢ = ∫₀^{tᵢ} λ*(s) ds; τᵢ − τᵢ₋₁ ~ Exp(1)       Time-rescaling residuals
  ρ(A) < 1                                        Multivariate stationarity (spectral radius)
  Var[N] = E[Λ] + Var[Λ]                          Cox overdispersion
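The universal log-likelihood in the table is directly computable; for the exponential kernel the sum over history collapses to an O(n) recursion Rᵢ = e^{−β(tᵢ − tᵢ₋₁)}(Rᵢ₋₁ + 1), so λ*(tᵢ) = μ + αRᵢ. A minimal sketch using the table's parameter names:

```python
import math

def hawkes_exp_loglik(times, mu, alpha, beta, t_end):
    """Exp-kernel Hawkes log-likelihood:
    l = sum_i log lambda*(t_i) - integral_0^T lambda*(t) dt."""
    ll, r, prev = 0.0, 0.0, None
    for t in times:
        r = 0.0 if prev is None else math.exp(-beta * (t - prev)) * (r + 1.0)
        ll += math.log(mu + alpha * r)
        prev = t
    # Compensator in closed form: mu*T + (alpha/beta) * sum_i (1 - e^{-beta (T - t_i)})
    comp = mu * t_end + (alpha / beta) * sum(
        1.0 - math.exp(-beta * (t_end - t)) for t in times
    )
    return ll - comp

print(hawkes_exp_loglik([1.0, 2.0], mu=0.5, alpha=0.4, beta=1.0, t_end=3.0))  # ≈ -3.227
```

The same function evaluated on a grid of (μ, α, β) values is all a basic MLE needs; remember to enforce α/β < 1 during optimization.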

15.3 Software Ecosystem

  Library        Strengths                                           Install
  ------------   -------------------------------------------------   ---------------------
  tick (Inria)   Fast Hawkes simulation and MLE, multi-dimensional   pip install tick
  hawkeslib      Simple Python Hawkes library                        pip install hawkeslib
  pyHawkes       Research-oriented, various kernels                  GitHub
  PINT (R)       Point process inference toolkit in R                CRAN
  PySEF          Stochastic event forecasting (seismology)           GitHub
  lifelines      Survival analysis, renewal processes                pip install lifelines

For most applications, the from-scratch implementations in this book’s code/ directory are sufficient and educational. Use tick when you need speed for large datasets.
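In that spirit, a complete from-scratch exp-kernel Hawkes simulator via Ogata-style thinning fits in a few lines. A sketch, not tuned for speed (the intensity sum is O(n) per candidate; the book's code/ directory is the place for optimized versions):

```python
import math, random

def simulate_hawkes_exp(mu, alpha, beta, t_end, seed=0):
    """Thinning simulation of lambda*(t) = mu + sum_i alpha e^{-beta (t - t_i)}.
    Requires alpha/beta < 1 for stationarity."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        # Between events the exp-kernel intensity only decays,
        # so its current value is a valid upper bound lambda_bar.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in times)
        t += rng.expovariate(lam_bar)
        if t > t_end:
            return times
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in times)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda*(t)/lambda_bar
            times.append(t)

events = simulate_hawkes_exp(mu=1.0, alpha=0.5, beta=1.0, t_end=50.0)
```

With n* = 0.5, the mean rate formula predicts roughly μT/(1 − n*) = 100 events on [0, 50], a quick sanity check on any run.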


15.4 Further Reading

Foundational textbooks:

  • Daley & Vere-Jones, An Introduction to the Theory of Point Processes (Vol. 1 and 2) — the definitive reference
  • Kingman, Poisson Processes — elegant treatment of the Poisson process family
  • Snyder & Miller, Random Point Processes in Time and Space

Seminal papers:

  • Hawkes (1971) — Spectra of some self-exciting and mutually exciting point processes — the original Hawkes paper
  • Ogata (1981) — On Lewis’ simulation method for point processes — thinning algorithm
  • Ogata (1988) — Statistical models for earthquake occurrences — ETAS model
  • Brown et al. (2002) — The time-rescaling theorem and its application to neural spike train data analysis

Recent research:

  • Du et al. (2016) — Recurrent Marked Temporal Point Processes (RNNs for point processes)
  • Mei & Eisner (2017) — The Neural Hawkes Process
  • Zuo et al. (2020) — Transformer Hawkes Process
  • Reinhart (2018) — A review of self-exciting spatio-temporal point processes

15.5 Research Frontiers

Neural point processes: Replace the parametric intensity with a neural network (RNN, Transformer). Can capture complex non-Markovian dependencies but loses interpretability and requires more data.

Non-parametric kernel estimation: Instead of fixing φ(t) = α·exp(−βt), estimate the kernel from data using EM algorithms (Bacry & Muzy, 2016) or regularization.

Spatio-temporal processes: Extend to 2D space: earthquake locations, crime events, disease spread. The conditional intensity becomes λ*(t, x, y).
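The thinning idea carries over directly: sample candidates from a homogeneous process at a dominating rate λ̄ over [0, T] × region and accept each with probability λ*(t, x, y)/λ̄. A sketch for the Poisson (non-self-exciting) case, with a purely illustrative hotspot intensity; the function `lam` and its bound `lam_max` are assumptions you supply for your own model:

```python
import math, random

def simulate_spatiotemporal(lam, lam_max, t_end, x_max, y_max, seed=0):
    """Thinning for an inhomogeneous spatio-temporal Poisson process.
    lam(t, x, y) must satisfy lam <= lam_max on [0,t_end] x [0,x_max] x [0,y_max]."""
    rng = random.Random(seed)
    events, t = [], 0.0
    total = lam_max * x_max * y_max  # candidate rate, integrated over space
    while True:
        t += rng.expovariate(total)
        if t > t_end:
            return events
        x, y = rng.uniform(0, x_max), rng.uniform(0, y_max)
        if rng.random() <= lam(t, x, y) / lam_max:
            events.append((t, x, y))

# Hypothetical intensity: a Gaussian hotspot at (1, 1), modulated in time
lam = lambda t, x, y: 5.0 * math.exp(-((x - 1) ** 2 + (y - 1) ** 2)) * (1 + 0.5 * math.sin(t))
pts = simulate_spatiotemporal(lam, lam_max=7.5, t_end=10.0, x_max=2.0, y_max=2.0)
```

Adding self-excitation means the accept step also consults a sum of spatio-temporal kernels over past accepted points, exactly as in the ETAS model.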

Hawkes processes for graphs: Events on nodes of a network; edges define cross-excitation structure (social networks, neural connectomes).

Uncertainty quantification: Bayesian approaches (MCMC for Hawkes, variational inference for LGCP) give full posterior distributions over parameters.


15.6 Practical Checklist

Before declaring a model fit:

  • Checked raw data for edge effects and completeness
  • Verified Fano factor to guide model choice
  • Fitted model with multiple starting points; convergence checked
  • Applied time-rescaling theorem; KS test p > 0.05
  • Examined QQ plot for systematic deviation
  • Checked ACF of rescaled residuals; Ljung-Box p > 0.05
  • Compared models with AIC/BIC or LRT
  • Validated branching ratio / spectral radius for stationarity
  • Assessed whether the model is interpretable in domain context
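The time-rescaling items in the checklist can be automated without SciPy: rescale inter-arrivals through the fitted compensator, then compute the KS statistic against the Exp(1) CDF by hand. A stdlib sketch; here the residuals are drawn directly from Exp(1) to stand in for a correctly specified model:

```python
import math, random

def ks_exp1(residuals):
    """KS statistic of residuals against Exp(1): D = sup |F_emp(x) - (1 - e^{-x})|."""
    xs = sorted(residuals)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))  # check both sides of each step
    return d

# Under a correct model, rescaled inter-arrivals tau_i - tau_{i-1} are iid Exp(1)
rng = random.Random(42)
resid = [rng.expovariate(1.0) for _ in range(500)]
d = ks_exp1(resid)
print(d, 1.36 / math.sqrt(500))  # statistic vs. approximate 5% critical value
```

For a real fit, replace `resid` with the increments of τᵢ = ∫₀^{tᵢ} λ̂*(s) ds computed from your estimated intensity.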

15.7 Summary

Point processes are a rich and practically essential class of stochastic models. The journey from the memoryless Poisson process (Chapter 2) to the full ETAS model (Chapter 10) represents increasing realism: more complex dynamics, more parameters, more powerful inference machinery.

The key insight that unifies the entire book: the conditional intensity λ*(t) is everything. Choose λ*(t) to match your domain knowledge, estimate it from data using the universal log-likelihood, and validate it with the time-rescaling theorem. Everything else is details.



