Online POMDP and POSG Planning for Deception and Counter-Deception
Professor Zachary Sunberg
University of Colorado Boulder
DECODE-AI Project Kickoff Meeting
February 26, 2025
PI: Prof. Zachary Sunberg
PhD Students
Postdoc
POMDP = Partially Observable Markov Decision Process
State Space:
For DECODE-AI:
\([x, y, z,\;\; \phi, \theta, \psi,\;\; u, v, w,\;\; p,q,r]\)
\(\mathcal{S} = \mathbb{R}^{12}\)
\(\mathcal{S} = \mathbb{R}^{12} \times \mathbb{R}^\infty\)
Very large continuous state and observation spaces
Online: computing as you are interacting with the environment (cf. Kahneman, Thinking, Fast and Slow: System 2)
Markov Decision Process (MDP)
Aleatory
\[\underset{\pi:\, \mathcal{S} \to \mathcal{A}}{\text{maximize}} \quad \text{E}\left[ \sum_{t=0}^\infty \gamma^t R(s_t, a_t) \right]\]
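For concreteness, the MDP objective above can be solved exactly on a small discrete problem by value iteration. The 1-D chain MDP below is purely illustrative (not a DECODE-AI model); it shows a policy \(\pi: \mathcal{S} \to \mathcal{A}\) maximizing expected discounted reward.

```python
import numpy as np

# Toy 1-D chain MDP: states 0..4, actions move left (-1) or right (+1),
# reward 1.0 for transitioning into the goal state 4, discount gamma = 0.9.
n_states, gamma = 5, 0.9
actions = [-1, +1]

def step(s, a):
    return min(max(s + a, 0), n_states - 1)

def reward(s, a):
    return 1.0 if step(s, a) == n_states - 1 else 0.0

# Value iteration: repeatedly apply the Bellman optimality backup.
V = np.zeros(n_states)
for _ in range(100):
    V = np.array([max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
                  for s in range(n_states)])

# Greedy policy with respect to the converged value function.
policy = [max(actions, key=lambda a, s=s: reward(s, a) + gamma * V[step(s, a)])
          for s in range(n_states)]
```

Here the optimal policy moves right in every state, and \(V\) converges to \(10.0\) at the goal (the geometric series \(1/(1-\gamma)\)).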
Partially Observable Markov Decision Process (POMDP)
Aleatory
Epistemic (Static)
Epistemic (Dynamic)
[Lim, Becker, Kochenderfer, Tomlin, & Sunberg, JAIR 2023]
For any \(\epsilon>0\) and \(\delta>0\), if \(C\) (the number of particles) is high enough,
\[|Q_{\mathbf{P}}^*(b,a) - Q_{\mathbf{M}_{\mathbf{P}}}^*(\bar{b},a)| \leq \epsilon \quad \text{w.p. } 1-\delta\]
No direct relationship between \(C\) and \(|\mathcal{S}|\) or \(|\mathcal{O}|\)
1. Low-sample particle filtering
2. Sparse Sampling
[Lim, Becker, Kochenderfer, Tomlin, & Sunberg, JAIR 2023; Sunberg & Kochenderfer, ICAPS 2018; Others]
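A minimal sketch of the bootstrap particle-filter step that underlies low-sample particle filtering. The 1-D linear dynamics, Gaussian noise models, and all parameter values here are illustrative assumptions, not the models from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_update(particles, action, observation, rng,
                           process_noise=0.5, obs_noise=1.0):
    """One bootstrap particle-filter step: propagate, weight, resample.

    Assumes toy dynamics x' = x + a + w and a direct noisy observation of x'.
    """
    # Propagate each particle through the (assumed) stochastic dynamics.
    prop = particles + action + rng.normal(0.0, process_noise, size=particles.shape)
    # Weight by the Gaussian observation likelihood p(o | x').
    w = np.exp(-0.5 * ((observation - prop) / obs_noise) ** 2)
    w /= w.sum()
    # Resample C particles in proportion to their weights.
    idx = rng.choice(len(prop), size=len(prop), p=w)
    return prop[idx]

# Even a low-sample belief (C = 30 particles) tracks the hidden state.
belief = rng.normal(0.0, 2.0, size=30)
for obs in [1.0, 2.0, 3.0]:
    belief = particle_filter_update(belief, 1.0, obs, rng)
```

Note that \(C = 30\) is deliberately small: the JAIR 2023 result above is what justifies planning with such low-sample beliefs, since the required \(C\) has no direct relationship to \(|\mathcal{S}|\) or \(|\mathcal{O}|\).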
[Deglurkar, Lim, Sunberg, & Tomlin, 2023]
Task 3.3 Goal: Use more than variance in learned components
Kahneman System 1
Kahneman System 2
(Ours)
Progress so far:
[Lee et al., ICML 2019]
[Ma et al., ICLR 2020]
Attention: Query, Key, Value
Key Challenge: Interpreting particle beliefs in an order-invariant way
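One way to interpret a particle belief order-invariantly, in the spirit of the attention-based set encoders cited above: pool the particles with a single attention head, so permuting the set leaves the output unchanged. The weights below are random placeholders standing in for learned parameters.

```python
import numpy as np

def attention_pool(particles, query, key_w, value_w):
    """Order-invariant summary of a particle belief via one attention head.

    `query`, `key_w`, and `value_w` stand in for learned parameters.
    Because attention sums over the set, particle order does not matter.
    """
    keys = particles @ key_w          # (C, d_k)
    values = particles @ value_w      # (C, d_v)
    scores = keys @ query             # (C,) one score per particle
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over the particle set
    return weights @ values           # weighted sum: permutation-invariant

rng = np.random.default_rng(1)
particles = rng.normal(size=(50, 4))  # C = 50 particles in R^4
query = rng.normal(size=2)
key_w, value_w = rng.normal(size=(4, 2)), rng.normal(size=(4, 3))

z1 = attention_pool(particles, query, key_w, value_w)
z2 = attention_pool(particles[rng.permutation(50)], query, key_w, value_w)
# z1 and z2 agree: the encoding is invariant to particle order.
```

This is the basic mechanism behind set encoders such as the Set Transformer [Lee et al., ICML 2019]; stacking such blocks gives richer order-invariant belief summaries.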
The POMDP is a good model for information gathering, but it is incomplete:
Partially Observable Stochastic Game (POSG)
Image: Russell & Norvig, Artificial Intelligence: A Modern Approach
(Game-tree figure: deals P1: A, P1: K; P2: A, P2: A, P2: K)
Task 4.1 Goal: Create online POSG planning algorithms for real-world problems
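As a toy illustration of the information sets that distinguish POSGs from POMDPs, consider a Kuhn-poker-style deal: a player who sees only their own card cannot distinguish the deals that agree on that card. This minimal sketch is illustrative, not part of any cited algorithm.

```python
from itertools import permutations

# Toy Kuhn-poker-style deal: deck {A, K, Q}, one card each to P1 and P2.
deck = ["A", "K", "Q"]
deals = [(c1, c2) for c1, c2 in permutations(deck, 2)]

def information_set(deals, player, own_card):
    """All deals a player cannot distinguish given only their own card."""
    i = 0 if player == "P1" else 1
    return [d for d in deals if d[i] == own_card]

# If P1 holds A, P1 cannot tell whether P2 holds K or Q:
info = information_set(deals, "P1", "A")
# info == [("A", "K"), ("A", "Q")]
```

Planning in a POSG means reasoning over such information sets for every player simultaneously, which is why a single-agent belief is no longer sufficient.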
Partially Observable Markov Decision Process (POMDP)
Aleatory
Epistemic (Static)
Epistemic (Dynamic)
Aleatory
Epistemic (Static)
Epistemic (Dynamic)
Interaction
[Becker & Sunberg, AAMAS 2025]
Our approach: combine particle filtering and information sets
Joint Belief
Joint Action
[Becker & Sunberg, AAMAS 2025]
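A minimal sketch of the "particle filtering over joint states" idea: each particle carries the full joint state, so one filter captures both agents' uncertainty and their interaction. The dynamics, observation model, and parameters here are hypothetical illustrations, not the models from Becker & Sunberg.

```python
import numpy as np

rng = np.random.default_rng(2)

# Joint belief over a two-agent state (x1, x2), updated with a joint action.
C = 40
joint_particles = rng.normal(0.0, 1.0, size=(C, 2))

def joint_update(particles, joint_action, observation, rng, obs_noise=1.0):
    """One particle update of the joint belief.

    Assumed dynamics: each agent drifts by its own action plus noise.
    Assumed observation: a noisy measurement of the relative position x1 - x2.
    """
    prop = particles + joint_action + rng.normal(0.0, 0.3, size=particles.shape)
    pred = prop[:, 0] - prop[:, 1]
    w = np.exp(-0.5 * ((observation - pred) / obs_noise) ** 2)
    w /= w.sum()
    return prop[rng.choice(C, size=C, p=w)]

joint_particles = joint_update(joint_particles, np.array([0.5, -0.5]), 1.0, rng)
```

The remaining challenge, and what information sets address, is that each real player only observes its own slice of this joint process, not the joint belief itself.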
Open (related) questions:
Prof. Zachary Sunberg
PhD Student: Tyler Becker
PhD Student: Himanshu Gupta
PhD Student: Jackson Wagner
DECODE-AI Task 3.3: Integrating online POMDP planning with learned components
DECODE-AI Task 4.1: Online planning in POSGs
[Peters, Tomlin, and Sunberg 2020]
Incomplete-Information Extensive-Form Game
Our new algorithm for POMGs
POMDPs.jl - An interface for defining and solving MDPs and POMDPs in Julia
[Mern, Sunberg, et al. AAAI 2021]
[Lim, Tomlin, & Sunberg CDC 2021]
(Figure: individual infectiousness vs. infection age; incident infections)
Need
Test sensitivity is secondary to frequency and turnaround time for COVID-19 surveillance
Larremore et al.
Viral load represented by piecewise-linear hinge function
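The piecewise-linear hinge model of log viral load can be sketched as follows; the breakpoints and peak value below are illustrative placeholders, not the fitted values from Larremore et al.

```python
def hinge_viral_load(t, t_onset=3.0, t_peak=6.0, t_clear=14.0, peak_log10=8.0):
    """Piecewise-linear 'hinge' model of log10 viral load vs. days since infection.

    Parameter values are illustrative only. Load is zero before onset, rises
    linearly to a peak, then declines linearly to clearance.
    """
    if t < t_onset:
        return 0.0
    if t < t_peak:
        return peak_log10 * (t - t_onset) / (t_peak - t_onset)
    if t < t_clear:
        return peak_log10 * (t_clear - t) / (t_clear - t_peak)
    return 0.0
```

With a model like this, the trade-off in the slide title becomes computable: frequent low-sensitivity tests catch the rising segment of the hinge, while slow turnaround misses the infectious window entirely.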