Decision Making Under Uncertainty with POMDPs.jl

How to build and solve decision making problems using the POMDPs.jl ecosystem of packages


The course covers how to build and solve decision making problems in uncertain environments using the POMDPs.jl ecosystem of Julia packages. Topics covered include:

  Sequential decision making frameworks: Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs)
  Running simulations
  Online and offline solution methods: value iteration, Q-learning, SARSA, and Monte Carlo tree search
  Reinforcement learning
  Deep reinforcement learning, including proximal policy optimization (PPO), deep Q-networks (DQN), and actor-critic methods
  Imitation learning through behavior cloning of expert demonstrations
  State estimation through particle filtering
  Belief updating and alpha vectors
  Approximate methods, including grid interpolation for local approximation value iteration
  Black-box stress testing to validate autonomous systems

The course is intended for a wide audience; no prior MDP or POMDP knowledge is expected.
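To give a flavor of the workflow the course teaches, here is a minimal sketch of defining and solving a small MDP, assuming the QuickPOMDPs, DiscreteValueIteration, and POMDPTools packages from the POMDPs.jl ecosystem; the toy problem and reward values are hypothetical and not taken from the course notebooks.

```julia
# A minimal sketch (not a course notebook) of the POMDPs.jl workflow:
# define a small MDP, solve it offline with value iteration, and simulate.
# The toy problem (a five-cell walk) and its rewards are hypothetical.
using POMDPs, QuickPOMDPs, POMDPTools, DiscreteValueIteration

m = QuickMDP(
    states       = 1:5,                      # positions on a line
    actions      = [-1, +1],                 # step left or right
    discount     = 0.95,
    transition   = (s, a) -> Deterministic(clamp(s + a, 1, 5)),
    reward       = (s, a) -> s + a == 5 ? 10.0 : -1.0,  # bonus for reaching 5
    initialstate = Deterministic(1),
    isterminal   = s -> s == 5
)

solver = ValueIterationSolver(max_iterations=100)  # offline solution method
policy = solve(solver, m)                          # compute a policy

# Roll out the policy step by step.
for (s, a, r) in stepthrough(m, policy, "s,a,r", max_steps=10)
    @show (s, a, r)
end
```

The POMDP, state estimation, and deep reinforcement learning modules build on this same define, solve, and simulate pattern.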


Your Instructor


Robert Moss

Robert Moss is a computer science Ph.D. student at Stanford University studying algorithms to validate safety-critical autonomous systems. He holds an M.S. in computer science from Stanford, where his research received the best computer science master's thesis award and he received the Centennial TA Award for his teaching. He earned his B.S. in computer science with a minor in physics from the Wentworth Institute of Technology in Boston, MA. Robert was an associate research staff member at MIT Lincoln Laboratory, where he was on the team that designed, developed, and validated the next-generation aircraft collision avoidance system for commercial aircraft, unmanned vehicles, and rotorcraft. He was also a research engineer at NASA Ames Research Center, developing decision support tools for the VIPER autonomous lunar rover mission searching for water deposits on the Moon. Robert is a member of the Stanford Intelligent Systems Laboratory and the Stanford Center for AI Safety, conducting research on efficient risk assessment of autonomous vehicles in simulation using reinforcement learning, deep learning, and stochastic optimization.


Course Curriculum


  Decision Making Under Uncertainty using POMDPs.jl
  MDPs
  POMDPs
  State Estimation and Particle Filtering
  Approximate Methods
  Deep Reinforcement Learning
  Imitation Learning
  Black-Box Validation

Frequently Asked Questions


Where can I find the notebooks?
The notebooks are located on the Julia Academy GitHub page: https://github.com/JuliaAcademy/Decision-Making-Under-Uncertainty
Where can I find the slides?
The slides are located on GitHub: https://github.com/mossr/julia-tufte-beamer/blob/julia-academy/pomdps.jl/julia-academy-pomdps.pdf
When does the course start and finish?
The course starts now and never ends! It is a completely self-paced online course - you decide when you start and when you finish.
How long do I have access to the course?
After enrolling, you have unlimited access to this course for as long as you like—across any and all devices you own.

Get started now!