Abstract
Strategic Long-Range Transportation Planning (SLRTP) is pivotal in shaping prosperous, sustainable, and resilient urban futures. Existing SLRTP decision-support tools predominantly serve forecasting and evaluative functions, leaving a gap in directly recommending optimal planning decisions. To bridge this gap, we propose an Interpretable State-Space Model (ISSM) that models the dynamic interactions between transportation infrastructure and the broader urban system. The ISSM directly facilitates the development of optimal controllers and reinforcement learning (RL) agents for optimizing infrastructure investments and urban policies while remaining comprehensible to human users. We carefully examine the mathematical properties of the ISSM; specifically, we present the conditions under which the proposed ISSM is Markovian and under which a unique and stable solution exists. We then apply an ISSM instance to a case study of the San Diego region of California, where a partially observable ISSM represents the urban environment, and we propose and train a deep RL agent using this instance. The results show that the proposed ISSM approach, together with the trained RL agent, captures the impacts of coordinating the timing of infrastructure investments, environmental impact fees for new land development, and congestion pricing fees. The results also show that the proposed approach facilitates the development of prescriptive capabilities in SLRTP to foster economic growth and limit induced vehicle travel. We view the proposed ISSM as a substantial contribution that supports the use of artificial intelligence in urban planning, a domain where planning agencies need rigorous, interpretable models to justify their actions.