Using Car to Infrastructure Communication to Accelerate Learning in Route Choice
DOI: https://doi.org/10.5753/jidm.2021.1935

Keywords: urban mobility, multiagent systems, reinforcement learning, vehicle-to-infrastructure communication

Abstract
The task of choosing a route to move from A to B is not trivial, as road networks in metropolitan areas tend to be overcrowded. It is important to adapt on the fly to the traffic situation. One way to help road users (drivers or autonomous vehicles, for that matter) is by using modern communication technologies.
In particular, there are reasons to believe that communication between the infrastructure (network) and the demand (vehicles) will be a reality in the near future. In this paper, we use car-to-infrastructure (C2I) communication to investigate whether road users can accelerate their learning processes regarding route choice when using reinforcement learning (RL). The kernel of our method is two-way communication: road users communicate their rewards to the infrastructure, which, in turn, aggregates this information locally and passes it to other users in order to accelerate their learning tasks. We employ a microscopic simulator to compare this method with two others (one based on RL without communication, and a classical iterative method for traffic assignment). Experimental results using a grid and a simplification of a real-world network show that our method outperforms both.
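The two-way mechanism described above can be sketched in code. The following is a minimal, self-contained illustration, not the paper's implementation: it assumes stateless Q-learning agents choosing between two hypothetical routes with a made-up linear congestion cost, an infrastructure node that averages reported rewards per route, and a `blend` parameter weighting communicated information against own experience. All names and parameter values are illustrative assumptions.

```python
import random

class InfrastructureNode:
    """Aggregates rewards reported by vehicles, per route (local aggregation)."""
    def __init__(self):
        self.sums = {}
        self.counts = {}

    def report(self, route, reward):
        # Vehicle -> infrastructure direction of the two-way communication.
        self.sums[route] = self.sums.get(route, 0.0) + reward
        self.counts[route] = self.counts.get(route, 0) + 1

    def broadcast(self, route):
        # Infrastructure -> vehicle direction: the aggregated (average) reward.
        if self.counts.get(route, 0) == 0:
            return None
        return self.sums[route] / self.counts[route]

class DriverAgent:
    """Stateless Q-learner choosing among a fixed set of routes."""
    def __init__(self, routes, alpha=0.5, epsilon=0.1, blend=0.5):
        self.q = {r: 0.0 for r in routes}
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        self.blend = blend      # weight given to communicated information

    def choose(self, rng):
        if rng.random() < self.epsilon:
            return rng.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, route, reward, shared):
        # Blend own experienced reward with the infrastructure's aggregate.
        target = reward if shared is None else (
            (1 - self.blend) * reward + self.blend * shared)
        self.q[route] += self.alpha * (target - self.q[route])

def travel_time(route, flow):
    # Illustrative cost: free-flow time plus a congestion term (assumed form).
    free_flow = {"A": 10.0, "B": 12.0}
    return free_flow[route] + 0.1 * flow

def run_episodes(n_agents=50, n_episodes=100, seed=42):
    rng = random.Random(seed)
    routes = ["A", "B"]
    agents = [DriverAgent(routes) for _ in range(n_agents)]
    infra = InfrastructureNode()
    flows = {}
    for _ in range(n_episodes):
        choices = [agent.choose(rng) for agent in agents]
        flows = {r: choices.count(r) for r in routes}
        for agent, route in zip(agents, choices):
            reward = -travel_time(route, flows[route])  # less time = more reward
            infra.report(route, reward)
            agent.update(route, reward, infra.broadcast(route))
    return flows

flows = run_episodes()
```

Setting `blend=0.0` recovers the RL-without-communication baseline, which is one way to compare learning speed with and without C2I information sharing.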