This paper addresses the problem of mission assignment and task offloading in Open RAN-based Intelligent Transportation Systems (ITS), where autonomous vehicles leverage mobile edge computing for efficient task processing. Existing studies often overlook inter-mission dependencies and the cost of offloading tasks to edge servers, which leads to suboptimal decision-making. To address these limitations, this paper proposes Oranits, a novel system model that explicitly accounts for mission dependencies and offloading costs while optimizing performance through vehicle cooperation. To achieve this, we first develop a metaheuristic-based evolutionary computing algorithm, Chaotic Gaussian-Based Global ARO (CGG-ARO), which serves as a baseline for optimization within a single time slot. Second, we design a reinforcement learning framework called Multi-Agent Double-Deep Q-Network (MA-DDQN) that integrates multi-agent coordination and multi-action selection mechanisms to reduce mission assignment time and improve adaptability over the baseline methods. Extensive simulation results show that CGG-ARO improves the number of completed missions and overall profit by approximately 7.1% and 7.7%, respectively, while MA-DDQN achieves larger gains of 11.0% and 12.5%, respectively. These results highlight the ability of Oranits to enable faster, more adaptive, and more efficient task processing in dynamic ITS environments.
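As general background rather than a detail taken from this abstract, a Double-DQN framework such as MA-DDQN is typically built on the standard double Q-learning target, in which the online network selects the greedy action and a periodically synchronized target network evaluates it. A generic per-agent formulation, with $\theta_i$ and $\theta_i^-$ denoting agent $i$'s online and target network parameters, $r_i$ its reward, and $\gamma$ the discount factor, would be:
\[
y_i \;=\; r_i \;+\; \gamma \, Q_{\theta_i^-}\!\Big(s',\, \arg\max_{a'} Q_{\theta_i}(s', a')\Big),
\]
where the exact multi-action, multi-agent variant used in Oranits may differ from this standard form.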