Rapid urbanization and rising vehicle ownership have intensified the demand for efficient parking, especially in dense urban areas. Fully automated multi-level parking systems offer a promising solution, but real-time space allocation remains a major challenge. This paper presents an adaptive Reinforcement Learning (RL) framework using Deep Q-learning to optimize dynamic slot allocation. The state space integrates high-resolution data such as vehicle dimensions, parking duration, demand patterns, and occupancy levels, enabling context-aware decision-making. The action space supports adaptive strategies including priority-based assignment, dynamic rerouting, and load balancing. A novel reward function balances space utilization, vehicle search time, and energy efficiency while prioritizing user-centric metrics such as wait time and throughput. Simulations in a realistic 3D parking environment show a 10% reduction in search time and a 15% improvement in throughput compared to heuristic methods. These findings demonstrate the potential of RL-driven approaches to transform automated parking, advancing smart transportation theory while offering practical guidance for next-generation urban infrastructure.
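
To make the reward structure described above concrete, the following is a minimal sketch of a weighted reward that rewards space utilization and throughput while penalizing search time, energy use, and wait time. The metric names, weights, and function signature are illustrative assumptions, not values or code taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class StepMetrics:
    """Per-step metrics gathered from the simulated parking environment (illustrative)."""
    occupancy_ratio: float   # fraction of slots occupied, in [0, 1]
    search_time_s: float     # time spent locating and reaching a slot
    energy_kwh: float        # energy consumed moving the vehicle/platform
    wait_time_s: float       # time the user waited before service
    vehicles_served: int     # throughput within the current time window


def reward(m: StepMetrics,
           w_util: float = 1.0,
           w_throughput: float = 0.8,
           w_search: float = 0.5,
           w_energy: float = 0.3,
           w_wait: float = 0.7) -> float:
    """Weighted reward: maximize utilization and throughput, minimize search
    time, energy use, and wait time. Weights are placeholder assumptions."""
    return (w_util * m.occupancy_ratio
            + w_throughput * m.vehicles_served
            - w_search * m.search_time_s
            - w_energy * m.energy_kwh
            - w_wait * m.wait_time_s)


if __name__ == "__main__":
    # Example step: a vehicle parked quickly in a busy garage.
    print(reward(StepMetrics(occupancy_ratio=0.82, search_time_s=35.0,
                             energy_kwh=0.4, wait_time_s=20.0,
                             vehicles_served=3)))
```

In practice the individual terms would be normalized to comparable scales and the weights tuned so that no single objective dominates the learned allocation policy.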