Integrating artificial intelligence into mobile devices poses serious challenges, chiefly because of limited on-device resources and the need for real-time data processing. This article reviews modern approaches to reducing the computational cost of AI systems for mobile objects, including model optimization techniques and strategies for allocating computation on resource-constrained mobile platforms.
Keywords: artificial intelligence, mobile objects, lightweight models, edge models, hardware acceleration, knowledge distillation, quantization