What effect, if any, does a system’s CPU speed have on GPU inference with CUDA in llama.cpp?
