I ran out of credits with Suno, so I can't test that.
But having used Udio more now, you do have a lot of control. Much like when generating images, you can direct the AI to do certain things using tags, but exactly which ones work, and whether it gets them right, has a bit of randomness to it. Also, you only generate 30 seconds at a time, so the last song I posted, which is about 5:10 I think, took a lot of generations, especially because of the genres I tried to mix.
Using tags like [Solo], [Interlude] etc. will help guide the AI to do what you want.
Also, echoes and the like can be made using parentheses, something like: ... house (house-house-house). So there are a lot of ways to influence the AI, but you still potentially have to generate a lot, then keep the bits you want and cut away those you don't.
But something that would make it much better is being able to split the music from the vocals, because sometimes the music is very cool but the vocals are not. In the last song, I also had issues getting her to sing in the same soft tone, maybe because the AI thinks it is time to focus more on the house genre. Still, I can't see why the tools for manipulating songs wouldn't improve, because to me at least, it seems easier to manipulate music than an image without completely ruining it.
I think you can also use tags in the lyrics in Suno, but again, I ran out of credits. Either way, I think Udio's extend functionality offers more control than Suno does.