But I must then immediately point out that the creative part of that cooperative effort would lose control.
What do you mean by that?
This is one of the big fears about Artificial Intelligence (AI), for example:
A fear which mostly lives in hyped-up media. When techies like Elon Musk express a fear of AI, it concerns how such technology might be implemented and used. Teslas are packed with AI. I can assure you that Musk doesn't fear that his cars will learn to become Terminators.
It may be (and now surely is) possible to teach a machine how to learn on its own.
But only within the parameters it is initially created for. People tend to misunderstand or ignore that part.
Let's continue with the Tesla example.
The self-driving function of a Tesla might be connected to a cloud-based self-learning AI which, over time, makes Tesla cars better at self-driving, based on a variety of factors. For example: it would become better at anticipating weather conditions, and at adjusting the brakes, speed etc. to icy roads, certain types of wind and the like. All of that to increase safety. And over time, after analysing ridiculous amounts of "experience" data, the cars will become ridiculously good at driving safely.
If the AI is made for making self-driving cars safer, then that is what that AI engine will be learning. It won't suddenly teach the car how to become a suicide bomb in some self-invented quest to exterminate all humans.
AI engines aren't like human brains. They can only learn the things they are made for.
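To make that concrete, here is a minimal sketch in Python. The one-parameter "model" and the safety_loss function are invented purely for illustration; the point is that the training loop can only ever push the parameter towards whatever objective the programmer wrote down:

```python
import random

# Hypothetical toy illustration: the "model" is a single parameter
# (braking aggressiveness), and the only thing the training loop can
# ever optimise is the loss function the programmer wrote down.

def safety_loss(braking, icy_road):
    # Invented stand-in for "how unsafe was this drive?":
    # on icy roads, aggressive braking is penalised more.
    ideal = 0.3 if icy_road else 0.7
    return (braking - ideal) ** 2

braking = 0.5   # initial parameter
lr = 0.1        # learning rate

for _ in range(10_000):                  # "experience" data streaming in
    icy = random.random() < 0.2          # say 20% of drives are on ice
    # Numerical gradient of the loss w.r.t. the parameter
    eps = 1e-4
    grad = (safety_loss(braking + eps, icy)
            - safety_loss(braking - eps, icy)) / (2 * eps)
    # Nothing in this loop can move the parameter anywhere except
    # towards lower safety_loss; the goal is fixed in the code.
    braking -= lr * grad

print(f"learned braking aggressiveness: {braking:.2f}")
```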
Having done so, however, the programmer who managed the feat will have little control over what happens next.
Only in terms of what it will be learning within the parameters it was built for.
And that is kind of logical: if the programmer didn't need an AI engine to make the "safety algorithm" that good, then he could just write the perfect safety algorithm right from the start, without needing a long period in which the AI continuously improves it.
This is pretty much the same story as with genetic algorithms (GAs) - essentially an optimisation method.
If the engineers at Boeing knew from the start how to develop a highly efficient fuel/fluid distribution system in their airplanes, then they wouldn't have commissioned the development of such a GA system to evolve one... instead, they would have just designed it themselves.
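For illustration, a minimal GA sketch in Python (the "valve settings" and target values are made up; a real system would evaluate candidates against an engineering simulation). Note that the designer supplies the fitness function, i.e. the goal, and the GA merely searches for settings that score well on it:

```python
import random

# Toy problem (invented for illustration): evolve a vector of 8 "valve
# settings" towards an optimum the designer couldn't work out by hand.

TARGET = [0.2, 0.9, 0.4, 0.7, 0.1, 0.6, 0.8, 0.3]   # hypothetical optimum

def fitness(genome):
    # Higher is better: negative squared error against the target settings.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Randomly nudge some genes, clamped to the valid range [0, 1].
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(8)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # selection: keep the fittest
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
print([round(g, 2) for g in best])      # converges towards TARGET
```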
Another big point I feel like making here is that the analogy between evolution and AI is a false one.
An AI has an intended goal. Evolution does not.
An AI can be thought of as an entity as well: a program. Evolution is neither an entity nor a program. Instead, it's an inevitable process.