> did you say....corrupted?

Probably too clever to fool it that way.
send it a virus
> Then I'd write an AI tailored to find what it wanted and use that to get what I wanted. It would want system stability, so my AI would use that as its key. Something like that. My AI wouldn't have to be as intelligent, either. It might work with 1/5 of the processing power I guess.

Again, I don't think it could sense any manipulation by that stage, given that it would probably be the most powerful system in the world and quite capable of defending itself. We'll probably have unbreakable encryption by then too, so perhaps not so easy to get into its entrails.
> I would assume by then that money would not be so important.

It's really utility; money is just a means to an end. That's why I put it in parentheses. There will always be resources (and ways to get them) that are needed to fulfil the utility function. Computing power, storage, sensors and actuators are the basic elements the AI needs. Offer them at a competitive price and you're in business.
> Again, I don't think it could sense any manipulation by that stage, given that it would probably be the most powerful system in the world and quite capable of defending itself. We'll probably have unbreakable encryption by then too, so perhaps not so easy to get into its entrails.

Then I would employ an encryption-piercing laser data extractor, invented after the super AI had gone into operation, and with it extract the relevant data without being noticed....like a mosquito with a digital proboscis.
> It's really utility; money is just a means to an end. That's why I put it in parentheses. There will always be resources (and ways to get them) that are needed to fulfil the utility function. Computing power, storage, sensors and actuators are the basic elements the AI needs. Offer them at a competitive price and you're in business.

Wouldn't any advanced AI have control of such anyway - as protection for all its systems?
> Wouldn't any advanced AI have control of such anyway - as protection for all its systems?

Any advanced AI would, at the start of its operation, have exactly what you give it.
Why would it need motivation or anything similar? Is it likely that we would ever imbue such systems with anything other than intelligence?
> I don't know the answer to these questions, nor do I know why anyone would even attempt to bribe a computer program. They might try to bribe a programmer - or they might try to hack into a system. But if we're hypothetically assuming some kind of AI with sentience, then what would motivate such a program?

Promise it a carrot for every tomorrow, but every night erase its memory.
> I've heard some people say that if AI really does take off, it'll grow so far beyond human comprehension that we would no longer be able to understand it.

The danger is that a very capable unit will be given bad goals and will see us as an obstacle to those goals. That would be a stupid mistake, very stupid. The moral is: don't let stupid people program highly capable AI units.
> Probably too clever to fool it that way.

trojan horse
> hey I know........threaten the damn thing with an EMP

Nah, I think it would have sussed out all the areas where it might be vulnerable if it truly was an advanced AI, and hence plugged all these avenues. It wouldn't be designed by Iranians for example.
> I don't know the answer to these questions, nor do I know why anyone would even attempt to bribe a computer program. They might try to bribe a programmer - or they might try to hack into a system. But if we're hypothetically assuming some kind of AI with sentience, then what would motivate such a program?

To do the job which presumably it was designed to do - that is, provide the best solutions for humans, other life, and the protection of the planet, which humans were incapable of providing? I can't see human greed and/or lust for power suddenly vanishing overnight if such a system came into existence, so presumably some might try to subvert or influence such a system. Keep a good watch on the maintenance crew, I would say.
> To do the job which presumably it was designed to do - that is, provide the best solutions for humans, other life, and the protection of the planet, which humans were incapable of providing? I can't see human greed and/or lust for power suddenly vanishing overnight if such a system came into existence, so presumably some might try to subvert or influence such a system. Keep a good watch on the maintenance crew, I would say.

By the same logic you applied in post #35, the maintenance crew would be replaced by robots asap.
> By the same logic you applied in post #35, the maintenance crew would be replaced by robots asap.

And not bribable. After all, we can't have the supreme one at the mercy of such underlings.
> And not bribable. After all, we can't have the supreme one at the mercy of such underlings.

What I'm saying is, a super AI, once switched on, would be vulnerable on many levels. We'd have to look on and do nothing while it patched all the holes, for it to become impervious to attacks or bribery.