I think if we are placing any AI system into such a position of power (or even just influence, in the sense of giving advice), then it would have gone through many phases of prototyping and testing before we brought it into the 'real world'. We might also need an equivalent independent system (or perhaps several of them, voting) to check its actions and conclusions before accepting them, and a constant 'system status' check, probing for vulnerabilities, would probably be part of that. Of course, posing as a tester could itself become an avenue for attack, so we would need other AIs to check the checkers, and so on ad infinitum...

What I'm saying is: a super AI, once switched on, would be vulnerable on many levels. We'd have to look on and do nothing while it patched all the holes it would need to in order to become impervious to attack or bribery.