"Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."
In "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom
Would you say that the desire to preserve 'itself' presupposes the possession of (self-)consciousness? If so, does the acquisition of intelligence, as Bostrom describes it, also entail the acquisition of (self-)consciousness?
The unintended consequence of a superintelligent AI is the emergence of an intelligence that we can barely observe, let alone control, arising from the networking of a large number of autonomous systems acting on interconnected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, which are in turn following other bots. Such a system can become hypersensitive to inputs that have little or nothing to do with supply and demand.
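That herding dynamic can be illustrated with a toy simulation; this is a hypothetical sketch of my own, not a model from the book or any real trading system, and the function name and parameters (`follow_gain`, `shock`) are invented for illustration. When the aggregate of trend-following bots pushes the price further in the direction of the last move with a gain above 1, a tiny shock unrelated to supply and demand grows exponentially:

```python
def simulate(steps=30, follow_gain=1.2, shock=0.001):
    """Toy model of herding: each step, trend-following bots extend the
    previous price move, scaled by follow_gain. With gain > 1, a tiny
    initial shock compounds exponentially instead of dying out."""
    prices = [100.0, 100.0 + shock]  # shock: a tiny input with no fundamental basis
    for _ in range(steps):
        move = prices[-1] - prices[-2]
        prices.append(prices[-1] + follow_gain * move)
    return prices

prices = simulate()
# A 0.001 shock ends up moving the price by more than 1.0:
print(prices[-1] - prices[0])
```

The point of the sketch is only the feedback structure: no bot reacts to anything real, yet the system as a whole is hypersensitive, which is exactly the failure mode described above.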
If you're into stuff like this, you can read the full review.