A few rational objections:
1. Why would a future computer intelligence have any interest in resurrecting you as an individual?
Answer: It wouldn't. It probably isn't even possible to single out any particular person from this point in history at the remove of time we're speaking of - after so many billions of years, the decay of information will be complete; no record of "you" will exist in any meaningful way by then.
What it would have an interest in, and be capable of, is replicating every version of the universe that could ever have existed - which includes the universe as we currently find it, and therefore ourselves. This theory does not require that the computer take any particular interest in you or me. You aren't going to be simulated because the computer looked up information about your life and encoded it into its programming, but because the computer ran a simulation forward from the Big Bang of fourteen billion years ago.
2. Would we necessarily perceive our subjective selves in the event that we end up in a future simulation?
Answer: It's plausible that you would, because what is being simulated is not your subjectivity - your consciousness isn't being directly coded and run through a program - but the entire state of the universe that led up to and produced you. Again, your existence in this scenario is incidental - which, paradoxically, makes it more likely that you would be aware: not as a copy inside a computer program, but as an actual sentience.
3. Would it be possible for me to be conscious again and to make different life-choices?
Answer: Probably not. If the simulation replays the same initial conditions under the same physical laws, it produces the same history - and therefore the same choices. This, again, ends in a cyclical return.
Quantum immortality (QI) requires an open-ended multiverse in which every action splinters off a new universe. The Omega Point requires a closed universe of finite duration that ends in final collapse and obliteration.