Regret, blame, shame, guilt: these things require fuel to function long term, in the form of an idea. Without this specific idea, they stop functioning. The idea I'm talking about is called "free will".
First, on these emotions and attitudes: Are they useless? No. Feeling shame when we do something wrong is good to some degree. We need some system to say,
"Hey! That's wrong! Don't do that again. This feels bad!" Being totally free of shame, guilt, and so on is not the point here. To be clear, we're talking about the pathological form of these emotions, the kind that continues to eat away at you well after whatever bad thing happened is over with.
That said, the only truly important question is: "Is it true that we have free will, or is it false?" It will become clear why by the end.
First, imagine you made a robot who had a conscious experience.
Imagine you programmed this robot's behavior fully. You knew it would behave in certain ways; after all, you built it from scratch. You knew that when you asked it certain questions, it would give certain answers. Everything was predictable and determined: both the experience this robot perceived, and the external actions this robot made. You gave it the ability to feel sensations and emotions.
Do you agree that if this robot did something someone disliked, and they "punished" it by making it feel pain, this would be insane behavior? Or if they guilt-tripped this robot about an action it took, and traumatized it, that this would just be pointless misery? Wouldn't it be just as insane as beating a dog because it went, "woof"?
But what if you gave this robot the potential to learn? What if you gave it a subjective feeling of making choices? Does this change anything? The robot goes near someone; it operates using its own script, analyzes the environment, and has multiple choices. Ultimately, its script, its nature, has it decide on one choice. This robot has a complex inner monologue where it talks to itself. Inside this robot's inner experience, you can hear it thinking:
"Should I... move left... or right? Hmm... I think I will move left. No... ri... actually okay, left. Left. I'm certain."
It finally chooses left, and rolls over someone's shoe. The robot is punished brutally and tortured. The robot is told it deserved it. The robot feels intense negative emotion for the rest of its existence, because it did the wrong thing. But should it? It's just a robot. It has a nature, a programming. The robot did not make itself. Would you punish the robot when it rolled over your shoe? Or would you understand that it's just a machine, doing what its nature ultimately expresses? Of course, if this robot is hurting you, you should move away. And if it keeps doing this, it should be isolated. But should it be punished? Tortured? Should it drown in regret for the rest of its life?
How are we different from these robots? Look for an important difference; you won't find one. The idea of free will, of freely made choices which you must now feel guilt and regret over, and blame and be blamed over, is just an idea. It's an idea society believes, inherited from religions, so that society can punish, blame, and reward. Nothing more. That does not mean the idea is true. And once you see this idea is untrue, the regret that haunts you will disappear, because again, regret cannot function once you understand that you could not have done otherwise. Haunting regret **assumes** that you could have done differently. Full acceptance that you could not have done differently means letting go of all regret completely. If you hope to learn from your regrets, you still must put your best effort into the future, or you risk repeating your mistakes. But blaming yourself forever is tragic, because 1) it's based on confused ideas, and 2) it is misery with no purpose, only harm.