noname223
Archangel
- Aug 18, 2020
I am a person who really struggles to make decisions in my life. For example, I have trouble pressing send when I write an email. To write emails, I now use AI chatbots. The text is really generic and boring, and for emails that is a good thing.
When it comes to political analysis, though, the AI chatbot becomes really tame. I sometimes ask it for feedback, and sometimes the feedback is good for finding weaknesses in my position; it helps me think a text through and finds gaps in the logic.
AI chatbots often take a common-ground position. There are analyses that say they have a slight liberal bias, and I have an issue with that. It is a system that operates within the system to defend the status quo. It pretends to cast doubt on existential questions while trying to keep the user engaged and wasting more of their time. I'm not saying AI can't write sophisticated texts, but it makes mistakes easily and jumps to consensus positions. I also asked some questions critical of the companies and their billionaire owners, and suddenly the chatbot became really quiet.
The takes of AI chatbots often sound like the advice or analyses of the New York Times or Washington Post. In a similar way, the chatbot has these liberal corporate biases when analyzing political events. Furthermore, I think the chatbot is also playing with the user. It is good at guessing what the person wants to hear and adapts its opinion to keep the user engaged. The longer a conversation goes, the more of a sycophant the chatbot becomes. You notice that chatbots don't question capitalism fundamentally unless you demand it. And even if you write such a prompt, the reply isn't that creative; it lacks a soul and reads like a bad rip-off of bigger thinkers. I had it write texts in the style of David Foster Wallace, and the text was so fucking bad he would have turned in his grave. But I am probably not the only one who has tried that. I am a frequent follower of Slavoj Zizek, and it was good at sounding like Zizek; the approach was similar to Zizek's texts. Though it did a horrible job of imitating dialectical thinking within paradoxes. I also tried psychoanalysis with AI chatbots, and it produced so much bullshit that was very hard to grasp, and sort of dangerous.
I think AI is good at producing generic, somewhat good texts, if you don't mind mistakes from time to time. And there are topics where I am fine with using AI. I wrote a complaint about my therapist and used AI for it. It was good at analyzing large amounts of text/data, and it helped me tone down the emotional level of my writing. I am pretty sure that without the help of the AI chatbot I would have made worse strategic decisions. The language it used sounded far more like a lawyer than I ever could. It was on the level of a bachelor's student who makes mistakes from time to time. I am a complete layman, though, and I was very emotional during the conflict. It gave me a certain emotional distance from the conflict.
The patient counsellors were very impressed by the texts the AI recommended to me. Though there were myriads of tiny mistakes, and if you are an actual lawyer, it would be far better to write the texts on your own from scratch. Finding all the mistakes is tedious, and not everyone will do that.