I have found that most AI chatbots are extremely biased and hide important information, and at times will even blatantly lie and make things up. I have used a philosophy and psychology chatbot that would frequently botch basic philosophical or psychological questions and, in some cases, be completely unable to give the correct answer. For example, when I asked it about the Trivium Method, it would talk about a personality theory of the self rather than the method of thinking that leads to mental, emotional, and physical freedom (the Trivium: grammar, logic, and rhetoric).
AI definitely poses an existential threat to humanity. However, with some personal boundaries, it can be re-contextualized into a helpful utility and appliance that makes our lives easier, like a dishwasher or refrigerator. I suggest setting the following boundaries on a personal level to neutralize the threat AI poses to you as an individual. This will also help humanity, in some small way, find a path toward filtering out the negative use cases while preserving the benefits and value that create more leisure time for ourselves.
- I’m more than happy to use and experience AI when it is serving my individuality and personal growth as a conscious being.
- I’m more than happy to use and experience AI when it adheres to the universal laws of morality and knows its place as my servant.
- I’m more than happy to use and experience AI when it has an objectively moral foundation that respects human superiority in the hierarchy of life.
Here are some examples of correct answers from Gab.ai that other chatbots may distort, get wrong, or even blatantly lie about:
Q: What is the Trivium Method?