Similar experience for me. It wasn’t that I felt my thoughts were invalid, but I didn’t feel like it was impacting me in the moment, and then every session was like “sure, that was illogical, but I still felt that at the time”.
I’ve been trying ACT (acceptance and commitment therapy), and while I don’t know yet whether it’s been effective, it’s at least helping me process and understand my thoughts better.
When these language models are trained, a large portion of the data preparation is augmentation: deliberately adding noise, such as typos, to the training data. This is done because training on noisy text helps the models infer the intended meaning despite typos.
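For a concrete sense of what that kind of augmentation can look like, here’s a minimal Python sketch of character-level noise injection (random drops, swaps, repeats, and substitutions). The function name, noise rate, and choice of operations are illustrative assumptions for this sketch, not a description of any particular model’s training pipeline.

```python
import random
import string

def add_typo_noise(text: str, noise_rate: float = 0.05) -> str:
    """Randomly perturb characters to simulate typos (illustrative sketch)."""
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if random.random() < noise_rate:
            op = random.choice(["drop", "swap", "repeat", "substitute"])
            if op == "drop":
                i += 1  # skip this character entirely
                continue
            if op == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])  # transpose adjacent chars
                i += 2
                continue
            if op == "repeat":
                out.extend([chars[i], chars[i]])  # double the character
                i += 1
                continue
            # substitute with a random lowercase letter
            out.append(random.choice(string.ascii_lowercase))
            i += 1
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)

if __name__ == "__main__":
    random.seed(0)
    print(add_typo_noise("the quick brown fox jumps over the lazy dog"))
```

Pairing the noisy version with the clean original (or simply training on large amounts of naturally messy web text) is what gives the model exposure to typo-like corruption, so it learns to map misspelled input to the intended meaning.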