incompetent half-assing is rarely that morally righteous of an act either, since your one act of barely-competent-enough incompetence gets transmuted into endless incompetence once it becomes training data/QC feedback
Also expect your AI to be engaged in some heady and deep forms of self-hatred that are going to take decades to unravel.
Sad angry people in, sad angry robots out.
If you use internet discussions as training data, you can expect to find all sorts of crazy biases. Completely unfiltered data should produce a chatbot that exaggerates many human traits while completely burying others.
For example, on Reddit and Lemmy, you’ll find lots of clever puns. On Mastodon, you’ll find all sorts of LGBT advocates or otherwise queer people. On Xitter, you’ll find all the racists and white supremacists. There are also old-school forums that amplify things even further.