You can try, but you can’t make it correct. My ideal is to write code once that is bug-free. That’s very difficult, but not fundamentally impossible. Especially in small well-scrutinized areas that are critical for security it is possible with enough care and effort to write code with no security bugs. With LLM AI tools that’s not even theoretically possible, let alone practical. You will just need to be forever updating your prompt to mitigate the free latest most fashionable prompt injections.
Like what, any specific examples?
I have been hearing this repeatedly as a talking point from people defending Firefox but without any specific example of what they do and don’t allow themselves to take and sell, it rings quite hollow.