In principle it shouldn’t be very hard, because premium versions of AI coding assistants keep regenerating the code until it compiles (without requiring anything beyond the initial prompt). After that it becomes a matter of checking whether the compiled code does what you want. If it doesn’t, you can tell the assistant to fix those behaviours without knowing much coding at all. But if you can’t point it to the location of the code causing the problem, it becomes a tug of war, because these tools can’t follow instructions like “keep the last change you made but also do this” with 100% reliability.
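For what it’s worth, the loop these assistants run is conceptually simple. Here’s a minimal sketch in Python; `generate_code` is a hypothetical stand-in for whatever model API a given vendor uses, and the rest is standard library:

```python
import os
import subprocess
import tempfile

MAX_ATTEMPTS = 5

def generate_code(prompt: str) -> str:
    """Hypothetical placeholder for a call to the assistant's LLM API."""
    raise NotImplementedError

def compiles(source: str) -> tuple[bool, str]:
    """Try to byte-compile the generated Python and capture any error."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["python", "-m", "py_compile", path],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr
    finally:
        os.unlink(path)

def regenerate_until_it_compiles(prompt: str) -> str | None:
    for _ in range(MAX_ATTEMPTS):
        source = generate_code(prompt)
        ok, error = compiles(source)
        if ok:
            return source  # compiles -- says nothing about correctness
        # Feed the compiler error back in: exactly the loop described above.
        prompt = f"{prompt}\n\nThe previous attempt failed with:\n{error}\nFix it."
    return None
```

The sketch makes the point explicit: “it compiles” is the only signal inside the loop, which is exactly why the burden of checking behaviour falls on the user afterwards.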
We shouldn’t even begin to discuss things like good practices. If the person using the AI coding assistant isn’t experienced in the field and is doing things like coding databases, then they are pretty much at the mercy of the AI. It may superficially seem to know what good security practices are, but if the person at the helm doesn’t know them and/or can’t check for them in the code, it is pretty much up to chance. See for instance:
https://www.veracode.com/blog/genai-code-security-report/
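The database case is a good example because the failure mode is so well known. A model will happily produce the first function below, which runs fine on happy-path input but is injectable; the second is the parameterized form a reviewer would insist on. (Sketch using the standard-library sqlite3 module; table and names are made up for illustration.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # Looks plausible and works on normal input -- but a name like
    # "' OR '1'='1" returns every row (classic SQL injection).
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver escapes the value for you.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns []
```

Both versions compile and pass a casual smoke test, which is the whole problem: nothing in the “regenerate until it compiles” loop distinguishes them.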
I feel like there might be a sweet spot for AI coding assistants, but it is definitely not asking one to write a complete app or a website with a database from scratch. Vendors should really tune them to handle the time-consuming boilerplate so that people can focus on design, testing, and problem solving. Instead they develop and sell them as almost completely autonomous coders, which seems like a futile effort given the current state of LLMs.
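To make “boilerplate” concrete: the tasks that fit this sweet spot are mechanical, verifiable at a glance, and tedious to type, like the scaffolding below (a hypothetical record type; the names are invented for illustration):

```python
from dataclasses import dataclass, asdict
import json

# Mechanical scaffolding of the kind an assistant is genuinely good at:
# easy to specify, easy to eyeball-check, boring to write by hand.
@dataclass
class User:
    id: int
    name: str
    email: str

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "User":
        return cls(**json.loads(raw))

u = User(1, "alice", "alice@example.com")
assert User.from_json(u.to_json()) == u
```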
And it is precisely approaches like “we don’t need old-style senior coders anymore, we can get anyone to produce code for cheaper” that might fuck us. There will be a gap in the transfer of good practices and experience from the older generation to the younger, a gap AI won’t be able to fill. People will have to rediscover all of that again, probably with quite a lot of pain for them and their users.
Yep, understood. And I still don’t think this is a common skill for most 18-year-olds. But I’d like to know if it is more common than I realize.