I've been experimenting with a few different AI tools for code generation and assistance.
As I get better at guiding these tools to produce code in the style and at the quality level we expect, I've noticed a pattern that repeats across them: occasionally, they'll offer a solution that technically works but is far from the ideal approach.
For example, I was running into an authentication issue with Sanctum, and the AI suggested a workaround that removed the auth:sanctum middleware from the route group and manually layered additional middleware onto specific routes instead.
This did actually work, but it was far more convoluted than simply using the auth:sanctum middleware as intended and enabling the statefulApi configuration option.
It also only worked for the routes we had at the time. If we added another route, it might stop working, and a future Sanctum update could break it in unexpected ways.
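For reference, here's a minimal sketch of that intended setup. It assumes a Laravel 11+ app, and the file layout and the example /user route are illustrative, not taken from the actual project:

// bootstrap/app.php - enable Sanctum's stateful API mode once, globally
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Middleware;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(api: __DIR__.'/../routes/api.php')
    ->withMiddleware(function (Middleware $middleware) {
        $middleware->statefulApi();
    })
    ->create();

// routes/api.php - keep auth:sanctum on the whole group so any route
// added later is protected the same way (illustrative route below)
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::middleware('auth:sanctum')->group(function () {
    Route::get('/user', fn (Request $request) => $request->user());
});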
I find it necessary to stay vigilant when reviewing AI-generated code, just as you would when reviewing a pull request from another developer on your team.
Making it work is one metric of success, but using the framework and packages as intended in a predictable way is also crucial to a maintainable codebase.
Here to help,
Joel
P.S. Want some help reviewing your Laravel code?