For the past week, I have been heavily interacting with AI tools for my coding needs. Usually, when I need a solution, I reach out to Google, get some ideas, and try to implement them on my own. But then I thought: why not interact with ChatGPT or a similar AI and get the answers directly? The interaction was excellent, and I was even able to get code samples from the AI.
Initially, I started the discussion only as a brainstorming session, but later I thought the AI might be able to write the code for me, so I decided to give it a try. To my surprise, it generated code that was very close to perfect. I tested it, made some tweaks here and there, and the code was suitable for me.

Now comes the interesting part. I had to do a lot of back-and-forth communication when I wanted enhancements to the generated code. First, I had problems with linting, so I asked GPT to lint the code and give me clean code with no unused variables. Up to this point, GPT did a perfect job. But then I found a variable hanging around. When I asked GPT why the variable was there, it said it was for a feature I had requested long back. One thing I understood was that GPT does not remember exactly what I want and serve me accordingly. It can pretend to be brilliant, but it isn't.
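A leftover variable like that is exactly the kind of thing a linter flags. As a rough illustration (my own sketch, not the code GPT gave me), here is a minimal unused-variable check using Python's standard `ast` module:

```python
import ast

def find_unused_variables(source: str) -> set[str]:
    """Report names that are assigned but never read - the kind of
    leftover a linter (or a careful review) catches in generated code."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return assigned - used

# Hypothetical example: "retry_count" was added for a feature
# requested long back and never wired into the final logic.
generated_code = """
retry_count = 3
total = 1 + 2
print(total)
"""
print(find_unused_variables(generated_code))  # {'retry_count'}
```

Real linters like pylint or flake8 do this far more thoroughly, but the point stands: a mechanical check catches what a chat model quietly forgets.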
For multiple feature enhancements, I had to ask GPT multiple questions and piece together the final code. That is exactly where the problem showed up. Because I asked so many questions about different features, GPT somehow assumed I wanted a mix of everything. Sometimes I would ask about a feature but then implement it myself with different logic. GPT confused the two, and its final output usually contained code that was a mix and match of both. This is not great when it comes to getting answers from GPT.
For security reasons, it is very important to review the code generated by GPT line by line. That's when we might realize that major refactoring is required, both logic-wise and readability-wise. We might see obvious issues, but AI makes those changes only if we ask it to.
If you like what I'm doing on Hive, you can vote me as a witness with the links below.
Vote @balaz as a [Hive Witness](https://vote.hive.uno/@balaz)
Vote @kanibot as a [Hive Engine Witness](https://votify.vercel.app/kanibot)