Agreed. It gives results that look promising, and if they were correct all the time it would be amazing… but they're only right some of the time.
I was messing with one that would scan through a document and answer questions about it, citing sources from the document itself. I feel like that's the best path to trusting the output.
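For what it's worth, here's a rough sketch of the trust check I mean, with a hypothetical ask_llm() standing in for whatever model API you're actually calling: make the model quote its sources in brackets, then string-match every quote back against the document.

    import re

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in; swap in your actual model API call.
        raise NotImplementedError

    def answer_with_citations(document: str, question: str) -> str:
        # Ask for an answer where every claim is backed by a verbatim quote.
        prompt = (
            "Answer the question using ONLY the document below. "
            "After each claim, quote the exact supporting sentence in [brackets].\n\n"
            f"Document:\n{document}\n\nQuestion: {question}"
        )
        return ask_llm(prompt)

    def citations_check_out(document: str, answer: str) -> bool:
        # A citation only counts if it appears verbatim in the source.
        quotes = re.findall(r"\[(.+?)\]", answer, flags=re.DOTALL)
        return bool(quotes) and all(q.strip() in document for q in quotes)

The nice part is that a made-up citation fails the string match automatically; you don't need a human to catch it.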
Also, I think game questions work best when you're asking for the solution to a linear quest with a defined answer. I asked it for a good Destiny build; it answered, but what it gave me was a pretty basic build that doesn't hold up in the current meta. I kind of knew it couldn't give me a good answer there, though.
Because it's not always correct, you can't trust it. Investors get shown the examples where it is correct and incorrectly extrapolate the trend from those.
Large LLM model?