LLMs are kinda, sorta there already, aren’t they?
Yes and no. They can do the job, but they're too easily tricked and too quick to hallucinate to do it reliably.
Even measured against a human at the end of 8 hours of continuous customer support, any current LLM will produce far more errors, of much greater variety and risk, than any human who isn't actively trying to destroy your company.