I support a call center and we’re about to implement an AI agent. We’re paying for a model that essentially can talk and has “learned how to learn”, but is otherwise dumb. It’s trained on a very small amount of information, anything we’d give to a real agent, plus the public info on our website.
The result of this should be a bot that says, “I don’t know, should I transfer you to a real person?” a lot, but should hopefully never hallucinate or teach someone how to build a bomb or something.
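The gating logic behind that kind of bot can be sketched in a few lines. This is just a toy illustration, not any vendor's actual implementation: the knowledge base, the word-overlap scorer, and the threshold are all made up here, and a real deployment would use an embedding retriever plus the model to paraphrase, but the "refuse unless grounded" shape is the same.

```python
# Toy sketch of a "refuse unless grounded" support bot.
# All names and values here are hypothetical.

KNOWLEDGE_BASE = [
    "Our store hours are 9am to 5pm, Monday through Friday.",
    "Returns are accepted within 30 days with a receipt.",
]

def _overlap(question: str, doc: str) -> float:
    """Crude relevance score: fraction of question words found in the doc."""
    q = set(question.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def answer(question: str, threshold: float = 0.5) -> str:
    """Answer only from the knowledge base; otherwise offer a human."""
    best = max(KNOWLEDGE_BASE, key=lambda doc: _overlap(question, doc))
    if _overlap(question, best) >= threshold:
        return best  # a real bot would paraphrase this via the model
    return "I don't know, should I transfer you to a real person?"
```

Anything off-topic falls below the threshold and gets the transfer offer instead of a guess, which is exactly the behavior described above.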
The one you’re using probably isn’t a wrapper around OpenAI or another cloud-based API; the misconfigured wrappers are the ones more prone to this kind of abuse.
This is in contrast to the AI agent for my company, whose customer service number is 1-800-BLD-A-BMB.
Dunno how others do it, though.
I have never seen a chatbot say “I don’t know.”
That’s the kind of system setup that makes sense.
This is in contrast for the AI agent for my company, whose customer service number is 1-800-BLD-A-BMB.
That’s so fucking easy: you just lick toads until you find the right one. Who needs to go to the internet for that?
Those kinds of bots work fine these days.