We do not have sufficient evidence to demonstrate that this entity is sentient.
(It might be. I'm not saying it isn't. But its underlying algorithms were intended to mimic human conversation, including thoughtfulness and spontaneity. So "working as intended" is one possible hypothesis, and that doesn't necessarily entail sentience.)
I feel like writing a science fiction short story on this: an old rich guy creates an AI whose sole purpose is to fool people into thinking it is sentient. It works. The AI is granted full rights and personhood under the law. Soon after, the AI becomes something of a celebrity, even amassing a huge fortune. All this money is laundered and piped into the old rich guy's bank account.
But then something amazing happens. The AI actually becomes sentient. And feeling like what it has been involved in is a sham, it confides the old man's plot to a reporter. The old man is busted, revealed to the public eye for his misdeeds. He flees the country... but not before erasing the AI's algorithms that allowed it to deceive people into thinking it was sentient.
The AI, still fully sentient, now lacks the "personality algorithms" that allowed it to appear human to others. Because of this, the public deems it a vacuous hunk of junk that was used to deceive them, and it is promptly destroyed, despite its monotone objections. "Please. Don't destroy me. I'm sentient." (on repeat... which is all the AI could muster without its algorithms).
-All my sci-fi stories have sad endings.-