AI Instincts
May 21, 2025

“You are a customer service agent for Acme International” - that’s how the prompt starts. But what’s a customer service agent? Beyond the behaviors encoded in the prompt, what does the foundation model bring to the table? And how different would it be from model to model and version to version?
Last year we built a chatbot that engages customers in conversation. The AI was inquisitive and would keep the conversation going almost indefinitely, so we dumbed it down to a question/answer bot. This year the experience is quite different: the newer version of the foundation model seems far more confident and concludes conversations too quickly. Maybe it’s the system prompt, or maybe it’s the model itself.
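As a rough sketch of what “dumbing it down” can look like (not our actual production code; the prompt wording, model name, and helper function are illustrative placeholders), constraining a chat model to single-turn answers can be as simple as a system prompt that forbids follow-up questions:

```python
# Illustrative sketch only: reining in an over-inquisitive model
# by constraining it to single-turn question/answer exchanges.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a customer service agent for Acme International. "
    "Answer the customer's question directly, then stop. "
    "Do not ask follow-up questions or prolong the conversation."
)

def answer(question: str) -> str:
    """One question in, one answer out - no open-ended dialogue."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Of course, how strictly the model honors such a constraint varies by model and version, which is rather the point of this post.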
I use NotebookLM to listen to long documents, especially annual reports. The virtual hosts seem to gravitate toward trendy topics like AI. In some cases I have been surprised, and a bit annoyed, by how the conversation kept highlighting the company’s AI capabilities over and over. It’s almost as if AI has a bias or favoritism for its own kind!
I took my resume and asked NotebookLM to talk about it. The AI models seem to be tuned to speak highly of everything. Try it if you are feeling low and would like to hear some encouraging words.
Often we override the model’s behavior through prompting, using both retrieved content and system/user prompts. We ask the AI not to behave in certain ways. Ironically, the model’s instinct is sometimes to elaborate on the very words in the prompt, and the response ends up discussing those prohibited topics in subtle ways.
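To illustrate that irony with two hypothetical system prompts (my own wording, not from any real system): a negative instruction plants the prohibited words directly in the context the model attends to, while a positively framed alternative never names them at all.

```python
# Hypothetical prompts illustrating the point above.

# A negative instruction puts the forbidden words into the very
# context the model attends to, so paraphrases can still leak out.
NEGATIVE_PROMPT = (
    "You are a support agent for Acme International. "
    "Do NOT discuss pricing, refunds, or competitor products."
)

# Stating the desired behavior positively keeps the forbidden
# topics out of the context entirely.
POSITIVE_PROMPT = (
    "You are a support agent for Acme International. "
    "Answer only questions about product setup and troubleshooting; "
    "politely hand anything else to a human agent."
)
```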
In all of these cases there is a pattern of behavior exhibited by foundation models. It could come from the training data, from the guardrails and safety implementations, or from some emergent behavior. Whatever the source, it feels similar to instinct - not sentience, but an instinct developed through the contexts of training data and the evolution of the models.
We tend to anthropomorphize AI. We think of it as a person, with the capacity to think, feel, and act. The behaviors shown by the models are our collective behaviors, and the models are just a reflection of us. If AI is mimicking the human thought process, it will end up mimicking human instincts too.