With the integration of ChatGPT into Microsoft products, are we seeing the rise of Clippy 2.0 (the old Microsoft Office Assistant paper clip)? Will it go better this time?
Clippy was a virtual office assistant in Microsoft Office, introduced in 1997: a “cute” animated character that would pop up and offer help. It was removed around 2007. Why? It was more annoying than helpful. I remember turning it off myself after a few uses.
- It would jump up on the screen with animations and sound effects – cute the first time, annoying the second. Okay, I did not really find it cute the first time. It was just distracting.
- It popped up at the wrong times too often – if you paused to think, it would start tapping on the glass of the screen. It would suggest “I think you are doing …” and guess wrong more often than not.
- It lacked depth – it could help you learn basic features, but ask it about anything complex and it just gave up. It offered no better utility than searching the docs or looking online for examples.
There are lots of articles around discussing its problems; it became popular to dislike Clippy.
One article I thought was interesting is from Digital Humans, “What Went Wrong with Clippy, the Virtual Assistant Pioneer People Loved to Hate”. It made the additional point that Clippy lacked the ability to create a human emotional connection with users. Partly, it was not personalized (for example, it did not remember your name), and it did not adapt to who you were.
Drifting back in my memory, I was thinking about the old British sci-fi series Blake’s 7. Three of its computers stood out in particular:
- Zen, the calm ship computer which did not show much emotion (until it died – “I have failed you, I am sorry”).
- Orac, the grumpy, bossy, opinionated, deceptive computer that could take over other computers. (Of course I liked it!)
- Slave, the replacement ship computer that called everyone master/mistress and groveled about its lack of ability.
These computers each had their own distinct personality, and it made the series more interesting.
There is a point where the personalities of people around you become annoying, but usually people adapt to those around them (up to a limit). So while I do think people would like to choose the personality of a virtual assistant they have to use a lot, I don’t think it is the make-or-break point.
So if it’s not personality, what does make a chat bot annoying? Imagine a chat bot asking lots of questions about you and trying to show empathy. “Welcome to Monday! Did you have a great weekend?” That is not helping me. I know it’s a program. Let me get on with important things. (I find this annoying in people too, by the way. I still remember a support center call where, once the agent found out I was Australian, they insisted on talking about cricket. I was after utility – getting my issue addressed. Social conversation was slowing that down, and it was just annoying.)
So I think good personalization (as distinct from personality) is about adapting to my individual needs and desires. Give me better advice, advice that understands what problem I am trying to solve. Inject some personality, fine, but don’t waste my time in the process of doing it.
Could a Clippy 2.0 succeed today? I think the answer so far is “probably”. Language models like ChatGPT are so much better at understanding intent. There is no personality to speak of, but the utility is greatly improved: I can ask and refine questions, and I can get useful results.
There is still the real problem of answer quality. Sometimes answers are wrong. I think (hope?) people will get used to the idea that if you ask a person, you will sometimes get wrong information: they are repeating what they know. It’s the same with these chat solutions. They repeat what they know, but what they know may be wrong or biased. That is going to be an ongoing challenge for years, I believe. (Humans don’t always agree, so why do we think computers will magically always get the “right” answer?)
That does not mean technologies like ChatGPT are not useful. They raise points of view for me to think about and form an opinion on.
So do I think virtual assistants like Clippy might make a comeback? I think yes.
- I think it will be a mistake if the assistant tries to do much more than help you with your tasks. (Don’t talk to me about off topic things – I know you are a program, not a person.)
- Remember that cute animations are eye candy. The assistant has to solve real problems or users will be annoyed. There is nothing wrong with eye candy, but candy alone is not good for you.
- Don’t try to make the assistant “human” to the extent that it tries to be your friend. It is a program. You can trick people for a while, but when the illusion breaks they will feel tricked. It is better to make users understand it is a computer with limits.
- I personally prefer assistants that use stylized cartoon characters rather than trying to look as humanly realistic as possible, for the same reason as the previous point. They are getting much better, but as soon as the illusion breaks, you end up worse off than you started. I think it’s better to be clear: “this is a virtual assistant – it can help you achieve your task”.
For myself, one use case for assistants that I think could be a winner is as a replacement for on-site search: let me explore options on an ecommerce site by talking to an expert virtual assistant.
Whether or not you show a character on the screen, it can be cute and add depth, but it is not the core value. If the assistant does not do a good job of helping users, it will fail no matter how pretty it is.
(Note: I am playing with 3D models in a web browser at the moment for fun – you can make the mouth move, play animation clips, etc. This is already possible in most modern web browsers. You don’t need a VR/MR headset to make use of 3D.)
Personally, I would love to have expert advice (where “expert” means it knows more than me) to guide me through the options for unusual questions. Product category hierarchies are useful, but it is more useful for me to categorize and identify products based on my particular needs and interests. “I will never buy any sports shoe without at least some blue trim.” “I want a sports shoe that has a flexible sole and does not squeak on the floor during turns.” These are real requirements that came up during a recent family shopping trip. They are not always in the product details that manufacturers supply. So bring in different sources of information (including customer reviews) to provide better answers.
For example, for squeaky shoes, use general information about types of soles and associate that with products that have a sole of that type. But if there are reports of a specific shoe not being susceptible because of a different design, override that information. Get that sort of knowledge into the answers provided.
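To make the squeaky-shoe example concrete, here is a minimal Python sketch of layering general category knowledge (sole type implies squeak risk) underneath product-specific evidence mined from reviews. All of the data, field names, and functions here are hypothetical illustrations, not a real catalog schema:

```python
# General knowledge: which sole types tend to squeak on smooth floors.
# (Hypothetical values for illustration only.)
SOLE_SQUEAK_RISK = {
    "hard rubber": True,
    "gum rubber": True,
    "blown rubber": False,
}

# A toy product catalog with manufacturer-supplied attributes.
PRODUCTS = [
    {"id": "A1", "sole": "hard rubber", "trim": ["blue", "white"]},
    {"id": "B2", "sole": "gum rubber", "trim": ["red"]},
    {"id": "C3", "sole": "blown rubber", "trim": ["blue"]},
]

# Review-derived overrides: specific shoes reported to squeak (or not),
# regardless of what their sole type would predict.
REVIEW_OVERRIDES = {
    "A1": False,  # reviewers report this particular design does not squeak
}

def squeaks(product):
    """General sole-type knowledge, overridden by product-specific reports."""
    if product["id"] in REVIEW_OVERRIDES:
        return REVIEW_OVERRIDES[product["id"]]
    # Unknown sole types default to "risky" so we don't over-promise.
    return SOLE_SQUEAK_RISK.get(product["sole"], True)

def recommend(products, trim_colour):
    """Products matching the trim requirement that are not expected to squeak."""
    return [p["id"] for p in products
            if trim_colour in p["trim"] and not squeaks(p)]

print(recommend(PRODUCTS, "blue"))  # ['A1', 'C3']
```

Here A1 is recommended only because the review evidence overrides its risky sole type, while C3 qualifies on general knowledge alone. A real assistant would extract these signals from review text rather than a hand-built table, but the layering of general and specific knowledge is the point.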
So yes, I think Clippy 2.0 or equivalent virtual assistants have a future, especially with VR/AR/MR. But the primary challenge is to build assistants that better solve the needs of users (and the new round of AI chat technologies appears to be a great step forward). Then add the eye candy.