Discussions about AI have been everywhere for a few years now. Enthusiastic supporters announce the greatest breakthrough in human history, while determined opponents proclaim the end of the world. Neither group ever tires of repeating the same arguments.
I gave chatbots a try over the last few weeks, using Gemini, Copilot and mostly Lumo. You can’t comment on or criticize what you don’t know. That’s also why I started looking into Android a few years ago. I finally made my peace with Android after getting used to GrapheneOS (and looking deeper into it confirmed pretty much all my assumptions about stock Android versions).
Now then… AI chatbots. Summarizing my experiences in a few sentences is tough. Listing examples of prompts and the LLM answers I collected would be far too long and boring. I’ll try bullet points, without sorting them into positive or negative categories.
Unsorted List Of My Findings
- Depending on the prompt, the output can be complete nonsense
- Summarizing online articles works surprisingly well in many cases. I prefer reading the full texts, though
- Sentence structure is mostly correct. Sentences tend toward the complex end, and the word choice includes less common words, giving the output an “educated and cultured” impression
- This somewhat masks the undeniable fact that the claims in LLM output are sometimes completely wrong. A confident tone delivers the message with phrases like “Here is why:”. For legal protection, the companies providing the chatbots always include a tiny disclaimer stating that the LLM may make mistakes and that users are responsible for checking responses
- Outputs can be huge. People prone to the common tl;dr problem fail to read and check what they got with any concentration. They might just copy and paste it somewhere. You have seen such posts, haven’t you?
- Search assistants, or whatever those things are called that pop up uninvited in normal search engines, get on my nerves. Concise search terms don’t make good prompts for LLMs
- A chatbot can be somewhat helpful on programming tasks. It helped me find a stupid error in a C++ program and pointed me in the right direction when I was trying to work on a DPM – data position measurement – scanner for CDs (I should continue working on that!). The output (a Bash script) wasn’t usable from the start, since the sg_raw command it crafted contained errors. But I could fix it and develop it further. Oddly, the bot insisted on reverting some of my fixes back to a broken state when I uploaded my version
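For readers unfamiliar with sg_raw (part of the sg3_utils package): it sends a raw SCSI command descriptor block (CDB) straight to a device, so a single wrong byte makes the call fail. Here is a minimal dry-run sketch of the kind of invocation involved. I use the standard MMC READ TOC command (opcode 0x43) as a stand-in, since the actual DPM commands are drive-specific; the device path and byte values are illustrative assumptions, not my original script:

```shell
#!/bin/sh
# Illustrative only: build an sg_raw call that would read the table of
# contents from an optical drive via the MMC READ TOC/PMA/ATIP command.
# The DPM commands from my experiments are vendor-specific and not shown.

DEV=/dev/sr0   # assumed device path; adjust to your drive

# 10-byte CDB: opcode 43h (READ TOC), format 0,
# allocation length 0x0400 (= 1024 bytes, matching -r 1k below)
CDB="43 00 00 00 00 00 00 04 00 00"

CMD="sg_raw -r 1k $DEV $CDB"
echo "Would run: $CMD"   # dry run; execute $CMD yourself to send it for real
```

The point of showing this is how unforgiving the format is: the bot’s script had exactly this shape, but with bytes in the CDB that didn’t match the command it claimed to send.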
These unsorted points are my findings on technical topics. The immense energy demand and financial investments in the AI hype are hard to justify given the mixed results.
User Problems
I also have to blame a certain kind of user rather than the chatbots alone. Blind trust in LLMs on technical topics is a human weakness. Nowadays it is common to see forum posts of this kind:
Hello. I have [description of problem]. I already asked ChatGPT and tried [some questionable steps] and this didn’t help. Alternatively: The situation got a lot worse. HELP!!
Asking a bot for help on a topic you already understand to some extent can be useful. I sometimes got really good nudges. Limiting my prompts to topics I have basic knowledge of lets me spot so-called hallucinations easily. If I were to ask some bot how to repair my car’s brakes… there is a high chance I’d slam into a concrete wall or a big tree, because I would have no idea whether the provided steps were correct.
PsychiatristGPT
Aside from the bullet points above, I want to mention one more facet: simulated social interaction. I confronted Proton Lumo with parts of my life story and my depression. The answers were stunningly good. Empathetic, friendly, inviting further discussion. Always understanding.
This is both immensely sad and dangerous!
Sad because a machine is able to give replies that are a thousand times more helpful and constructive than statements by real human beings. This felt like an arrow shot right into the heart.
Dangerous because many people don’t seem to be aware of the limitations of current AI. It simulates human interaction on an impressive level, but it doesn’t feel. There is no sympathy. There is no understanding friend. It is a computer, and it does the only thing it can do: compute. It is a statistical system calculating the most likely output string for a given input string.
I’ve seen people stating that only their preferred bot is able to understand them. There are parents who are unable to decide how to react to their children’s needs without asking their – as they perceive it, omniscient and infallible – electronic oracle.
There is an inherent risk of behavioral addiction, especially in conjunction with already existing addictions (internet, smartphone, social media).
I would go as far as to see a potentially increased risk of acute schizophrenic episodes or psychosis for already vulnerable personalities. What is discussed under the term “AI psychosis” is almost certainly not a new condition, just a new trigger.
That’s pretty much it, but a blog entry on this topic must not simply end this way. How would any LLM end a text like this? (Yes, I did write this whole thing completely on my own!)
It ends with the magic word…
…Conclusion
Chatbots are a tool. A tool without a well-defined area of application yet, but with certain strengths. I don’t get the hype. And I would not want to trust a machine to run a business or make decisions on its own. Experiments with agentic AI have had disillusioning results so far.