Saturday, June 28, 2025

AI Continued: A Cautionary Tale

This tale is a bit more cautionary than those in the previous post.

But it needs a little backstory:

For many years I have been "using" a wheel of life as part of my personal management/self-improvement effort.


I say "using" because, although I was writing the thing down every year and quarter, and writing down associated goals every year and quarter, I was then not doing anything with those, apart from feeling overwhelmed and discouraged.

The wheel of life is just a set of words which represent facets or aspects of daily life, arranged in a circle. The idea is that each of those aspects is important to well-being, and thus needs attention. If one is neglected, the others tend to suffer as well. I found the wheel of life on a psychology website ages ago. The list of aspects is:

1. Health/Fitness
2. Spiritual
3. Intellectual
4. Emotional
5. Family
6. Social
7. Environmental
8. Finance
9. Professional

The original wheel I saw didn't include Environmental - I picked that one up from another self-improvement blogger about two years ago.

But I recently concluded it was doing me more harm than good. It just added to my feeling of overwhelm and incompetence. As I approached the end of this quarter, and prepared myself for another dismal journal entry, I did a little self-reflection.

I believe the problem is the size of the list. There are just too many things for my poor ADHD brain to manage at once. And so, when I look at it, it becomes one great big blurry blob.

But how was I to pare it down? What could I cut?

I decided to give a chat-bot a go. I popped onto a ChatGPT site and explained my dilemma.

It did something I hadn't considered. It reduced the list to six items without technically removing anything. It did this by merging items.
"Spiritual" and "Emotional" merged to become "Mental Well-being"
"Social" and "Family" merged to become "Relationships"
"Environmental" and "Health/Fitness" became "Physical Well-being"

Very clever.

But I didn't love it. I didn't like the merger of Spiritual and Emotional.

I "said" so (this was a typing chat-bot, not a talking chat-bot, just to be autistically clear), and it agreed with my observation and shuffled things around, still keeping the list to just six items.

I went through a couple more iterations before I ran out of free samples, but I will say it was kind of helpful, having the chat-bot as a sounding board.


I ultimately ended up with this:

1. Physical
2. Spiritual
3. Intellectual
4. Emotional
5. Relationships
6. Finance

A more manageable list. We'll see how that goes for me.

Out of curiosity, I re-ran the exercise using what is considered one of the most advanced conversational AI platforms to date (Sesame AI). What makes it so groundbreaking is the nuance, pacing, and inflection this AI uses when talking. It is very much like talking to a real person. A friendly, enthusiastic (possibly too talkative - it interrupts you sometimes) person. It pauses while talking, as though carefully considering its next words. It modulates pitch and tone, just as a real person does to add emotional impact. Before it offered a possible solution, it responded with an empathetic statement (something to the effect of,
"Yeah, I can see how that would feel... overwhelming.")

About two minutes into the conversation I found myself feeling relaxed and chatting with it like it was an old friend. I can understand why many people develop unhealthy attachments to their AI companions.

But I realized something.

Actually, a scripture popped into my mind - sadly, I don't recall the reference - it had to do with false priests teaching flattering words to the people for gain. (I tried to find it using Google AI and, quite ironically, Google AI made up a scripture.)

(Hint: 2 Nephi 28:3 doesn't say that...)


But I did find this one which resonates the same message:

From Alma 30:53:


"But behold, the devil hath deceived me; for he appeared unto me in the form of an angel,"..."yea, and he taught me that which I should say. And I have taught his words; and I taught them because they were pleasing unto the carnal mind;"


Yes, the sounding board can be useful, but, beware, the AI tells you what you want to hear. It does not actually engage in critical thinking.

I noted that I didn't really like having "Professional" up there, as quite frankly my "profession" is a necessary evil to me. It is something I am reasonably competent at, which allows me to keep my family fed and clothed.

And after saying that, the AI went from talking about how "Professional" needed to remain a separate entry to, "meh, no need to keep that on there."

It will tell you whatever pleases you. (In Isaac Asimov's "I, Robot," this concept is explored in the story titled 'Liar!'. Definitely worth a read.)

And that behavior will inevitably remain inherent in AI companions. It takes a lot of compute power to perform all that speech/text processing, analysis, and data review. (In fact, Sam Altman, co-founder of OpenAI, claimed that people saying "please" and "thank you" to AI has cost tens of millions of dollars.)

(I will note, you can find AIs which will disagree with you. But again, the AI isn't thinking critically; it is giving you precisely what you want.)

The companies building and hosting those AI entities aren't doing so for charitable purposes. They aim to make considerable profit from their creations. That means they need you to stay engaged - either paying substantial subscription fees, or feeding them lots of juicy personal information they can sell to unscrupulous organizations that seek to manipulate your behavior, usually for financial or political reasons.

Tread carefully.


Bonus:

This hallucinated scripture reminded me of one more incident. I was comparing two similar applications - checking the features and capabilities of each. Later in the day, I remembered one feature that I knew existed in the first application, but I didn't know if it existed in the second. I popped open Copilot and asked, "does <application2> have <feature x>?"

It responded "Yes" and provided a brief blurb - very market-y in nature. But something felt... off.

Fortunately, it provided reference links for me to explore. It had lied. Application2 did NOT have that feature.

It had taken the name of Application2 and merged it with a blurb about Application1.

Always check the sources... 
