AI - where is it taking us?

I recommend “Against Empathy” by Paul Bloom. It’s overlong for such a simple concept, but it makes the point that things like “being nice” and “empathy” are nonsense concepts: we subjectively choose who we are nice to and who we have empathy for, so all you’re really doing is selecting a set of judgements which suit you.

I worked in a role where empathy was supposed to be an essential component of the job. I always found that the people who failed to deliver for certain difficult people were the ones who claimed to have the most empathy.

2 Likes

Well, I’ve concluded, at this stage, that it’s like anything else: it requires validation. For example, I’ve just asked Gemini when to use “il est” and “c’est” in French (despite attempting French O level in ‘74 I’m still struggling!) and it was actually very good. On the other hand, I asked it about certain jobs I had done and it got it dangerously wrong.

Where is it taking us? Used properly, towards better informed and quicker solutions, but it will actually require us to be even smarter in our evaluation.

And that last point is the problem, as I suspect the majority of people will simply blindly believe it, making it a dangerous thing to be available willy-nilly. Perhaps an answer would be for all generative AI responses to carry a warning on apps and websites, prefixing answers, along the lines of the following (suitably tightened and presented): Warning, the data AI accesses to produce an answer is from the internet and, whilst this is XXXGPT’s best assessment, its accuracy cannot be guaranteed. The internet does contain false and misleading information, inevitably making it possible that XXXGPT’s answer could be incorrect. Independent steps to verify the answer should be taken before relying on it, and this is essential in any critical use. The independently assessed accuracy of this AI’s answers was XX% in [month/year].

Which? Consumer magazine recently did an accuracy study on AI answers and found the common AI services’ responses ranged from a very poor average of 55% accuracy to only an average of 71% for the best.

3 Likes

According to the AI.

1 Like

I could select a set of questions and be very confident AI results would be very accurate.
I could also select a set of questions and be very confident AI results would be very inaccurate.

You’ve just added what could be meaningless information to the pool of information AI draws from.

Not at all: my point is that there should be a clear warning on every AI site, because I guess that somewhere between 50 and 90% of people believe everything that AI tells them without any verification on their part. And I was picking up on LindsayM’s observation that people will need to be smarter in their evaluation of what AI produces, and coupled that with my very low expectation that the majority will be when using AI, unless something continually presses on them that AI is very far from perfect, certainly with the current generative AI tools widely available to the masses.

I have found it fascinating testing AI. The only real conclusion is that, as no meaningful warnings will ever be given, it’s a technology which is very simplistic and limited in its approach to most things and should not be used by anyone who cannot deduce when it is even a little wrong. Effectively it has accidentally become an elitist tool despite being wrong, in my limited anecdotal experience, more than 75% of the time.

When it’s right it looks absolutely brilliant, but it’s almost never right first time, is entirely dependent on your understanding of the inputs it requires, and it will look super confident even when it’s wholly incorrect. The confidence it exudes in itself will be enough to convince many of its accuracy, and that’s even more concerning really.

1 Like

A friend of mine has gone in a big way into comparing how different AIs perform, and how to deal with that for professional purposes. He routinely uses two or three AI models and asks them to critique each other’s answers. Last I discussed it with him, he was using ChatGPT and Claude, with Gemini as a third and then Grok as a final opinion.

A significant factor is that what the free models will do is limited and, rather than just stopping when you overuse one, sometimes they downgrade the model. It’s not always obvious that this has happened.

2 Likes

32.897% of posts on this forum fall into that category. :blush:

2 Likes

I was referring to the information provided by Which.

The accuracy of the information given by Which? will be dependent on what questions were asked.

Did Which? qualify this fact when they published their figures?

Misinformation can come from many sources.

I was talking to a girl at work last week about income tax.
She said something that I told her didn’t sound right.
She got quite annoyed and insisted it was; she’d seen it on TikTok. :thinking:

They described their process, but I didn’t look further into it, as the answers to things I have asked, and that I have seen others ask, suggest the same ballpark. The answers I’ve seen have been across a range of topics and varieties of types of question: some very specific, some general, some technical, some seeking factual info and some seeking suggested guidance. I have not kept a record, but I am not left with the impression that the proportion of correctness varies an awful lot across them all.

I’ve had an aquarium for over 2 years and started with just marimo and a few plants. Then I wanted to add shrimp. I researched specific water parameters, the type of shrimp I wanted and other details, but it went wrong and was unsuccessful. The shrimp I was trying with were pretty finicky and it wasn’t easy. Getting info from people or general Google just didn’t work.

A couple of months ago, I used Grok, ChatGPT and Gemini to ask certain things about aquariums and crossed over into shrimp info. They more than helped me set up a fully cycled aquarium with shrimp within 2 weeks. The information, and more importantly the ongoing discussion, they gave would have been pretty much impossible to achieve by any other method.

If you think typing in a question is using/testing AI, then you’re deluded.

1 Like

Indeed, but LLMs are doing this on a scale unprecedented in human history. You can be confidently uninformed in seconds.

Tbf though I rarely agree with Which? answers and suggestions anyway. Have you seen the cars they recommend over the years? Sheesh.

High end audio has yet to make any appearance in Which audio recommendations. They are not just awful but often inexplicable.

I guess they are focussed on the mass market rather than discerning individuals who will do their own research (just as with hifi, and any specialist hobby).

I have bought, and been very happy with, kettles, fridges and suchlike recommended by them.

1 Like

One of the threads running through the AI discussion is the fear that the technology would advance so far beyond humanity’s own capabilities that we would become extinct.

I have words of comfort.

Yes, that may happen but relax, it doesn’t matter.

:smiling_face:

2 Likes

One thing about humans (or rather living things) is that our genes get passed down to our offspring, and theirs, and theirs, ad infinitum as long as offspring are produced. Ultimately that, and any lasting contribution we make to society, including inventions, are our legacy, and in the long term our reason for being. If AI and machines take over and humans become extinct, just that last part would apply, and only for those so involved … so the rest of us lose our long-term reason for being. Is that a matter for concern? Maybe, maybe not!

1 Like