AI, but it's real, or is it?

Great technology, but…?
What are your thoughts?


All moving too fast and rather scary.

Maybe it’ll push people away from technology when they won’t know what’s real unless they see it in the flesh.

An awful prospect: bad actors distorting the truth to advance their agendas.

Must be really worrying in many creative spheres.


I have been using AI on the internet and can confirm there are a lot of issues which still need fixing. I tested it with a lot of questions, and 50% of the answers were incomplete and sometimes incorrect. When I suggested the correct answer, it replied in a very ‘human’ way, apologising to me and saying ‘yes indeed, you are correct’. But it was useful for basic stuff, like asking it for 1,000 words describing the sound quality of the ATC SCM19 speaker, and voilà, there it was! Useful for homework (but check the facts).


Like everything else, it will be what you make of it. Some will make a lot of money out of it, and others will be clueless.

I’ve shown an AI generated video to a cow, I’m sure she doesn’t produce milk anymore.


A major issue that is emerging is the AI learning feedback loop. Models are trained on text, but gradually that text is itself AI-generated, so errors get compounded.
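The compounding effect described above can be sketched with a toy simulation (my own illustration, not from the post): fit a crude Gaussian "model" to some data, then train each new generation only on the previous model's synthetic output, and watch the fitted parameters drift away from the original data.

```python
import random
import statistics

# Hypothetical toy example: generation 0 is "human" data from a known
# distribution; every later generation is trained purely on synthetic
# samples from the model fitted to the previous generation.
random.seed(42)
n = 20  # small sample size makes the drift visible quickly
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # real data, sd = 1.0

for gen in range(1, 11):
    mu = statistics.fmean(data)   # crude "model": just mean and stdev
    sd = statistics.stdev(data)
    # next generation sees only the model's own synthetic output
    data = [random.gauss(mu, sd) for _ in range(n)]
    print(f"generation {gen}: fitted mean = {mu:+.3f}, fitted sd = {sd:.3f}")
```

Each refit adds sampling noise on top of the previous generation's noise, so the estimates wander further from the true values (mean 0, sd 1) with no fresh real data to pull them back, which is the compounding the post refers to.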

I’ve seen it reduce staff numbers in my field. It’s fast, but ultimately needs a lot of output review and tweaking. Senior management don’t seem to care. The claim that it creates more jobs than it destroys doesn’t hold this time round, I feel, but that mantra will be pushed by capital holders to justify their cost-cutting push.

Over time I’m thinking we’re headed for the Butlerian Jihad. I’ve worked in IT my whole adult life and currently support data systems used to feed AI on fresh batches of big data, and I’m still not convinced it makes things better.


I think the zeitgeist has latched onto the term AI, and seemingly everything has ‘AI’ in it at the moment because it sells better. The technology is not new: going back to the 1980s we called parts of this tech neural networks, and then for a long time it became machine learning… Now, through hugely increased processing power, we can massively increase the speed of processing and the speed at which the machine ‘learns’ and adapts, and we call it AI, which is actually a whole suite of different technologies. Even seemingly quite basic machine learning is now espoused as ‘AI’ by the marketeers. We can also use public-domain data sources from the vast web, or private data lakes, to teach methods, behaviours and content, although there is an issue of bias when using the web, which is one of the hot topics in some AI circles at the moment.
Whether you call it AI, machine learning or some other processing technique, I feel it is prudent to set up a framework for how the various technologies can be used to augment human-managed processes, and what they can’t be used for, with perhaps a review every 5 to 10 years.


So true. Taken to unbelievably stupid extremes too. If you see “AI” on a product, 99.99% of the time it just means “software”. Heck, my humidifier had a stupid “with AI!” sticker on it.

But I think the spirit of the thread was probably about the sudden prevalence of usable LLMs.


AI is pure crap IMO; there is no human need for that technology. On the contrary, the risk is probably high that it will be misused in various ways…


Not a risk but a certainty!


If it was crap I wouldn’t be worried about it so much.

The analytical algorithms for medical and aerospace, which aren’t AI but are labelled ‘AI’ by marketers, are pretty useful. The LLM stuff like ChatGPT gives terrible canned answers most of the time, but is useful for composition and translation. It has certainly made a portion of my existence largely redundant, which is why I hate it.

Automation in general has this effect on me. I stubbornly refuse to use the auto checkout line at the supermarket because (in the high-horse lecturing tone I adopt when Mrs. FZ asks for the nth time why I go to the human checkout), “I am not paying a robot to do a person’s job, and that is all there is to it.”

I think the risk is largely laid out by history. Has mankind ever created anything they did not use? OTOH, the genie with things like LLM is out of the bottle. No amount of whining about it will put it back.


I see, but LLMs are a relatively narrow part of AI… and yes, they have become very much more useful when coupled with voice recognition, but that is primarily I/O-based, i.e. coupling to human speech through training.
But yes, when you combine NLP with LLMs you can get useful (or annoying, depending on your perspective) tools like Alexa and Siri.
ChatGPT is interesting as it uses multiple AI technologies, such as LLMs, NLP, ML and deep learning… and I think that is the true power of the new automation technologies: multiple AI technologies combined.

It will probably advance at the same rate as the decline of civil society.


I suspect the same was said about Caxton’s Printing Press and with the Power Loom riots from the Industrial Revolution.
Perhaps we were at our most socially developed when we lived in caves?


It’s the social cohesion aspect where I think the risk lies. Technology, in itself, has clearly had vast economic benefits and raised the standard and quality of living. But if you look at the younger generation now, and the impact of social media and “phone attachment”, I think there is an emerging cost in authentic, genuine human interaction at a social and community level. I only see AI rapidly having a negative impact there.


I don’t disagree that populations often develop a reactive response to new technology or products when they become available: they indulge at first, then use normalises and even declines… and/or the technology is replaced with something else (akin to the Hype Cycle). Yes, children are often the most impressionable with new technologies.
I would be very surprised, however, if in 50 years’ time smartphones look and are used as they are today, apart from perhaps by some of the then-elderly population… of course we might have even more addictive technology by then, hence my suggestion of reviews every 5 to 10 years. However, I don’t think this is specific to AI technologies, as your example of smartphones and some social media illustrates.
We have the same with many technologies, including food technologies such as UPF, and vaping, which are arguably even more damaging to society.

So to me the real danger/consideration is the extent to which various technologies, when implemented in specific ways, foster new addictions in the population.


Good morning everyone,

At the outset, I must say that I have no expertise in the area of AI, but it does cause me some disquiet.

Clearly, at least two contributors to this thread have very considerable professional expertise, but there is one crucial area which doesn’t seem to have been addressed so far. Actually, it was Simon-in-Suffolk who raised this more pressing issue: Simon suggested a review after five to ten years, which begs the question “… a review by whom?”.

If we take the example of motor cars, I expect that, on their arrival on the scene, there was limited legislation regarding their use in public; but now their use is fairly well hedged around with legislation, which extends even to constraining their initial design (on emissions, for example). At the moment, I get the feeling from the media that the development of AI has few if any constraints. It seems to me that advances in the field are driven by “… what can we do with this technology?” rather than “… what should we do with this technology?”.

Most governments seem reluctant to take any form of control and, at best, play a lot of “catch-up”, when the situation seems to be already out of control, morally, legally or ethically. I think that the potential for AI is wonderful, but I fear that, without effective controls being put in place, its negative effects might have at least as much impact as its positive effects.

Best wishes,

Brian D.


“on the eighth day the machine got upset, a problem man had not foreseen as yet”


Peer ahead about 50 to 100 years (or however long it takes) to when AI and robots can do most jobs, even medical diagnosis, court judgements, solicitors’ jobs: pretty much most things we do today. And imagine (or assume) that we will adjust sociologically by providing all the basic needs (housing, heating, clothing, food) for free to everyone, whether they work or not. Indeed, there may not be much income-earning work that needs to be done by humans. What would we be left with? I think we would still want service jobs to be done by humans, and also art, music (including performance), science and related research, and so on. To some that might sound like a nightmare, to others like heaven.
Watch some of the videos on YouTube that show factory jobs being done by humans (mainly in Asia): someone picks an item out of a box of identical items, places it in a particular position on some machine, the machine does something, and then they remove the item and put it somewhere else. It’s a job, I suppose, but one that a machine could do just as well, maybe better and faster. Why would we want to preserve jobs like that? Give them their basic needs, preferably more than that, and let the machines get on with it. Let them be humans.

About 5 years ago I had a car whose auto up-and-down function on the driver’s window stopped working. The window would still go up and down if you held the switch. About 5 weeks later it started working again.
When the car next went in to have its wheels changed, I mentioned this to the service manager: “Oh, it probably fixed itself”. Today it would be called AI.

Last week I went to a presentation by a company working on automatic face replacement in video (it already works on photos, but needs to become real-time). The idea is that they can replace the faces of people in the background of videos, protecting their privacy.

Are these good or bad uses of technology?