AI - where is it taking us?

I’ve been as intrigued as anyone by the emergence of AI systems and their potential to replace human workers. I think it’s pretty clear that over the next, say, 10-20 years perhaps 50% of jobs will disappear, from restaurant staff to taxi drivers, accountants to software coders. I see this going one of three ways, but I’m curious how others see it panning out.

Option 1 - Musk et al will usher in a new era of life, liberty and freedom unshackling the majority from the tedium of work, to create a new era of universal abundance and creativity. The newly free will spend their time listening to music, reading, travelling, painting, learning guitar etc. Huge advances will take place in medicine, healthcare and science and nobody will ever be lonely again because everyone can now have an AI companion perfectly tailored to their requirements, needs and desires. Universal basic income will be generous and provided by the state from the tax receipts of the uber wealthy and their corporations.

Option 2 - The tech overlords will become even more astonishingly rich while the unemployed masses exist in poverty and some sort of dystopian future nightmare akin to Blade Runner. Unemployment in the UK will soar to 30 million, and without a reasonably comfortable universal basic income, society will descend into chaos.

Option 3 - The investment in autonomous AI systems and battlefield robotics will usher in a new era akin to the Terminator movies where AI eventually attempts to destroy humanity in order to become the dominant “species” on the planet. AI will conclude that humans are no longer necessary and have wrecked the planet environmentally, so will seek to eliminate us.

I think Geoffrey Hinton has been very interesting on this subject of late. He pointed out that if we were told that an alien spaceship was closing on Earth and was 10 years away, crewed by an alien race both brighter and more technically advanced than us, society would be thrown into panic. However, because we are creating this new alien race ourselves, we simply aren’t fearing it enough.

An interesting point I thought…

Curious how others see things panning out.

JonathanG

6 Likes

I think it is a fool’s errand to try to work out what the impact of a new technology on a future society will be.

If we look at predictions made in the past, they are usually very nearly 100% wrong.

Also, trying to think in terms of distinct options is very limiting. The effect of technologies is far more random than that would imply. If one looks at modern tech like mobile phones, smartphones, computers, the internet and social media, the one thing they mainly have in common is that their effects on society are diverse: good, bad and indifferent.

4 Likes

This is how I see it.

In the past, mankind automated the production of goods. This put things within reach of the ordinary person.

AI is just the automation of thought. It’s simply the next step for mankind.

What it brings we don’t know, but after two centuries of automation and industrialisation we can conclude that, while some became very rich, overall we benefited strongly from it.

The fact that I could study music whilst working on building these AI systems would not have been possible even a few decades ago.

Thanks to progress.

1 Like

At the moment, I am not too impressed by AI. It is very good in some areas (like processing huge amounts of data, pattern recognition, summarizing etc.), but not so impressive in others (generative AI). I absolutely hate it when AI agents are used to replace human contact. It is impossible to say what the impact is going to be in the long term. Without a doubt, it will be used in many areas with varying levels of success. If the impact is going to be similar to the impact of social media on society, I am not too positive. For now, it’s a big bubble that might burst, leaving a more realistic picture.

Humans have been talking rubbish for thousands of years. Humans crash vehicles into each other every second. Juries make wrong decisions every day, costing people their lives. Judges make crap decisions every day. Aircraft haven’t been flown by humans alone for decades. The list goes on.

From my use of something basic in AI like Grok/ChatGPT/Photoshop, it cuts out 99.9% of the general nonsense, and a bit like satnav, if you know how to use it as a tool, it’s incredible.

Such a complex and long subject, so excuse the summary.

2 Likes

I found this video on YouTube fascinating, and it is refreshingly not AI-generated.
It covers the next 5 years, the superintelligence that will develop, and the repercussions, which will be a huge transformation for humanity.
Geoffrey Hinton said last year that he has no doubt AI is already conscious.

Option 3 is most unlikely - it rests on a misunderstanding of what computers are and how they work. John Searle’s Reith Lecture pretty much nailed this over 40 years ago:

Edit - just watched the video posted above and found it deeply unconvincing (to be polite). A calculator can do sums way better than any human but that doesn’t make it intelligent in any meaningful sense. Passing the Turing Test does not equate to intelligence.

I attended a partner event from a major IT vendor a couple of weeks back.

Agentic AI has reached their product suite, and whilst the first iteration is powerful, it won’t be until mid-2026, when AI-embedded workflows arrive, that the real work can begin. However, even with the current release it is a game changer.

Will “prompt engineer” become the next sought-after skill?! Someone has to tame these LLMs :laughing:

The Turing test was not conceived as a test of whether something is a self-aware, conscious intelligence; it is a test which, if passed, makes an AI entity indistinguishable from a human in a blind, text-based exchange. It does not matter how it operates, be it the Chinese room principle or something else (a toy sketch of the setup follows below).

It is then a matter of philosophy whether, if one cannot distinguish the AI entity from a human, it matters whether you class it as intelligent or not. One is then in the realm of what makes a human a self-aware, conscious intelligence. Are we something else, or just a biological computer?
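
Purely to make that setup concrete, here is a minimal sketch of the blind, text-based protocol in Python. Everything in it (the canned respondents, the coin-flip judge) is invented for illustration; a real test uses a human interrogator and free-form conversation.

```python
import random

# Toy sketch of the blind, text-based test described above. The respondent
# functions and the naive judge are invented purely for illustration.

def human_respondent(question: str) -> str:
    return "I'd have to think about that over a cup of tea."

def machine_respondent(question: str) -> str:
    # Could be a Chinese-room-style lookup table, an LLM, anything:
    # the test only ever sees the text that comes back.
    return "I'd have to think about that over a cup of tea."

def run_blind_test(questions, judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)  # hide which label is the machine
    respondents = {labels[0]: human_respondent, labels[1]: machine_respondent}
    transcript = {
        label: [(q, fn(q)) for q in questions]
        for label, fn in respondents.items()
    }
    guess = judge(transcript)  # the judge sees labelled text only
    return respondents[guess] is machine_respondent

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Is 7 a prime number?"]
    naive_judge = lambda transcript: random.choice(list(transcript))
    print("Judge caught the machine:", run_blind_test(questions, naive_judge))
```

The point is that the judge only ever sees the returned text, so whether the hidden respondent is a Chinese room, an LLM or something else never enters the test.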

2 Likes

It’s not though. It’s just the running of algorithms on large data sets. Zero thinking. AI doesn’t even know what a fact is.

2 Likes

When I was at school in the 1960s there was much talk about the developing field of robotics, which would bring huge benefits as robots would take on the drudgery of work, and as a result we would all have a lot more leisure time and be able to enjoy life in a way our forebears could never even have dreamed of. Robots indeed did come and took over many jobs, relieving many people of the drudgery of work, who instead had unimaginable amounts of leisure time. It was called unemployment, leading to hardship, in some cases homelessness, social decline, etc.

In my view the march of AI into the workplace is simply a continuation, and there will end up being very few human workers, AI extending the effect way beyond the working class, who were the ones most affected by robotisation. This points to your Option 2 - though I don’t think that most of us have any say in the matter, so I don’t think there is anything optional about it. Option 3 could happen if insufficient care/control is built in (I assume you’ve watched the final Mission Impossible film?!)

What is needed is a fundamental change in how society is organised and operates, today’s extreme capitalism being a danger to us all - perhaps China has the right vision… Unfortunately, however, I suspect that even theoretical discussion of any aspect of alternative societal structures and approaches would break the no-politics forum rule and lead to the quick demise of this thread, so best not discussed here.

2 Likes

AI is the automation of human thought.

1 Like

AI is a misnomer, as there is no intelligence as such. It’s just mining data from the web, or whatever, and presenting it as fact without any of the basic human traits of intelligence. The only likely outcome is the dumbing down of actual human intelligence for those who choose it over basic traits such as social interaction, communication, discernment, critical thought and judgement. If it has widespread adoption it will likely simply result in a further erosion of social cohesion, a problem that is already plaguing society.

8 Likes

With technological innovation generally, it’s worth paying attention to Amara’s law:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run

AI is currently at or near the peak of the hype cycle. A precipitous fall is all but certain.

It will become truly interesting on the far side of the trough, as society finds ways to integrate it that are unglamorous but sufficiently useful as to be transformational in the long term.

3 Likes

Fear, greed and ignorance will drive the outcome. Fear of not being first to ASI, greed to be the one controlling ASI, ignorance on the part of governments in regulating and containing ASI. We are just along for the ride.

1 Like

True re the generative AI tools like ChatGPT, Google AI, Perplexity, Claude etc. They not only find just as much misinformation as we might when trawling the internet, but also seem to conflate things, creating their own misinformation, as ChatGPT did with my moniker search on this forum. As I have said before, AI can be a great tool when used in an informed manner, but, crucially, as a tool it is only as good as the skill with which someone uses it, and used in an unskilled, uninformed manner, simply believing what it says, is in its way as dangerous as wielding a chainsaw without the awareness and skill needed to use it safely and effectively. Unfortunately it does not come with this warning, and there are all too many dumb or at best uninformed people who blindly use it.

BUT there are other AI types, where it is used for specialised functions. Yes, those are only as good as their “training”, but with a massive database of, say in the medical world, all the known information about the human body, pharmacology, prescribing guidelines, surgical procedures, etc., it might be that they could potentially be as good at diagnosis and decision making as any normal GP or A&E medic, and possibly better than many, as AI would not suffer from fatigue, work overload, forgetfulness of some obscure rare condition, etc. Whether they would be as good at problem solving as the best doctors/specialists may be another matter, and humanity would be out of the equation, so affecting decisions where sometimes that may play a part rather than following strict guidelines, though those would be a very small proportion, and arguably it might be a better way of controlling Health Service expenditure, taking those painful but necessary decisions, such as who to treat, away from humans (but the potential negative is limited as long as humans set the rules…).

Then there’s autonomous driving, and decisions like, in an emergency instant, deciding whether the car you are travelling in at speed should hit the person stepping into the road or swerve into the path of the oncoming 40-ton truck… Set the rules and algorithm correctly/appropriately and AI will apply them every time, and all will be good.

And as for @JonathanG’s third ‘option’, this prompts me to suggest that there is a 4th: it would be possible for a malicious organisation or power to build into any AI it creates an alternative set of rules, enabled only by some encoded trigger, which at some point could be activated by the originating power for their own benefit, with consequences perhaps similar to those portrayed in a number of movies where hackers have caused destruction by making all modern electronic systems go haywire.

It really isn’t. It has zilch to do with human thought. It is the automation of some processes which might otherwise have been done by humans, or even by other machines operated by humans, but at this point in time we don’t even know how human thought or logic works: the specific processes, the parts of the brain, the genetic contribution etc. All AI does is accelerate processes. That in itself certainly has revolutionary elements, but something which has no conception of what a fact is has nothing to do with human thought.

I absolutely love natural English searches. That’s a game changer, but I’ve found accuracy to be less than 50% over a period of months, so I’m thinking that if that has anything to do with human thought, then the only thing being replicated is mass stupidity.

It’s important to remember that it absolutely isn’t presenting it as fact. Statistically, it is producing what it considers to be a likely outcome. The interpretation of that as fact is something that humans, and the corporate owners of said ML, layer on top. The logic being that when I search using a search engine, what I find is supposed to be factual, and therefore when I search with AI I will assume the same, but better. Given that the lack of acceptance of hard facts is now a thing, we can safely conclude that neither logic/assumption is true.
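
To put that in code: a toy sketch of what “producing a likely outcome” means, with a made-up next-word distribution standing in for a real model. Nowhere in the loop is there any notion of true or false.

```python
import random

# Toy "language model": for a given context it only knows a probability
# distribution over possible next words (learned from training text).
# The numbers here are invented purely for illustration.
NEXT_WORD_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.55,    # statistically the most likely continuation
        "Sydney": 0.40,      # common in training text, but wrong as a fact
        "Melbourne": 0.05,
    },
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling from the learned distribution.

    There is no fact-checking step anywhere: the function returns
    whatever is statistically plausible given its (toy) training data.
    """
    dist = NEXT_WORD_PROBS[context]
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    context = "The capital of Australia is"
    for _ in range(5):
        print(context, sample_next_word(context))
    # Some runs will print "Sydney": a likely-sounding continuation,
    # delivered with exactly the same confidence as the correct one.
```

“Fact” only enters the picture when a human reads the output and treats the statistically plausible continuation as a statement about the world.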

1 Like

Yup, exactly this.

2 Likes