AI, but it's real, or is it?

I see what you are saying regarding training and plagiarism. I suspect the problem is that the trained system may then use the training data, or sections of it, in the answers it gives, with no appropriate referencing of the origin.

I also agree on the bias part based on WWW content, potentially a reflection of today's society and the need to increase click counts to make money.

OK, so you are thinking of look-up query or conversational LLM systems? The more reputable ones use deep-learning architectures like the Transformer (such as GPT), then use generative methods to provide the dynamic, conversational-style responses most people apparently prefer. Specific references are usually attributed, as in a textbook, although I understand there have been issues with the accuracy of attribution.
I personally use ChatGPT like Google web search… as in essence they perform some similar functions, but I can be more granular with ChatGPT. As with all such search-engine responses, though, one is always wary of the integrity of the response, and usually doesn’t take it at face value unless the info is from a trusted source.


Thanks for all that, good stuff which I will look into further to improve my understanding. Still keen to learn despite being retired, exercise for the brain.

You are welcome. Well, if you are interested in reading up on this area of computer science, as well as contemporary use cases applying the technology, this is a good starting point:

Many thanks for the link, I’ll go and follow up on it now. I have been plodding through a book (Becoming an AI Expert by Kris Hermans) to get a better understanding. Must confess some of the content is a bit difficult to consume; I’m not a book person for learning, and have always preferred a course/tutorial-type environment.

If you have a computer science background, and/or perhaps a background in mathematical statistics, you should gel with it :slightly_smiling_face:

Electronics originally, although I quietly moved to IT and spent around 15 years doing IS-Enabled Business Change: basically ensuring we specified the IS services we required and then confirmed delivery of those services from various large IS delivery organisations. I’ve always had an interest in understanding the technology being used to deliver a service, be that hardware, software or some combination.

Afraid I struggle with all of this. There was a fairly sane suggestion that facial recognition should be treated like plutonium and effectively banned because it has no positive use case. I come close to this view with AI.

The problem with where we are now with ML, AI et al. is that it’s possible to present apparently positive use cases, but the lack of oversight means that it’s not possible to know why something works, and there’s no assessment of the biases built in. Advances in medical analysis are the positive example frequently offered, and yet we already know, for example, that many of the better detection rates for certain cancers do not apply to women or specific ethnicities. Few write that story, nor feel it’s a matter of concern that these technologies are black boxes. Even when a bias is detected, the authors may be unable to figure out why.

Find me a positive use case and it’s simple enough to counter with information already out there about biases etc.

If pure data is used, there is no bias.

Take the analysis of the energy usage of a large building or buildings. There will be a lot of data, which is just too vast for anybody to analyse.
Hundreds of energy meters, hundreds of temperature sensors, CO2 sensors, humidity sensors, etc.… Snapshots taken every 15 min.

Using AI, the data can be analysed, allowing energy usage to be reduced.
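A minimal sketch of what such an analysis might look like, purely illustrative (the function name, window size and threshold are all made up here, and readings are assumed to be 15-minute kWh snapshots from one meter):

```python
# Illustrative sketch: flag anomalous 15-minute energy readings by
# comparing each one to the recent readings that preceded it.
from statistics import mean, stdev

def flag_anomalies(readings, window=8, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations above the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and (readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Eight quiet quarter-hours, then a spike (e.g. a heater left running).
meter_kwh = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0]
print(flag_anomalies(meter_kwh))  # [8] — the spike
```

Real building-analytics systems are of course far more elaborate, but even this toy version shows that someone has to choose the window, the threshold, and which meters to watch.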

Universities at present are using their own buildings to research this stuff.

Surely this is just an algorithm and automation? :thinking:

There is no such thing as “pure data”. The choices as to what data is collected and why hold many hidden biases.

A university not so far from me has done exactly as you described and identified massive “wastage” in two areas of a set of buildings. Cue lots of materials being produced about the value of shutting doors and windows etc. If only those who chose to gather the data had considered talking to people about what their metrics and aims were before they started. Then they might have noticed that both areas were populated by significant numbers of female employees with menopausal symptoms. Thousands of pounds wasted.

Few will be surprised to learn that the decisions as to purpose and what data was collected were all made by men.

Well, you’ve just proved my point.

Quote
Take the analysis of the energy usage of a large building or buildings. There will be a lot of data, which is just too vast for anybody to analyse.

No man or woman would know where to start. That’s why AI is needed.

The case you describe could be something as simple as somebody looking at an electricity meter once a day.
Do you have a link we could look at, or is it anecdotal?

Algorithms and automation have been in use for a long time (using just a few parameters).

AI will replace the algorithm with something far more complicated and powerful.
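To make that contrast concrete, here is a hedged sketch (all names and numbers invented for illustration): a hand-written rule with a few fixed parameters versus a simple model fitted to logged sensor data.

```python
# Hypothetical contrast: a hand-written rule vs. a model fitted to data.

def rule_based_setpoint(outside_temp_c):
    # Classic automation: a fixed threshold chosen by an engineer.
    return 21.0 if outside_temp_c < 15.0 else 18.0

def fit_linear(xs, ys):
    # Least-squares fit of y = a*x + b, "learned" from past logs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Past logs: outside temperature vs. energy the building actually used.
temps = [0, 5, 10, 15, 20]
kwh   = [50, 40, 30, 20, 10]
a, b = fit_linear(temps, kwh)
print(a, b)  # -2.0 50.0 — usage predicted from weather
```

The rule is transparent but crude; the fitted model adapts to whatever data it was given, which also means it inherits whatever is in that data.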

Keeping to your building energy costs example, how will it know the rules and the desired outcome?

I suppose the desired outcome is to reduce energy usage.

I binged (BUILDING energy usage ai research) and this came up.

https://www.sciencedirect.com/science/article/abs/pii/S1364032116307420

If (energy_usage > 0) disable (heating_system);


FS would be

Energy Usage MAX. = Heating Output MAX + Fenestration Openings MAX. :grinning:


I actually quit a job over the unethical use of such things back in 2007.

At the time, I was working with a company that was implementing voice-analysis software (designed by the ex-secret-service engineering staff of another country with a dubious human rights record) into various use-case applications. The core engine had multiple uses, and it had been packaged as a mobile lie detector; a “do they love me?” app for a phone; health analysis; threat detection; and so forth. The algorithm could do some things accurately, like measure heart rate just from a spoken voice, and work out how long you’d been awake and how much sleep you’d had.

The health app, which I developed an IVR protocol for, allowed patients to talk to the application, and it would do health analysis of stress and make recommendations about whether it thought you needed to make dietary changes and so on. I wasn’t sure I really trusted this engine, because our CEO had a couple of years before started taking the lie-detector version into staff meetings, and it was quite frankly trash. I bullshitted it on purpose constantly to prove it was unreliable, but the management team would have none of it. I could fool it 90% of the time.

It got to a point where it was integrated into some scary security apparatus for things like airports, where you are asked “Are you a terrorist?” and, regardless of your answer, it processes your voice to determine a threat level. That was dubious enough. Then they rolled out a law-enforcement interrogation version, and it got really scary. The police officer would ask the detainee, “Did you sell the drugs?”, to which the person might reply, “Yes”, and then the system would reel off instant analysis like: the subject is innocent but is lying to protect a loved one who is known by the subject to be the guilty party.

Then the company started making contracts to sell this thing to oppressive regimes in countries with some serious human rights issues and that was the last straw for me. I was out.

Last time I visited the US, they had that airport voice threat analysis at Denver airport. I was nearly beside myself with disgust. I worked with this thing for a few years. I’d not trust it as far as I could have thrown it.


I explained poorly. Men chose the data; the algorithm did the rest. Data choices are never neutral, and data is never neutral. We have known this for years, given that analyses of the core datasets used by ML to assess the effectiveness of AI turned out to be riddled with exactly these issues.

There is a shortish report, but for understandable reasons the whole issue exploded into a wider internal debate, and so after it was sent out as part of a wider university email it was effectively retracted. MEN have been sniffing around, and given that there’s so much bad press for said establishment on so many fronts at present, it would be extraordinary if they were to effectively score a massive own goal by letting it out into the wild.

So, yeah, I’ve seen the headline stuff; had a link via email and then seen that link go dead very quickly.
