AI and science and art

Discussions about serious topics, for serious people
IvanV
Stummy Beige
Posts: 2967
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 9:11 am

dyqik wrote:
Tue May 21, 2024 6:37 pm
AI, of any variety, doesn't postulate new models based on conceptual understanding. It can only form patterns from its existing concepts.

This makes it useless for science.
monkey wrote:
Tue May 21, 2024 7:13 pm
They've tried getting AI to do physics at least once. I remember this story from a couple of years back - clicky.

They got a neural net to model dynamic systems like double pendulums of various types. The neural net came up with models that worked - it identified patterns, worked out how to describe them in maths, and made good predictions.

The trouble was, there was no explanation of what the variables mean, and in meat physics variables have meaning so you know what's going on. So the researchers had no idea whether what it was doing was useful or not. All they knew was that it was doing things differently to how a meat physicist would do it.

I think this was one of Dyqik's points.

(link to actual paper - clicky)
That's very interesting. As I read it, it came up with state variables that we couldn't understand. So it had located things that were not within its existing concept set, since its existing concept set was supplied to it by us. But maybe dyqik would interpret that differently.

In a sense, this is what we need, something that is not hidebound by our lived experience, and so can come up with descriptions we overlook due to the inevitable narrowing of view that comes with that.

Clearly there is a problem when we can't actually unravel what its state variables are. They are doubtless some curious composite that is hard to untangle. So it isn't actually useful.

I've had to examine an AI data-analysis forecasting model recently, and it was non-transparent in a similar way. Researching it, this is just what this class of AI data-analysis model does. It's doubtless great if you don't care how it makes its forecast. But we do care. It's no use that it combines the input variables in highly contingent ways that leave no transparency on how it came to its forecast. We actually need to know what effect certain parameters were having. We need to know that its methods are consistent with the known laws of physics, and so on. Whilst it is doubtless true to some extent that various things are contingent on everything else, in practical reality it was in many cases implausible that they were as highly contingent on everything else as the model appeared to indicate. You ought to be able to say within quite narrow bounds what effect these factors are having, everything else held constant.

But whilst these models do suffer from a lack of transparency, I gain from this some partial optimism that, with time and experience, we could devise and train them to produce more useful outputs, by helping them to recognise the difference between transparent and opaque solutions.

User avatar
dyqik
Princess POW
Posts: 7894
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI and science and art

Post by dyqik » Wed May 22, 2024 10:32 am

IvanV wrote:
Wed May 22, 2024 9:11 am
dyqik wrote:
Tue May 21, 2024 6:37 pm
AI, of any variety, doesn't postulate new models based on conceptual understanding. It can only form patterns from its existing concepts.

This makes it useless for science.
monkey wrote:
Tue May 21, 2024 7:13 pm
They've tried getting AI to do physics at least once. I remember this story from a couple of years back - clicky.

They got a neural net to model dynamic systems like double pendulums of various types. The neural net came up with models that worked - it identified patterns, worked out how to describe them in maths, and made good predictions.

The trouble was, there was no explanation of what the variables mean, and in meat physics variables have meaning so you know what's going on. So the researchers had no idea whether what it was doing was useful or not. All they knew was that it was doing things differently to how a meat physicist would do it.

I think this was one of Dyqik's points.

(link to actual paper - clicky)
That's very interesting. As I read it, it came up with state variables that we couldn't understand. So it had located things that were not within its existing concept set, since its existing concept set was supplied to it by us. But maybe dyqik would interpret that differently.
It came up with meaningless variables, meaning that it's just doing epicycles, rather than physics. This is not science.

IvanV
Stummy Beige
Posts: 2967
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 11:19 am

dyqik wrote:
Wed May 22, 2024 10:32 am
It came up with meaningless variables, meaning that it's just doing epicycles, rather than physics. This is not science.
To be useful in this space, I don't think AI necessarily needs to be able to "do physics", and I have no expectation of AI being able to "think". It is rather a calculation aid. What AI programs can do is search through large numbers of options and find the one that best meets the criteria that we set. This is what it does as it spanks us at games like chess. It's not thinking, it is just performing a mathematical optimisation.

As you say, it comes up with meaningless convoluted variables, and your characterisation of them as being just like epicycles is very appropriate. But we can set objectives, and it is the function of the program to look for "solutions" that better meet those objectives. If we can somehow indicate to it some kind of criterion of "meaningfulness" in a solution, it might be able to search through potential solution spaces to find solutions that are more meaningful to us, ones that we have overlooked. We somehow need to be able to deprecate epicycle-like solutions in its search criteria. There are presumably something like convergence criteria in such things, and whether there is some kind of impediment there, I don't know.
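To make the "deprecate epicycle-like solutions" idea concrete, here is a toy sketch of one standard way to do it: score each candidate model by its fit error plus a penalty on its complexity, so a padded "epicycle" version of the same law loses to the simple one. The data, the candidate models, and the penalty weight are all invented for illustration.

```python
import math

# Synthetic data from a simple "true" law: y = 2x + 1
xs = [0.1 * i for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]

# Each candidate: (description, function, complexity = number of terms)
candidates = [
    ("y = 2x + 1",             lambda x: 2.0 * x + 1.0,                      2),
    ("y = 3x",                 lambda x: 3.0 * x,                            1),
    ("y = 2x + 1 + 0*sin(x)",  lambda x: 2.0 * x + 1.0 + 0.0 * math.sin(x), 3),
]

def score(f, complexity, lam=0.1):
    # Mean squared fit error plus a "transparency" penalty on complexity
    mse = sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return mse + lam * complexity

best = min(candidates, key=lambda c: score(c[1], c[2]))
print(best[0])  # the simple exact law beats its padded equivalent
```

The penalty weight plays the role of the "meaningfulness" criterion: set it to zero and the padded model ties with the simple one; set it too high and the too-simple model wins despite fitting badly.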

There are these AI data-analysis systems out there, and having now come up against people using them, I can understand that they are often useful to people who just want a pretty good prediction and don't care very much how it works. For example, the people doing the stuff that chooses what website content to present to us. And for the games-playing engines, we have no expectation that we can deduce playing rules from them. But for those of us who need to understand what is influencing what, these non-transparent calculators are pretty useless. So I hope that the people who devise these things will recognise this shortcoming and try to find systems that are more useful in these situations; there would appear to be a desire for it.

I'm reminded of an earlier phase of the development of backgammon engines. Whilst the engines were already, in most circumstances, playing better backgammon than any living player, some people discovered that they could beat the computer by tempting it into weird positions that never occurred in human games. You couldn't do that with human opponents, because to even moderately competent human players these were such weird and bad positions that they strenuously avoided going anywhere near them. Lacking such human prejudices, and having no prior experience of those positions, because they never occurred in its training set, the computer's self-determined position evaluation function misevaluated them as good positions, due to certain features that in broader cases are advantageous. As we can see, the computer couldn't "think" and see what was obvious to even a moderately competent backgammon player. But as soon as a few such weird positions were included in its training set, it quickly adjusted its position evaluation to recognise them as highly undesirable, and it then avoided them as strenuously as humans did. And now computers spank us at backgammon, and we know no tactics for beating them, short of weighting the dice. It's not quite the same thing, but it indicates how we can improve the optimisation procedure so that the computers can optimise better.
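The backgammon story amounts to training-set augmentation. A toy nearest-neighbour "evaluator" shows the mechanism in miniature; the positions, features, and labels here are entirely made up, with each position reduced to two invented numerical features.

```python
def evaluate(position, training_set):
    # 1-NN: score a position by the label of its nearest training example
    nearest = min(
        training_set,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], position)),
    )
    return nearest[1]

# Positions from "normal" human games, with their true labels
normal_games = [((5, 1), "good"), ((4, 2), "good"), ((1, 5), "bad"), ((0, 6), "bad")]

# A weird position that shares surface features with the "good" examples
weird = (6, 0)

print(evaluate(weird, normal_games))                      # misevaluated as "good"
print(evaluate(weird, normal_games + [((6, 1), "bad")]))  # "bad" once a similar
                                                          # weird position is added
```

Adding a single correctly-labelled neighbour to the training set flips the evaluation, which is essentially what retraining the backgammon engine on the weird positions did, at much larger scale.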

User avatar
dyqik
Princess POW
Posts: 7894
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI and science and art

Post by dyqik » Wed May 22, 2024 11:31 am

There's lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.
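For what it's worth, PCA as a data-exploration tool fits in a few lines via the SVD. This sketch uses invented synthetic 2-D data scattered along a line, so the first principal component should capture nearly all the variance.

```python
import numpy as np

# Synthetic data: points along the line y = 2x with a little noise
rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, 2.0 * t + 0.05 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)                       # centre the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)               # fraction of variance per component

print(Vt[0])         # first principal direction, close to ±[1, 2]/sqrt(5)
print(explained[0])  # close to 1: one component dominates
```

The principal directions are interpretable here only because we built the data from a known law; on real exploratory data they are exactly the kind of composite variables being complained about above.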

IvanV
Stummy Beige
Posts: 2967
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 12:32 pm

dyqik wrote:
Wed May 22, 2024 11:31 am
There's lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.
I have read through my posts again, and I don't see where you got that thought from. I have suggested only that it could search for patterns in data.

Yes, there are older statistical tools that do that, but the world moves on and we look for better mathematical tools.

I started from suggesting that AI might look for patterns in data and find a mathematical description that underlies such patterns. We have discovered it can do that, but currently in non-transparent ways that are not useful. Such non-transparency is unfortunately a common feature of how present AI programs explore data. I suggested we could perhaps prompt it to look for more transparent descriptions.

User avatar
bjn
Stummy Beige
Posts: 2996
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI and science and art

Post by bjn » Wed May 22, 2024 9:02 pm

dyqik wrote:
Wed May 22, 2024 11:31 am
There's lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.
PCA is a core machine learning technique. Used for all sorts.

/reply-guy

IvanV
Stummy Beige
Posts: 2967
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Thu May 30, 2024 11:24 am

A lecture to be given later this year in Oxford by Terence Tao (mathematician, Fields Medal, at UCLA) on the uses of AI in science and maths was just advertised to me. So I googled what he might have been up to in that space. I found that he sits on the President's Council of Advisors on Science and Technology (PCAST).

PCAST recently published a report on the uses of AI to advance science. I haven't looked through the report itself yet, but the press release is saying similar things to what I said: look for patterns in large amounts of data, identify candidate solutions to pressing research problems, a tool for, rather than a replacement of, scientists. But we know what press releases are like, and maybe someone wants to evaluate the full report. At the same time, as I said above, I remain painfully aware that the AI data analysis I have seen is not very useful for these purposes. It is me wittering from a position of ignorance when I suggest that it seems possible that models could be trained in what is "useful" output and head towards that. I can kind of imagine that is possible, but maybe an expert can tell me otherwise.

User avatar
discovolante
Stummy Beige
Posts: 4160
Joined: Fri Oct 11, 2019 5:10 pm

Re: AI and science and art

Post by discovolante » Wed Jul 10, 2024 6:49 pm

I forgot to mention (I think?) that at my last writing group I wrote 500 words then got ChatGPT to do the second 500, without telling anyone. Actually I used Claude first but it was awful. Anyway the AI part was written entirely competently but was very cheesy and overly polemical, like bog-standard crap fiction I suppose. The comments I got were quite positive apart from a couple of remarks about 'hackneyed' phrases, and the polemical aspects. When I did the big reveal during the meeting the responses were quite interesting, but my favourite bit was that one of my friends said 'yeah actually...it's sh.t!' Which connected very nicely with another person's piece, where they argued that nobody listens to music (or consumes any art) blindly and we're heavily influenced by who we think produced it. I did prod a bit to see if they had just been being polite before they knew, but they insisted they weren't. Case closed I reckon.

ETA I probably should have said I used the free versions of ChatGPT and Claude; I was going to pay for the most up-to-date ones but my natural stinginess kicked in and I decided not to bother.
To defy the laws of tradition is a crusade only of the brave.

IvanV
Stummy Beige
Posts: 2967
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Thu Jul 11, 2024 9:57 am

discovolante wrote:
Wed Jul 10, 2024 6:49 pm
....they argued that nobody listens to music (or consumes any art) blindly and we're heavily influenced by who we think produced it...
In the case of wine, experiments verify this. Though the information used in the classic experiment is price, which is correlated with producer reputation.

There is also a classic experiment on perception of musical performances, which indicates we are substantially affected by what we see in assessing what we hear.

In the classic wine experiment, experts are given samples of wine for tasting. They are given some basic information on each wine, including its price, and asked to rate it. Without informing them, the flight of samples includes duplicate samples of several of the wines, but with different prices attributed in the information. It is normal for experts to rate 100+ samples in a session, and many wines of the same type are often quite similar, so you can slip in these duplicates without them being likely to be noticed. The analysis found a high correlation between the price information and the ratings given for the same wine.

The musical experiment related to the judging of musical competitions. Musical experts were asked to watch/listen to recordings of a previous competition, of which they did not know the results, and asked how they would rate the performers. Two cases were compared. In one case, they were given the audio with no visuals. In the other, they were given just the visuals with no sound - it seems odd, but they were asked to judge from what they were seeing what they might be hearing. The latter correlated much better with the marks actually given by the judges in the competition, indicating that those judges were substantially influenced by what they saw, even though they were supposed to be judging what they heard.

User avatar
Grumble
Light of Blast
Posts: 4970
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI and science and art

Post by Grumble » Thu Jul 11, 2024 2:06 pm

In brass band competitions the judges are routinely blinded: they get a copy of the music and sit behind a curtain. Their scores rarely correlate fully with the comments of the audience, who are free to watch as normal.
where once I used to scintillate
now I sin till ten past three

monkey
After Pie
Posts: 1981
Joined: Wed Nov 13, 2019 5:10 pm

Re: AI and science and art

Post by monkey » Thu Jul 11, 2024 2:50 pm

The visual part of a performance is obviously important; stage presence is a thing and contributes to the enjoyment of live music. I've seen many bands I like but been bored at their gigs, and I have also liked bands live but not on recording. The decoupling of the visual and the aural seems silly to me: both are part of the whole, and you should be judging the whole performance.

ETA: I should have added that this is obviously genre dependent. I'm sure people who listen to orchestras or brass bands look for different things to what I do with my preferred music.

Post Reply