AI and references to published papers
An interesting podcast episode* from How Many Geese in which they asked an AI text generator to write a section for a podcast.
Came up with an interesting story (bio sciences and ecology related) and was able to provide references.
But - there's a catch.
Spoiler:
* Episode 'Now That's What I Call Nature' from 16 May 23.
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!
- bob sterman
- Dorkwood
- Posts: 1136
- Joined: Mon Nov 11, 2019 10:25 pm
- Location: Location Location
Re: AI and references to published papers
Not sure we need a spoiler warning? Thought this was pretty widely known??
But I'll go along with it...
Spoiler:
-
- Dorkwood
- Posts: 1511
- Joined: Mon Nov 11, 2019 1:22 pm
Re: AI and references to published papers
There are other AIs that are apparently better at this and much better at supplying real references - these are subscription-based rather than free.
We've seen it now in a few places in submissions by students. Now it could just be a case of us seeing the obvious stuff and missing the really good ones... but in the main you can spot it in the language - it's generally too polished and "highfalutin" even by undergrad standards, where they are often trying to sound "scientific" by using big words. Mainly it's the references, which are bl..dy awful: fake, or if real they are poor. One example - a student cited a reference in a blatantly ChatGPT-d paragraph, supposedly a global study about the pandemic and indoor air quality, but it was actually a 2012 study of some specific US counties.
We had a play around and asked it about different academic staff - it frequently got their area of research wrong (!) and just invented papers by them.
Another thing - they don't seem to be very good at sustaining things over length. Could possibly pass muster in something 500 words long if someone wasn't paying attention, but not so good for longer and more complex pieces.
ETA - some of my colleagues have got quite spooked and panicked by this... I'm currently more sanguine.
Re: AI and references to published papers
There's also the problem that it invents software libraries when asked to provide code for common programming tasks.
People have noticed this, and started publishing malware packages under those invented names, which get downloaded, installed and run when a programmer uses the generated code.
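One common defence against this kind of "slopsquatting" is to vet dependency names against a reviewed allowlist before installing anything, so a hallucinated package name gets caught rather than blindly pip-installed. A minimal sketch, assuming a requirements-file-style input; the allowlist contents here are illustrative, not a real security policy:

```python
# Hypothetical sketch: flag dependency names that are not on a
# reviewed allowlist, catching hallucinated or typosquatted packages
# before they are installed.

KNOWN_GOOD = {"numpy", "pandas", "requests", "scipy"}

def vet_requirements(requirements: list[str]) -> list[str]:
    """Return the requirement names that are NOT on the allowlist."""
    suspicious = []
    for line in requirements:
        # Strip environment markers and version specifiers
        # like "numpy>=1.24" down to the bare package name.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        name = name.strip().lower()
        if name and name not in KNOWN_GOOD:
            suspicious.append(name)
    return suspicious
```

For example, `vet_requirements(["numpy>=1.24", "reqeusts==2.0"])` would flag the misspelt `reqeusts`, which is exactly the kind of name an attacker would register.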
-
- Snowbonk
- Posts: 530
- Joined: Tue Nov 12, 2019 8:05 am
Re: AI and references to published papers
Yes we have seen much of the same for student reports. I am somewhat worried when students twig that if they do some additional work they can hide the AI pretty well. We are now getting training on how to set assignments that are hard to do with AI.
I received one assignment where the references were all real, if rather broad reviews, but what struck me about the reference list was that the "Accessed on" dates went all the way back to 2018, with none from 2023, for an assignment set in 2023.
The lazy students who typically plagiarise are going to leave the AI tells in their work.
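The access-date tell described above is easy to check mechanically: flag any reference whose "Accessed on" year is implausibly old for the assignment year. A minimal sketch; the reference format and field names are assumptions, not a real reference-manager schema:

```python
# Illustrative check for stale "Accessed on" dates in a reference list.
from datetime import date

def stale_access_dates(references: list[dict], assignment_year: int,
                       max_age_years: int = 1) -> list[dict]:
    """Return references accessed more than max_age_years before the assignment."""
    return [ref for ref in references
            if assignment_year - ref["accessed"].year > max_age_years]

# Toy reference list matching the case described in the post above.
refs = [
    {"title": "Broad review A", "accessed": date(2018, 3, 1)},
    {"title": "Broad review B", "accessed": date(2023, 2, 14)},
]
flagged = stale_access_dates(refs, assignment_year=2023)
```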
Here grows much rhubarb.
Re: AI and references to published papers
The best version of an AI-proof assignment I've heard of asked students to generate an AI essay in response to a prompt of their choice, and then find and explain the errors in it.
Chris Preston wrote: ↑Wed Jun 14, 2023 11:09 am
Yes we have seen much of the same for student reports. I am somewhat worried when students twig that if they do some additional work they can hide the AI pretty well. We are now getting training on how to set assignments that are hard to do with AI.
I received one assignment where the references were all real, if rather broad reviews, but what struck me about the reference list was the Accessed on dates went all the way back to 2018, with none from 2023, for an assignment set in 2023.
The lazy students who typically plagiarise are going to leave the AI tells in their work.
Re: AI and references to published papers
Current LLMs don't "hallucinate" the wrong answers and get correct answers "right"; they hallucinate everything, it's just that sometimes they hallucinate the correct answer. The best way to think of any request to an LLM is not "please tell me the answer to this question", but rather "please tell me what an answer to this question might look like". When it gets it right, it just stumbled upon it. "Stochastic parrot" is one of the better descriptions I've found. Some things that are very well known (eg: height of Mount Everest) it is very likely to get right; the more obscure it gets, the less likely. It all depends on the input data that was used to train the LLM.
Apparently there's an LLM that used 4chan as input. Unsurprisingly, its responses are truly horrendous.
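The "stochastic parrot" point above can be sketched in a few lines: the model always samples the next token from a probability distribution, whether or not the most likely continuation happens to be factually correct. The vocabulary and probabilities here are made up purely for illustration:

```python
# Toy illustration: even a "well known" fact is only the most probable
# sample, and fabricated alternatives are still drawn occasionally.
import math
import random

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab: list[str], logits: list[float],
                      rng: random.Random) -> str:
    """Sample one token; unlikely (possibly wrong) tokens can still appear."""
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# "8,849 m" (Everest's height) gets most of the probability mass,
# but the made-up alternatives are sampled now and then.
vocab = ["8,849 m", "8,900 m", "7,200 m"]
logits = [4.0, 1.0, 0.5]
```

Over many draws the correct answer dominates, but nothing in the mechanism distinguishes "true" from "plausible-looking" - which is the parrot's whole problem.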
-
- Snowbonk
- Posts: 530
- Joined: Tue Nov 12, 2019 8:05 am
Re: AI and references to published papers
The Junior Ps were here the other evening and were using ChatGPT to get recipes using strange collections of ingredients. At one point they decided that MSG had to be an ingredient, so asked for a recipe using a cup of MSG. ChatGPT replied that it couldn't do that, because it would be too unhealthy.
Here grows much rhubarb.
- Woodchopper
- Princess POW
- Posts: 7084
- Joined: Sat Oct 12, 2019 9:05 am
Re: AI and references to published papers
Yes, I've played around with GPT a bit. It would be foolish to copy and paste. But as far as I can see, someone who is intelligent but lazy could spend a couple of hours editing a ChatGPT output rather than a couple of days reading material and writing an essay from scratch. Problem is that the lazy one won't understand the subject nearly as well, if at all.
Chris Preston wrote: ↑Wed Jun 14, 2023 11:09 am
Yes we have seen much of the same for student reports. I am somewhat worried when students twig that if they do some additional work they can hide the AI pretty well. We are now getting training on how to set assignments that are hard to do with AI.
- Woodchopper
- Princess POW
- Posts: 7084
- Joined: Sat Oct 12, 2019 9:05 am
Re: AI and references to published papers
For example, what ChatGPT seems to be good at is structuring an argument and listing the most important points. If someone is writing from scratch then getting that structure right can take some time, and the author needs to first understand the subject.
Woodchopper wrote: ↑Wed Jun 14, 2023 12:08 pm
Yes, I've played around with GPT a bit. It would be foolish to copy and paste. But as far as I can see, someone who is intelligent but lazy could spend a couple of hours editing a ChatGPT output rather than a couple of days reading material and writing an essay from scratch. Problem is that the lazy one won't understand the subject nearly as well, if at all.
Chris Preston wrote: ↑Wed Jun 14, 2023 11:09 am
Yes we have seen much of the same for student reports. I am somewhat worried when students twig that if they do some additional work they can hide the AI pretty well. We are now getting training on how to set assignments that are hard to do with AI.
Re: AI and references to published papers
A colleague I was talking to about the issue found this. Although it's aimed at use in software dev, I can't help thinking it would apply more broadly.
dyqik wrote: ↑Wed Jun 14, 2023 11:00 am
There's also the problem that it invents software libraries when asked to provide code for common programming tasks.
People have noticed this, and started generating malware packages with those names, that get automatically downloaded, installed and run when the code is implemented by a programmer.
The poisoning of ChatGPT