AI achieves goals in dangerous way

Discussions about serious topics, for serious people
User avatar
Grumble
Light of Blast
Posts: 4776
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Sun Sep 17, 2023 6:45 pm

All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also it was more a deliberate deception than this casual ignorance of logic and truth.
where once I used to scintillate
now I sin till ten past three

User avatar
bjn
Stummy Beige
Posts: 2934
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI achieves goals in dangerous way

Post by bjn » Sun Sep 17, 2023 7:22 pm

Grumble wrote:
Sun Sep 17, 2023 6:45 pm
All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also it was more a deliberate deception than this casual ignorance of logic and truth.
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.

User avatar
Grumble
Light of Blast
Posts: 4776
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Sun Sep 17, 2023 7:38 pm

bjn wrote:
Sun Sep 17, 2023 7:22 pm
Grumble wrote:
Sun Sep 17, 2023 6:45 pm
All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also it was more a deliberate deception than this casual ignorance of logic and truth.
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.
I suppose it might be possible to make a “true AI” logical and able to discern truth, but what I don’t see in fiction is imperfect AI.
where once I used to scintillate
now I sin till ten past three

User avatar
bjn
Stummy Beige
Posts: 2934
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI achieves goals in dangerous way

Post by bjn » Sun Sep 17, 2023 7:50 pm

AI is a nebulous term, which is why it’s nebulous in fiction. Naming the tech being used much more rigorously makes it less voodoo. So Large Language Model is much more appropriate than AI.

User avatar
dyqik
Princess POW
Posts: 7571
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI achieves goals in dangerous way

Post by dyqik » Mon Sep 18, 2023 12:20 am

bjn wrote:
Sun Sep 17, 2023 7:50 pm
AI is a nebulous term, which is why it’s nebulous in fiction. Naming the tech being used much more rigorously makes it less voodoo. So Large Language Model is much more appropriate than AI.
"Intelligence" is a nebulous term.

(So is "artificial", but probably to a lesser degree. And some of the more robust definitions would rely on a definition of intelligence)

User avatar
JQH
After Pie
Posts: 2146
Joined: Mon Nov 11, 2019 3:30 pm
Location: Sar Flandan

Re: AI achieves goals in dangerous way

Post by JQH » Mon Sep 18, 2023 9:33 am

Allo V Psycho wrote:
Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?

A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%

Correct answer: C) 37.5%

Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.

Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
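For what it's worth, the arithmetic the chatbot described (and then fumbled) is a one-liner; a quick Python sketch of the standard half-life decay rule confirms the 12.5%:

```python
# Fraction remaining after n half-lives is (1/2) ** n
half_life_h = 4
elapsed_h = 12
n = elapsed_h / half_life_h           # 3 half-lives
remaining_pct = (0.5 ** n) * 100      # 50% -> 25% -> 12.5%
print(remaining_pct)                  # 12.5, matching the model's own explanation
```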
And remember that if you botch the exit, the carnival of reaction may be coming to a town near you.

Fintan O'Toole

User avatar
jimbob
Light of Blast
Posts: 5301
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Mon Sep 18, 2023 9:39 am

JQH wrote:
Mon Sep 18, 2023 9:33 am
Allo V Psycho wrote:
Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?

A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%

Correct answer: C) 37.5%

Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.

Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
Because it is 3 half-lives, the chatbot then multiplies the 12.5% by 3 to get 3/8
Have you considered stupidity as an explanation

User avatar
jimbob
Light of Blast
Posts: 5301
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Mon Sep 18, 2023 9:47 am

At least that's how I assume it happened.

The chatbot saw "three half-lives", and three of something means multiply by three.
Have you considered stupidity as an explanation

Holylol
Sindis Poop
Posts: 79
Joined: Tue Nov 19, 2019 10:14 am

Re: AI achieves goals in dangerous way

Post by Holylol » Mon Sep 18, 2023 6:49 pm

>How many sunflower plants does it take to make 1 l of sunflower oil

ChatGPT

The number of sunflower plants required to produce one liter of sunflower oil can vary based on several factors such as the sunflower variety, growing conditions, and extraction efficiency.

However, let's consider the following general calculation:

- On average, a sunflower head can have up to 1,000 to 2,000 seeds.
- Each seed yields about 40% of its weight as oil. Let's assume each seed weighs about 50 milligrams (0.05 grams).
- So, the amount of oil per seed would be 0.02 grams (0.05g * 40%).
- One liter of oil weighs approximately 920 grams.
- Therefore, you would need about 46,000,000 seeds to produce one liter of oil (920,000mg / 20mg).
- Assuming an average of 1,500 seeds per plant (a middle value between 1,000 and 2,000), you would need approximately 30,667 sunflower plants (46,000,000 / 1,500) for one liter of oil.

Again, these are rough calculations and actual figures can vary. It's also worth noting that commercially grown sunflowers and extraction methods can yield more oil than a typical home garden.
Yes, 920,000mg / 20mg = 46,000,000
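Rerunning the chatbot's own numbers (a quick sketch using its assumed 20 mg of oil per seed and 1,500 seeds per plant) shows the division is off by a factor of a thousand, and the plant count with it:

```python
oil_needed_mg = 920_000        # ~920 g per litre of oil, as the bot stated
oil_per_seed_mg = 20           # 50 mg seed at 40% oil content

seeds_needed = oil_needed_mg // oil_per_seed_mg    # 46,000 seeds -- not 46,000,000
plants_needed = seeds_needed / 1_500               # ~30.7, call it 31 plants -- not 30,667
print(seeds_needed, round(plants_needed))          # 46000 31
```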

User avatar
Grumble
Light of Blast
Posts: 4776
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Mon Sep 18, 2023 6:53 pm

All of these examples, including mine, are mathematical problems. We can fairly convincingly state that ChatGPT is sh.t at maths.
where once I used to scintillate
now I sin till ten past three

Holylol
Sindis Poop
Posts: 79
Joined: Tue Nov 19, 2019 10:14 am

Re: AI achieves goals in dangerous way

Post by Holylol » Mon Sep 18, 2023 6:54 pm

Which is a bit troubling for... a thing on the computer

User avatar
bjn
Stummy Beige
Posts: 2934
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI achieves goals in dangerous way

Post by bjn » Tue Sep 19, 2023 6:35 am

Don’t call it AI, call it SALAMI.


https://blog.quintarelli.it/2019/11/let ... es-salami/

Allo V Psycho
Catbabel
Posts: 737
Joined: Sat Nov 16, 2019 8:18 am

Re: AI achieves goals in dangerous way

Post by Allo V Psycho » Wed Sep 20, 2023 9:50 am

The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.

Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.

To calculate this, we can use the following formula:

Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.

Therefore, the percentage of drug remaining is:

(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.

It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
Sigh.

The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.
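Evaluating Bard's own formula as written (a quick Python check) gives 12.5%; the 6.25% it reported corresponds to four half-lives, not the three it had just computed:

```python
n = 12 / 4                        # 3 half-lives, as Bard itself calculated
print((0.5 ** n) * 100)           # 12.5 -- the formula's actual result
print((0.5 ** (n + 1)) * 100)     # 6.25 -- what Bard reported instead
```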

User avatar
Martin Y
Stummy Beige
Posts: 3088
Joined: Mon Nov 11, 2019 1:08 pm

Re: AI achieves goals in dangerous way

Post by Martin Y » Wed Sep 20, 2023 12:53 pm

The particularly insidious thing about that last one is that it describes the correct calculation but then somehow manages to mess that up and do a different calculation, using (1/2)^4 instead of (1/2)^3 as it said it should.

monkey
After Pie
Posts: 1909
Joined: Wed Nov 13, 2019 5:10 pm

Re: AI achieves goals in dangerous way

Post by monkey » Wed Sep 20, 2023 1:05 pm

I like how this thread has gone from "AI is going to kill us!" to "AI can't maths."

I suppose "AI can't maths" might kill us, possibly even in interesting ways, but I'd question the intelligence of whoever it was who put a language model in charge of the dangerous maths.

User avatar
Martin Y
Stummy Beige
Posts: 3088
Joined: Mon Nov 11, 2019 1:08 pm

Re: AI achieves goals in dangerous way

Post by Martin Y » Wed Sep 20, 2023 1:12 pm

I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".

User avatar
jimbob
Light of Blast
Posts: 5301
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Wed Sep 20, 2023 3:02 pm

Allo V Psycho wrote:
Wed Sep 20, 2023 9:50 am
The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.

Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.

To calculate this, we can use the following formula:

Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.

Therefore, the percentage of drug remaining is:

(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.

It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
Sigh.

The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.
I tried bing chat, with your first question and it gave the correct answer
Have you considered stupidity as an explanation

User avatar
Gfamily
Light of Blast
Posts: 5229
Joined: Mon Nov 11, 2019 1:00 pm
Location: NW England

Re: AI achieves goals in dangerous way

Post by Gfamily » Wed Sep 20, 2023 4:49 pm

Martin Y wrote:
Wed Sep 20, 2023 1:12 pm
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".
When I was a science teacher, one of the classes included a task to ask the pupils how they might separate a mix of sawdust, sand and iron filings into its separate components.
The most* memorable response was "Get a computer to do it".

I guess that AI could well have come up with the same response

* the only memorable response to be honest
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!

User avatar
dyqik
Princess POW
Posts: 7571
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI achieves goals in dangerous way

Post by dyqik » Wed Sep 20, 2023 7:37 pm

Martin Y wrote:
Wed Sep 20, 2023 1:12 pm
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".
See Jeph Jacques' Questionable Content comic for answers.

User avatar
Woodchopper
Princess POW
Posts: 7082
Joined: Sat Oct 12, 2019 9:05 am

Re: AI achieves goals in dangerous way

Post by Woodchopper » Wed Feb 28, 2024 8:03 am

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... 103498070Ø

The article focuses upon Twitter but applies to the rest of the internet. It seems like we are seeing the unfolding of one of the predictions about AI: that it would lead to a torrent of machine-generated text which can be produced almost for free.

As much as I dislike him, Musk might be correct. Perhaps the only way to make some online services useful in the future would be to limit use of them to verified humans.

This would obviously have implications for anyone who didn’t want to, or couldn’t, be verified.

User avatar
jaap
Catbabel
Posts: 680
Joined: Mon Nov 11, 2019 2:05 pm
Location: Netherlands
Contact:

Re: AI achieves goals in dangerous way

Post by jaap » Wed Feb 28, 2024 8:17 am

Woodchopper wrote:
Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070
Fixed link.

User avatar
Woodchopper
Princess POW
Posts: 7082
Joined: Sat Oct 12, 2019 9:05 am

Re: AI achieves goals in dangerous way

Post by Woodchopper » Wed Feb 28, 2024 9:19 am

jaap wrote:
Wed Feb 28, 2024 8:17 am
Woodchopper wrote:
Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070
Fixed link.
Thanks for that
