
Re: AI achieves goals in dangerous way

Posted: Sun Sep 17, 2023 6:45 pm
by Grumble
All the old sci-fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina (if that was based on a book I haven’t read it), but that was more a deliberate deception than this casual ignorance of logic and truth.

Re: AI achieves goals in dangerous way

Posted: Sun Sep 17, 2023 7:22 pm
by bjn
Grumble wrote:
Sun Sep 17, 2023 6:45 pm
All the old sci-fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina (if that was based on a book I haven’t read it), but that was more a deliberate deception than this casual ignorance of logic and truth.
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.

Re: AI achieves goals in dangerous way

Posted: Sun Sep 17, 2023 7:38 pm
by Grumble
bjn wrote:
Sun Sep 17, 2023 7:22 pm
Grumble wrote:
Sun Sep 17, 2023 6:45 pm
All the old sci-fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina (if that was based on a book I haven’t read it), but that was more a deliberate deception than this casual ignorance of logic and truth.
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.
I suppose “true AI” might be possible to make logical and able to discern truth, but what I don’t see in fiction is imperfect AI.

Re: AI achieves goals in dangerous way

Posted: Sun Sep 17, 2023 7:50 pm
by bjn
AI is a nebulous term, which is why it’s nebulous in fiction. Naming the tech being used more rigorously makes it less like voodoo, so “large language model” is much more appropriate than “AI”.

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 12:20 am
by dyqik
bjn wrote:
Sun Sep 17, 2023 7:50 pm
AI is a nebulous term, which is why it’s nebulous in fiction. Naming the tech being used more rigorously makes it less like voodoo, so “large language model” is much more appropriate than “AI”.
"Intelligence" is a nebulous term.

(So is "artificial", but probably to a lesser degree. And some of the more robust definitions would rely on a definition of intelligence)

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 9:33 am
by JQH
Allo V Psycho wrote:
Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?

A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%

Correct answer: C) 37.5%

Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.

Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
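The correct calculation is simple enough to sketch (a minimal Python illustration; the helper name is mine, not from any of the chatbot output quoted above):

```python
def percent_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Percentage of the initial dose remaining after exponential decay."""
    n_half_lives = hours_elapsed / half_life_hours
    return 100 * 0.5 ** n_half_lives

# 12 hours at a 4-hour half-life is 3 half-lives:
print(percent_remaining(12, 4))  # 12.5 -- not 37.5
```

There is no step in the real calculation that could ever produce 37.5%.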

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 9:39 am
by jimbob
JQH wrote:
Mon Sep 18, 2023 9:33 am
Allo V Psycho wrote:
Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?

A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%

Correct answer: C) 37.5%

Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.

Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
Because it is 3 half-lives, so the chatbot then multiplied 12.5% by 3 to get 3/8.

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 9:47 am
by jimbob
At least that's how I assume it happened.

The explanation says "three half-lives", and three of something gets multiplied by three.

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 6:49 pm
by Holylol
>How many sunflower plants does it take to make 1 l of sunflower oil

ChatGPT

The number of sunflower plants required to produce one liter of sunflower oil can vary based on several factors such as the sunflower variety, growing conditions, and extraction efficiency.

However, let's consider the following general calculation:

- On average, a sunflower head can have up to 1,000 to 2,000 seeds.
- Each seed yields about 40% of its weight as oil. Let's assume each seed weighs about 50 milligrams (0.05 grams).
- So, the amount of oil per seed would be 0.02 grams (0.05g * 40%).
- One liter of oil weighs approximately 920 grams.
- Therefore, you would need about 46,000,000 seeds to produce one liter of oil (920,000mg / 20mg).
- Assuming an average of 1,500 seeds per plant (a middle value between 1,000 and 2,000), you would need approximately 30,667 sunflower plants (46,000,000 / 1,500) for one liter of oil.

Again, these are rough calculations and actual figures can vary. It's also worth noting that commercially grown sunflowers and extraction methods can yield more oil than a typical home garden.
Yes, 920,000mg / 20mg = 46,000,000
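Redoing the bot's sums with its own stated assumptions makes the slip obvious (a quick sketch; the variable names are mine, and no claims are made about real-world sunflower yields):

```python
# ChatGPT's own assumptions, with the arithmetic done correctly.
oil_per_seed_mg = 50 * 0.40          # 50 mg seed at 40% oil -> 20 mg oil
litre_of_oil_mg = 920 * 1000         # ~920 g per litre -> 920,000 mg

seeds_needed = litre_of_oil_mg / oil_per_seed_mg
plants_needed = seeds_needed / 1500  # ~1,500 seeds per plant

print(seeds_needed)   # 46000.0 -- 46 thousand seeds, not 46 million
print(plants_needed)  # ~31 plants, not ~30,667
```

The bot's division was off by a factor of a thousand, and every figure downstream inherited the error.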

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 6:53 pm
by Grumble
All of these examples, including mine, are mathematical problems. We can fairly convincingly state that ChatGPT is sh.t at maths.

Re: AI achieves goals in dangerous way

Posted: Mon Sep 18, 2023 6:54 pm
by Holylol
Which is a bit troubling for... a thing on the computer

Re: AI achieves goals in dangerous way

Posted: Tue Sep 19, 2023 6:35 am
by bjn
Don’t call it AI, call it SALAMI.


https://blog.quintarelli.it/2019/11/let ... es-salami/

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 9:50 am
by Allo V Psycho
The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.

Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.

To calculate this, we can use the following formula:

Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.

Therefore, the percentage of drug remaining is:

(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.

It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
Sigh.

The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 12:53 pm
by Martin Y
The particularly insidious thing about that last one is that it describes the correct calculation but then somehow manages to mess that up and do a different calculation, using (1/2)^4 instead of (1/2)^3 as it said it should.
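The mix-up is easy to reproduce (a quick Python check, purely illustrative):

```python
# Bard quoted the right formula but evaluated the wrong exponent.
stated_formula = 0.5 ** 3 * 100  # 3 half-lives, as its own working says: 12.5%
printed_answer = 0.5 ** 4 * 100  # 4 half-lives: 6.25%, the figure it gave

print(stated_formula, printed_answer)  # 12.5 6.25
```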

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 1:05 pm
by monkey
I like how this thread has gone from "AI is going to kill us!" to "AI can't maths."

I suppose "AI can't maths" might kill us, possibly even in interesting ways, but I'd question the intelligence of whoever it was who put a language model in charge of the dangerous maths.

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 1:12 pm
by Martin Y
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 3:02 pm
by jimbob
Allo V Psycho wrote:
Wed Sep 20, 2023 9:50 am
The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.

Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.

To calculate this, we can use the following formula:

Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.

Therefore, the percentage of drug remaining is:

(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.

It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
Sigh.

The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.
I tried Bing Chat with your first question, and it gave the correct answer.

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 4:49 pm
by Gfamily
Martin Y wrote:
Wed Sep 20, 2023 1:12 pm
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".
When I was a science teacher, one of the classes included a task to ask the pupils how they might separate a mix of sawdust, sand and iron filings into its separate components.
The most* memorable response was "Get a computer to do it".

I guess that AI could well have come up with the same response.

* the only memorable response to be honest

Re: AI achieves goals in dangerous way

Posted: Wed Sep 20, 2023 7:37 pm
by dyqik
Martin Y wrote:
Wed Sep 20, 2023 1:12 pm
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".
See Jeph Jacques' Questionable Content comic for answers.

Re: AI achieves goals in dangerous way

Posted: Wed Feb 28, 2024 8:03 am
by Woodchopper
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... 103498070Ø

The article focuses on Twitter but applies to the rest of the internet. It seems like we are seeing the unfolding of one of the predictions about AI: that it would lead to a torrent of machine-generated text which can be produced almost for free.

As much as I dislike him, Musk might be correct. Perhaps the only way to make some online services useful in the future would be to limit use of them to verified humans.

This would obviously have implications for anyone who didn’t want to, or couldn’t, be verified.

Re: AI achieves goals in dangerous way

Posted: Wed Feb 28, 2024 8:17 am
by jaap
Woodchopper wrote:
Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070
Fixed link.

Re: AI achieves goals in dangerous way

Posted: Wed Feb 28, 2024 9:19 am
by Woodchopper
jaap wrote:
Wed Feb 28, 2024 8:17 am
Woodchopper wrote:
Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070
Fixed link.
Thanks for that