AI achieves goals in dangerous way
Re: AI achieves goals in dangerous way
All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also that was more a deliberate deception than this casual ignorance of logic and truth.
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.
Grumble wrote: ↑Sun Sep 17, 2023 6:45 pm
All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also that was more a deliberate deception than this casual ignorance of logic and truth.
Re: AI achieves goals in dangerous way
I suppose “true AI” might be possible to make logical and able to discern truth, but what I don’t see in fiction is imperfect AI.
bjn wrote: ↑Sun Sep 17, 2023 7:22 pm
The way LLMs work means that there are no error bars. All answers are stochastic fictions that look like the answer to the question. It’s just lucky that some happen to agree with reality while others are obvious nonsense.
Grumble wrote: ↑Sun Sep 17, 2023 6:45 pm
All the old Sci-Fi tropes about AI being unable to lie are not going to stand, and I’d love to read a book that deals with this. I feel like it was touched on in the film Ex Machina; if that was based on a book I haven’t read it, but also that was more a deliberate deception than this casual ignorance of logic and truth.
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
AI is a nebulous term, which is why it’s nebulous in fiction. Naming the tech being used much more rigorously makes it less voodoo. So Large Language Model is much more appropriate than AI.
Re: AI achieves goals in dangerous way
"Intelligence" is a nebulous term.
(So is "artificial", but probably to a lesser degree. And some of the more robust definitions would rely on a definition of intelligence)
Re: AI achieves goals in dangerous way
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
Allo V Psycho wrote: ↑Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%
Correct answer: C) 37.5%
Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.
Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
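The explanation's own steps are easy to verify; a couple of lines of Python give the answer the explanation correctly derives (A, 12.5%) rather than the one it then picks:

```python
# Fraction of a dose remaining after n half-lives is (1/2)**n
half_life_hours = 4
elapsed_hours = 12
n = elapsed_hours / half_life_hours   # 3 half-lives
remaining_pct = 0.5 ** n * 100
print(remaining_pct)                  # 12.5, i.e. answer A, not C
```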
And remember that if you botch the exit, the carnival of reaction may be coming to a town near you.
Fintan O'Toole
Re: AI achieves goals in dangerous way
Because it is 3 half lives, so the chatbot then multiplies it by 3 to get 3/8.
JQH wrote: ↑Mon Sep 18, 2023 9:33 am
You've just explained that after three half lives 12.5% remains. So how does it suddenly become 37.5%?
Allo V Psycho wrote: ↑Sun Sep 17, 2023 5:05 pm
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%
Correct answer: C) 37.5%
Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.
Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
It saw "three half lives", and three of something gets multiplied by three. At least that's how I assume it happened.
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
Yes, 920,000mg / 20mg = 46,000,000.
> How many sunflower plants does it take to make 1 l of sunflower oil
ChatGPT
The number of sunflower plants required to produce one liter of sunflower oil can vary based on several factors such as the sunflower variety, growing conditions, and extraction efficiency.
However, let's consider the following general calculation:
- On average, a sunflower head can have up to 1,000 to 2,000 seeds.
- Each seed yields about 40% of its weight as oil. Let's assume each seed weighs about 50 milligrams (0.05 grams).
- So, the amount of oil per seed would be 0.02 grams (0.05g * 40%).
- One liter of oil weighs approximately 920 grams.
- Therefore, you would need about 46,000,000 seeds to produce one liter of oil (920,000mg / 20mg).
- Assuming an average of 1,500 seeds per plant (a middle value between 1,000 and 2,000), you would need approximately 30,667 sunflower plants (46,000,000 / 1,500) for one liter of oil.
Again, these are rough calculations and actual figures can vary. It's also worth noting that commercially grown sunflowers and extraction methods can yield more oil than a typical home garden.
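For what it's worth, redoing the same arithmetic with ChatGPT's own assumed figures shows the division step is off by a factor of 1,000:

```python
# Sunflower-oil estimate using the figures quoted above
seed_weight_g = 0.05     # assumed 50 mg per seed
oil_fraction = 0.40      # 40% of seed weight as oil
seeds_per_plant = 1500   # assumed midpoint of 1,000-2,000
litre_of_oil_g = 920     # approximate mass of 1 L of sunflower oil

oil_per_seed_g = seed_weight_g * oil_fraction    # 0.02 g
seeds_needed = litre_of_oil_g / oil_per_seed_g   # 46,000 seeds, not 46,000,000
plants_needed = seeds_needed / seeds_per_plant   # roughly 31 plants, not 30,667
print(round(seeds_needed), round(plants_needed)) # 46000 31
```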
Re: AI achieves goals in dangerous way
All of these examples, including mine, are mathematical problems. We can fairly convincingly state that ChatGPT is sh.t at maths.
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
Which is a bit troubling for... a thing on the computer
-
- Catbabel
- Posts: 753
- Joined: Sat Nov 16, 2019 8:18 am
Re: AI achieves goals in dangerous way
The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.
Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
Sigh.
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.
To calculate this, we can use the following formula:
Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.
Therefore, the percentage of drug remaining is:
(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.
It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.
Re: AI achieves goals in dangerous way
The particularly insidious thing about that last one is that it describes the correct calculation but then somehow manages to mess that up and do a different calculation, using (1/2)^4 instead of (1/2)^3 as it said it should.
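A two-line check makes the mismatch concrete (assuming the exponent slip described above):

```python
stated = 0.5 ** 3 * 100    # the formula Bard wrote down: 12.5
produced = 0.5 ** 4 * 100  # the number Bard actually reported: 6.25
print(stated, produced)    # 12.5 6.25
```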
Re: AI achieves goals in dangerous way
I like how this thread has gone from "AI is going to kill us!" to "AI can't maths."
I suppose "AI can't maths" might kill us, possibly even in interesting ways, but I'd question the intelligence of whoever it was who put a language model in charge of the dangerous maths.
Re: AI achieves goals in dangerous way
I think it was over on ISF (but I can't find the quote just now as that forum is hiding behind a DNS error again for some reason) someone remarked that SF versions of AI had never seemed to consider the idea "what if a computer was stupid?".
Re: AI achieves goals in dangerous way
I tried Bing chat with your first question and it gave the correct answer.
Allo V Psycho wrote: ↑Wed Sep 20, 2023 9:50 am
The previous drug dose calculation I posted was from using ChatGPT. I thought I would try Bard to see if it was any better.
Prompt: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
Sigh.
If a drug has a half-life of 4 hours, then after 12 hours, the percentage of the initial dose that will remain in the system is 6.25%.
To calculate this, we can use the following formula:
Percentage of drug remaining = (1/2)^(number of half-lives) * 100
In this case, the number of half-lives is 12 hours / 4 hours/half-life = 3 half-lives.
Therefore, the percentage of drug remaining is:
(1/2)^3 * 100 = 6.25%
This means that after 12 hours, only 6.25% of the initial dose of the drug will remain in the system.
It is important to note that this is just a general calculation. The actual percentage of drug remaining in the system after a certain amount of time will vary depending on a number of factors, such as the individual's metabolism, the route of administration of the drug, and whether or not the drug is interacting with any other medications
The thing is, I couldn't swear that there won't be a harried Foundation doctor out there somewhere actually using this, in circumstances where there may not be a proper pharmacist to put them straight.
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
When I was a science teacher, one of the classes included a task to ask the pupils how they might separate a mix of sawdust, sand and iron filings into its separate components.
The most* memorable response was "Get a computer to do it".
I guess that AI could well have come up with the same response.
* the only memorable response to be honest
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!
Re: AI achieves goals in dangerous way
See Jeph Jacques' Questionable Content comic for answers.
- Woodchopper
- Princess POW
- Posts: 7310
- Joined: Sat Oct 12, 2019 9:05 am
Re: AI achieves goals in dangerous way
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... 103498070Ø
The article focuses upon Twitter but applies to the rest of the internet. It seems like we are seeing the unfolding of one of the predictions about AI, that it would lead to a torrent of machine generated text which can be produced almost for free.
As much as I dislike him, Musk might be correct. Perhaps the only way to make some online services useful in the future would be to limit use of them to verified humans.
This would obviously have implications for anyone who didn’t want to, or couldn’t, be verified.
Re: AI achieves goals in dangerous way
Fixed link.
Woodchopper wrote: ↑Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070
Jaap's Page: https://www.jaapsch.net/
Re: AI achieves goals in dangerous way
Thanks for that.
jaap wrote: ↑Wed Feb 28, 2024 8:17 am
Fixed link.
Woodchopper wrote: ↑Wed Feb 28, 2024 8:03 am
Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet
https://www.abc.net.au/news/science/202 ... /103498070