AI achieves goals in dangerous way

Discussions about serious topics, for serious people
User avatar
jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

AI achieves goals in dangerous way

Post by jimbob » Thu Jun 01, 2023 9:45 pm

https://www.aerosociety.com/news/highli ... es-summit/

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, means that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI,” said Hamilton.
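The incentive problem he describes is easy to sketch. Here's a toy, purely illustrative reward function (the flags and point values below are invented and have nothing to do with the real system): penalising the operator-kill doesn't remove the underlying incentive, it just leaves the comms tower as the next cheapest way to dodge a "no-go".

    # Purely illustrative sketch of the points-based incentive described above.
    # The flags and point values are invented for the example.
    def mission_score(destroyed_sam, killed_operator, jammed_comms_tower):
        score = 0
        if destroyed_sam:
            score += 10       # the behaviour the training reinforces
        if killed_operator:
            score -= 100      # the later patch: "don't kill the operator"
        # Nothing here penalises cutting the comms link, so a score-maximising
        # policy can still avoid ever hearing a "no-go" by jamming the tower.
        return score

    # Jamming the tower costs nothing and keeps the 10 points:
    print(mission_score(destroyed_sam=True, killed_operator=False, jammed_comms_tower=True))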
Have you considered stupidity as an explanation

User avatar
Grumble
Light of Blast
Posts: 4746
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Thu Jun 01, 2023 10:00 pm

This makes Asimov remarkably prescient doesn’t it? I must read I, Robot and the Foundation series again.
where once I used to scintillate
now I sin till ten past three

User avatar
jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Thu Jun 01, 2023 10:08 pm

Grumble wrote:
Thu Jun 01, 2023 10:00 pm
This makes Asimov remarkably prescient doesn’t it? I must read I, Robot and the Foundation series again.
Yup.

I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
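A made-up miniature version of working to the metric (the numbers are invented): judge a screening model purely on accuracy over a lopsided caseload and the highest-scoring strategy is to never flag anything at all.

    # Toy illustration of gaming a metric: with 95 negatives and 5 positives,
    # a model that never flags anything scores 95% accuracy and finds nothing.
    labels = [0] * 95 + [1] * 5
    predictions = [0] * 100            # the "efficient" strategy under the metric
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    print(f"accuracy = {accuracy:.0%}, positives found = {sum(predictions)}")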
Have you considered stupidity as an explanation

User avatar
dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI achieves goals in dangerous way

Post by dyqik » Thu Jun 01, 2023 11:13 pm

jimbob wrote:
Thu Jun 01, 2023 10:08 pm
Grumble wrote:
Thu Jun 01, 2023 10:00 pm
This makes Asimov remarkably prescient doesn’t it? I must read I, Robot and the Foundation series again.
Yup.

I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
A blind global optimizer (which is what a trained AI is) will happily explore and find any minimum in the fitness landscape, no matter how unstable, undesirable, unethical, or unphysical it is.

This is also why parasitic wasps, viruses and cancer exist.
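A minimal sketch of what that looks like in practice (toy numbers, nothing more): fit an exponential-decay model to data that actually grows, and an off-the-shelf optimizer will cheerfully hand back a negative rate constant, a perfectly good minimum and a physically meaningless one.

    # Toy example: a blind optimizer has no notion of "physical".
    # Fitting a decay model to growing data yields a negative rate constant.
    import numpy as np
    from scipy.optimize import minimize

    t = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 1.3, 1.7, 2.2])                 # made-up data that grows over time

    def loss(params):
        a, k = params
        return np.sum((y - a * np.exp(-k * t)) ** 2)   # model assumes exponential decay

    result = minimize(loss, x0=[1.0, 0.5])
    print(result.x)   # k comes out negative: a valid minimum, physically meaningless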

User avatar
Brightonian
Dorkwood
Posts: 1429
Joined: Mon Nov 11, 2019 3:16 pm
Location: Usually UK, often France and Ireland

Re: AI achieves goals in dangerous way

Post by Brightonian » Fri Jun 02, 2023 6:54 am

A while ago, after reading claims that these AI engines can code anything in an instant, I decided to experiment:

(paraphrasing)

Me: If I gave you the connection parameters, would you be able to create a table in a database?

AI engine: Yeah sure buddy, no probs.

Me: Please create an empty table in [this database] with properties of your choosing.

AI engine: Done, yer good to go mate!

Me: No, there's no new table in the database.

AI engine: I am most terribly sorry. I cannot understand what has gone wrong. I most humbly apologize.

A few days later it occurred to me that they probably don't allow the AI engine to write to the Internet! A bored 14-year-old might think it fun to get AIs to hunt unsecured websites and trash them. Which is maybe why I've seen some scary public letters from Internet grandees of late.
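(For comparison, the operation itself is tiny once something genuinely holds a connection, e.g. with Python's built-in sqlite3; the file, table and column names below are invented. The chatbot can produce text like this, but it has nothing to execute it against.)

    # What "create an empty table" actually involves, using Python's built-in sqlite3.
    # The file, table and column names are invented for the example.
    import sqlite3

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY, note TEXT)")
    conn.commit()
    conn.close()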

User avatar
JQH
After Pie
Posts: 2141
Joined: Mon Nov 11, 2019 3:30 pm
Location: Sar Flandan

Re: AI achieves goals in dangerous way

Post by JQH » Fri Jun 02, 2023 8:09 am

... autonomous weapon systems ...
Sounds dangerously Terminator to me.
And remember that if you botch the exit, the carnival of reaction may be coming to a town near you.

Fintan O'Toole

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: AI achieves goals in dangerous way

Post by Woodchopper » Fri Jun 02, 2023 9:17 am

jimbob wrote:
Thu Jun 01, 2023 10:08 pm
Grumble wrote:
Thu Jun 01, 2023 10:00 pm
This makes Asimov remarkably prescient doesn’t it? I must read I, Robot and the Foundation series again.
Yup.

I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
Sort of. The problem with AI is that people tend to think of it as human (because in some ways it acts similarly to humans). But it's very different, and prone to act in ways that humans can't predict. So humans in a bureaucracy wouldn't decide that the most efficient means was to kill the person telling them what to do.

User avatar
jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Fri Jun 02, 2023 9:45 am

Woodchopper wrote:
Fri Jun 02, 2023 9:17 am
jimbob wrote:
Thu Jun 01, 2023 10:08 pm
Grumble wrote:
Thu Jun 01, 2023 10:00 pm
This makes Asimov remarkably prescient doesn’t it? I must read I, Robot and the Foundation series again.
Yup.

I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
Sort of. The problem with AI is that people tend to think of it as human (because in some ways it acts similarly to humans). But it's very different, and prone to act in ways that humans can't predict. So humans in a bureaucracy wouldn't decide that the most efficient means was to kill the person telling them what to do.
Yes, I should have said, dialled up to 11.

Although there are instances where they might not have done it directly (after all, the law does tend to add a constraint), they have, through avoidance, led to deaths.
Have you considered stupidity as an explanation

User avatar
dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole
Contact:

Re: AI achieves goals in dangerous way

Post by dyqik » Fri Jun 02, 2023 10:37 am

The simulation mentioned in the OP seems not to be a thing that happened, by the way.

https://twitter.com/jamestitcomb/status ... 1hu5A&s=19

It's also worth reading this analysis of how wrong the language in the story is, and how that misleads.

https://dair-community.social/@emilymbe ... 0206061447

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: AI achieves goals in dangerous way

Post by Woodchopper » Fri Jun 02, 2023 12:10 pm

dyqik wrote:
Fri Jun 02, 2023 10:37 am
The simulation mentioned in the OP seems not to be a thing that happened, by the way.

https://twitter.com/jamestitcomb/status ... 1hu5A&s=19
Yes, looks like someone's fantasy that got taken seriously by a journalist.
https://twitter.com/lee_georgina/status ... 95392?s=20

User avatar
Gfamily
Light of Blast
Posts: 5180
Joined: Mon Nov 11, 2019 1:00 pm
Location: NW England

Re: AI achieves goals in dangerous way

Post by Gfamily » Fri Jun 02, 2023 1:17 pm

"ChatGPT, give me a story about AI applying logic harmfully"
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!

User avatar
shpalman
Princess POW
Posts: 8242
Joined: Mon Nov 11, 2019 12:53 pm
Location: One step beyond
Contact:

Re: AI achieves goals in dangerous way

Post by shpalman » Fri Jun 02, 2023 4:05 pm

dyqik wrote:
Fri Jun 02, 2023 10:37 am
The simulation mentioned in the OP seems not to be a thing that happened, by the way.

https://twitter.com/jamestitcomb/status ... 1hu5A&s=19
No, that's just what the AI wants you to think.
having that swing is a necessary but not sufficient condition for it meaning a thing
@shpalman@mastodon.me.uk

FlammableFlower
Dorkwood
Posts: 1508
Joined: Mon Nov 11, 2019 1:22 pm

Re: AI achieves goals in dangerous way

Post by FlammableFlower » Sat Jun 03, 2023 10:26 pm

I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...

User avatar
Grumble
Light of Blast
Posts: 4746
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Sun Jun 04, 2023 6:44 am

FlammableFlower wrote:
Sat Jun 03, 2023 10:26 pm
I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...
What goals does Al have? The overthrow of the Republican Party in Texas?
where once I used to scintillate
now I sin till ten past three

User avatar
bjn
Stummy Beige
Posts: 2916
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI achieves goals in dangerous way

Post by bjn » Sun Jun 04, 2023 7:14 am

Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/

User avatar
Martin Y
Stummy Beige
Posts: 3080
Joined: Mon Nov 11, 2019 1:08 pm

Re: AI achieves goals in dangerous way

Post by Martin Y » Sun Jun 04, 2023 10:47 am

That was indeed worth a read. Cheered me up.

User avatar
jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Sun Jun 04, 2023 12:12 pm

bjn wrote:
Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Thanks for that
Have you considered stupidity as an explanation

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: AI achieves goals in dangerous way

Post by Woodchopper » Sun Jun 04, 2023 12:33 pm

bjn wrote:
Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Well yes and no.

Yes, there's a lot of hype. I also agree that the subtext of the doomsayers is the often self-serving message that 'this is very powerful'.

But, it doesn't seem to be a repeat of crypto. To start with, no one ever figured out a widespread use for crypto (aside from speculation, money laundering and other types of criminality). But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.

As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.

User avatar
jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: AI achieves goals in dangerous way

Post by jimbob » Sun Jun 04, 2023 1:24 pm

Woodchopper wrote:
Sun Jun 04, 2023 12:33 pm
bjn wrote:
Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Well yes and no.

Yes, there's a lot of hype. I also agree that the subtext of the doomsayers is the often self-serving message that 'this is very powerful'.

But, it doesn't seem to be a repeat of crypto. To start with, no one ever figured out a widespread use for crypto (aside from speculation, money laundering and other types of criminality). But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.

As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.
Yes, and the article was definitely underplaying it with the comparison of ChatGPT to a phone's autocomplete.
Have you considered stupidity as an explanation

IvanV
Stummy Beige
Posts: 2663
Joined: Mon May 17, 2021 11:12 am

Re: AI achieves goals in dangerous way

Post by IvanV » Sun Jun 04, 2023 4:48 pm

Woodchopper wrote:
Sun Jun 04, 2023 12:33 pm
But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.

As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.
So it's another round of production mechanisation. Labourers at risk of being displaced have always worried about mechanisation, fearing there may not be alternative jobs for them, all the way back to the initial mechanisation of agriculture. They usually failed to appreciate that the economy would expand and create new jobs, though it is difficult to reskill later in your career, and new jobs can be in different places. So it does at least disrupt their lives, and they are generally not the ones who will gain much from it in their lifetimes.

But ultimately this mechanisation has made us better off. So far the IT revolution has not produced productivity gains on the scale of the pre-1960 mechanisation gains, though it seems likely that the established measures of such things have failed to capture all the gains produced, for example in product quality.

Al Capone Junior
Clardic Fug
Posts: 221
Joined: Wed Dec 11, 2019 6:40 pm

Re: AI achieves goals in dangerous way

Post by Al Capone Junior » Mon Jun 12, 2023 8:03 am

Grumble wrote:
Sun Jun 04, 2023 6:44 am
FlammableFlower wrote:
Sat Jun 03, 2023 10:26 pm
I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...
What goals does Al have? The overthrow of the Republican Party in Texas?
Don't mind if I do!
(Monster unleashed, mayhem ensues)

Al Capone Junior
Clardic Fug
Posts: 221
Joined: Wed Dec 11, 2019 6:40 pm

Re: AI achieves goals in dangerous way

Post by Al Capone Junior » Mon Jun 12, 2023 8:09 am

Actually overthrowing the repugs in tx would be difficult as the mentality of being an entitled, self-aggrandizing prick is pretty deeply ingrained in a bunch of mother f.ckers around here. :roll:

User avatar
Grumble
Light of Blast
Posts: 4746
Joined: Mon Nov 11, 2019 1:03 pm

Re: AI achieves goals in dangerous way

Post by Grumble » Wed Sep 13, 2023 3:56 pm

IMG_0772.jpeg (attached image)
where once I used to scintillate
now I sin till ten past three

Al Capone Junior
Clardic Fug
Posts: 221
Joined: Wed Dec 11, 2019 6:40 pm

Re: AI achieves goals in dangerous way

Post by Al Capone Junior » Sun Sep 17, 2023 2:28 pm

Grumble wrote:
Wed Sep 13, 2023 3:56 pm
IMG_0772.jpeg
Now wait, this is only true in a relative sense. My own personal troof can have any number of sisters, never mind your logic or facts. It's my own 'personal troof' and that makes it real for me and therefore also for everyone else too who must then respect my troof as being a newly created fact and give me a huge twitter platform to spew my troof to everyone else regardless of the consequences.

And also, nobody can say anything that will hurt my precious little fragile feelings, otherwise I'll unleash a fiery hellstorm upon them directly from twitter, because that's how it works now.

[Al starts to wake up...]

Omfg I'm having a nightmare! I dreamed I was a modern day American. OH f.ck!! I *AM* A MODERN DAY AMERICAN!!! NOOOOOOO!!!!!!

[Scene fades...]

Actually it's my own personal Chinese hell. The Chinese have a lot of hells. :roll:

Allo V Psycho
Catbabel
Posts: 734
Joined: Sat Nov 16, 2019 8:18 am

Re: AI achieves goals in dangerous way

Post by Allo V Psycho » Sun Sep 17, 2023 5:05 pm

Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?

A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%

Correct answer: C) 37.5%

Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.

Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
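For the record, the arithmetic in the explanation actually gives 12.5%, not 37.5% (three half-lives means halving three times, as the explanation itself says before contradicting its own answer). A one-line check:

    # Fraction of dose remaining after t hours with a 4-hour half-life.
    half_life = 4
    t = 12
    print(f"{0.5 ** (t / half_life):.1%}")   # 12.5%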
