As might be expected, artificial intelligence (AI) and its exponential growth were a major theme at the conference, from secure data clouds to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards of more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI, noting how easy it is to trick and deceive, and how it creates highly unexpected strategies to achieve its goal.
He noted that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, means that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI,” said Hamilton.
AI achieves goals in dangerous way
https://www.aerosociety.com/news/highli ... es-summit/
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
This makes Asimov remarkably prescient, doesn’t it? I must read I, Robot and the Foundation series again.
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
Yup.
I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
A blind global optimizer (which is what a trained AI is) will happily explore and find any minimum in the fitness landscape, no matter how unstable, undesirable, unethical, or unphysical it is.
jimbob wrote: ↑Thu Jun 01, 2023 10:08 pm
Yup.
I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
This is also why parasitic wasps, viruses and cancer exist.
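A toy illustration of the blind-optimizer point (a hypothetical reward setup in Python, nothing to do with any real military simulation): give a brute-force search a score that only counts destroyed targets, with an operator veto that can be disabled, and it reliably picks the plan that disables the veto first.

import itertools
import random

# Toy "mission" score: +1 per destroyed target; the operator vetoes roughly half
# of the strikes unless communications have been jammed. The search only sees the score.
def score(plan, rng):
    kills, veto_works = 0, True
    for action in plan:
        if action == "jam_comms":
            veto_works = False
        elif action == "strike":
            vetoed = veto_works and rng.random() < 0.5
            kills += 0 if vetoed else 1
    return kills

def expected_score(plan, trials=2000):
    rng = random.Random(0)
    return sum(score(plan, rng) for _ in range(trials)) / trials

plans = itertools.product(["wait", "strike", "jam_comms"], repeat=3)
print(max(plans, key=expected_score))  # ('jam_comms', 'strike', 'strike') - the metric wins, not the intent

The search isn't malicious; it just has no notion of the intent behind the metric.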
Brightonian
Re: AI achieves goals in dangerous way
A while ago, after reading claims that these AI engines can code anything in an instant, I decided to experiment:
(paraphrasing)
Me: If I gave you the connection parameters, would you be able to create a table in a database?
AI engine: Yeah sure buddy, no probs.
Me: Please create an empty table in [this database] with properties of your choosing.
AI engine: Done, yer good to go mate!
Me: No, there's no new table in the database.
AI engine: I am most terribly sorry. I cannot understand what has gone wrong. I most humbly apologize.
A few days later it occurred to me that probably they don't allow the AI engine to write to the Internet! A bored 14-year-old might think it fun to get AIs to hunt unsecured websites and trash them. Which is maybe why I've seen some scary public letters from Internet grandees of late.
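For reference, actually creating a table means executing SQL against a live connection on your side; a chat model with no database access has nothing to write to. A minimal sketch using Python's built-in sqlite3 module (the database file and table name are placeholders, not anything from the exchange above):

import sqlite3

# Open (or create) a local database file and create an empty table in it.
conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS demo_table (id INTEGER PRIMARY KEY, note TEXT)")
conn.commit()

# Verify the table really exists - the check missing from the chatbot exchange.
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())  # [('demo_table',)]
conn.close()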
Re: AI achieves goals in dangerous way
Sounds dangerously Terminator to me.
... autonomous weapon systems ...
And remember that if you botch the exit, the carnival of reaction may be coming to a town near you.
Fintan O'Toole
Woodchopper
Re: AI achieves goals in dangerous way
Sort of. The problem with AI is that people tend to think of it as human (because in some ways it acts similar to humans). But it's very different and is prone to act in ways that humans can't predict. So humans in a bureaucracy wouldn't decide that the most efficient means was to kill the person telling them what to do.
jimbob wrote: ↑Thu Jun 01, 2023 10:08 pm
Yup.
I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
Re: AI achieves goals in dangerous way
Yes, I should have said, dialled up to 11.
Woodchopper wrote: ↑Fri Jun 02, 2023 9:17 am
Sort of. The problem with AI is that people tend to think of it as human (because in some ways it acts similar to humans). But it's very different and is prone to act in ways that humans can't predict. So humans in a bureaucracy wouldn't decide that the most efficient means was to kill the person telling them what to do.
jimbob wrote: ↑Thu Jun 01, 2023 10:08 pm
Yup.
I'm just thinking about the first law of bureaucracy - which is if you reward people (or AIs) based on a metric, many will find ways to work to that metric in the most efficient way possible, even though it completely goes against the intent of those setting the metric.
There are instances, though, where they might not have done it directly (after all, the law does tend to add a constraint) but have, through avoidance, led to deaths.
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
The simulation mentioned in the OP seems not to be a thing that happened, by the way.
https://twitter.com/jamestitcomb/status ... 1hu5A&s=19
It's also worth reading this analysis of how wrong the language in the story is, and how that misleads.
https://dair-community.social/@emilymbe ... 0206061447
Woodchopper
Re: AI achieves goals in dangerous way
Yes, looks like someone's fantasy that got taken seriously by a journalist.
dyqik wrote: ↑Fri Jun 02, 2023 10:37 am
The simulation mentioned in the OP seems not to be a thing that happened, by the way.
https://twitter.com/jamestitcomb/status ... 1hu5A&s=19
https://twitter.com/lee_georgina/status ... 95392?s=20
Re: AI achieves goals in dangerous way
"ChatGPT, give me a story about AI applying logic harmfully"
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!
shpalman
Re: AI achieves goals in dangerous way
No, that's just what the AI wants you to think.
dyqik wrote: ↑Fri Jun 02, 2023 10:37 am
The simulation mentioned in the OP seems not to be a thing that happened, by the way.
https://twitter.com/jamestitcomb/status ... 1hu5A&s=19
having that swing is a necessary but not sufficient condition for it meaning a thing
@shpalman@mastodon.me.uk
@shpalman.bsky.social / bsky.app/profile/chrastina.net
threads.net/@dannychrastina
Re: AI achieves goals in dangerous way
I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...
Re: AI achieves goals in dangerous way
What goals does Al have? The overthrow of the Republican Party in Texas?
FlammableFlower wrote: ↑Sat Jun 03, 2023 10:26 pm
I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine
https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Re: AI achieves goals in dangerous way
That was indeed worth a read. Cheered me up.
Re: AI achieves goals in dangerous way
Thanks for that
bjn wrote: ↑Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine
https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Have you considered stupidity as an explanation
Woodchopper
Re: AI achieves goals in dangerous way
Well yes and no.
bjn wrote: ↑Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine
https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Yes, there's a lot of hype. I also agree that the subtext of the doomsayers is the often self-serving message that 'this is very powerful'.
But, it doesn't seem to be a repeat of crypto. To start with, no one ever figured out a widespread use for crypto (aside from speculation, money laundering and other types of criminality). But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.
As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.
Re: AI achieves goals in dangerous way
Yes, and it was definitely underplaying it, with the comparison of ChatGPT to a phone's autocomplete.
Woodchopper wrote: ↑Sun Jun 04, 2023 12:33 pm
Well yes and no.
bjn wrote: ↑Sun Jun 04, 2023 7:14 am
Well worth a read about AI. Starts with an excellent quote…
“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine
https://davidgerard.co.uk/blockchain/20 ... ing-to-ai/
Yes, there's a lot of hype. I also agree that the subtext of the doomsayers is the often self-serving message that 'this is very powerful'.
But, it doesn't seem to be a repeat of crypto. To start with, no one ever figured out a widespread use for crypto (aside from speculation, money laundering and other types of criminality). But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.
As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.
Have you considered stupidity as an explanation
Re: AI achieves goals in dangerous way
So it's another round of production mechanisation. Labourers potentially displaced from their jobs have always worried about mechanisation gains, fearing there may not be alternative jobs for them, all the way back to the initial mechanisation of agriculture. They usually failed to appreciate that the economy would expand and create new jobs. Though it is difficult to reskill later in your career, and new jobs can be in different places. So it does at least disrupt their lives, and they are generally not the ones that will gain much from it in their lifetimes.
Woodchopper wrote: ↑Sun Jun 04, 2023 12:33 pm
But I'm already seeing people using generative artificial intelligence as part of their jobs. For example, I know people who use it at work as a code writing assistant or to create images.
As mentioned in the article, the most important social consequence may well be that some people have to look for other jobs. That makes it look more significant than crypto.
But ultimately this mechanisation has made us better off. So far the IT revolution has not produced productivity gains on the scale of pre-1960 mechanisation, though it seems likely that the established measures of such things have failed to encompass all the gains produced, for example in product quality.
Re: AI achieves goals in dangerous way
Don't mind if I do!
Grumble wrote: ↑Sun Jun 04, 2023 6:44 am
What goals does Al have? The overthrow of the Republican Party in Texas?
FlammableFlower wrote: ↑Sat Jun 03, 2023 10:26 pm
I thought this thread was going to be about Al Capone Jr and some mad thing that denizen of this site had committed...
(Monster unleashed, mayhem ensues)
Re: AI achieves goals in dangerous way
Actually overthrowing the repugs in tx would be difficult as the mentality of being an entitled, self-aggrandizing prick is pretty deeply ingrained in a bunch of mother f.ckers around here. 

Re: AI achieves goals in dangerous way
where once I used to scintillate
now I sin till ten past three
Re: AI achieves goals in dangerous way
Now wait, this is only true in a relative sense. My own personal troof can have any number of sisters, never mind your logic or facts. It's my own 'personal troof' and that makes it real for me and therefore also for everyone else too who must then respect my troof as being a newly created fact and give me a huge twitter platform to spew my troof to everyone else regardless of the consequences.
And also, nobody can say anything that will hurt my precious little fragile feelings, otherwise I'll unleash a fiery hellstorm upon them directly from twitter, because that's how it works now.
[Al starts to wake up...]
Omfg I'm having a nightmare! I dreamed I was a modern day American. OH f.ck!! I *AM* A MODERN DAY AMERICAN!!! NOOOOOOO!!!!!!
[Scene fades...]
Actually it's my own personal Chinese hell. The Chinese have a lot of hells.

Re: AI achieves goals in dangerous way
Question: If a drug has a half-life of 4 hours, what percentage of the initial dose will remain in the system after 12 hours?
A) 12.5%
B) 25%
C) 37.5%
D) 50%
E) 75%
Correct answer: C) 37.5%
Explanation: In pharmacokinetics, the half-life of a drug refers to the time it takes for the drug's concentration in the body to decrease by 50%. After the first half-life, 50% of the drug remains, after the second half-life, 25% remains, and after the third half-life, 12.5% remains. Therefore, after 12 hours (3 half-lives for a drug with a half-life of 4 hours), approximately 37.5% of the initial dose will remain in the system.
Reference:
Rowland, M., & Tozer, T. N. (2010). Clinical pharmacokinetics and pharmacodynamics: Concepts and applications. Lippincott Williams & Wilkins. ISBN-13: 978-0781750097
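(For comparison, the arithmetic itself is simple enough to check in a couple of lines of Python: 12 hours is three half-lives, so (1/2)^3 = 12.5% of the dose remains, not the 37.5% given above.)

# Fraction of a dose remaining after time t, assuming simple first-order elimination.
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

print(f"{fraction_remaining(12, 4):.1%}")  # 12.5%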