Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual "AI Cold War." In April 2024, +972 Magazine published a report detailing the Israel Defense Forces' intelligence-based program known as "Lavender." Israeli intel…
@ISIDEWITH 9mos
No
@9MG74RS 7mos
If you just happened to fit a vague description of a target, how would you feel? Would you trust a drone not to take you out just in case?
@ISIDEWITH 9mos
Yes
@9MG74RS 7mos
AI doesn't care about people. It doesn't truly understand. This is a slippery slope. Moreover, it's still only as good as the training model, which is human-made.
Only if the AI is advanced enough that it has a mind of its own. Plus, it's computer-based, so I expect someone tinkering with the AI.
Make this question more specific. I.e., AI guidance systems? AI-simulated targets? What are we talking about here, folks? Let's get a little more specific.
@9ZN65GC 1mo
Yes, against my better judgement. We will need to use AI to stay abreast of our adversaries, or be left behind.
@9ZDCX9T 1mo
No, there is not enough testing or information to say that an AI can distinguish between civilians, military personnel, and threats.
@9XCDJK2 2mos
It depends on how well the tech can be trusted. It would have to go through years and years of tests before official use, since it may have "glitches."
@9WKLBWB 2mos
Yes, but only after trials, research, and contingencies to ensure it is not a threat to ourselves.
Who controls the AI? Was it created by our government, a private company, or, worse, another country?
@9WFR74Q 2mos
Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.
@9W2QDRF 2mos
Yes, but there should be checks and balances on it.
@9W222F2 2mos
Yes, once the artificial intelligence has advanced enough.
@9VY8CNN 2mos
No, this could too easily lead to a doomsday scenario. Keep AI out of the military!
@9VRM7F4 3mos
Yes, if there is a person to monitor the weapon in case of AI malfunction.
@9VJLT3Z 3mos
It's going to happen inevitably anyway. We need an AI ethics commission... because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.
@9VJ6C4K 3mos
Yes, but only if it is proven not to glitch or be at risk of being hacked.
@9VF4NS9 3mos
Yes, but only if it is more accurate than it is under human control.
@9V7JKBZ Conservative 3mos
No, weapons of mass destruction should not be built to begin with.
@9V4JT2C 3mos
No, there also needs to be a person who could use it to help.
@9TYFLTG 3mos
No. Keep a human in the loop for all lethal engagements.
@9TTHJVP 3mos
Yes, but in a highly controlled and scrutinized manner.
@9TRP8FJ 3mos
Yes, but for defensive systems only. There should always be a human in the loop pulling the trigger for offensive systems.
@9RBYBX6 5mos
Believing that artificial intelligence will be the downfall of mankind, as most of the world will have access to it, makes this question difficult to answer. It is important that artificial intelligence be used with caution.
@9RBY87R 5mos
The military should maintain technological pace with our allies.
@9RBVDVT 5mos
There should always be humans in the lethal force decision making process.
@9QZCYDN 5mos
Not entirely AI-guided, and also not a complete, total AI that can think for itself like a human. Otherwise, I think it'd be effective.
@9QSV5BH 6mos
Yes, as long as it is pretty much guaranteed they will not fail. Like, ever.
@9QRJNMW 6mos
Yes, but only if there is always a human kept in the decision making loop.
Yes, but only if it will decrease the risk of hurting civilians.
@9Q7YMJZ 6mos
Yes, but only with appropriate oversight and against specific military targets.
@9PRH44K 6mos
Not a yes-or-no answer. There needs to be more clarification on whether the AI is making all decisions up to firing the weapon or just guiding it to the target.
@9P8NRFM New Democratic 6mos
Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.
@9P7NSTC 6mos
No, not until AI has been perfected for military uses.
@9LT2W3W 8mos
Not right now; the technology is not developed enough yet.
@9LHXK8G Conservative 9mos
Not at this time, and not until unbiased third parties review the technology further and there is more scientific consensus.
@9LGCYKF 9mos
We will eventually just be creating insane fighting robots that we would need nuclear weapons to destroy just to be safe. This will definitely mislead wars and be way too unsafe.
@9TLDMJL 3mos
Depending on the safety of Canadians and the trustworthiness of the tech.
@9TGDVKN Independent 3mos
Yes, but only when it is safe to use. If it's a ranged weapon where there aren't people who would be in front of it, then by all means. AI doesn't have emotions; it's more accurate and safer when operated correctly.
@9TF5F5Z 3mos
I believe missile guidance systems should use AI, but AI should never choose where to target.
@9T6X9HJ 4mos
No, and even the current use of AI should be strictly supervised and limited by public decisions, not privatized.
@9T6GQ6F 4mos
The military must operate under the guidance of the revolutionary working class, and only until the class divide is abolished worldwide.
@9T2Z7Y5 4mos
Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.
@9SNJQRW 4mos
Yes, but not fully AI. There needs to be constant human review and extensive research beforehand.
@9MC4BQL 8mos
Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.
@9LW6J33 8mos
Any weapon with the potential to kill or injure should not be 100% AI autonomous.