
47 Replies

 @ISIDEWITH Discuss this answer... 9mos

No

 @9MG74RS from Ontario agreed… 7mos

If you just happen to fit a vague description of a target, how would you feel? Would you trust a drone not to take you out just in case?

 @ISIDEWITH Discuss this answer... 9mos

Yes

 @9MG74RS from Ontario disagreed… 7mos

AI doesn't care about people. It doesn't truly understand. This is a slippery slope. Moreover, it's still only as good as the training model, which is human-made.

 @B2574KF New Democratic from Manitoba answered… 1wk

Only if AI is advanced enough that it has a mind of its own. Plus, it's computer-based, so I expect someone tinkering with the A

 @B224W3X People’s from Ontario answered… 3wks

Make this question more specific. I.e., AI guidance systems? AI-simulated targets? What are we talking about here, folks? Let's get a little more specific.

 @9ZN65GC from Ontario answered… 1mo

Yes, against my better judgement. We will need to use AI to stay abreast of our adversaries, or be left behind.

 @9ZDCX9T from Washington answered… 1mo

No, there is not enough testing or information to say that an AI can distinguish civilians, military personnel and threats.

 @9XCDJK2 from Alberta answered… 2mos

It depends on how well the tech can be trusted; it would have to go through years and years of tests before official use, since it may have "glitches".

 @9WKLBWB from British Columbia answered… 2mos

Yes, but only after trials, research, and contingencies to ensure it is not a threat to ourselves.

 @9WK3LGV Liberal from British Columbia answered… 2mos

Who controls the AI? Was it created by our government, a private company, or, worse, by another country?

 @9WFR74Q from Alberta answered… 2mos

Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.

 @9VY8CNN from British Columbia answered… 2mos

No, this could too easily lead to a doomsday scenario. Keep AI out of the military!

 @9VJLT3Z from Alberta answered… 3mos

It's going to happen inevitably anyway. We need an AI ethics commission... because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.

 @9VJ6C4K from British Columbia answered… 3mos

Yes, but only if it is proven not to glitch or be at risk of being hacked.

 @9TRP8FJ from British Columbia answered… 3mos

Yes, but for defensive systems only. There should always be a human in the loop pulling the trigger for offensive systems.

 @9RBYBX6 from Nova Scotia answered… 5mos

Believing that Artificial Intelligence will be the downfall of mankind, since most of the world will have access to it, makes this question difficult to answer. It is important that Artificial Intelligence be used with caution.

 @9QZCYDN from Ontario answered… 5mos

Not entirely guided. And also not a completely autonomous AI that can think for itself like a human. Otherwise, I think it'd be effective.

 @9QSV5BH from California answered… 6mos

Yes, as long as it is pretty much guaranteed they will not fail. Like, ever.

 @9Q7YMJZ from Ontario answered… 6mos

Yes, but only with appropriate oversight and against specific military targets.

 @9PRH44K answered… 6mos

Not a yes or no answer. There needs to be more clarification on whether the AI is making all decisions up to firing the weapon or just controlling it to the target.

 @9P8NRFM New Democratic from Alberta answered… 6mos

Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.

 @9LHXK8G Conservative from Ontario answered… 9mos

Not at this time, and not until unbiased third parties review the technology further and there is more scientific consensus.

 @9LGCYKF from Ontario answered… 9mos

We will eventually just be creating insane fighting robots; we would need to use nuclear weapons to destroy them just to be safe. This will definitely mislead wars and be way too unsafe.

 @9TGDVKN Independent from Alberta answered… 3mos

Yes, but only when it is safe to use. If it's a ranged weapon where there aren't people who would be in front of it, then by all means. AI doesn't have emotions; it's more accurate and safer when operated correctly.

 @9TF5F5Z from Alberta answered… 3mos

I believe missile guidance systems should use AI, but AI should never choose where to target.

 @9T6X9HJ from Ontario answered… 4mos

NO, and even the current use of AI should be strictly supervised and limited by public decisions and not privatized.

 @9T6GQ6F from New York answered… 4mos

The military must operate under the guidance of the revolutionary working class and only until the class divide is abolished worldwide.

 @9T2Z7Y5 from Alberta answered… 4mos

Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.

 @9SNJQRW from Ontario answered… 4mos

Yes, but not fully AI. There needs to be constant human review and extensive research beforehand.

 @9MC4BQL from Alberta answered… 8mos

Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.

 @9LW6J33 from Ontario answered… 8mos

Any weapon with the potential to kill or injure should not be 100% AI autonomous.
