OT: the development of AI, games and UAVs (Full Version)

All Forums >> [New Releases from Matrix Games] >> War in the Pacific: Admiral's Edition





Pascal_slith -> OT: the development of AI, games and UAVs (10/9/2011 10:55:30 PM)

herwin, fcharton and JWE have had some interesting discussions about the development of AI for games/simulations.

Just the other day, the Economist ran an article about UAVs, their development, use and future in military conflict.

Here is the article: http://www.economist.com/node/21531433

Given the discussions about AI, there is a real chance that these UAVs may become 'autonomous', up to the point of making firing decisions on their own.... Asimov's rules from "I, Robot" come to mind....and visions of SkyNet....




herwin -> RE: OT: the development of AI, games and UAVs (10/10/2011 7:19:47 AM)


quote:

ORIGINAL: Pascal

herwin, fcharton and JWE have had some interesting discussions about the development of AI for games/simulations.

Just the other day, the Economist ran an article about UAVs, their development, use and future in military conflict.

Here is the article: http://www.economist.com/node/21531433

Given the discussions about AI, there is a real chance that these UAVs may become 'autonomous', up to the point of making firing decisions on their own.... Asimov's rules from "I, Robot" come to mind....and visions of SkyNet....


The smarts of current UAVs are not that great. Would you give a bat a missile launcher?




Pascal_slith -> RE: OT: the development of AI, games and UAVs (10/10/2011 8:40:28 AM)

Well, current UAVs, no (except for landing, takeoff and normal autopilot waypoint operations, they are all flown by a ground-based 'pilot'). The article suggests that in a few years, AI will be advanced enough to allow more 'autonomous' operations, including on-the-spot 'engagement' decisions.

As to the vulnerability of the satellite communications network, also raised in the article, I know that work is being done on 'micro' communications satellites in order to create a large 'internet-like' network in orbit, with redundancy against attack losses similar to the internet's.
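The redundancy idea can be illustrated with a toy graph model (a purely hypothetical sketch, not any real satellite architecture): if each node keeps links to several neighbours, losing one node still leaves a route, just as internet routing survives the loss of individual routers.

```python
from collections import deque

def shortest_route(links, src, dst, lost=frozenset()):
    """BFS over a link map, skipping nodes that have been lost to attack."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links.get(node, []):
            if nxt not in seen and nxt not in lost:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # constellation partitioned, no route left

# Toy four-satellite ring with cross-links.
links = {
    "A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"],
}
print(shortest_route(links, "A", "C"))              # ['A', 'B', 'C']
print(shortest_route(links, "A", "C", lost={"B"}))  # reroutes: ['A', 'D', 'C']
```

With both of A's neighbours lost the function returns `None`, which is exactly the single-point-of-failure problem a denser micro-satellite constellation is meant to avoid.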




noguaranteeofsanity -> RE: OT: the development of AI, games and UAVs (10/10/2011 9:14:16 AM)

They are already under attack, or at least having security problems, with a keylogger found on the system (most reports simply call it a virus) that is apparently proving rather hard to get rid of.

http://www.dailytech.com/Computer+Network+Controlling+UAVs+Infected+by+Computer+Virus/article22960.htm




herwin -> RE: OT: the development of AI, games and UAVs (10/10/2011 9:31:25 AM)


quote:

ORIGINAL: noguaranteeofsanity

They are already under attack, or at least having security problems, with a keylogger found on the system (most reports simply call it a virus) that is apparently proving rather hard to get rid of.

http://www.dailytech.com/Computer+Network+Controlling+UAVs+Infected+by+Computer+Virus/article22960.htm


Sounds like the product of a state agent.

I teach security, am supervising a PhD student working in security, have designed the security architecture of a couple of government systems, one big, and monitor events on a daily basis. There's a quiet war on that's producing a lot of advances on both sides. Most people are oblivious, but it's quite a bit like the Cold War for those who remember the past.




LoBaron -> RE: OT: the development of AI, games and UAVs (10/10/2011 9:47:49 AM)


quote:

ORIGINAL: Pascal

herwin, fcharton and JWE have had some interesting discussions about the development of AI for games/simulations.

Just the other day, the Economist ran an article about UAVs, their development, use and future in military conflict.

Here is the article: http://www.economist.com/node/21531433

Given the discussions about AI, there is a real chance that these UAVs may become 'autonomous', up to the point of making firing decisions on their own.... Asimov's rules from "I, Robot" come to mind....and visions of SkyNet....


TBH I find this prospect more frightening than nuclear weapons.

Not because I am afraid of AIs going on a rampage, Matrix-style, but because the more remote killing gets, the easier it becomes. Systems deciding whether or not to kill on the basis of a couple of autonomous processes are at the far end of that scale...

Although I also believe we are still far from that, as herwin said.




janh -> RE: OT: the development of AI, games and UAVs (10/10/2011 10:00:16 AM)

quote:

ORIGINAL: Pascal
Well, current UAVs, no (except for landing, takeoff and normal autopilot waypoint operations, they are all flown by a ground-based 'pilot'). The article suggests that in a few years, AI will be advanced enough to allow more 'autonomous' operations, including on-the-spot 'engagement' decisions.


There is a political and a technical level to this issue of allowing truly autonomous behavior involving killing. The technical part can be solved, but the other is likely always going to be a high barrier. I would be very surprised if that kind of autonomy were ever allowed in a western democracy -- the risks for anyone's head, be it a soldier's or a politician's, would be very high. Wars aren't as clean-cut as they used to be, and battlefields are no longer "empty enough" to avoid civilian casualties or collateral damage. And it is already very hard for a human to tell friend from foe in a situation like Afghanistan or Iraq.

Also, the technical development cycle of these things usually runs some 5-10 years ahead of what we know about its state. Getting a drone to identify "unambiguous" targets like a certain type of tank, OPFOR vs. friendly, is already easily possible with today's computing power, for example using radar imaging techniques (e.g. SAR in the mm-wavelength range), and so, likely, is attacking them. The people working at the arms labs are probably well past the basic technical problems.
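The gap between identification and the engagement decision can be caricatured in a few lines (a purely hypothetical sketch; the labels and threshold are invented, not any real system's rules): even with a good classifier, the firing decision reduces to a confidence threshold, and everything contentious hides in where that threshold is set and who sets it.

```python
def engagement_decision(target_class, confidence, roe_threshold=0.95):
    """Toy autonomy gate: engage only on a hostile classification above
    the RoE confidence threshold; everything ambiguous goes to a human."""
    if target_class == "hostile" and confidence >= roe_threshold:
        return "engage"
    if target_class == "friendly":
        return "hold"
    return "refer_to_operator"  # ambiguity defers to a human operator

print(engagement_decision("hostile", 0.99))   # engage
print(engagement_decision("hostile", 0.80))   # refer_to_operator
print(engagement_decision("friendly", 0.99))  # hold
```

The hard part is not this gate but the classifier feeding it, and the political question of whether any threshold short of 1.0 is acceptable.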




fcharton -> RE: OT: the development of AI, games and UAVs (10/10/2011 10:12:16 AM)

Hi,

I don't think AI development is really the problem here. UAVs already have a large part of the intelligence they need. To me, the difficulty is the rules of engagement: can you trust a machine with them? I have the impression the answer is "no", and will remain so for a long time. Armies are very serious about RoE, especially in low-intensity situations, where drones are typically used. And worries about "Nintendo warfare" would probably make decision makers even more cautious.

About the keylogger, the one thing I'm wondering about such stories is "how/why did this make it to the media?" You'd expect such an event would be kept very quiet, and once it gets out, most of the details probably are wild guesses from various 'experts'.

Francois




herwin -> RE: OT: the development of AI, games and UAVs (10/10/2011 10:39:05 AM)


quote:

ORIGINAL: LoBaron


quote:

ORIGINAL: Pascal

herwin, fcharton and JWE have had some interesting discussions about the development of AI for games/simulations.

Just the other day, the Economist ran an article about UAVs, their development, use and future in military conflict.

Here is the article: http://www.economist.com/node/21531433

Given the discussions about AI, there is a real chance that these UAVs may become 'autonomous', up to the point of making firing decisions on their own.... Asimov's rules from "I, Robot" come to mind....and visions of SkyNet....


TBH I find this prospect more frightening than nuclear weapons.

Not because I am afraid of AIs going on a rampage, Matrix-style, but because the more remote killing gets, the easier it becomes. Systems deciding whether or not to kill on the basis of a couple of autonomous processes are at the far end of that scale...

Although I also believe we are still far from that, as herwin said.


The US did design a series of anti-ballistic missile systems (Nike-Zeus/Sentinel/Safeguard/Site Defense/LoAD) that were to use endoatmospheric nuclear weapons. Congress got nervous about them being deployed as a city defense system.




Pascal_slith -> RE: OT: the development of AI, games and UAVs (10/12/2011 7:54:02 AM)


quote:

ORIGINAL: janh

quote:

ORIGINAL: Pascal
Well, current UAVs no (except for landing, takeoff and normal autopilot waypoint operations, they are all flown by a ground-based 'pilot'). The article infers that in a few years, AI will be advanced enough to allow more 'automous' operations, including allowing on the spot 'engagement' decisions.


There is a political and a technical level to this issue of allowing truly autonomous behavior involving killing. The technical part can be solved, but the other is likely always going to be a high barrier. I would be very surprised if that kind of autonomy were ever allowed in a western democracy -- the risks for anyone's head, be it a soldier's or a politician's, would be very high. Wars aren't as clean-cut as they used to be, and battlefields are no longer "empty enough" to avoid civilian casualties or collateral damage. And it is already very hard for a human to tell friend from foe in a situation like Afghanistan or Iraq.

Also, the technical development cycle of these things usually runs some 5-10 years ahead of what we know about its state. Getting a drone to identify "unambiguous" targets like a certain type of tank, OPFOR vs. friendly, is already easily possible with today's computing power, for example using radar imaging techniques (e.g. SAR in the mm-wavelength range), and so, likely, is attacking them. The people working at the arms labs are probably well past the basic technical problems.


Unfortunately this depends on having faith in the minders of the minders.... and on this kind of technology not becoming widespread, for if the other side is using it (whether developed or stolen), what limits using it in return? Weapons development and usage have always been a case of development and counter-development, use and counter-use. The kind of use alluded to (a change in the rules of engagement) will occur. It is only a matter of time.









Forum Software © ASPPlayground.NET Advanced Edition 2.4.5 ANSI