“It lets R2D2 talk to C3P0,” Keven Gambold, Droidish’s mastermind and the CEO of government contractor Unmanned Experts, explained to Forbes, recalling the iconic robot duo from Star Wars.
When researchers or government contractors crack the code, these advanced drone systems will launch together, work out amongst themselves how best to achieve their goals and land in tandem — with human pilots intervening only should something go awry. Spurred on by Ukraine’s extensive use of drones to defend against Russian invasion, and by fears of China’s advancing technological prowess, America’s best-funded agency is spending big across research labs, academia and AI tech companies to ensure the U.S. is at the bleeding edge of next-generation drone warfare.
This is a very convoluted way to say you’re going to make a common API.
Likely because the higher-ups or media-facing members of the project don’t understand what it is or how it works, and had it described to them with an overly simplistic analogy.
I mean, it can be an API whose format is easily spoken by a human and then recognized by a machine. That format would be a language, or even a code, designed for human-machine interaction via speech, just as there are codes designed for error correction across media with different kinds of errors.
So that humans would be able to give voice commands almost in natural language.
Only this wouldn’t be such earth-shattering news; older Internet protocols like SMTP and FTP are already human-readable.
This also wouldn’t cost nearly as much as the title implies.
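For what it’s worth, here’s a toy sketch (Python, with made-up verbs and a made-up grammar, nothing from the article) of what an SMTP-style, human-readable command format could look like:

```python
# Toy sketch of a human-readable command protocol in the spirit of SMTP/FTP.
# The verb set ("FORM", "SURVEY", "RTB") and the grammar are invented here
# purely for illustration.

COMMANDS = {"FORM", "SURVEY", "RTB"}  # hypothetical verbs

def parse_command(line: str) -> dict:
    """Parse one spoken or typed command like 'SURVEY grid-7 altitude 120'."""
    tokens = line.strip().split()
    if not tokens or tokens[0].upper() not in COMMANDS:
        raise ValueError(f"unknown command: {line!r}")
    verb, *args = tokens
    return {"verb": verb.upper(), "args": args}

def reply(code: int, text: str) -> str:
    """Build a reply that is as readable as the request, like SMTP's '250 OK'."""
    return f"{code} {text}"

if __name__ == "__main__":
    print(parse_command("SURVEY grid-7 altitude 120"))
    # {'verb': 'SURVEY', 'args': ['grid-7', 'altitude', '120']}
    print(reply(250, "tasking accepted"))
```

Point being, “a language humans and drones share” can be as mundane as a plain-text request/response grammar.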
They can just start with Logo and add a few keywords (like grenade, death ray, etc.), easy.
Amazing how easy it is to sell the US Gov new toys it doesn’t need.
“…ensure the U.S. is at the bleeding edge of next-generation drone warfare.”
Translation:
Pay through the nose for expensive proprietary software that will eventually be made obsolete by its open-source equivalent.
From a security perspective, would open source be less secure? I’m legit curious about this.
Some software is absolutely more secure for being open source. There’s a reason why popular cryptographic libraries tend to be open, even those used in military applications.
If the security of your software component relies on an attacker not having access to your source, then your component is only secure until someone reverse engineers it and figures out how it works, at which point it is entirely compromised on all systems it’s deployed to.
So you need something else to provide security besides obscuring how the software works. In cryptography, that comes from a large, highly random encryption key. The reason that your online bank transactions are safe from an attacker snooping on your network is because, even having the full source code to the crypto libraries, it would take a computer longer than the age of the universe to guess the encryption key through brute force.
The benefit of open source is that it gets a lot more eyes on the code to find flaws and vulnerabilities - and to verify that the software does what the vendor claims, which is very much not always a given.
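As a rough back-of-the-envelope check on the “age of the universe” claim above (the guess rate below is an assumption, and a generous one for a single attacker):

```python
# Back-of-the-envelope: expected time to brute-force a 128-bit key.
# GUESSES_PER_SECOND is an assumed (and very generous) attacker throughput.

KEY_BITS = 128
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

keyspace = 2 ** KEY_BITS
expected_years = (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"expected brute-force time: {expected_years:.2e} years")
print(f"~{expected_years / AGE_OF_UNIVERSE_YEARS:.2e} times the age of the universe")
```

Even with the full source in hand, the key is what keeps an attacker out.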
A whole lot of war drones are using https://en.wikipedia.org/wiki/ArduPilot
If your software relies on being closed source for security, you have no security. It’s that simple.
Having your thing open source enables people to point out its issues, which enables people to fix those issues. Of course, OSS can still have issues, but they can be discovered more easily.
Oh, great. I can’t see how this would lead to any adverse outcomes.
In an early AI experiment Facebook gave two AIs language so they could talk to each other. The AI quickly learned to communicate in a language the researchers couldn’t understand. Facebook pulled the plug.
Guess the military didn’t get the memo.
> communicate in a language the researchers couldn’t understand
The AI was speaking Dutch?
They invented their own machine language. The AIs did, I mean (and I know you’re jesting).
Might you have a link to an article about that? I’d be interested in learning more, because it sounds a bit like an urban legend of the net.
So it did happen, but Facebook didn’t shut the experiment down; rather, they changed the experiment parameters so the bots would stop using their own language.
The article I read said they shut down the experiment, so it wasn’t 100% accurate. That article was published in late 2018 or early 2019, so it was either recycled news or the experiment ran for a couple of years before the bots made up their own language, since the experiment started in 2017, according to the fact-check article linked above.
Thank you, I appreciate it!
I read it in a reputable newspaper and it was a tiny blurb. I’ll see if I can find something more substantial than my memory for you.
Like robot apocalypse? There’s just no way it can happen!
Meanwhile, no universal healthcare
Didn’t Microsoft shut down chatbots for doing that?
The blurb here makes me think the main article is an “advertorial”.
It consists only of “gonk”
There’s very little in common between using off-the-shelf drone hardware and software to deliver munitions or act as loitering platforms, and this stuff. The Ukraine war comparisons seem silly.