In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

Driven by the war with Russia, many Ukrainian companies are working on a major leap forward in the weaponization of consumer technology.

The New York Times has an important story: In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots. In short, the existential threat of the overwhelming Russian attack has created a situation where Ukraine is developing a home-grown autonomous weapons industry that repurposes consumer technologies. Not only are all sorts of countries testing AI-powered weapons in Ukraine, but the Ukrainians are weaponizing cheap technologies and, in the process, removing a lot of the guardrails.

The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

There isn’t necessarily any “human in the loop” in the cheap systems they are developing. One wonders how the development of this industry will affect other conflicts. Could we see a proliferation of terrorist drone attacks assembled from plans circulating on the internet?

ChatGPT is Bullshit.

The Hallucination Lie

Ignacio de Gregorio has a nice Medium essay about why ChatGPT is bullshit. The essay is essentially a short and accessible version of an academic article by Hicks, M. T., et al. (2024), ChatGPT is bullshit. They make the case that people make decisions based on their understanding of what LLMs are doing, and that “hallucination” is the wrong word because ChatGPT is not misperceiving the way a human would. Instead, people need to understand that LLMs are designed with no regard for the truth and are therefore bullshitting.

Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. (p. 1)

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor. (p. 2)

At the end, the authors make the case that if we adopt Dennett’s intentional stance, then we would do well to attribute to ChatGPT the intentions of a hard bullshitter, as that would allow us to better diagnose what it is doing. There is also a discussion of the intentions of the developers. You could say that they made available a tool that bullshits without care for the truth.

Are we, as a society, at risk of being led by these LLMs and their constant use to mistake the simulacrum of “truthiness” for true knowledge?