Transformers obviously did not come from the military. I can’t think of a single significant advancement in recent AI that can be attributed to the military.
I don’t like Alex Karp; he comes off as a sleazy con man, and he often vastly oversells what his company does. Right now he’s saying some LLM his company recently deployed (which is almost certainly inferior to GPT-4) should direct battlefield operations. Can you imagine even GPT-4 directing battlefield operations? Unless he’s solved the hallucination problem, there is no way he can, in good faith, make that recommendation.
His interviews often have little substance as well.
I can’t think of a single significant advancement in recent AI that can be attributed to the military
It’s the nature of classified projects that you usually can’t attribute advances created in them.

You can attribute parts of space-program developments to the military. Same for nuclear power.
Right now he’s saying some LLM his company recently deployed (which is almost certainly inferior to GPT-4) should direct battlefield operations.
The software he sells uses an LLM, but the LLM is only one component of it, and the software appears to let the user choose which LLM to use.

I think a better description would be that he sells AI software for making battlefield targeting decisions, software that has been used in the Ukraine war, and that he recently added LLM support as a feature.
Unless he’s solved the hallucination problem, there is no way he can, in good faith, make that recommendation.
The hallucination problem mostly arises when users want to get knowledge out of the LLM itself, not when they use the LLM to analyze other data sources.
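To illustrate the distinction (a minimal sketch in Python; the `llm` helper and the report text are hypothetical stand-ins, not any particular vendor’s API):

```python
# Hypothetical helper standing in for any chat-completion API call;
# substitute a real client here.
def llm(prompt: str) -> str:
    return "<model response>"

# Knowledge-extraction use: the answer comes from the model's weights,
# so a hallucinated detail is hard to catch.
answer = llm("Which units are currently stationed near the border?")

# Analysis use: the answer must be grounded in data supplied in the
# prompt, so it can be checked against that source.
report = "0600: two trucks observed moving north past checkpoint A."
grounded = llm(
    "Using only the report below, list the vehicle movements it "
    "mentions, citing the relevant lines.\n\n" + report
)
```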
I don’t think that’s a good assessment. The US Army wanted AI to analyze satellite images and other intelligence data. That was Project Maven:
Among its objectives, the project aims to develop and integrate “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DoD collects every day in support of counterinsurgency and counterterrorism operations,” according to the Pentagon.
When Google employees revolted, Palantir stepped up to build those computer-vision algorithms for the US military. Image recognition tasks aren’t LLM-type AI, but they are AI.
That capability gets used in Ukraine to analyze enemy military movements to help decide when to strike.
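To make the task concrete, here is a minimal sketch of that kind of pipeline using off-the-shelf open-source tools: sample frames from full-motion video and run a pretrained object detector over them. The video filename is hypothetical, and this is only the general shape of such a system, not anything Palantir or the DoD actually runs:

```python
# Minimal sketch: flag video frames that contain vehicles, using a
# pretrained Faster R-CNN from torchvision and OpenCV for decoding.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
VEHICLES = {3, 4, 6, 8}  # COCO category ids: car, motorcycle, bus, truck

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical file
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second
        # OpenCV decodes to BGR uint8; the model wants RGB float in [0, 1].
        rgb = torch.from_numpy(frame[:, :, ::-1].copy())
        tensor = rgb.permute(2, 0, 1).float() / 255
        with torch.no_grad():
            det = model([tensor])[0]
        hits = [s.item() for l, s in zip(det["labels"], det["scores"])
                if l.item() in VEHICLES and s.item() > 0.8]
        if hits:
            print(f"frame {frame_idx}: {len(hits)} vehicle(s) detected")
    frame_idx += 1
cap.release()
```

The hard part at DoD scale is not the detector but the volume: the same loop has to run over far more video than human analysts could ever watch, which is exactly the bottleneck the Pentagon quote describes.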
I’m not so sure about this. Do any LLMs even have the context window to analyze large amounts of data?
You don’t have to interact with LLMs in a pattern where there’s one human text query and one machine-generated answer.

The way AutoGPT works, an LLM can query a lot of text as it searches through existing data. The technology that Palantir develops seems more like a commercialized AutoGPT than a model like GPT-4. Both AutoGPT and Palantir’s AIP allow you to select the language model that you want to use.
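As a rough sketch of why the context window isn’t the binding constraint: the orchestration loop feeds the model one retrievable slice of the data at a time and carries running notes between steps. The function names and the keyword-overlap retrieval below are illustrative assumptions, not AutoGPT’s actual implementation:

```python
# AutoGPT-style loop: the LLM never sees the whole corpus at once.
# `complete_fn` is any prompt -> text function, which is what makes
# the backend model swappable.
from typing import Callable

def analyze(question: str, documents: list[str],
            complete_fn: Callable[[str], str],
            chunk_size: int = 2000, top_k: int = 3) -> str:
    # 1. Split the corpus into chunks small enough for any model.
    chunks = [doc[i:i + chunk_size]
              for doc in documents
              for i in range(0, len(doc), chunk_size)]

    # 2. Crude retrieval: rank chunks by keyword overlap with the
    #    question (a real system would use embeddings here).
    words = set(question.lower().split())
    chunks.sort(key=lambda c: -len(words & set(c.lower().split())))

    # 3. Read one chunk at a time, accumulating notes, so the total
    #    data analyzed can far exceed any single context window.
    notes = ""
    for chunk in chunks[:top_k]:
        notes = complete_fn(
            f"Question: {question}\nNotes so far: {notes}\n"
            f"New evidence:\n{chunk}\n"
            "Update the notes with anything relevant."
        )
    return complete_fn(
        f"Question: {question}\nNotes: {notes}\nGive a final answer."
    )
```

The `complete_fn` parameter plays the same role as the model selection that both AutoGPT and AIP expose: the loop is independent of which LLM sits behind it.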
A good overview of how that works in a business case is at: