Imagine an alternate history in which AI, in the sense of our current LLMs, had come first. In that world, things might have looked much more like they did in the movie 2001: A Space Odyssey: computers with a user interface so powerful that you don’t really need anything else (apart, perhaps, from a set of fancy-looking dashboards).
But this is not what we got. First we got something that looked quite a bit like an idealized “command box” (textual rather than vocal), with which you could give instructions to the computer, almost as if you were talking to it. But not quite. These instructions had to be expressed in a “language” the computer could understand, and that language was of course far from natural human language. Very close to that command box was the notion of a written program that you could hand to the machine, to be executed as a kind of chain of commands in long form, with more elaborate logic and syntax.
Later came the graphical interface, and along with it video games, which completely transformed the way we communicate with computers. The language-based nature of our interaction with them became secondary, and what gained prominence was the notion of the computer as a machine, almost in a physical sense, with virtual buttons, sliders and similar metaphors.
But during that time, a multitude of programs kept being written by programmers, because ultimately there is no other way to specify a well-defined set of complex logical interactions with a computer. If a computer is doing something, at some point in the process a program must be involved.
And ultimately these programs found themselves in the unfathomably vast training sets of modern LLMs, which is why they “know” how to program. This has of course opened the possibility of a new kind of interaction with the computer, where you simply describe the behavior you want and the LLM generates a program which you can then run. It’s not such a stretch to imagine improved versions of our current AI which would dispense with the part where they give you a program (for you to run or study) and would do everything themselves, thereby becoming an omniscient “black box”: a button-less, cognitive-only AI, like HAL in 2001 for instance.
But if you think about it, it does not really make sense to imagine that an AI of that type could have come first, because an early AI would not have known how to generate programs had it not been trained on decades of human programming artefacts, produced at a time when no AI assistance was available.