Which Theory of Language for Deep Neural Networks? Speech and Cognition in Humans and Machines
Luca Capone
2021-01-01
Abstract
The paper explores the relationship between technology and semiosis from the perspective of natural language processing, i.e. the automated learning of sign systems by deep neural networks. Two theoretical approaches to the problem of artificial intelligence are compared: the internalist paradigm, which conceives the link between cognition and language as extrinsic, and the externalist paradigm, which understands human cognitive activity as constitutively linguistic. The basic assumptions of internalism are discussed at length. After demonstrating its incompatibility with neural network implementations of verbal thinking, the paper goes on to explore the externalist paradigm and its consistency with neural network language modeling. After a thorough illustration of the Saussurean conception of the mechanism of language systems, and some insights into the functioning of verbal thinking according to Vygotsky, the externalist paradigm is established as the representation of verbal thinking best suited to implementation on deep neural networks. The functioning of deep neural networks for language modeling is then illustrated: first, a basic explanation of the multilayer perceptron is provided; then the Word2Vec model is introduced; and finally the Transformer model, the current state-of-the-art architecture for natural language processing, is presented. The consistency between the externalist representation of language systems and the vector representation employed by the Transformer model proves that only the externalist approach can provide an answer to the problem of modeling and replicating human cognition.
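The abstract's central claim, that the vector representations used by Word2Vec and the Transformer fit an externalist, Saussurean view of meaning as purely differential, can be made concrete with a toy example. The sketch below is not from the paper; the four-dimensional embedding values are hypothetical (real models learn hundreds of dimensions from corpus co-occurrence statistics). It shows how, in such a space, a word's "meaning" is nothing but its position relative to other words, so that relations between words (here, the classic king/queen vs. man/woman analogy) are captured as vector offsets.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings, chosen by hand for illustration.
# Word2Vec- and Transformer-style models learn such vectors from data.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.8, 0.1, 0.6, 0.2]),
    "man":   np.array([0.3, 0.7, 0.0, 0.1]),
    "woman": np.array([0.3, 0.2, 0.5, 0.1]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Meaning as relation: the offset between "king" and "queen" resembles
# the offset between "man" and "woman". No vector means anything in
# isolation; only the differences between vectors carry semantic content.
offset_royal = embeddings["king"] - embeddings["queen"]
offset_plain = embeddings["man"] - embeddings["woman"]
print(cosine(offset_royal, offset_plain))  # close to 1.0 for these toy values
```

On these hand-picked values the two offsets coincide exactly, so the similarity is 1.0; with vectors learned from real corpora the analogy holds only approximately, which is precisely the distributional, relational structure the externalist reading points to.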
File | Type | License | Size | Format
---|---|---|---|---
Which Theory of Language for Deep Neural Networks Speech and Cognition in Humans and Machines.pdf (open access) | Final published version | Creative Commons | 1.24 MB | Adobe PDF