There was incremental progress in this line of research this year. We just published a text which should serve as a reference paper on the subject for some time:
Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets
Dataflow matrix machines generalize neural nets by replacing streams of numbers with streams of vectors (or other kinds of linear streams admitting a notion of linear combination of several streams) and adding a few further changes on top of that: arbitrary input and output arities for activation functions, countable-sized networks with a finite, dynamically changeable active part capable of unbounded growth, and a very expressive self-referential mechanism.
While recurrent neural networks are Turing-complete, they form an esoteric programming platform, not conducive to practical general-purpose programming. Dataflow matrix machines are better suited as a general-purpose programming platform, although it remains to be seen whether this platform can be made fully competitive with the more traditional programming platforms currently in use. At the same time, dataflow matrix machines retain the key property of recurrent neural networks: programs are expressed via matrices of real numbers, and continuous changes to those matrices produce arbitrarily small variations in the programs associated with them.
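To make the "programs are matrices" point concrete, here is a minimal sketch of one network cycle. All names here are invented for this illustration (they are not from the paper), and for brevity the streams carry plain numbers rather than the more general vectors discussed below; the cycle alternates a linear step governed by the matrix with an activation step governed by the neurons.

```python
# A minimal sketch of one dataflow-matrix-machine-style cycle.
# Illustrative only; names are invented, streams are numbers for brevity.

def linear_step(matrix, outputs):
    """Each neuron input is a linear combination of the current
    neuron outputs, with coefficients taken from the matrix."""
    inputs = {}
    for (dst, src), w in matrix.items():
        inputs[dst] = inputs.get(dst, 0.0) + w * outputs.get(src, 0.0)
    return inputs

def activation_step(activations, inputs):
    """Each neuron applies its activation function to its input."""
    return {name: f(inputs.get(name, 0.0)) for name, f in activations.items()}

# Two neurons: an identity accumulator and a constant source.
activations = {"acc": lambda x: x, "one": lambda x: 1.0}

# The program is the matrix: 'acc' receives its own output
# plus half the output of 'one'.
matrix = {("acc", "acc"): 1.0, ("acc", "one"): 0.5}

outputs = {"acc": 0.0, "one": 1.0}
for _ in range(3):
    outputs = activation_step(activations, linear_step(matrix, outputs))
# 'acc' accumulates 0.5 per cycle: 0.5, 1.0, 1.5
```

Note that nudging the matrix entry 0.5 by a small amount changes the accumulated values by a correspondingly small amount, which is the continuity property discussed above.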
Spaces of vector-like elements are of particular importance in this context. In particular, we focus on the vector space V of finite linear combinations of strings, which can also be understood as the vector space of finite prefix trees with numerical leaves, the vector space of "mixed rank tensors", or the vector space of recurrent maps.
This space, and a family of spaces of vector-like elements derived from it, are sufficiently expressive to cover all cases of interest we are currently aware of, and allow a compact and streamlined version of dataflow matrix machines based on a single space of vector-like elements and variadic neurons. We call elements of these spaces V-values. Their role in our context is somewhat similar to the role of S-expressions in Lisp.
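The equivalence between finite linear combinations of strings and finite prefix trees with numerical leaves can be sketched as follows. This is an illustrative encoding with invented conventions (in particular, using the empty-string key "" to hold a coefficient at a node), not the paper's reference representation.

```python
# Illustrative sketch: a finite linear combination of strings,
# e.g. 3.5*"ab" - 2*"ac" + 1*"", stored two equivalent ways.
# Conventions here (the "" leaf key) are invented for this example.

# 1. As a flat map from strings to coefficients:
flat = {"ab": 3.5, "ac": -2.0, "": 1.0}

# 2. As a finite prefix tree with numerical leaves: each character
#    descends one level; a node's own coefficient sits under "".
def to_prefix_tree(flat):
    tree = {}
    for s, coeff in flat.items():
        node = tree
        for ch in s:
            node = node.setdefault(ch, {})
        node[""] = node.get("", 0.0) + coeff
    return tree

tree = to_prefix_tree(flat)
# tree == {"a": {"b": {"": 3.5}, "c": {"": -2.0}}, "": 1.0}
```

Strings sharing a prefix (here "ab" and "ac") share a path in the tree, which is what makes the nested-dictionary view natural.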
V-values are based on nested dictionaries
Lisp introduced nested lists (S-expressions) in 1958. Computers were quite weak back then, and hash-tables were quite new (1953).
It might be the case that when computational resources are sufficient and one has to base a formalism on a single kind of nested data structure, that structure should be nested dictionaries, which are more flexible than lists. I think that if Lisp were to emerge in the mid-1960s, it would probably be structured around nested dictionaries rather than nested lists.
The simplest way to build a vector space of nested dictionaries is to require all atoms (leaves) to be numbers. We call nested dictionaries with numerical atoms V-values.
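The vector space structure on such nested dictionaries can be sketched with two recursive operations, addition and scaling. This is an illustrative sketch (it assumes the two arguments are type-compatible at every shared key), not the paper's reference implementation.

```python
# A sketch of the vector-space structure on V-values: nested
# dictionaries whose leaves (atoms) are numbers. Illustrative only;
# assumes shared keys hold values of matching shape.

def v_add(x, y):
    """Add two V-values, recursing through dictionaries and
    summing at the numerical leaves."""
    if isinstance(x, dict) and isinstance(y, dict):
        out = {}
        for k in set(x) | set(y):
            if k in x and k in y:
                out[k] = v_add(x[k], y[k])
            else:
                out[k] = x.get(k, y.get(k))  # key present on one side only
        return out
    return x + y  # both are numerical leaves

def v_scale(c, x):
    """Multiply every numerical leaf of a V-value by the scalar c."""
    if isinstance(x, dict):
        return {k: v_scale(c, v) for k, v in x.items()}
    return c * x

a = {"u": 1.0, "v": {"w": 2.0}}
b = {"u": 0.5, "v": {"z": 3.0}}
combo = v_add(a, v_scale(2.0, b))
# combo == {"u": 2.0, "v": {"w": 2.0, "z": 6.0}}
```

With these two operations, any finite linear combination of V-values is again a V-value, which is what lets a single space of vector-like elements serve the whole formalism.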
A more general construction of V-values, involving nested dictionaries with more complicated atoms, is considered in Section 5.3 of the paper above.
Some of the links remaining on the wish list for next year
Academic page for this line of thought
A quite different thing (unrelated to the title of this post) which happened this year
Other than that, this was generally a crazy year for everyone, but even with all the weird political turbulence and instability going on, the really interesting and radical things were happening in neural networks and artificial intelligence, which kept unfolding at a rapidly accelerating pace. I can include a selected set of items as comments, but it is becoming more and more difficult to keep track of everything important happening in this field. I am not sure any single person can keep track of all that matters in this field anymore...
Crosspost to https://anhinga-anhinga.dreamwidth.org/83104.html