
IJCNN 2006, IEEE

Knowledge Representation and Possible Worlds for Neural Networks

The semantics of a neural network can be analyzed mathematically as a distributed system of knowledge and as systems of possible worlds expressed within that knowledge. Learning in a neural network can be analyzed as an attempt to acquire a representation of knowledge. We express the knowledge system, the systems of possible worlds, and neural architectures at different stages of learning as categories. Diagrammatic constructs express learning in terms of pre-existing knowledge representations, and functors express structure-preserving associations between the categories. This analysis provides a mathematical vehicle for understanding connectionist systems and yields design principles for advancing the state of the art.
Michael J. Healy, Thomas P. Caudell
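The abstract's central device is the functor: a map between categories that preserves sources, targets, identities, and composition. The following minimal sketch is not from the paper; the names (`Category`, `is_functor`) and the toy "knowledge" and "architecture" categories are illustrative assumptions, showing only what the structure-preservation conditions amount to for finite categories.

```python
# Illustrative sketch (not the paper's construction): a finite category as
# objects plus named arrows with a composition table, and a check that a
# candidate functor preserves sources/targets and composition.

class Category:
    def __init__(self, objects, arrows, compose):
        self.objects = set(objects)
        self.arrows = dict(arrows)    # arrow name -> (source, target)
        self.compose = dict(compose)  # (g, f) -> name of the composite g∘f

# Toy "knowledge" category: two concepts A, B and one non-identity arrow f.
src = Category(
    objects={"A", "B"},
    arrows={"id_A": ("A", "A"), "id_B": ("B", "B"), "f": ("A", "B")},
    compose={("id_B", "f"): "f", ("f", "id_A"): "f",
             ("id_A", "id_A"): "id_A", ("id_B", "id_B"): "id_B"},
)

# Toy "neural architecture" category with the same shape: two layers and
# a connection w between them.
tgt = Category(
    objects={"layer1", "layer2"},
    arrows={"id_1": ("layer1", "layer1"), "id_2": ("layer2", "layer2"),
            "w": ("layer1", "layer2")},
    compose={("id_2", "w"): "w", ("w", "id_1"): "w",
             ("id_1", "id_1"): "id_1", ("id_2", "id_2"): "id_2"},
)

# Candidate functor F: an object map plus an arrow map.
F_obj = {"A": "layer1", "B": "layer2"}
F_arr = {"id_A": "id_1", "id_B": "id_2", "f": "w"}

def is_functor(C, D, F_obj, F_arr):
    """Check that (F_obj, F_arr) preserves arrow endpoints and composition."""
    for a, (s, t) in C.arrows.items():
        if D.arrows[F_arr[a]] != (F_obj[s], F_obj[t]):
            return False
    for (g, f), gf in C.compose.items():
        if D.compose[(F_arr[g], F_arr[f])] != F_arr[gf]:
            return False
    return True

print(is_functor(src, tgt, F_obj, F_arr))  # True
```

In the paper's terms, such a structure-preserving map is what lets a pre-existing knowledge representation be carried over onto a neural architecture; breaking either preservation condition (endpoints or composition) disqualifies the map as a functor.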