Ubiquitous computing has established a vision of computation in which computers are so deeply integrated into our lives that they become both invisible and ubiquitous. For computers to remain out of sight and out of mind, they will need a deeper understanding of human life. LifeNet [1] is a computational model of human life that attempts to anticipate and predict what humans do in the world from a first-person point of view. LifeNet draws on a general knowledge store [2] built from assertions about the world contributed by the web community at large. In this work, we extend this general knowledge with sensor data gathered in vivo. By adding these sensor-network data to LifeNet, we enable a bidirectional learning process: bottom-up segregation of sensor data and top-down conceptual constraint propagation, using sensor measurements to correct the metric assumptions currently in the LifeNet conceptual model. In addition to having LifeNet learn...