In this paper we consider the problem of navigating an autonomous robot using vision as the primary sensor for localizing the robot, building a map of the environment, and navigating through via-points to goal locations. A visual servoing scheme steers the wheeled vehicle between images; goal images need not correspond to images physically taken from the desired vehicle posture, since servoing to reconstructed virtual images is also possible. To support this, a topological image map is built from images grabbed by on-board cameras, together with a global feature-based metric map maintained by extended Kalman filter (EKF) techniques. The method also enables a team of multiple vehicles to merge their information and to coordinate navigation using each other's images. Realistic constraints on the communication bandwidth between agents and on the available memory storage are taken into account by maintaining informative, memory-bounded maps. Simulations and preliminary experimental results on a laboratory setup are reported.
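To make the EKF-based metric mapping mentioned above concrete, the following is a minimal sketch of one predict/update cycle of an extended Kalman filter localizing a wheeled robot against a known landmark. This is an illustrative textbook formulation, not the paper's actual filter: the unicycle motion model, the range-bearing measurement, and the noise matrices `Q` and `R` are all assumptions chosen for the example.

```python
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R, dt=1.0):
    """One EKF predict/update cycle for a unicycle robot observing a
    known landmark with a range-bearing sensor (illustrative sketch).
    x = [px, py, theta], u = [v, omega], z = [range, bearing]."""
    # --- Predict: propagate the pose through the unicycle motion model ---
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q

    # --- Update: range-bearing measurement of the known landmark ---
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    # Jacobian of the measurement model with respect to the state
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

In a map-building setting the state vector would additionally carry the landmark coordinates, but the predict/update structure is the same; the landmark observation shrinks the pose covariance, which is what makes the feature-based metric map informative for localization.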