To solve the pose estimation problem of mobile robots in visually degraded environments, a real-time, robust, and low-drift depth vision SLAM (simultaneous localization and mapping) method is proposed that utilizes both dense range flow and sparse geometric features. The proposed method is composed of three optimization layers: a range flow-based visual odometry layer, an ICP (iterative closest point)-based pose optimization layer, and a pose graph-based optimization layer. In the range flow-based visual odometry layer, the range flow constraint equation is used to compute fast 6-DOF (degree of freedom) frame-to-frame pose estimates of the camera. Local drift is then reduced by registering frames against a local map in the ICP-based pose optimization layer. Finally, loop closure constraints are built by extracting and matching sparse geometric features from the depth data, and a pose graph is constructed for global pose optimization in the pose graph-based optimization layer. The performance of the proposed method is evaluated on the TUM datasets and in real-world scenes. Experimental results show that the front-end algorithm outperforms state-of-the-art depth vision methods, and that the back-end algorithm robustly constructs loop closure constraints and reduces the global drift accumulated by front-end pose estimation.
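For context, the range flow constraint commonly used in depth-only visual odometry (as formulated by Spies, Jähne, and Barron; the exact form adopted in this paper may differ) relates the spatial and temporal derivatives of the depth map $Z(x, y, t)$ to the 3-D scene velocity $(U, V, W)$ at each pixel:

\[
Z_x\,U + Z_y\,V + Z_t = W
\]

Under a rigid-motion assumption, the per-pixel velocity $(U, V, W)$ is a linear function of the camera twist $(\boldsymbol{\nu}, \boldsymbol{\omega}) \in \mathbb{R}^6$, so stacking this constraint over all valid depth pixels yields a dense linear least-squares problem whose solution is the 6-DOF frame-to-frame motion estimate.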