Most RGB-D (RGB-depth) SLAM (simultaneous localization and mapping) methods assume that the environment is static. However, real-world environments often contain dynamic objects, which can degrade SLAM performance. To address this problem, a line-feature-based RGB-D visual odometry method is proposed. It computes a static weight for each line feature to filter out dynamic line features, and uses the remaining line features to estimate the camera pose. The proposed method not only reduces the influence of dynamic objects but also avoids the tracking failures caused by a scarcity of point features. Experiments on a public dataset show that, compared with state-of-the-art methods such as the ORB (oriented FAST and rotated BRIEF) method, the proposed method reduces tracking error by about 30% and improves the accuracy and robustness of visual odometry in dynamic environments.
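The static-weighting idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: it assumes each line feature has a scalar reprojection residual, assigns a weight via a hypothetical Gaussian kernel (the functions `static_weights` and `filter_static_lines`, the `sigma` and `threshold` parameters, and the residual values are all illustrative assumptions), and keeps only lines whose weight exceeds a threshold before pose estimation.

```python
import numpy as np

def static_weights(residuals, sigma=1.0):
    # Hypothetical weighting kernel: line features with large reprojection
    # residuals (likely on dynamic objects) receive weights near zero,
    # while well-predicted (likely static) lines receive weights near one.
    r = np.asarray(residuals, dtype=float)
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def filter_static_lines(lines, residuals, threshold=0.5):
    # Keep only the line features whose static weight exceeds the threshold;
    # the surviving lines would then be passed to pose estimation.
    weights = static_weights(residuals)
    return [line for line, w in zip(lines, weights) if w > threshold]

# Toy example: line_b has a large residual, suggesting it lies on a
# moving object, so it is filtered out.
lines = ["line_a", "line_b", "line_c"]
residuals = [0.1, 3.0, 0.4]
print(filter_static_lines(lines, residuals))  # → ['line_a', 'line_c']
```

In practice a method like this would derive the residuals from line reprojection between consecutive frames and could use the continuous weights directly in a weighted least-squares pose solver rather than hard thresholding.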