Deep reinforcement learning (DRL) methods, which train a policy to produce the sequence of actions required to complete a task, have achieved remarkable success across diverse applications. Making the trained policy gradually approach the theoretically globally optimal policy remains a long-standing open problem in the DRL community, and existing research has addressed several related challenges, such as the exploration-exploitation trade-off, to improve the quality of the learned policy. However, most DRL methods rely solely on the current state for decision-making, leading to short-sightedness and suboptimal learning. To overcome this, we propose a neighboring state-aware policy that enhances existing DRL methods by incorporating a neighboring state sequence into the decision-making process. Specifically, during training our approach stores multiple past and future states, concatenates them with the current state to form the neighboring state sequence, and feeds this sequence to the actor to generate an action. This broader perspective, provided by neighboring states, resembles human decision-making and helps the agent better understand how states evolve, leading to improved policy learning. We present two concrete implementations of our approach and demonstrate through extensive experiments that it effectively enhances ten representative DRL methods on nine tasks under three metrics, including return.
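To make the idea concrete, the sketch below illustrates one possible way to build a neighboring state sequence and feed it to an actor, assuming a symmetric window of k past and k future states drawn from a stored trajectory, zero-padding at episode boundaries, and a simple MLP actor head; these choices, and the names `NeighborAwareActor` and `neighboring_sequence`, are illustrative assumptions rather than the paper's implementation.

```python
# Hypothetical sketch of a neighboring state-aware actor (not the authors' code).
# Window size k, zero-padding at trajectory boundaries, and the MLP head are assumptions.
import torch
import torch.nn as nn


class NeighborAwareActor(nn.Module):
    """Actor conditioned on k past states, the current state, and k future states."""

    def __init__(self, state_dim: int, action_dim: int, k: int = 2, hidden: int = 256):
        super().__init__()
        self.k = k
        in_dim = state_dim * (2 * k + 1)  # concatenated neighboring state sequence
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state_seq: torch.Tensor) -> torch.Tensor:
        # state_seq: (batch, 2k + 1, state_dim); flatten the window and map to an action
        return self.net(state_seq.flatten(start_dim=1))


def neighboring_sequence(trajectory: torch.Tensor, t: int, k: int) -> torch.Tensor:
    """Gather states t-k .. t+k from a stored trajectory, zero-padding past the ends."""
    T, state_dim = trajectory.shape
    seq = torch.zeros(2 * k + 1, state_dim)
    for offset in range(-k, k + 1):
        idx = t + offset
        if 0 <= idx < T:
            seq[offset + k] = trajectory[idx]
    return seq


if __name__ == "__main__":
    k, state_dim, action_dim = 2, 8, 3
    trajectory = torch.randn(100, state_dim)   # e.g., states from a stored episode
    seq = neighboring_sequence(trajectory, t=50, k=k)
    actor = NeighborAwareActor(state_dim, action_dim, k=k)
    action = actor(seq.unsqueeze(0))           # add a batch dimension
    print(action.shape)                        # torch.Size([1, 3])
```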