Large language models (LLMs) such as ChatGPT have recently demonstrated significant potential in mathematical reasoning, providing a valuable reasoning paradigm consistent with human natural language. However, LLMs currently struggle to bridge perception, language understanding, and reasoning (PLR) capabilities because the underlying information flows among these abilities are incompatible; as a result, their reasoning ability is not fully elicited, and they find it challenging to accomplish complicated reasoning tasks autonomously. To resolve this problem, a novel method called ChatABL is proposed, which integrates LLMs into an abductive learning (ABL) framework and unifies the three abilities effectively in a more user-friendly and understandable manner. First, the proposed method uses the LLM to correct incomplete logical facts and thereby optimize the perception module, by summarizing and reorganizing domain knowledge expressed in natural language. In turn, the perception module provides the logical facts that the LLM requires for reasoning. Finally, these components are integrated into a dynamic closed-loop system through feedback mechanisms and automatic learning strategies, so that they mutually improve one another's performance. As a testbed, the variable-length handwritten equation decipherment (HED) task, an abstract formulation of Mayan calendar decoding, is used; comparative studies demonstrate that ChatABL attains reasoning ability beyond most existing state-of-the-art methods. To the best of the authors' knowledge, the proposed ChatABL is the first attempt to explore a possible and novel avenue toward human-level cognitive ability via natural language interaction by means of ChatGPT.
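To make the closed loop concrete, the following is a minimal sketch of how an LLM-assisted abductive learning cycle of this kind might be organized. All names here (PerceptionModel, llm_abduce, is_consistent, chatabl_loop) are illustrative assumptions, not the paper's actual implementation; the LLM call is stubbed, and the consistency check assumes a simple binary-addition knowledge base for the HED-style task.

```python
"""Hypothetical sketch of a ChatABL-style closed loop (names are illustrative)."""

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    image: object              # raw handwritten symbols (placeholder type)
    pseudo_labels: List[str]   # current symbol-level predictions


class PerceptionModel:
    """Placeholder perception module; a real system would use a trained classifier."""

    def predict(self, image) -> List[str]:
        # Return candidate symbol labels for each character in the image.
        return ["1", "+", "1", "=", "10"]

    def retrain(self, examples: List[Example]) -> None:
        # Fine-tune on the abduced (LLM-corrected) pseudo-labels.
        pass


def is_consistent(labels: List[str]) -> bool:
    """Check the predicted equation against an assumed binary-addition knowledge base."""
    expr = "".join(labels)
    if "=" not in expr:
        return False
    lhs, rhs = expr.split("=", 1)
    try:
        return sum(int(term, 2) for term in lhs.split("+")) == int(rhs, 2)
    except ValueError:
        return False


def llm_abduce(labels: List[str], knowledge: str) -> List[str]:
    """Ask the LLM to revise inconsistent labels given natural-language domain rules.

    In a real system this would call a chat-completion API and parse the reply;
    here it is stubbed and simply returns the input labels.
    """
    prompt = (
        f"Domain knowledge: {knowledge}\n"
        f"Predicted equation: {''.join(labels)}\n"
        "Revise the fewest symbols so the equation becomes logically valid."
    )
    _ = prompt  # stub: no actual LLM call is made in this sketch
    return labels


def chatabl_loop(model: PerceptionModel, data: List[Example],
                 knowledge: str, rounds: int = 3) -> PerceptionModel:
    """Closed loop: perception proposes facts, the LLM abduces corrections,
    and the corrected facts feed back to retrain perception."""
    for _ in range(rounds):
        revised = []
        for ex in data:
            labels = model.predict(ex.image)
            if not is_consistent(labels):
                labels = llm_abduce(labels, knowledge)
            revised.append(Example(ex.image, labels))
        model.retrain(revised)  # feedback: corrected logical facts improve perception
    return model


if __name__ == "__main__":
    kb = "Equations are binary additions of the form a+b=c."
    chatabl_loop(PerceptionModel(), [Example(image=None, pseudo_labels=[])], kb)
```

The design choice illustrated here is that neither module needs access to the other's internals: the perception module and the LLM exchange only natural-language facts and corrected labels, which is what allows the loop to remain user-friendly and interpretable.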