Deep inference on a large-scale knowledge base (KB) needs a large number of formulas, but it is almost impossible to create them all manually. Data-driven methods have been proposed to mine formulas from KBs automatically, where random sampling and approximate calculation are common techniques for handling big data. Among these, random walk is the most suitable for knowledge graph data. However, a purely random walk without goals mines useful formulas inefficiently and even introduces considerable noise that may mislead inference. Although several heuristic rules have been proposed to direct random walks, they do not work well due to the diversity of formulas. To this end, we propose a novel goal-directed algorithm that directs random walks by the specific inference target at each step. The algorithm is more inclined to visit structures that are beneficial for inferring the target, so it increases the efficiency of random walks and avoids noise at the same time. Our experiments show that our approach is highly efficient and performs best on the KB link prediction task.
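
As a rough, hedged sketch of the idea only (not the concrete algorithm proposed here), a goal-directed walk can be contrasted with a purely random one as follows: at each step, candidate edges are weighted by how useful the next entity appears to be for reaching the inference target. The graph representation `kg`, the heuristic `score`, and the walk length `max_len` below are illustrative assumptions, not part of the original description.

```python
import random

def goal_directed_walk(kg, start, target, score, max_len=4):
    """One goal-directed random walk over a toy knowledge graph.

    kg:     dict mapping an entity to a list of (relation, tail) edges
    score:  hypothetical heuristic estimating how useful visiting an
            entity is for inferring `target` (the actual bias used by
            the proposed algorithm is not specified in this sketch)

    Instead of choosing the next edge uniformly at random, each candidate
    edge is weighted by score(tail, target); the sequence of relations
    traversed is returned as a candidate formula (path pattern).
    """
    path, node = [], start
    for _ in range(max_len):
        edges = kg.get(node, [])
        if not edges:
            break
        weights = [score(tail, target) for _, tail in edges]
        if sum(weights) <= 0:
            # No informative signal: fall back to a uniform random step.
            rel, node = random.choice(edges)
        else:
            rel, node = random.choices(edges, weights=weights, k=1)[0]
        path.append(rel)
        if node == target:
            # Reached the inference target; stop and return the path.
            break
    return path
```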