Abstract: Linear Complementary Dual (LCD) codes have recently garnered substantial interest in coding theory research due to their diverse applications and favorable properties. This study centers on the construction of binary and ternary LCD codes via a curiosity-driven Reinforcement Learning (RL) approach, in which custom-designed reward functions guide the agent in building new LCD codes. Particular emphasis is placed on optimizing state-action mappings to design ternary LCD codes. Experimental results show that the RL-constructed LCD codes exhibit superior error-correction properties compared with conventionally constructed LCD codes and with codes derived from generic RL methods. The paper introduces novel binary and ternary LCD codes with improved minimum distance bounds. Finally, it demonstrates how Random Network Distillation helps agents explore beyond local optima, improving overall model performance without compromising convergence.