Sleep plays a crucial role in restoring physical and mental health, making objective monitoring of sleep patterns essential. Polysomnography is the standard method for classifying sleep stages, but it is costly and requires specialist involvement. Consequently, many automatic sleep stage classification algorithms based on practical, easy-to-measure wearable devices have recently been developed. However, further research is needed on how to combine multi-channel biosignals measured by wearable devices at different sampling rates. In this study, we propose a sleep stage classification algorithm for multi-channel signals comprising an electrocardiogram, an accelerometer, and a gyroscope. Specifically, convolutional neural networks were used to compare classification performance according to the sampling rate to which the channels were aligned. In 4-class sleep stage classification (wake, light sleep, deep sleep, and rapid eye movement), an accuracy of 80.23%, an F1-score of 0.8097, and a kappa value of 0.6711 were achieved when the sampling rate was aligned to that of the electrocardiogram. In contrast, when the sampling rate was aligned to that of the accelerometer and gyroscope, the accuracy was 64.33%, the F1-score was 0.6389, and the kappa value was 0.4708. These results offer insight into developing sleep stage classification models using multi-channel signals from wearable devices, and the approach could also be applied to related tasks such as sleep apnea detection.
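The core preprocessing question raised above, how to align channels recorded at different sampling rates before feeding them to a CNN, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rates (250 Hz for ECG, 50 Hz for the accelerometer/gyroscope), the 30 s window, and the use of `scipy.signal.resample_poly` are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import resample_poly

# Hypothetical sampling rates; the abstract does not state the actual ones.
FS_ECG = 250   # Hz, electrocardiogram (assumed)
FS_IMU = 50    # Hz, accelerometer + gyroscope (assumed)

# 30 s of synthetic data: 1 ECG channel, 6 IMU channels (3-axis accel + gyro)
ecg = np.random.randn(FS_ECG * 30)        # shape (7500,)
imu = np.random.randn(FS_IMU * 30, 6)     # shape (1500, 6)

# Strategy 1: upsample the IMU channels to the ECG rate.
imu_up = resample_poly(imu, up=FS_ECG, down=FS_IMU, axis=0)   # (7500, 6)

# Strategy 2: downsample the ECG to the IMU rate.
ecg_down = resample_poly(ecg, up=FS_IMU, down=FS_ECG)         # (1500,)

# Either way, all channels then share one time axis and can be stacked
# into a single multi-channel array for a CNN input window.
x_high = np.column_stack([ecg, imu_up])    # (7500, 7) at the ECG rate
x_low = np.column_stack([ecg_down, imu])   # (1500, 7) at the IMU rate
print(x_high.shape, x_low.shape)
```

The two stacked arrays correspond to the two conditions compared in the study: aligning all channels to the ECG sampling rate versus aligning them to the accelerometer/gyroscope rate.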