Objectives: Many deep learning-based predictive models evaluate the waveforms of electrocardiograms (ECGs). Because deep learning-based models are data-driven, they require large, labeled biosignal datasets, which most individual researchers find difficult to collect. We suggest that transfer learning can solve this problem and increase the effectiveness of biosignal analysis.

Methods: We applied the weights of a pretrained model to another model performing a different task (i.e., transfer learning). We pretrained a convolutional autoencoder (CAE) with 2,648,100 unlabeled 8.2-second-long samples of ECG lead II data and then used the CAE to classify 12 ECG rhythms in a dataset of 10,646 10-second-long 12-lead ECGs with 11 rhythm labels. We split the dataset into training and test sets in an 8:2 ratio. To confirm that transfer learning was effective, we evaluated the performance of the classifier after the proposed transfer learning, random initialization, and two-dimensional transfer learning as the size of the training dataset was progressively reduced. All experiments were repeated 10 times using a bootstrapping method. CAE performance was evaluated by the mean squared error (MSE), and classifier performance by the F1-score.

Results: The MSE of the CAE was 626.583. With 100%, 50%, and 25% of the training dataset, the mean bootstrapped F1-scores of the classifier were 0.857, 0.843, and 0.835, respectively, with the proposed transfer learning, versus 0.843, 0.831, and 0.543 with random initialization.

Conclusions: Transfer learning effectively overcomes the data shortages that can compromise deep learning-based ECG analysis.
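
The weight-transfer step described in the Methods can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch example, assuming a small 1-D convolutional encoder, 2,048-sample input segments, and an 11-class output head chosen for illustration only; it is not the architecture or training pipeline used in the study. It shows the two stages: pretraining a CAE on unlabeled single-lead ECG segments with an MSE reconstruction loss, then copying the encoder weights into a rhythm classifier before fine-tuning on the labeled dataset.

import torch
import torch.nn as nn


class Encoder(nn.Module):
    """1-D convolutional encoder shared by the autoencoder and the classifier."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ConvAutoencoder(nn.Module):
    """CAE pretrained on unlabeled single-lead ECG segments (reconstruction task)."""

    def __init__(self) -> None:
        super().__init__()
        self.encoder = Encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2, padding=3,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2, padding=3,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2, padding=3,
                               output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class RhythmClassifier(nn.Module):
    """Classifier that reuses the pretrained encoder and adds a new output head."""

    def __init__(self, n_classes: int = 11) -> None:
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))


# Pretraining stage (unlabeled data, MSE reconstruction loss); one toy batch shown.
cae = ConvAutoencoder()
opt = torch.optim.Adam(cae.parameters())
unlabeled = torch.randn(8, 1, 2048)  # stand-in for 8.2-second lead II segments
loss = nn.functional.mse_loss(cae(unlabeled), unlabeled)
loss.backward()
opt.step()

# Transfer stage: copy the pretrained encoder weights into the classifier before
# fine-tuning on the labeled rhythm dataset; the head stays randomly initialized.
clf = RhythmClassifier(n_classes=11)
clf.encoder.load_state_dict(cae.encoder.state_dict())

Whether the transferred encoder is then frozen or fine-tuned together with the new head is a separate design choice; this sketch simply leaves all parameters trainable.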