This research proposes an alphabetical hand sign language recognition system using Frequency-Modulated Continuous-Wave (FMCW) radar and a Convolutional Neural Network (CNN) to improve communication accessibility for individuals with hearing and speech impairments, with particular relevance to healthcare and assistive technology. Unlike traditional vision-based methods, which are highly dependent on lighting conditions and background clarity, FMCW radar provides robust recognition in low-light or visually obstructed environments while preserving user privacy. The proposed system employs a Texas Instruments IWR6843ISK-ODS radar to capture raw point cloud data, which is then transformed into Doppler-X, Doppler-Y, and Doppler-Z heatmaps representing the spatio-temporal features of hand gestures. These three-channel heatmaps serve as input to a lightweight CNN model optimized for classification accuracy and computational efficiency. A custom dataset of Sistem Isyarat Bahasa Indonesia (SIBI, the Indonesian Sign Language System) alphabetical hand signs was collected under controlled conditions, ensuring that both static hand shapes and dynamic arm movements were represented. The CNN architecture was fine-tuned using regularization and dropout to minimize overfitting, achieving an average classification accuracy of 95% across all 26 alphabetical hand signs. In addition to offline evaluation, the model was deployed in a real-time detection framework integrated with a graphical user interface (GUI), demonstrating reliable end-to-end performance. Overall, this approach highlights radar-based gesture recognition as a practical, privacy-preserving, and scalable solution for future assistive communication technologies.
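To make the described pipeline concrete, the sketch below shows how stacked Doppler-X/Y/Z heatmaps could feed a lightweight CNN with dropout and L2 weight decay as the regularizer. This is an illustrative assumption, not the authors' published implementation: the framework (PyTorch), the 64×64 heatmap resolution, the layer widths, and all hyperparameters are placeholders, since the text above does not specify them.

```python
# Minimal sketch of a lightweight 3-channel-heatmap classifier.
# All architectural details and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class GestureCNN(nn.Module):
    """Lightweight CNN over 3-channel (Doppler-X/Y/Z) heatmaps, 26 SIBI classes."""

    def __init__(self, num_classes: int = 26, dropout: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 channels = Doppler-X/Y/Z
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),                         # dropout against overfitting
            nn.Linear(32 * 16 * 16, num_classes),        # assumes 64x64 input heatmaps
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = GestureCNN()
# weight_decay provides L2 regularization, complementing dropout.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

heatmaps = torch.randn(8, 3, 64, 64)  # batch of stacked Doppler-X/Y/Z heatmaps
logits = model(heatmaps)              # shape: (8, 26), one logit per letter
```

In this sketch the three Doppler heatmaps are treated like the RGB channels of an image, which is what lets a standard 2D CNN consume them directly; the shallow two-block design reflects the "lightweight, computationally efficient" goal stated above.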