Segmentation of retinal vessels is the basis of quantitative analysis in ophthalmology and supports screening and diagnosis. Manual annotation of thin and tortuous vessels is error-prone, and the impact of positional label noise on segmentation quality remains understudied. This study proposes a lightweight U-Net-based framework with few-shot annotation correction and robust learning under noisy labels. Experiments on the DRIVE dataset show that performance degrades as label displacement increases. Cross-dataset validation achieves 96.51% accuracy, 98.01% AUC, and 83.55% F1 on CHASE_DB1. On STARE, the method attains 97.54% accuracy, 98.45% AUC, and 83.11% F1, competitive with state-of-the-art methods. These results quantify the sensitivity of segmentation to positional annotation errors and demonstrate robustness to noisy labels.
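The sensitivity to positional label noise can be illustrated with a minimal sketch (an assumption for illustration, not the paper's code): shifting a ground-truth vessel mask by a growing pixel displacement and scoring it against the clean mask with Dice/F1 shows how overlap metrics degrade as annotations drift off thin structures.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient (equivalent to F1) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def displace(mask, d):
    """Simulate positional label noise: shift the mask d pixels to the right."""
    noisy = np.zeros_like(mask)
    if d == 0:
        return mask.copy()
    noisy[:, d:] = mask[:, :-d]
    return noisy

# Toy "vessel": a 3-pixel-wide vertical stripe in a 32x32 image.
mask = np.zeros((32, 32), dtype=bool)
mask[:, 14:17] = True

# Dice drops quickly once the displacement approaches the vessel width.
for d in [0, 1, 2, 4]:
    print(f"displacement={d}  dice={dice(mask, displace(mask, d)):.3f}")
```

For a structure only a few pixels wide, a displacement comparable to its width drives the overlap to zero, which is why thin, tortuous vessels are especially vulnerable to annotation drift.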