TY - GEN
T1 - Load-Adjusted Transfer Learning for Limited Video-On-Demand Data
AU - Kangogo, Kimeli
AU - De Frein, Ruairi
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Video-On-Demand (VoD) systems face critical challenges in resource allocation when operating under data-constrained conditions. Traditional Deep Learning (DL) methods for managing VoD networks depend on large datasets for training, which are frequently unattainable due to network disruptions, system failures, or inadequate data collection methods. To address this, we propose a Transfer Learning Load Adjusted (TLLA) algorithm that enhances VoD systems' performance under limited training conditions. TLLA transfers knowledge from pre-trained models by freezing sections of the neural network layers during retraining, reducing the need for large datasets. We freeze 50% (partially frozen) and 100% (fully frozen) of the pre-trained neural layers and evaluate these models against fully trainable neural layers. Findings show that completely freezing neural layers achieves ≈40% of baseline performance, while partial neural layer freezing (50%) achieves ≈60% of baseline performance when measured using Root Mean Squared Error (RMSE) and R² metrics. These results demonstrate the effectiveness of transfer learning in maintaining operability under severe layer freezing and limited VoD training data. This study provides network managers and policymakers with actionable alternatives for monitoring Quality of Delivery (QoD) when input data is inadequate, supporting robust resource allocation.
AB - Video-On-Demand (VoD) systems face critical challenges in resource allocation when operating under data-constrained conditions. Traditional Deep Learning (DL) methods for managing VoD networks depend on large datasets for training, which are frequently unattainable due to network disruptions, system failures, or inadequate data collection methods. To address this, we propose a Transfer Learning Load Adjusted (TLLA) algorithm that enhances VoD systems' performance under limited training conditions. TLLA transfers knowledge from pre-trained models by freezing sections of the neural network layers during retraining, reducing the need for large datasets. We freeze 50% (partially frozen) and 100% (fully frozen) of the pre-trained neural layers and evaluate these models against fully trainable neural layers. Findings show that completely freezing neural layers achieves ≈40% of baseline performance, while partial neural layer freezing (50%) achieves ≈60% of baseline performance when measured using Root Mean Squared Error (RMSE) and R² metrics. These results demonstrate the effectiveness of transfer learning in maintaining operability under severe layer freezing and limited VoD training data. This study provides network managers and policymakers with actionable alternatives for monitoring Quality of Delivery (QoD) when input data is inadequate, supporting robust resource allocation.
KW - Convolutional Neural Networks
KW - Load-Adjusted Learning
KW - RTP
KW - Transfer Learning
KW - Video-On-Demand
UR - https://www.scopus.com/pages/publications/105013679329
U2 - 10.1109/CITS65975.2025.11099226
DO - 10.1109/CITS65975.2025.11099226
M3 - Conference contribution
AN - SCOPUS:105013679329
T3 - Proceedings of the 2025 IEEE International Conference on Computer, Information, and Telecommunication Systems, CITS 2025
BT - Proceedings of the 2025 IEEE International Conference on Computer, Information, and Telecommunication Systems, CITS 2025
A2 - Obaidat, Mohammad S.
A2 - Lorenz, Pascal
A2 - Hsiao, Kuei-Fang
A2 - Nicopolitidis, Petros
A2 - Guo, Yu
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE International Conference on Computer, Information, and Telecommunication Systems, CITS 2025
Y2 - 16 July 2025 through 18 July 2025
ER -