Published in International Journal of Advanced Research in Computer Science Engineering and Information Technology
ISSN: 2321-3337 Impact Factor: 1.521 Volume: 6 Issue: 3 Year: 31 March, 2026 Pages: 2063-2067
Interview preparation often faces challenges such as limited real-time interactivity, generic feedback, and high subscription costs, which reduce accessibility and effectiveness. The intelligent interview preparation system is implemented as a web-based application using Python for backend processing and SQLite3 for database management. The system employs Natural Language Processing (NLP) to analyze candidate responses and emotion recognition based on the InceptionV3 model to evaluate behavioral aspects, providing automated, personalized feedback. It improves accessibility and effectiveness by offering personalized, automated, and institution-controlled interview training with minimal reliance on paid third-party platforms. It also includes features for administrators to efficiently manage question banks, review responses, and monitor candidate performance. In the future, the project will support mobile application usage, enable multilingual interviews, incorporate advanced AI-based evaluation techniques, and integrate with recruitment platforms to further enhance accessibility, effectiveness, and adaptability for candidates.
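The automated response-analysis step described above can be illustrated with a minimal sketch. The token-overlap (Jaccard) metric and all function names below are illustrative assumptions, not the published system's actual NLP pipeline:

```python
# Minimal sketch of automated answer scoring via token overlap.
# The Jaccard-similarity metric and every name here are illustrative
# assumptions; the paper does not disclose its actual NLP method.
import re


def tokenize(text: str) -> set:
    """Lowercase a response and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))


def score_response(candidate: str, model_answer: str) -> float:
    """Jaccard similarity between candidate and reference answers (0..1)."""
    a, b = tokenize(candidate), tokenize(model_answer)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


feedback = score_response(
    "Polymorphism lets one interface handle many types.",
    "Polymorphism allows a single interface to represent many types.",
)
```

A score near 1.0 would indicate close agreement with the reference answer; a production system would replace this with a proper semantic model.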
Keywords: Web Application, NLP, InceptionV3 Model,
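The administrator-side question-bank management on SQLite3 might look like the following sketch. The table layout and helper names are assumptions for illustration only; the paper does not specify the actual schema:

```python
# Sketch of an administrator question bank backed by SQLite3.
# Schema and function names are hypothetical, chosen for illustration.
import sqlite3


def init_db(conn: sqlite3.Connection) -> None:
    """Create the questions table if it does not already exist."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS questions (
               id INTEGER PRIMARY KEY,
               topic TEXT NOT NULL,
               prompt TEXT NOT NULL
           )"""
    )


def add_question(conn: sqlite3.Connection, topic: str, prompt: str) -> None:
    """Insert one interview question under a topic."""
    conn.execute(
        "INSERT INTO questions (topic, prompt) VALUES (?, ?)", (topic, prompt)
    )


def questions_for_topic(conn: sqlite3.Connection, topic: str) -> list:
    """Return all question prompts stored for a topic."""
    cur = conn.execute(
        "SELECT prompt FROM questions WHERE topic = ?", (topic,)
    )
    return [row[0] for row in cur.fetchall()]


conn = sqlite3.connect(":memory:")  # in-memory DB keeps the sketch self-contained
init_db(conn)
add_question(conn, "oop", "Explain inheritance vs composition.")
add_question(conn, "db", "What is a primary key?")
```

In the deployed web application the connection would point at a file-backed database rather than `:memory:`, and the same queries would serve the admin dashboard.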