
복합 낙상 시나리오 영상에서의 Transformer 모델 최적화 및 성능 평가
Optimization and Performance Evaluation of a Transformer Model on Videos of Complex Fall Scenarios
Copyright 2025 THE KOREAN ACADEMIC SOCIETY OF BUSINESS ADMINISTRATION
This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
초록
초고령 사회의 도래와 함께 낙상은 노인의 생명과 건강을 위협하는 중대한 사회적 위험 요인으로 부상하고 있다. 본 연구는 복잡한 실제 환경에서 노인의 낙상을 효과적으로 감지하기 위한 실용적 영상 기반 시스템 구축의 기반이 되는 종단간(End-to-End) 낙상 감지 모델을 제안한다. 구체적으로, 본 연구는 기존 스켈레톤 추출 방식의 전처리 의존성과 오류 전파 문제를 극복하고자, 원본 RGB 영상만으로 작동하는 UniFormer 아키텍처를 개선하고 최신 학습 전략을 결합하였다. 특히 다양한 장소, 보조기구 사용, 촬영 각도 등을 포함한 AIHUB 데이터셋을 통해 모델의 일반화 가능성과 실효성을 평가하였으며, 복잡한 환경에서도 기존 모델 대비 높은 정확도(96.5%)와 F1-Score(93.2%)를 기록하였다. 본 연구의 사회적 함의는 다음과 같다. 첫째, 별도의 장비나 센서 없이 기존 CCTV 인프라를 활용할 수 있어 낙상 감지 기술의 보편적 확산 가능성을 제시한다. 둘째, 고비용 설비나 전문 인력 없이도 요양 시설 및 가정에서의 고령자 안전 모니터링 체계를 구축할 수 있어 사회적 돌봄 비용 절감과 돌봄 공백 최소화에 기여할 수 있다. 셋째, 장비 착용이나 인위적 개입 없이도 일상생활 속에서 자연스럽게 낙상을 감지할 수 있도록 설계되어, 고령자가 감시받는 존재가 아닌 자율적 주체로 존중받을 수 있는 환경 조성에 기여한다. 마지막으로 정보기술이 사회적 약자를 위한 안전망 구축에 기여할 수 있는 새로운 방향성을 제시하며, 노인복지정책과 스마트케어 인프라 설계에도 유의미한 방향성을 제공할 수 있다.
Abstract
As the global population ages, falls represent a significant health risk for the elderly. This study aims to propose a high-performance, end-to-end fall detection model designed to serve as a core component for practical, vision-based monitoring systems in real-world environments. We introduce an optimized Transformer-based architecture that detects falls directly from raw RGB video streams, thereby obviating the need for extensive data pre-processing or wearable sensors. The model's generalizability and effectiveness were rigorously evaluated using the AIHUB dataset, which encompasses diverse scenarios, including varied locations and the use of assistive devices. The proposed model achieved an accuracy of 96.5% and an F1-score of 93.2%, demonstrating robust performance even under challenging conditions. The implications of this work are threefold. First, the system can be deployed on existing camera infrastructure, offering a scalable and cost-effective solution for continuous monitoring. Second, by enabling automated monitoring in residential and care facilities, it has the potential to reduce caregiving costs and address service gaps. Third, the non-intrusive nature of the system preserves the privacy and autonomy of individuals. This research contributes significantly to the development of technology-driven safety nets for vulnerable populations and offers practical considerations for senior welfare policies and the design of smart care infrastructure.
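For readers less familiar with the evaluation metrics reported above, the short sketch below shows how accuracy and F1-score are computed from a binary fall/non-fall confusion matrix. The counts are hypothetical and were chosen only to illustrate the arithmetic behind figures of roughly the reported magnitude; they are not the actual counts from this study.

```python
# Accuracy and F1-score for binary fall detection.
# tp/fp/fn/tn are HYPOTHETICAL confusion-matrix counts, used only
# to illustrate the formulas; they are not from this study's results.
tp, fp, fn, tn = 90, 5, 8, 268  # true pos., false pos., false neg., true neg.

accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all clips classified correctly
precision = tp / (tp + fp)                   # of predicted falls, how many were real
recall = tp / (tp + fn)                      # of real falls, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.3f}, f1={f1:.3f}")
```

Because falls are rare relative to normal activity, accuracy alone can look high even when falls are missed; F1-score balances precision and recall and is therefore reported alongside it.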
Keywords:
Elderly Safety, Fall Detection, Transformer Architecture, UniFormer, Video Analysis

키워드:
노인 안전, 낙상 감지, 트랜스포머 아키텍처, 유니포머, 영상 분석

Acknowledgments
This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2023S1A5A8079952).
References
- 박영석, 이종섭, 연지영, 최정일 (2021). “노인장기요양시설에서의 웰니스 IT서비스 특성과 이용의도와의 관계,” 경영학연구, 제50권 1호, pp.143-171.
Park, Y. S., Lee, J. S., Yeon, J. Y., and Choi, J. I. (2021). “The Relationship of Wellness IT Service Characteristics and Intention to Use in Long-Term Care Facilities for the Elderly,” Korean Management Review, 50(1), pp.143-171. [https://doi.org/10.17287/kmr.2021.50.1.143]
- 이한신, 김판수 (2019). “소비자의 기술수용과 저항이 인공지능(AI) 사용의도에 미치는 영향,” 경영학연구, 제48권 5호, pp.1195-1219.
Yi, H. S. and Kim, P. S. (2019). “The Effect of Consumer’s Technology Acceptance and Resistance on Intention to Use of Artificial Intelligence (AI),” Korean Management Review, 48(5), pp.1195-1219. [https://doi.org/10.17287/kmr.2019.48.5.1195]
- 정옥경, 이중원, 박철 (2024). “디지털 헬스케어 고객경험이 서비스 만족과 주관적 웰빙에 미치는 영향: 원격진료 서비스를 중심으로,” 경영학연구, 제53권 3호, pp.729-759.
Jung, O. K., Lee, J. W., and Park, C. (2024). “Effects of Customer Experience of the Digital Healthcare on Service Satisfaction and Subjective Well-Being: Focusing on Telemedicine Services,” Korean Management Review, 53(3), pp.729-759. [https://doi.org/10.17287/kmr.2024.53.3.729]
- 한국지능정보사회진흥원. “낙상사고 위험동작 영상-센서 쌍 데이터,” AI-Hub, https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71641, 2025년 4월 6일 접속.
National Information Society Agency. “Video-Sensor Paired Data for Fall Risk Behaviors,” AI-Hub, https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71641, retrieved April 6, 2025.
- 한국지능정보사회진흥원. “AI 허브,” AI-Hub, https://www.aihub.or.kr/, 2025년 4월 2일 접속.
National Information Society Agency. “AI Hub,” AI-Hub, https://www.aihub.or.kr/, retrieved April 2, 2025.
- Aderinola, T. B., Palmerini, L., D’Ascanio, I., Chiari, L., Klenk, J., Becker, C., Caulfield, B., and Ifrim, G. (2025). “Accurate and efficient real-world fall detection using time series techniques,” in G. Ifrim, T. B. Aderinola, and B. Caulfield (Eds.), Advanced Analytics and Learning on Temporal Data, pp.52-79, Springer Nature Switzerland.
[https://doi.org/10.1007/978-3-031-53317-2_4]
- Alanazi, T., Babutain, K., and Muhammad, G. (2024). “Mitigating Human Fall Injuries: A Novel System Utilizing 3D 4-Stream Convolutional Neural Networks and Image Fusion,” Image and Vision Computing, 148, 105153.
[https://doi.org/10.1016/j.imavis.2024.105153]
- Alam, E., Sufian, A., Dutta, P., and Leo, M. (2022). “Vision-based human fall detection systems using deep learning: A review,” Computers in Biology and Medicine, 146, 105626.
[https://doi.org/10.1016/j.compbiomed.2022.105626]
- Al-qaness, M. A., Dahou, A., Abd Elaziz, M., and Helmi, A. M. (2024). “Human activity recognition and fall detection using convolutional neural network and transformer-based architecture,” Biomedical Signal Processing and Control, 95, 106412.
[https://doi.org/10.1016/j.bspc.2024.106412]
- Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lučić, M., and Schmid, C. (2021). “ViViT: A video vision transformer,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.6836-6846.
[https://doi.org/10.1109/ICCV48922.2021.00676]
- Ashfaq, M., Yun, J., Yu, S., and Loureiro, S. M. C. (2020). “I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents,” Telematics and Informatics, 54, 101473.
[https://doi.org/10.1016/j.tele.2020.101473]
- Assanovich, B. and Kosarava, K. (2025). “Vision-Based Fall Detector for Elderly Based on Sliding Window Approach and Feature Engineering,” Journal of Data Science and Intelligent Systems, 3(1), pp.27-34.
[https://doi.org/10.47852/bonviewJDSIS42024100]
- Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016). “Layer normalization,” arXiv. https://arxiv.org/abs/1607.06450
- Bertasius, G., Wang, H., and Torresani, L. (2021). “Is space-time attention all you need for video understanding?,” in Proceedings of the International Conference on Machine Learning (ICML), Virtual.
- Durga Bhavani, K. and Ferni Ukrit, M. (2024). “Design of inception with deep convolutional neural network based fall detection and classification model,” Multimedia Tools and Applications, 83(8), pp.23799-23817.
[https://doi.org/10.1007/s11042-023-16476-6]
- Blackburn, J., Ousey, K., Stephenson, J., and Lui, S. (2022). “Exploring the impact of experiencing a long lie fall on physical and clinical outcomes in older people requiring an ambulance: A systematic review,” International Emergency Nursing, 62, 101148.
[https://doi.org/10.1016/j.ienj.2022.101148]
- Bui, T., Liu, J., Cao, J., Wei, G., and Zeng, Q. (2024). “Elderly fall detection in complex environment based on improved YOLOv5s and LSTM,” Applied Sciences, 14(19), 9028.
[https://doi.org/10.3390/app14199028]
- Cao, Y., Guo, M., Sun, J., Chen, X., and Qiu, J. (2024). “Fall detection based on LCNN and fusion model of weights using human skeleton and optical flow,” Signal, Image and Video Processing, 18(1), pp.833-841.
[https://doi.org/10.1007/s11760-023-02776-9]
- Carreira, J. and Zisserman, A. (2017). “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.6299-6308, Honolulu, Hawaii.
[https://doi.org/10.1109/CVPR.2017.502]
- Chawan, V. R., Huber, M., Burns, N., and Daniel, K. (2022). “Person identification and Tinetti score prediction using balance parameters: A machine learning approach to determine fall risk,” in Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, pp.203-212.
[https://doi.org/10.1145/3529190.3529223]
- Chatterjee, S., Rana, N. P., Dwivedi, Y. K., and Baabdullah, A. M. (2021). “Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model,” Technological Forecasting and Social Change, 170, 120880.
[https://doi.org/10.1016/j.techfore.2021.120880]
- Chen, X., Liang, C., Huang, D., Real, E., Wang, K., Pham, H., Dong, X., Luong, T., Hsieh, C. J., Lu, Y., and Le, Q. V. (2023). “Symbolic Discovery of Optimization Algorithms,” Advances in Neural Information Processing Systems, 36, pp.49205-49233.
- Chu, X., Tian, Z., Zhang, B., Wang, X., and Shen, C. (2021). “Conditional positional encodings for vision transformers,” arXiv. https://arxiv.org/abs/2102.10882
- Chutimawattanakul, P. and Samanpiboon, P. (2022). “Fall detection for the elderly using YOLOv4 and LSTM,” in Proceedings of the 2022 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp.1-5.
[https://doi.org/10.1109/ECTI-CON54298.2022.9795534]
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2020). “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv. https://arxiv.org/abs/2010.11929
- El-Bendary, N., Tan, Q., Pivot, F. C., and Lam, A. (2013). “Fall Detection and Prevention for the Elderly: A Review of Trends and Challenges,” International Journal on Smart Sensing and Intelligent Systems, 6(3), pp.1230.
[https://doi.org/10.21307/ijssis-2017-588]
- Ergüder, H., Uzun, T., and Baday, M. (2024). “Advancing Fall Detection Utilizing Skeletal Joint Image Representation and Deformable Layers,” Image Analysis and Stereology, 43(1), pp.97-107.
[https://doi.org/10.5566/ias.3087]
- Espinosa, R., Ponce, H., Gutiérrez, S., Martínez-Villaseñor, L., Brieva, J., and Moya-Albor, E. (2019). “A Vision-Based Approach for Fall Detection Using Multiple Cameras and Convolutional Neural Networks: A Case Study Using the UP-Fall Detection Dataset,” Computers in Biology and Medicine, 115, 103520.
[https://doi.org/10.1016/j.compbiomed.2019.103520]
- Feichtenhofer, C. (2020). “X3D: Expanding architectures for efficient video recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.
[https://doi.org/10.1109/CVPR42600.2020.00028]
- Florence, C. S., Bergen, G., Atherly, A., Burns, E., Stevens, J., and Drake, C. (2018). “Medical costs of fatal and nonfatal falls in older adults,” Journal of the American Geriatrics Society, 66(4), pp.693-698.
[https://doi.org/10.1111/jgs.15304]
- Gao, M., Li, J., Zhou, D., Zhi, Y., Zhang, M., and Li, B. (2023). “Fall detection based on OpenPose and MobileNetV2 network,” IET Image Processing, 17(3), pp.722-732.
[https://doi.org/10.1049/ipr2.12667]
- Gutiérrez, J., Rodríguez, V., and Martin, S. (2021). “Comprehensive review of vision-based fall detection systems,” Sensors, 21(3), 947.
[https://doi.org/10.3390/s21030947]
- Hawley-Hague, H., Boulton, E., Hall, A., Pfeiffer, K., and Todd, C. (2014). “Human factors and adherence to home-based exercise programs in older adults at risk of falling: A systematic review,” Physical Therapy, 94(3), pp.319-336.
- Hoang, V. H., Lee, J. W., Piran, M. J., and Park, C. S. (2023). “Advances in skeleton-based fall detection in RGB videos: From handcrafted to deep learning approaches,” IEEE Access, 11, pp.92322-92352.
[https://doi.org/10.1109/ACCESS.2023.3307138]
- Ioffe, S. and Szegedy, C. (2015). “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the International Conference on Machine Learning, Lille, France.
- Inturi, A. R., Manikandan, V. M., and Garrapally, V. (2023). “A novel vision-based fall detection scheme using keypoints of human skeleton with long short-term memory network,” Arabian Journal for Science and Engineering, 48(2), pp.1143-1155.
[https://doi.org/10.1007/s13369-022-06684-x]
- Islam, M. A., Jia, S., and Bruce, N. D. (2020). “How much position information do convolutional neural networks encode?,” arXiv. https://arxiv.org/abs/2001.08248
- Kaur, N., Rani, S., and Kaur, S. (2024). “Real-time video surveillance based human fall detection system using hybrid haar cascade classifier,” Multimedia Tools and Applications, 83(28), pp.71599-71617.
[https://doi.org/10.1007/s11042-024-18305-w]
- Keskes, O. and Noumeir, R. (2021). “Vision-based fall detection using ST-GCN,” IEEE Access, 9, pp.28224-28236.
[https://doi.org/10.1109/ACCESS.2021.3058219]
- Kim, S., Kim, S., Woo, S., Oh, J., Son, Y., Jacob, L., Soysal, P., Park, J., Chen, L-K., and Yon, D. K. (2025). “Temporal trends and patterns in mortality from falls across 59 high-income and upper-middle-income countries, 1990–2021, with projections up to 2040: a global time-series analysis and modelling study,” The Lancet Healthy Longevity, 6(1), 100672.
[https://doi.org/10.1016/j.lanhl.2024.100672]
- Kingma, D. P. and Ba, J. L. (2015). “Adam: A method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA.
- Knowles, B. and Hanson, V. L. (2018). “The wisdom of older technology (non)users,” Communications of the ACM, 61(3), pp.72-77.
[https://doi.org/10.1145/3179995]
- Kwolek, B. and Kepski, M. (2014). “Human fall detection on embedded platform using depth maps and wireless accelerometer,” Computer Methods and Programs in Biomedicine, 117(3), pp.489-501.
[https://doi.org/10.1016/j.cmpb.2014.09.005]
- Li, K., Wang, Y., Gao, P., Song, G., Liu, Y., Li, H., and Qiao, Y. (2022). “UniFormer: Unified transformer for efficient spatiotemporal representation learning,” arXiv. https://arxiv.org/abs/2201.04676
- Li, X., Wang, Y., Zhou, Z., and Qiao, Y. (2020). “SmallBigNet: Integrating core and contextual views for video classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[https://doi.org/10.1109/CVPR42600.2020.00117]
- Lin, T. Y., Goyal, P., Girshick, R., He, K., and Dollár, P. (2017). “Focal loss for dense object detection,” in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.
[https://doi.org/10.1109/ICCV.2017.324]
- Liu, W., Liu, X., Hu, Y., Shi, J., Chen, X., Zhao, J., Wang, S., and Hu, Q. (2022). “Fall detection for shipboard seafarers based on optimized BlazePose and LSTM,” Sensors, 22(14), 5449.
[https://doi.org/10.3390/s22145449]
- Loshchilov, I. and Hutter, F. (2017). “Fixing weight decay regularization in Adam,” arXiv. https://arxiv.org/abs/1711.05101
- Luo, B. (2023). “Human fall detection for smart home caring using Yolo networks,” International Journal of Advanced Computer Science and Applications, 14(4), pp.53-58.
[https://doi.org/10.14569/IJACSA.2023.0140409]
- Martínez-Villaseñor, L., Ponce, H., Brieva, J., Moya-Albor, E., Núñez-Martínez, J., and Peñafort-Asturiano, C. (2019). “UP-fall detection dataset: A multimodal approach,” Sensors, 19(9), 1988.
[https://doi.org/10.3390/s19091988]
- McCall, S., Kolawole, S. S., Naz, A., Gong, L., Ahmed, S. W., Prasad, P. S., and Ardakani, S. P. (2024). “Computer Vision Based Transfer Learning-Aided Transformer Model for Fall Detection and Prediction,” IEEE Access, 12, pp.28798-28809.
[https://doi.org/10.1109/ACCESS.2024.3368065]
- Mobsite, S., Alaoui, N., Boulmalf, M., and Ghogho, M. (2023). “Semantic segmentation-based system for fall detection and post-fall posture classification,” Engineering Applications of Artificial Intelligence, 117, 105616.
[https://doi.org/10.1016/j.engappai.2022.105616]
- Mudiyanselage, S. P. K., Yao, C. T., Maithreepala, S. D., and Lee, B. O. (2024). “Emerging Digital Technologies Used for Fall Detection in Older Adults in Aged Care: A Scoping Review,” Journal of the American Medical Directors Association, 26(1), 105330.
[https://doi.org/10.1016/j.jamda.2024.105330]
- Mujirishvili, T., Maidhof, C., Florez-Revuelta, F., Ziefle, M., Richart-Martinez, M., and Cabrero-García, J. (2023). “Acceptance and Privacy Perceptions Toward Video-based Active and Assisted Living Technologies: Scoping Review,” Journal of Medical Internet Research, 25, e45297.
[https://doi.org/10.2196/45297]
- Nevi, G., Pizzichini, L., Bastone, A., and Dezi, L. (2025). “Adoption of AI by micro and small health enterprises: effects of entrepreneurial orientation on the TOE model,” European Journal of Innovation Management, forthcoming.
[https://doi.org/10.1108/EJIM-07-2024-0770]
- Núñez-Marcos, A. and Arganda-Carreras, I. (2024). “Transformer-based fall detection in videos,” Engineering Applications of Artificial Intelligence, 132, 107937.
[https://doi.org/10.1016/j.engappai.2024.107937]
- Ramirez, H., Velastin, S. A., Meza, I., Fabregas, E., Makris, D., and Farias, G. (2021). “Fall detection and activity recognition using human skeleton features,” IEEE Access, 9, pp.33532-33542.
[https://doi.org/10.1109/ACCESS.2021.3061626]
- Ramirez, H., Velastin, S. A., Cuellar, S., Fabregas, E., and Farias, G. (2023). “BERT for activity recognition using sequences of skeleton features and data augmentation with GAN,” Sensors, 23(3), 1400.
[https://doi.org/10.3390/s23031400]
- Ren, L. and Peng, Y. (2019). “Research of fall detection and fall prevention technologies: A systematic review,” IEEE Access, 7, pp.77702-77722.
[https://doi.org/10.1109/ACCESS.2019.2922708]
- Richardson, S., Lawrence, K., Schoenthaler, A., and Mann, D. (2022). “A framework for digital health equity,” NPJ Digital Medicine, 5(1), 119.
[https://doi.org/10.1038/s41746-022-00663-0]
- Salimi, M., Machado, J. J., and Tavares, J. M. R. (2022). “Using deep neural networks for human fall detection based on pose estimation,” Sensors, 22(12), 4544.
[https://doi.org/10.3390/s22124544]
- Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. C. (2018). “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, USA.
[https://doi.org/10.1109/CVPR.2018.00474]
- Shazeer, N. (2020). “GLU variants improve Transformer,” arXiv. https://arxiv.org/abs/2002.05202
- Smith, L. N. (2018). “A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay,” arXiv. https://arxiv.org/abs/1803.09820
- Su, C., Wei, J., Lin, D., Kong, L., and Guan, Y. L. (2024). “A novel model for fall detection and action recognition combined lightweight 3D-CNN and convolutional LSTM networks,” Pattern Analysis and Applications, 27(1), Article 3.
[https://doi.org/10.1007/s10044-023-01181-z]
- Suarez, J. J. P., Orillaza, N., and Naval, P. (2022). “AFAR: A real-time vision-based activity monitoring and fall detection framework using 1D convolutional neural networks,” in Proceedings of the 2022 14th International Conference on Machine Learning and Computing, pp.555-559.
[https://doi.org/10.1145/3529836.3529916]
- Sykes, E. R. (2025). “Next-generation fall detection: harnessing human pose estimation and transformer technology,” Health Systems, 14(2), pp.85-103.
[https://doi.org/10.1080/20476965.2024.2395574]
- Tran, D., Bourdev, L., Fergus, R., Torresani, L., and Paluri, M. (2015). “Learning spatiotemporal features with 3D convolutional networks,” in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp.4489-4497, Santiago, Chile.
[https://doi.org/10.1109/ICCV.2015.510]
- Tran, D., Wang, H., Torresani, L., and Feiszli, M. (2019). “Video classification with channel-separated convolutional networks,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, IEEE, pp.5552-5561.
[https://doi.org/10.1109/ICCV.2019.00565]
- Ursul, I. (2024). “Elderly Fall Detection Using Unsupervised Transformer Model,” Electronics and Information Technologies, 26.
[https://doi.org/10.30970/eli.26.7]
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). “Attention is all you need,” in Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, Curran Associates, Inc., pp.5998-6008.
- Wamba-Taguimdje, S. L., Wamba, S. F., Kamdjoug, J. R. K., and Wanko, C. E. T. (2020). “Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects,” Business Process Management Journal, 26(7), pp.1893-1924.
[https://doi.org/10.1108/BPMJ-10-2019-0411]
- Wang, J., Lan, C., Liu, C., Ouyang, Y., Qin, T., Lu, W., Hou, W., Chen, Y., and Yu, P. S. (2023). “Generalizing to Unseen Domains: A Survey on Domain Generalization,” IEEE Transactions on Knowledge and Data Engineering, 35(8), pp.8052-8072.
- Wang, H., Xu, S., Chen, Y., and Su, C. (2025). “LFD-YOLO: a lightweight fall detection network with enhanced feature extraction and fusion,” Scientific Reports, 15(1), 5069.
[https://doi.org/10.1038/s41598-025-89214-7]
- Wang, X., Ellul, J., and Azzopardi, G. (2020). “Elderly fall detection systems: A literature survey,” Frontiers in Robotics and AI, 7, 71.
[https://doi.org/10.3389/frobt.2020.00071]
- Wang, X., Girshick, R., Gupta, A., and He, K. (2018). “Non-local neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, USA, pp.7794-7803.
[https://doi.org/10.1109/CVPR.2018.00813]
- World Health Organization. “Falls,” World Health Organization, https://www.who.int/news-room/fact-sheets/detail/falls, retrieved July 2025.
- Wu, Y. and He, K. (2018). “Group normalization,” in Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, pp.3-19.
[https://doi.org/10.1007/978-3-030-01261-8_1]
- Xu, D., Wang, Y., Zhu, S., Zhao, M., and Wang, K. (2024). “Relationship between fear of falling and quality of life in nursing home residents: The role of activity restriction,” Geriatric Nursing, 57, pp.45-50.
[https://doi.org/10.1016/j.gerinurse.2024.03.006]
- Xun, J., Wang, X., Wang, X., Fan, X., Yang, P., and Zhang, Z. (2025). “An efficient algorithm for pedestrian fall detection in various image degradation scenarios based on YOLOv8n,” Scientific Reports, 15, 9036.
[https://doi.org/10.1038/s41598-025-93667-1]
- Yadav, S. K., Luthra, A., Tiwari, K., Pandey, H. M., and Akbar, S. A. (2022). “ARFDNet: An efficient activity recognition & fall detection system using latent feature pooling,” Knowledge-Based Systems, 239, 107948.
[https://doi.org/10.1016/j.knosys.2021.107948]
- Yu, M., Gong, L., and Kollias, S. (2017). “Computer vision based fall detection by a convolutional neural network,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, Association for Computing Machinery, pp.416-420.
[https://doi.org/10.1145/3136755.3136802]
- Yu, X., Wang, C., Wu, W., and Xiong, S. (2025). “A Real-time Skeleton-based Fall Detection Algorithm based on Temporal Convolutional Networks and Transformer Encoder,” Pervasive and Mobile Computing, 102016.
[https://doi.org/10.1016/j.pmcj.2025.102016]
- Zahan, S., Hassan, G. M., and Mian, A. (2022). “SDFA: Structure-aware discriminative feature aggregation for efficient human fall detection in video,” IEEE Transactions on Industrial Informatics, 19(8), pp.8713-8721.
[https://doi.org/10.1109/TII.2022.3221208]
- Zhang, M., Lucas, J., Ba, J., and Hinton, G. E. (2019). “Lookahead optimizer: k steps forward, 1 step back,” in Advances in Neural Information Processing Systems 32, Vancouver, BC, Canada, Curran Associates, Inc., pp.9188-9198.
∙ Junseok Kim (김준석) is a master's student in Management Information Systems at the School of Business Administration, Kyungpook National University. He received his bachelor's degree from the School of Computer Science and Engineering, College of IT Engineering, Kyungpook National University. His research interests include artificial intelligence and information security.
∙ Saerom Lee (이새롬) received her bachelor's degree from Pusan National University in 2010 and her Ph.D. in Management Information Systems from Seoul National University in 2016. She has been an associate professor at the School of Business Administration, Kyungpook National University since 2018. Her research interests include open collaboration and online sexual harassment.
∙ Jonghwa Park (박종화) is an assistant professor in the School of Business Administration, Kyungpook National University. He received his bachelor's degree from the School of Technology Management and his Ph.D. from the School of Management Science at Ulsan National Institute of Science and Technology (UNIST). After completing his doctorate, he served in the Department of Commerce and Information Education at Kongju National University. His research interests include the social impact of artificial intelligence and platform business.