Established in July 2022, the KAIST Graduate School of Metaverse is an emerging research hub that combines the humanities, enterprise, and STEM-based problem-solving to create and share new experiences in fields such as games, education, cultural arts, healthcare, and manufacturing. The school places its greatest emphasis on the theme of “convergence”: the crossing of academic disciplines, the intersection of real and virtual time and space, and the connection and collaboration between researchers from different fields. The KAIST Graduate School of Metaverse is now home to diverse labs that aim to stretch the boundaries of metaverse research.

Led by Professor Cha Seung Hyun from the Graduate School of Culture Technology, the Future Space Lab investigates the optimal construction of urban environments and space planning. Its researchers have studied the use of the metaverse for architectural design and for analysing human life patterns. Their past studies include the design of virtual office spaces, a national library, virtual reality (VR) gallery exhibitions, and interactive 3D hotel room experiences, as well as the incorporation of tactile and olfactory senses into virtual architectural design. Their most recent project centred on augmented reality (AR) spatial technology that enables the effective design of long-term human residences in an underground city.

Professor Sung-Hee Lee’s Lifelike Avatar and Agent Laboratory at the Graduate School of Culture Technology has set its sights on reproducing human characteristics in artificial humans such as avatars and virtual characters. Its research ranges from developing improved motion sensors, VR controllers, and remote avatar-based human interaction to translating the user’s appearance and garments onto an avatar. The lab’s most recent work, “DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced Three-Point Trackers”, proposes a novel deep learning framework that enables real-time full-body presence in avatars and accommodates diverse body sizes and challenging movements, such as lunges and hula-hooping. The Lifelike Avatar and Agent Laboratory continues to work toward long-lasting contributions to computer graphics, games, animation, and fashion.

The Visual AI Group, led by Professor Minhyuk Sung at the School of Computing, advances AI technology that processes and generates visual data, extending from 2D images to 4D animations. The laboratory has developed Posterior Distillation Sampling (PDS), a novel diffusion model-based editing method that functions across diverse parameter spaces while preserving the original details of the source content. The method has been shown to edit a moving human figure into superheroes, such as Spider-Man, or notable celebrities, such as Leonardo DiCaprio, and to superimpose delicate motions and objects onto 3D scenes. The researchers have also proposed SyncDiffusion, a new module that synchronises multiple diffusion processes to create coherent panoramas and montages from multiple input conditions. For instance, SyncDiffusion can generate illustrations of a beach in the style of La La Land or a natural landscape in anime style. PDS and SyncDiffusion were presented at the renowned Conference on Computer Vision and Pattern Recognition (CVPR) 2024 and the Conference on Neural Information Processing Systems (NeurIPS) 2023, respectively.

The Applied and Innovative Research for Immersive Sound LAB (AIRIS LAB), led by Professor Sungyoung Kim at the KAIST Graduate School of Culture Technology, strives to integrate sound and audio-related technologies into the metaverse for a realistic user experience. The lab has investigated how individuals’ cultural backgrounds affect their sound cognition in a virtual auditory environment, the results of which were featured in the peer-reviewed journal Virtual Reality. On February 26, the lab members hosted the International Workshop on Immersive 3D Sound together with the Smart Sound Systems Lab at the Department of Electrical and Electronic Engineering, which is also a member of the KAIST Graduate School of Metaverse. Given that current virtual reality technologies mostly prioritise visual experience, the AIRIS LAB aims to continue devising immersive sound reproduction methods for the metaverse. Its prospective projects include improving virtual concert hall experiences and delivering auditory experiences to hard-of-hearing individuals. Furthermore, its neurocognitive research on individual differences and the factors that influence auditory perception is ongoing.

The KAIST Graduate School of Metaverse brings together the talents of researchers from the KAIST Graduate School of Culture Technology, the School of Digital Humanities and Computational Social Sciences, the Moon Soul Graduate School of Future Strategy, the School of Computing, the Department of Electrical and Electronic Engineering, and the School of Business and Technology Management. It continues to grow through the active contribution of 25 faculty members and approximately 40 Master’s and PhD students who strive to pioneer this dynamically changing industry with their creativity.
