Advances in AI technologies, such as large language models and diffusion-based text-to-image models, have enabled new creation experiences, as AI can now generate parts of creative artifacts. With these models, natural language prompts have become the baseline interaction for AI-based generative creation. While prompts are useful for exploring the space of generative outputs, natural language alone is not enough to support user expressiveness or push the boundaries of creation experiences. This talk offers an alternative perspective: how we interact with physical creative materials can inspire novel AI-powered creative experiences. From that perspective, I introduce a series of interactions I have investigated: 1) story generation through sketching, 2) using prompts like a paint medium in image generation, 3) generating story-world elements in a semantic magnet space, and 4) unfolding stories through toy-play.
John Joon Young Chung is a research scientist at Midjourney. He completed his Ph.D. in Computer Science and Engineering at the University of Michigan, Ann Arbor, and his B.S. in Electrical Engineering at Seoul National University. He interned at Microsoft Research, Naver AI Lab, and Adobe Research during his Ph.D. years, and was a full-time researcher at KIXLAB at KAIST before starting his Ph.D. John’s main research interests are AI-powered creativity support tools for art-making and storytelling; his broader interests span educational technologies, crowdsourcing, and natural language processing. He is a recipient of paper awards from ACM CHI and CSCW.