VR Storyteller is a project created during the 2016 MIT Media Lab VR Hackathon. The team and I developed an algorithm that reads a story the user types in or dictates to the device, extracts the story's key elements using semantic analysis, and predicts its mood using pre-trained machine learning classifiers. The VR generative algorithm then matches the extracted key elements to 3D objects in a library, places them in the scene based on context, and styles the scene with ambient light and sound that reflect the story's mood.
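The sketch below illustrates how such a text-to-scene pipeline could be wired together. It is not the team's actual code: the asset paths, `OBJECT_LIBRARY`, `MOOD_PRESETS`, and `build_scene` are all hypothetical, and it assumes spaCy's `en_core_web_sm` model and NLTK's VADER sentiment analyzer as stand-ins for the semantic analysis and mood classifiers described above.

```python
# Minimal sketch of a story-to-scene pipeline, under the assumptions above.
import nltk
import spacy
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # VADER lexicon for sentiment
nlp = spacy.load("en_core_web_sm")          # small English pipeline
sia = SentimentIntensityAnalyzer()

# Hypothetical library mapping story nouns to 3D asset files.
OBJECT_LIBRARY = {
    "forest": "assets/forest.obj",
    "castle": "assets/castle.obj",
    "dragon": "assets/dragon.obj",
}

# Hypothetical mood presets for ambient light and sound.
MOOD_PRESETS = {
    "positive": {"light": "warm", "sound": "birdsong.ogg"},
    "negative": {"light": "dim_blue", "sound": "storm.ogg"},
    "neutral": {"light": "soft_white", "sound": "ambient.ogg"},
}

def extract_key_elements(story: str) -> list[str]:
    """Pull candidate scene objects (noun-chunk head lemmas) from the story."""
    doc = nlp(story)
    return [chunk.root.lemma_.lower() for chunk in doc.noun_chunks]

def classify_mood(story: str) -> str:
    """Map VADER's compound sentiment score to a coarse mood label."""
    score = sia.polarity_scores(story)["compound"]
    if score > 0.25:
        return "positive"
    if score < -0.25:
        return "negative"
    return "neutral"

def build_scene(story: str) -> dict:
    """Assemble a scene description: matched assets plus mood styling."""
    elements = extract_key_elements(story)
    assets = [OBJECT_LIBRARY[e] for e in elements if e in OBJECT_LIBRARY]
    return {"assets": assets, "style": MOOD_PRESETS[classify_mood(story)]}

print(build_scene("A dragon circled the old castle above the dark forest."))
```

A scene generator in the VR engine would then load the returned asset list and apply the light and sound preset; the original project performed the matching and placement step against its own 3D object library.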
This project was designed for a variety of applications, for example: education, to encourage children to write and visualize stories; screenwriting, to rapidly render a scene from a script; and film production, to help producers visualize the cost of producing a film.
The project received two awards at the hackathon, Most Refined VR Experience and Best Up-and-Coming Hackers, and drew interest from investors and Lucasfilm.
Awards
- Most Refined VR Experience
- Best Up-and-Coming Hackers
Team
- Pat Pataranutaporn
- Nabanita De
- Adrian Babilinski
- Biswaraj Kar
- Yuta Toga
Links
Hackathon Challenges
The hardest part was integrating and mapping all the components to each other: cloud, machine learning, VR, and web development. This was the first hackathon for most of us; none of us had prior AWS experience, and most of us had no prior VR experience. Learning everything on the fly at the hackathon and implementing the system end-to-end within a day and a half was a tough job, done well, with extreme dedication.