Michel Erler is an interaction designer and creative developer, currently working in Framestore's R&D department in Los Angeles. He has exhibited at Ars Electronica AI and the NeurIPS Machine Learning for Design and Creativity Workshop, among others, and will publish a piece on games, simulations and AI in MIT's Journal of Design and Science in March 2019.
Michel holds a BA (Hons) in Interaction Design from the London College of Communication, University of the Arts London, and an MA in Fiction and Entertainment from SCI-Arc, the Southern California Institute of Architecture.
Design: Interaction Design, Design Futures, and XR.
Software: Adobe CS, Unity Engine, Unreal Engine, Maya.
By nature: a strong conceptual thinker and communicator, quick to adapt to new tools and workflows, and hungry for problems.
Currently getting into: Unity's Machine Learning Agents Toolkit, decentralized games and dapps, and mobile development.
Type: Design research about assistants and agents in the automotive context. Design fiction and UX prototypes for Zensei, a project in collaboration with the MIT Media Lab.
Team and Client: Takram for Takram, MIT Media Lab and clients under NDA.
I was involved in design research and prototyping sessions on AI representations in the automotive context, as well as design fictions on real-time translation technology.
As an intern designer at Red Badger London, I worked on the UX and UI design of several web and mobile projects, including a complete overhaul of Fortnum & Mason's website and online store. For the Haller charity, I was part of a team that undertook design research in Kenya; the result was an app for Kenyan farmers, optimized to work on any phone in rural areas.
Type: Virtual Reality experience of the Smithsonian American Art Museum, revised for publication on the Oculus and Steam stores.
Team and Client: Framestore for Intel.
Responsibilities: Improving UX and interactions; scripting state machines, interactions and UI elements in C#; and final integration in Unity with SteamVR and the Oculus SDK.
Type: Personal project.
Responsibilities: Conceptual development, modelling and asset preparation in 3ds Max, and assembly and Blueprint scripting (animation blend trees, NPC AI, UI widgets, etc.) in Unreal Engine for an interactive game environment seen through machine vision.
Ways of Seeing is a game environment designed to engender empathy for, and understanding of, how machines construct the world. By clicking on different subject positions, you can switch between different cameras, different algorithms and, ultimately, different ways of seeing.
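The switching mechanic could be sketched as follows (an illustrative sketch only, with hypothetical camera and algorithm names; the actual piece was built with Unreal Engine Blueprints):

```javascript
// Hypothetical sketch: each subject position pairs a camera (a viewpoint)
// with a machine-vision algorithm (a way of interpreting the scene).
const viewModes = [
  { camera: 'cctv', algorithm: 'pedestrian-detection' },
  { camera: 'drone', algorithm: 'object-tracking' },
  { camera: 'npc', algorithm: 'navmesh-vision' },
];

let current = 0;

// Called when the player clicks on a subject position:
// advance to the next camera/algorithm pairing, wrapping around.
function switchView() {
  current = (current + 1) % viewModes.length;
  return viewModes[current];
}

console.log(switchView().camera); // drone
```

Each click hands the player a different lens on the same world, which is the core of the piece.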
Type: IoT design fiction, smart furniture, NUI.
Team and Client: Personal project, revised at Arduino's Casa Jasmina IoT Residency.
Responsibilities: Conceptual development, programming and physical implementation making use of an Arduino, a Raspberry Pi, Node.js, HTML and CSS.
Spimeio is a design fiction in which every object is ID-tagged, trackable through time and space, and linked to a database holding a vast range of information about its history, market value, sentimental attachments and more. On the Spimeio platform, users can view and organise all their belongings in a digital inventory, with the option to directly store, lend or sell objects.
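The inventory model could be sketched in plain Node.js (a minimal, hypothetical sketch with made-up names; the original prototype combined an Arduino, a Raspberry Pi and Node.js):

```javascript
// Hypothetical sketch of the Spimeio inventory: every object carries an
// ID tag and a record of its metadata, status and history.
class SpimeInventory {
  constructor() {
    this.objects = new Map(); // id -> object record
  }

  // Register a newly tagged object with its metadata.
  add(id, record) {
    this.objects.set(id, { ...record, status: 'owned', history: [] });
  }

  // Mark an object as stored, lent or listed for sale.
  setStatus(id, status) {
    const obj = this.objects.get(id);
    if (!obj) throw new Error(`Unknown object: ${id}`);
    obj.history.push({ status, at: Date.now() });
    obj.status = status;
  }

  // Overview of all belongings, as shown in the digital inventory.
  list() {
    return [...this.objects.entries()].map(([id, o]) => ({ id, ...o }));
  }
}

// Usage example
const inv = new SpimeInventory();
inv.add('chair-01', { name: 'Eames chair', marketValue: 450 });
inv.setStatus('chair-01', 'lent');
console.log(inv.list()[0].status); // lent
```

The per-object history array stands in for the "trackable through time and space" idea: every status change is logged rather than overwritten.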