Chatting with my digital clone in 4 languages, using python, audio2face and Unreal Engine

Don't like interviews? Send your digital clone to do them :)

GitHub repository

This is the output of my AI avatar chat project, which uses Google speech recognition, OpenAI chat completion, and ElevenLabs voice cloning. Lip sync is generated with NVIDIA Audio2Face, and the avatar is a MetaHuman rendered in Unreal Engine.

The code repository covers only the AI-chat interface (yellow blocks), not the lip-sync generation (green) or the avatar creation and rendering (red), which are handled in third-party software.
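For readers who want a feel for what the "yellow blocks" involve, below is a minimal sketch of one turn of the chat loop: transcribe a spoken question with Google's recognizer, get an answer from an OpenAI chat completion, and synthesize it with an ElevenLabs cloned voice so the resulting audio can be fed to Audio2Face. This is an illustration under assumptions, not the repository's actual code: the model name, voice ID, and file names are placeholders, and the ElevenLabs SDK call shape varies between versions.

```python
# Minimal sketch of the AI-chat interface (yellow blocks):
# microphone -> Google speech recognition -> OpenAI chat completion -> ElevenLabs TTS.
# API keys are read from the environment; model names and voice ID below are placeholders.

import speech_recognition as sr
from openai import OpenAI
from elevenlabs.client import ElevenLabs

openai_client = OpenAI()        # expects OPENAI_API_KEY in the environment
eleven_client = ElevenLabs()    # expects ELEVENLABS_API_KEY in the environment
recognizer = sr.Recognizer()

history = [{"role": "system", "content": "You are my digital clone. Answer briefly."}]

def listen() -> str:
    """Capture one utterance from the microphone and transcribe it with Google's recognizer."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    # A language hint can be passed, e.g. language="de-DE", to cover the other languages.
    return recognizer.recognize_google(audio)

def reply(user_text: str) -> str:
    """Send the running conversation to OpenAI chat completion and return the answer."""
    history.append({"role": "user", "content": user_text})
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",    # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

def speak(text: str) -> bytes:
    """Synthesize the answer with a cloned ElevenLabs voice; the audio later drives Audio2Face."""
    audio_chunks = eleven_client.text_to_speech.convert(
        voice_id="YOUR_CLONED_VOICE_ID",    # placeholder voice ID
        text=text,
        model_id="eleven_multilingual_v2",  # multilingual model for the 4 languages
    )
    return b"".join(audio_chunks)

if __name__ == "__main__":
    question = listen()
    answer = reply(question)
    with open("answer.mp3", "wb") as f:
        f.write(speak(answer))
```

The saved audio file is then passed to Audio2Face and Unreal Engine (the green and red blocks), which sit outside this repository.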