A streaming digital human based on the ER-NeRF model, achieving synchronized audio and video dialogue. It can basically reach commercial-grade quality.
[Watch the video](/assets/demo.mp4)
## 1. Installation
Tested on Ubuntu 20.04, Python 3.10, PyTorch 1.12, and CUDA 11.3.
### 1.1 Install dependencies
```bash
conda create -n nerfstream python=3.10
conda activate nerfstream
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install tensorflow-gpu==2.8.0
```
For setting up the CUDA environment on Linux, you can refer to this article: https://zhuanlan.zhihu.com/p/674972886
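After installation, a quick sanity check (generic, not specific to this project) confirms that the PyTorch build matches the CUDA toolkit:

```python
# Verify that the installed PyTorch build can see the GPU and the CUDA version matches.
import torch

print(torch.__version__)          # expected: 1.12.1
print(torch.version.cuda)         # expected: 11.3
print(torch.cuda.is_available())  # should print True on a working setup
```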
### 1.2 Install the rtmpstream library
Follow the instructions at https://github.com/lipku/python_rtmpstream
## 2. Run
### 2.1 Run the RTMP server (srs)
```
docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5
```
### 2.2 Start the digital human
```bash
python app.py
```
If you cannot access huggingface, run the following before starting:
```
export HF_ENDPOINT=https://hf-mirror.com
```
Once it is running, open rtmp://serverip/live/livestream with VLC.
### 2.3 Have the digital human read text entered on a web page
Install and start nginx:
```
apt install nginx
nginx
```
Copy echo.html and mpegts-1.7.3.min.js to /var/www/html.
Open http://serverip/echo.html in a browser, type any text into the text box, and submit it. The digital human will read that text aloud.
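If you want to trigger the broadcast from a script instead of the browser form, a request along the following lines could work. This is only a hedged sketch: the host/port, the /human route, and the JSON field name are assumptions made for illustration; the authoritative interface is whatever echo.html posts to, so check echo.html and app.py.

```python
# Hypothetical example of submitting text to the digital human server
# programmatically (echo.html does the same thing from a web form).
import requests

SERVER = "http://serverip:8010"  # assumed host/port of app.py; adjust to your setup

resp = requests.post(f"{SERVER}/human", json={"text": "你好,这是一段测试播报"})
print(resp.status_code, resp.text)
```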
## 3. More Usage
### 3.1 Use an LLM for digital human conversation
The dialogue flow currently follows the approach of the digital human conversation system [LinlyTalker](https://github.com/Kedreamix/Linly-Talker). Supported LLMs are ChatGPT, Qwen, and GeminiPro. Fill in your own api_key in app.py.
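For orientation only, a ChatGPT-style turn could look roughly like the sketch below; the openai client usage is standard, but the function name and how the reply is handed to the TTS/streaming pipeline are assumptions, not the project's actual code in app.py.

```python
# Hedged sketch of one LLM turn using the openai package (>= 1.0);
# OPENAI_API_KEY is read from the environment. In the project, the reply
# would be fed into the TTS / digital human pipeline rather than printed.
from openai import OpenAI

client = OpenAI()

def chat_response(user_text: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a digital human assistant."},
            {"role": "user", "content": user_text},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(chat_response("你好"))
```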
Install and start nginx, then copy chat.html and mpegts-1.7.3.min.js to /var/www/html.
Open http://serverip/chat.html in a browser.
### 3.2 Use a local TTS service with voice cloning support
Run the xtts service; see https://github.com/coqui-ai/xtts-streaming-server
```
docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 9000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
Then run the following, where ref.wav is the audio file of the voice to clone:
```
python app.py --tts xtts --ref_file data/ref.wav
```
### 3.3 Use hubert audio features
If the model was trained with hubert-extracted audio features, start the digital human with the following command:
```
python app.py --asr_model facebook/hubert-large-ls960-ft
```
### 3.4 Set a background image
```
python app.py --bg_img bg.jpg
```
### 3.5 Full-body video compositing
#### 3.5.1 Crop the video used for training
```
ffmpeg -i fullbody.mp4 -vf crop="400:400:100:5" train.mp4
```
Train the model with train.mp4.
#### 3.5.2 Extract full-body frames
```
ffmpeg -i fullbody.mp4 -vf fps=25 -qmin 1 -q:v 1 -start_number 0 data/fullbody/img/%d.jpg
```
#### 3.5.3 Start the digital human
```
python app.py --fullbody --fullbody_img data/fullbody/img --fullbody_offset_x 100 --fullbody_offset_y 5 --fullbody_width 580 --fullbody_height 1080 --W 400 --H 400
```
- --fullbody_width, --fullbody_height: width and height of the full-body video
- --W, --H: width and height of the training video (the sketch below shows how these values fit together)
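To make the offsets concrete, here is a minimal compositing sketch (Pillow, not the project's actual render loop): the rendered W x H head frame is pasted back onto the full-body frame at (fullbody_offset_x, fullbody_offset_y), i.e. the same position used by the ffmpeg crop above. The rendered_head.png file name is a placeholder.

```python
# Hedged illustration of the full-body compositing geometry.
from PIL import Image

fullbody = Image.open("data/fullbody/img/0.jpg")            # 580x1080 full-body frame
head = Image.open("rendered_head.png").resize((400, 400))   # placeholder for the rendered W x H output

# Paste the rendered head region back at the crop offsets (--fullbody_offset_x/_y).
fullbody.paste(head, (100, 5))
fullbody.save("composited.jpg")
```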
## 4. Docker Run
The installation in step 1 is not needed; just run it directly:
```
docker run --gpus all -it --network=host --rm registry.cn-hangzhou.aliyuncs.com/lipku/nerfstream:v1.3
```
Run srs and nginx as described in sections 2.1 and 2.3.
## 5. Data flow
![](/assets/dataflow.png)
## 6. Digital human model files
These can be replaced with a model you trained yourself (https://github.com/Fictionarry/ER-NeRF):
```
.
├── data
│ ├── data_kf.json
│ ├── au.csv
│   └── pretrained
│       └── ngp_kf.pth
```
## 7. Performance analysis
1. Frame rate
Tested on a Tesla T4 GPU, the overall fps is around 18; without audio/video encoding and streaming it is around 20. A 4090 GPU can reach over 40 fps.
Optimization: run the audio/video encoding and streaming in a separate thread (see the sketch at the end of this section).
2. Latency
Overall latency is a bit over 5 s:
(1) TTS latency is about 2 s. edgetts is currently used, and each sentence must be fully synthesized before it is fed in all at once; this could be improved by switching the TTS to streaming input.
(2) wav2vec latency is a bit over 1 s, since 50 audio frames must be buffered for the computation; it can be reduced by setting context_size via -m.
(3) srs forwarding latency: configure the srs server to reduce buffering. See https://ossrs.net/lts/zh-cn/docs/v5/doc/low-latency for details; a low-latency build has been prepared:
```bash
docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 registry.cn-hangzhou.aliyuncs.com/lipku/srs:v1.1
```
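The threading optimization mentioned under item 1 could be structured roughly as below. This is a generic sketch, not code from this repo: render_next_frame() and encode_and_push() are placeholders standing in for the NeRF inference step and the rtmpstream push.

```python
# Hedged sketch: decouple rendering from encoding/streaming with a worker
# thread and a bounded queue so the render loop never blocks on the push.
import queue
import threading
import time

def render_next_frame():
    time.sleep(0.02)          # placeholder for NeRF inference (~20 fps)
    return object()

def encode_and_push(frame) -> None:
    time.sleep(0.03)          # placeholder for video encoding + RTMP push

frame_queue: queue.Queue = queue.Queue(maxsize=30)

def pusher() -> None:
    while True:
        frame = frame_queue.get()
        if frame is None:      # sentinel: shut the worker down
            break
        encode_and_push(frame)

worker = threading.Thread(target=pusher, daemon=True)
worker.start()

for _ in range(100):           # stands in for the main render loop
    frame_queue.put(render_next_frame())

frame_queue.put(None)
worker.join()
```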
## 8. TODO
- [x] Add chatgpt-based digital human conversation
- [x] Voice cloning
- [ ] Play a substitute video while the digital human is silent
If this project helps you, please give it a star. Anyone interested is also welcome to help improve it.
Email: lipku@foxmail.com
WeChat official account: 数字人技术 (Digital Human Technology)
![](https://mmbiz.qpic.cn/sz_mmbiz_jpg/l3ZibgueFiaeyfaiaLZGuMGQXnhLWxibpJUS2gfs8Dje6JuMY8zu2tVyU9n8Zx1yaNncvKHBMibX0ocehoITy5qQEZg/640?wxfrom=12&tp=wxpic&usePicPrefetch=1&wx_fmt=jpeg&from=appmsg)