# 🧠 AutoSynthDa – Main Page
This section provides a step-by-step guide to setting up the AutoSynthDa pipeline. It covers installing dependencies, placing pretrained models in the correct directories, and testing each component individually. Additionally, common errors and troubleshooting tips are included to help you quickly resolve any setup issues.
## 📦 Step 1: Install Required Packages
Install all required packages and libraries with `pip`:
`pip install -r requirements.txt`
## 📁 Step 2: Clone Repositories
AutoSynthDa relies on the following repositories.
Make sure you clone and place them correctly in your project structure:
* [StridedTransformer-Pose3D](https://github.com/Vegetebird/StridedTransformer-Pose3D)
* [text-to-motion](https://github.com/EricGuo5513/text-to-motion)
* [joints2smpl](https://github.com/wangsen1312/joints2smpl)
* [Blender](https://download.blender.org/release/Blender3.0/)
* [SlowFast](https://github.com/facebookresearch/SlowFast)
## ⚙️ Step 3: Configuration Setup
Create a `.env` file in the root directory and add the following paths and keys:
- Your OpenAI GPT API key
- Folder paths to each of the cloned repositories
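For example, a minimal `.env` might look like the fragment below. The variable names are illustrative assumptions; use whichever keys the pipeline code actually reads:

```
OPENAI_API_KEY=sk-...
STRIDED_TRANSFORMER_PATH=/home/user/StridedTransformer-Pose3D
TEXT_TO_MOTION_PATH=/home/user/text-to-motion
JOINTS2SMPL_PATH=/home/user/joints2smpl
BLENDER_PATH=/home/user/blender-3.0.0-linux-x64
SLOWFAST_PATH=/home/user/SlowFast
```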

## 💾 Step 4: Download Pretrained Models
Each repository requires specific pretrained models or checkpoints. Download all required files from the link below and place them according to the instructions in each repo's section:
👉 [Download Pretrained Files](https://drive.google.com/drive/folders/1E-ZslWCYv07YeGJbQxSFM3GiVt6tKrNk?usp=sharing)
⚠️ Before running the full pipeline, test each repository individually to ensure everything is working.
---
## 🎯 `StridedTransformer-Pose3D`
StridedTransformer generates Human3.6M (H36M) 3D pose estimates from a real-world video input.
### 📁 Setup & Testing
- Place the **2 `.pth` files** in the following directory:
`./checkpoint/pretrained/`

- Place another **2 `.pth` files** in the following directory:
`./demo/lib/checkpoint/`

**How to Test the Repository**
Run the following command:
`python demo/vis.py --video sample_video.mp4`
Video Path:
`StridedTransformer-Pose3D/demo/video/sample_video.mp4`
Output Folder Created:
`StridedTransformer-Pose3D/demo/output/sample_video/`
### 🔧 Troubleshooting
❌ **Error:** `model_path` referenced before assignment
This error might occur due to one or more of the following reasons:
- 📹 **Video Quality Issues**
- The video resolution is too low
- The person is not clearly visible or does not appear in the video
- 🎬 **Video Transition**
- A scene change or transition disrupts the tracking
- 📁 **Model Path Error**
- The pretrained model is missing or not placed in the correct folder
✅ **Suggested Solution**
To resolve this issue, ensure the video shows a **single continuous action** with the **full body of the person visible throughout**. You may need to crop or split videos that contain transitions or multiple angles.
✂️ **Example Fix:**
**❌ Original Video (with angle transition):**
The original video contains a shift in camera angle, which can disrupt pose tracking.
https://github.com/user-attachments/assets/95ece067-74c9-48a5-a96c-077a5f512be6
**✅ Fixed Videos (split by angle):**
Splitting the video into segments with consistent camera angles can significantly improve pose tracking accuracy.
1. First angle:
https://github.com/user-attachments/assets/f4ad8cfd-bec4-4c9f-a8d7-1b98d5b460f6
2. Second angle:
https://github.com/user-attachments/assets/f243c2b7-34c4-47dd-b3d9-cf9e29b981fc
---
## 🎬 `text-to-motion`
text-to-motion generates 3D pose estimates from a text input.
### 📁 Setup & Testing
- Replace the existing `checkpoints` file with a **folder** named `checkpoints`.
- Inside that folder, place the two pre-trained models:
`./checkpoints/kit/` and `./checkpoints/t2m/`

**How to Test the Repository**
Run the following command:
`python gen_motion_script.py --name Comp_v6_KLD01 --text_file input.txt --repeat_time 1`
The expected output is a folder named `C000` containing a `.npy` file inside the `eval_results` folder:
`text-to-motion/eval_results/t2m/Comp_v6_KLD01/default/animations/C000/gen_motion_00_L044_00_a.npy`
📌 Note: The folder name (e.g., C000) and filename may differ.
### 🔧 Troubleshooting
❌ **Error:** Deprecated NumPy attribute `np.float`
This error occurs because the code uses `np.float`, an alias that was deprecated in NumPy 1.20 and removed in NumPy 1.24.
✅ **Suggested Solution**
Replace both instances of `np.float` in `./common/quaternion.py` — use `float` on **Line 11**, and `np.float64` on **Line 13**.
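As context for this substitution (a general illustration, not the repo's code): `float` and `np.float64` both produce the same `float64` dtype that `np.float` used to alias, so the swap is safe:

```python
import numpy as np

# np.float was a deprecated alias for the builtin float (removed in NumPy 1.24).
# Both of these produce a float64 array, so the substitution is behavior-preserving:
x = np.array([1, 2, 3], dtype=float)
y = np.array([1, 2, 3], dtype=np.float64)
assert x.dtype == y.dtype == np.float64
```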

❌ **Error:** Assignment to `ax.lines`
This error occurs because `ax.lines` and `ax.collections` are **read-only** properties and cannot be directly assigned.
✅ **Suggested Solution**
Adjust the `ax.lines` assignment lines inside the `update(index)` function located in `./utils/plot_script.py` so that artists are removed individually instead of assigned wholesale.
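Since Matplotlib 3.5, `ax.lines` and `ax.collections` are read-only views. A sketch of the kind of change needed (illustrative, not the repo's exact code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripts
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# Old style (fails on Matplotlib >= 3.5): ax.lines = []
# New style: remove each artist individually.
for line in list(ax.lines):
    line.remove()
for coll in list(ax.collections):
    coll.remove()
assert len(ax.lines) == 0
```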

❌ **Error:** Unable to Recognize `spacy` Package
This error occurs when the required spaCy language model is not installed. The message states that it is unable to find model `en_core_web_sm`: "It doesn't seem to be a Python package or a valid path to a data directory."
✅ **Suggested Solution**
`python -m spacy download en_core_web_sm`
❌ **Error:** Unable to Recognize `ffmpeg` Package
This error occurs when the required `ffmpeg` package is not installed.
✅ **Suggested Solution**
`conda install -c conda-forge ffmpeg`
---
## ⚙️ `joints2smpl`
joints2smpl attaches the generated joints to an SMPL character body.
### 📁 Setup & Testing
- **SMPL models** must be placed inside the following path:
`./smpl_models/smpl/`

**How to Test the Repository**
Run the following command:
`python fit_seq.py --files test_motion2.npy`
Input .npy File Location:
`joints2smpl/demo/demo_data/test_motion2.npy`
Output Folder with .obj Files:
`joints2smpl/demo/demo_results/test_motion2/`
### 🔧 Troubleshooting
❌ **Error:** ImportError: cannot import name 'bool' from 'numpy'
This occurs due to deprecated usage of `np.bool`, which has been removed in recent versions of NumPy.
✅ **Suggested Solution**
Navigate to the location where your `chumpy` package is installed by typing:
`pip show chumpy`

Open the `__init__.py` file inside the `chumpy` directory and replace the deprecated NumPy alias import on line 11 (the line importing `bool`, `int`, `float`, etc. from NumPy) with the equivalent Python builtins.
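One commonly used patch looks like the following (assuming stock chumpy 0.70; exact line numbers may vary in your installed version):

```python
# Replacement for chumpy/__init__.py line 11, which originally reads:
#   from numpy import bool, int, float, complex, object, unicode, str, nan, inf
# np.bool, np.int, np.float, np.object and np.str were removed in NumPy 1.24,
# so re-bind the names to the Python builtins instead:
import numpy as np

bool = bool
int = int
float = float
complex = complex
object = object
str = str
unicode = str
nan = np.nan
inf = np.inf
```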

---
## 🎬 `Blender`
The Blender library is used to generate a video from the `.ply` objects.
### 📁 Setup & Testing
Download **Blender version 3.0.0** from the [official Blender website](https://download.blender.org/release/Blender3.0/) or use the command below to download it directly:
`wget https://download.blender.org/release/Blender3.0/blender-3.0.0-linux-x64.tar.xz`

After downloading, unzip the folder using:
`tar -xf blender-3.0.0-linux-x64.tar.xz`
**Blender Setup Files**
- Place your `animation_pose.py` script into the Blender directory (the one from which `./blender` is run).
- Create an `angleInput.txt` file in the same directory. Inside this file, specify the desired camera angle (e.g., "front"). This determines the camera viewpoint used when rendering the video.
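For example, the file can be created from the shell (assuming the script expects a single viewpoint keyword):

```shell
# Write the camera viewpoint keyword that animation_pose.py will read
echo "front" > angleInput.txt
```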

**How to Test the Blender Tool**
Navigate into the extracted folder and launch Blender using `./blender`. If Blender launches successfully, the installation is complete.

Alternatively, if the `.ply` object folder has already been created, you can convert it into a video directly using the following command:
`./blender -b -P animation_pose.py -- --name <name_of_folder_containing_ply_files>`
---
📌 For detailed model download instructions, refer to the **documentation pages** of each repository.
## 🔧 General Troubleshooting
❌ Error: File paths of the different repos are not found
This error occurs when the codebase cannot retrieve the variables from the `.env` file.
**✅ Suggested Solution**
Add the variables directly into the codebase or create a separate Python file to store the folder path variables.
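A minimal sketch of the "separate Python file" workaround (the module name and variable names below are illustrative assumptions, not the pipeline's actual keys):

```python
# config_paths.py — hypothetical fallback module holding the folder paths
# that would otherwise be read from .env
import os

STRIDED_TRANSFORMER_DIR = "/home/user/StridedTransformer-Pose3D"
TEXT_TO_MOTION_DIR = "/home/user/text-to-motion"
JOINTS2SMPL_DIR = "/home/user/joints2smpl"

# Optionally still honor an environment variable when it is set:
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
```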
---
❌ Error: Python not recognized
This error occurs when the environment only provides the `python3` command. The repository's scripts invoke `python`, so if `python` is not mapped to `python3`, they fail to execute.
**✅ Suggested Solution**
Reassign the `python` command to point to `python3`:
`sudo ln -s $(which python3) /usr/local/bin/python`
On Debian/Ubuntu, installing the `python-is-python3` package (`sudo apt install python-is-python3`) achieves the same result.