solve merge
commit 64b22fd0f4

Readme.md (new file, 192 lines)
@@ -0,0 +1,192 @@
# Next Best View for Reconstruction

## 1. Setup Environment

### 1.1 Install Main Project

```bash
mkdir nbv_rec
cd nbv_rec
git clone https://git.hofee.top/hofee/nbv_reconstruction.git
```

### 1.2 Install PytorchBoot

The environment is based on PytorchBoot. Clone and install it from [PytorchBoot](https://git.hofee.top/hofee/PyTorchBoot.git):

```bash
git clone https://git.hofee.top/hofee/PyTorchBoot.git
cd PyTorchBoot
pip install .
cd ..
```

### 1.3 Install Blender (Optional)

If you want to render your own dataset as described in [section 2. Render Datasets](#2-render-datasets-optional), you'll need to install Blender version 4.0 from [Blender Release](https://download.blender.org/release/Blender4.0/). Here is an example of installing Blender on Ubuntu:

```bash
wget https://download.blender.org/release/Blender4.0/blender-4.0.2-linux-x64.tar.xz
tar -xvf blender-4.0.2-linux-x64.tar.xz
```

If Blender is not in your `PATH`, you can add it with:

```bash
export PATH=$PATH:/path/to/blender/blender-4.0.2-linux-x64
```

To run the Blender script, you need to install the `pyyaml` and `scipy` packages into Blender's bundled Python environment. Run the following command to print the Python path of your Blender:

```bash
./blender -b --python-expr "import sys; print(sys.executable)"
```

Then copy the Python path `/path/to/blender_python` shown in the output and run the following command to install the packages:

```bash
/path/to/blender_python -m pip install pyyaml scipy
```
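The two steps above can also be scripted. The sketch below is an assumption, not output from a real run: `sample_output` simulates what Blender prints, and the `grep` pattern is a guess at extracting the interpreter path. In practice, replace the `printf` with the real `./blender -b --python-expr ...` invocation:

```shell
# Sketch: parse the interpreter path out of Blender's console output.
# sample_output stands in for the real command:
#   ./blender -b --python-expr "import sys; print(sys.executable)"
sample_output='Blender 4.0.2
/path/to/blender-4.0.2-linux-x64/4.0/python/bin/python3.10
Blender quit'
# Keep the first line that looks like an absolute path ending in pythonX.Y.
BLENDER_PY=$(printf '%s\n' "$sample_output" | grep -m1 -E '^/.*/python[0-9.]+$')
echo "$BLENDER_PY"
# With the real path in hand, you would then run:
#   "$BLENDER_PY" -m pip install pyyaml scipy
```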

### 1.4 Install Blender Render Script (Optional)

Clone the script from [nbv_rec_blender_render](https://git.hofee.top/hofee/nbv_rec_blender_render.git) and rename it to `blender`:

```bash
git clone https://git.hofee.top/hofee/nbv_rec_blender_render.git
mv nbv_rec_blender_render blender
```

### 1.5 Check Dependencies

Switch to the project root directory and run `pytorch-boot scan` or `ptb scan` to check if all dependencies are installed:

```bash
cd nbv_reconstruction
pytorch-boot scan
# or
ptb scan
```

If you see project structure information in the output, all dependencies are correctly installed. Otherwise, you may need to run `pip install xxx` to install the missing packages.

## 2. Render Datasets (Optional)

### 2.1 Download Object Mesh Models

Download the mesh models, divided into three parts, from:

- [object_meshes_part1.zip](None)
- [object_meshes_part2.zip](https://pan.baidu.com/s/1pBPhrFtBwEGp1g4vwsLIxA?pwd=1234)
- [object_meshes_part3.zip](https://pan.baidu.com/s/1peE8HqFFL0qNFhM5OC69gA?pwd=1234)

or download the whole dataset from [object_meshes.zip](https://pan.baidu.com/s/1ilWWgzg_l7_pPBv64eSgzA?pwd=1234).

Download the table model from [table.obj](https://pan.baidu.com/s/1sjjiID25Es_kmcdUIjU_Dw?pwd=1234).

### 2.2 Set Render Configurations

Open the file `configs/local/view_generate_config.yaml` and modify the parameters to fit your needs. You are required to set at least the following parameters in `runner-generate`:

- `object_dir`: the directory of the downloaded object mesh models
- `output_dir`: the directory to save the rendered dataset
- `table_model_path`: the path of the downloaded table model
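For reference, the three required keys live under `runner` → `generate`. A minimal sketch, with all paths as placeholders you must replace with your own locations:

```yaml
# Sketch only: every path below is a placeholder.
runner:
  generate:
    object_dir: /path/to/object_meshes        # downloaded mesh models
    table_model_path: /path/to/table.obj      # downloaded table model
    output_dir: /path/to/rendered_dataset     # where rendered data is written
```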

### 2.3 Render Dataset

There are two ways to render the dataset:

#### 2.3.1 Render with Visual Monitoring

If you want to visually monitor the rendering progress and machine resource usage:

1. In the terminal, run:

   ```bash
   ptb ui
   ```

2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - generate_view`
4. Click the `Run` button to execute the rendering script

#### 2.3.2 Render in Terminal

If you don't need visual monitoring and prefer to run the rendering process directly in the terminal, simply run:

```bash
ptb run generate_view
```

This command starts the rendering process without launching the UI.

## 3. Preprocess

⚠️ The preprocessing code is currently not managed by `PytorchBoot`. To run the preprocessing:

1. Open the `./preprocess/preprocessor.py` file.
2. Locate the `if __name__ == "__main__":` block at the bottom of the file.
3. Specify the dataset folder by setting `root = "path/to/your/dataset"`.
4. Run the preprocessing script directly:

   ```bash
   python ./preprocess/preprocessor.py
   ```

This will preprocess the data in the specified dataset folder.

## 4. Generate Strategy Label

### 4.1 Set Configuration

Open the file `configs/local/strategy_generate_config.yaml` and modify the parameters to fit your needs. You are required to set at least the following parameter:

- `datasets.OmniObject3d.root_dir`: the directory of your dataset
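A minimal sketch of the corresponding YAML entry, with the directory as a placeholder:

```yaml
# Sketch only: root_dir is a placeholder; point it at your dataset.
datasets:
  OmniObject3d:
    root_dir: /path/to/your/dataset
```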

### 4.2 Generate Strategy Label

There are two ways to generate the strategy label:

#### 4.2.1 Generate with Visual Monitoring

If you want to visually monitor the generation progress and machine resource usage:

1. In the terminal, run:

   ```bash
   ptb ui
   ```

2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - generate_strategy`
4. Click the `Run` button to execute the generation script

#### 4.2.2 Generate in Terminal

If you don't need visual monitoring and prefer to run the generation process directly in the terminal, simply run:

```bash
ptb run generate_strategy
```

This command starts the strategy label generation process without launching the UI.

## 5. Train

### 5.1 Set Configuration

Open the file `configs/local/train_config.yaml` and modify the parameters to fit your needs. You are required to set at least the following parameters in the `experiment` section:

```yaml
experiment:
  name: your_experiment_name
  root_dir: path/to/your/experiment_dir
  use_checkpoint: False # if True, the checkpoint will be loaded
  epoch: 600 # specific epoch to load, -1 stands for last epoch
  max_epochs: 5000 # maximum epochs to train
  save_checkpoint_interval: 1 # save checkpoint interval
  test_first: True # if True, the test process runs before training at each epoch
```

Adjust these parameters according to your training requirements.

### 5.2 Start Training

There are two ways to start the training process:

#### 5.2.1 Train with Visual Monitoring

If you want to visually monitor the training progress and machine resource usage:

1. In the terminal, run:

   ```bash
   ptb ui
   ```

2. Open your browser and visit http://localhost:5000
3. Navigate to `Project Dashboard - Project Structure - Applications - train`
4. Click the `Run` button to start the training process

#### 5.2.2 Train in Terminal

If you don't need visual monitoring and prefer to run the training process directly in the terminal, simply run:

```bash
ptb run train
```

This command starts the training process without launching the UI.

## 6. Evaluation

...

TODO.md (deleted, 22 lines)
@@ -1,22 +0,0 @@
# TODO

## Preprocess Data

### 1. View generation stage

**input**: object mesh

### 2. Label generation stage

**input**: target object point cloud, target object point cloud normals, table scan points, captured table scan points

**data that can be deleted**: mask, normal

### 3. Training stage

**input**: full point cloud, pose, label

**data that can be deleted**: depth

### After view generation

Preprocess: target object point cloud, target object point cloud normals, table scan points, captured table scan points, full point cloud

Delete: depth, mask, normal

### After label generation

Upload only: full point cloud, pose, label
@@ -14,12 +14,6 @@ runner:
     voxel_threshold: 0.003
     soft_overlap_threshold: 0.3
     hard_overlap_threshold: 0.6
-    filter_degree: 75
-    to_specified_dir: True # if True, output_dir is used, otherwise, root_dir is used
-    save_points: True
-    load_points: True
-    save_best_combined_points: False
-    save_mesh: True
     overwrite: False
     seq_num: 15
     dataset_list:
@@ -27,11 +21,8 @@ runner:
 
   datasets:
     OmniObject3d:
-      #"/media/hofee/data/data/temp_output"
       root_dir: /media/hofee/repository/full_data_output
-      model_dir: /media/hofee/data/data/scaled_object_meshes
       from: 0
       to: -1 # -1 means end
-      #output_dir: "/media/hofee/data/data/label_output"
 
 
@@ -7,12 +7,21 @@ runner:
     name: debug
     root_dir: experiments
   generate:
+<<<<<<< HEAD
     port: 5002
     from: 600
     to: -1 # -1 means all
     object_dir: /media/hofee/data/data/object_meshes_part1
     table_model_path: "/media/hofee/data/data/others/table.obj"
     output_dir: /media/hofee/repository/data_part_1
+=======
+    port: 5000
+    from: 0
+    to: -1 # -1 means all
+    object_dir: H:\\AI\\Datasets\\object_meshes_part2
+    table_model_path: "H:\\AI\\Datasets\\table.obj"
+    output_dir: C:\\Document\\Datasets\\nbv_rec_part2
+>>>>>>> c55a398b6d5c347497b528bdd460e26ffdd184e8
     binocular_vision: true
     plane_size: 10
     max_views: 512
@@ -1,22 +0,0 @@
-
-runner:
-  general:
-    seed: 0
-    device: cpu
-    cuda_visible_devices: "0,1,2,3,4,5,6,7"
-
-  experiment:
-    name: debug
-    root_dir: "experiments"
-
-  split: #
-    root_dir: "/home/data/hofee/project/nbv_rec/data/nbv_rec_data_512_preproc_npy"
-    type: "unseen_instance" # "unseen_category"
-  datasets:
-    OmniObject3d_train:
-      path: "../data/sample_for_training_preprocessed/OmniObject3d_train.txt"
-      ratio: 0.9
-
-    OmniObject3d_test:
-      path: "../data/sample_for_training_preprocessed/OmniObject3d_test.txt"
-      ratio: 0.1
@@ -1,37 +0,0 @@
-
-runner:
-  general:
-    seed: 0
-    device: cpu
-    cuda_visible_devices: "0,1,2,3,4,5,6,7"
-
-
-  experiment:
-    name: debug
-    root_dir: "experiments"
-
-  generate:
-    voxel_threshold: 0.003
-    soft_overlap_threshold: 0.3
-    hard_overlap_threshold: 0.6
-    filter_degree: 75
-    to_specified_dir: True # if True, output_dir is used, otherwise, root_dir is used
-    save_points: True
-    load_points: True
-    save_best_combined_points: False
-    save_mesh: True
-    overwrite: False
-    seq_num: 15
-    dataset_list:
-      - OmniObject3d
-
-  datasets:
-    OmniObject3d:
-      #"/media/hofee/data/data/temp_output"
-      root_dir: /data/hofee/data/packed_preprocessed_data
-      model_dir: /media/hofee/data/data/scaled_object_meshes
-      from: 0
-      to: -1 # -1 means end
-      #output_dir: "/media/hofee/data/data/label_output"
-
-
@@ -9,8 +9,6 @@ from utils.reconstruction import ReconstructionUtil
 from utils.data_load import DataLoadUtil
 from utils.pts import PtsUtil
 
-# scan shoe 536
-
 def save_np_pts(path, pts: np.ndarray, file_type="txt"):
     if file_type == "txt":
         np.savetxt(path, pts)
@@ -24,6 +22,12 @@ def save_target_points(root, scene, frame_idx, target_points: np.ndarray, file_t
         os.makedirs(os.path.join(root,scene, "pts"))
     save_np_pts(pts_path, target_points, file_type)
 
+def save_target_normals(root, scene, frame_idx, target_normals: np.ndarray, file_type="txt"):
+    pts_path = os.path.join(root,scene, "nrm", f"{frame_idx}.{file_type}")
+    if not os.path.exists(os.path.join(root,scene, "nrm")):
+        os.makedirs(os.path.join(root,scene, "nrm"))
+    save_np_pts(pts_path, target_normals, file_type)
+
 def save_scan_points_indices(root, scene, frame_idx, scan_points_indices: np.ndarray, file_type="txt"):
     indices_path = os.path.join(root,scene, "scan_points_indices", f"{frame_idx}.{file_type}")
     if not os.path.exists(os.path.join(root,scene, "scan_points_indices")):
@@ -137,7 +141,7 @@ def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
         has_points = target_points.shape[0] > 0
 
         if has_points:
-            target_points = PtsUtil.filter_points(
+            target_points, target_normals = PtsUtil.filter_points(
                 target_points, sampled_target_normal_L, cam_info["cam_to_world"], theta_limit = filter_degree, z_range=(min_z, max_z)
             )
 
@@ -151,6 +155,7 @@ def save_scene_data(root, scene, scene_idx=0, scene_total=1,file_type="txt"):
             target_points = np.zeros((0, 3))
 
         save_target_points(root, scene, frame_id, target_points, file_type=file_type)
+        save_target_normals(root, scene, frame_id, target_normals, file_type=file_type)
         save_scan_points_indices(root, scene, frame_id, scan_points_indices, file_type=file_type)
 
     save_scan_points(root, scene, scan_points) # The "done" flag of scene preprocess
@ -22,13 +22,7 @@ class StrategyGenerator(Runner):
|
|||||||
"app_name": "generate_strategy",
|
"app_name": "generate_strategy",
|
||||||
"runner_name": "strategy_generator"
|
"runner_name": "strategy_generator"
|
||||||
}
|
}
|
||||||
self.to_specified_dir = ConfigManager.get("runner", "generate", "to_specified_dir")
|
|
||||||
self.save_best_combined_pts = ConfigManager.get("runner", "generate", "save_best_combined_points")
|
|
||||||
self.save_mesh = ConfigManager.get("runner", "generate", "save_mesh")
|
|
||||||
self.load_pts = ConfigManager.get("runner", "generate", "load_points")
|
|
||||||
self.filter_degree = ConfigManager.get("runner", "generate", "filter_degree")
|
|
||||||
self.overwrite = ConfigManager.get("runner", "generate", "overwrite")
|
self.overwrite = ConfigManager.get("runner", "generate", "overwrite")
|
||||||
self.save_pts = ConfigManager.get("runner","generate","save_points")
|
|
||||||
self.seq_num = ConfigManager.get("runner","generate","seq_num")
|
self.seq_num = ConfigManager.get("runner","generate","seq_num")
|
||||||
|
|
||||||
|
|
||||||
|
@@ -14,19 +14,12 @@ class DataLoadUtil:
 
     @staticmethod
     def load_exr_image(file_path):
-        # open the EXR file
         exr_file = OpenEXR.InputFile(file_path)
-
-        # read the EXR header, including the image size
         header = exr_file.header()
         dw = header['dataWindow']
         width = dw.max.x - dw.min.x + 1
         height = dw.max.y - dw.min.y + 1
-
-        # define the channels; normal maps are usually RGB
         float_channels = ['R', 'G', 'B']
-
-        # read each channel of the EXR file and convert it to a float array
         img_data = []
         for channel in float_channels:
             channel_data = exr_file.channel(channel)
@@ -84,14 +84,14 @@ class PtsUtil:
         theta = np.arccos(cos_theta) * 180 / np.pi
         idx = theta < theta_limit
         filtered_sampled_points = points[idx]
+        filtered_normals = normals[idx]
 
         """ filter with z range """
         points_cam = PtsUtil.transform_point_cloud(filtered_sampled_points, np.linalg.inv(cam_pose))
         idx = (points_cam[:, 2] > z_range[0]) & (points_cam[:, 2] < z_range[1])
         z_filtered_points = filtered_sampled_points[idx]
+        z_filtered_normals = filtered_normals[idx]
-        return z_filtered_points[:, :3]
+        return z_filtered_points[:, :3], z_filtered_normals
 
     @staticmethod
     def point_to_hash(point, voxel_size):
@@ -128,10 +128,10 @@ class visualizeUtil:
 if __name__ == "__main__":
     root = r"/home/yan20/nbv_rec/project/franka_control/temp"
     model_dir = r"H:\\AI\\Datasets\\scaled_object_box_meshes"
-    scene = "cad_model_world"
+    scene = "box"
-    output_dir = r"/home/yan20/nbv_rec/project/franka_control/temp/output"
+    output_dir = r"C:\Document\Local Project\nbv_rec\nbv_reconstruction\test"
 
-    visualizeUtil.save_all_cam_pos_and_cam_axis(root, scene, output_dir)
+    #visualizeUtil.save_all_cam_pos_and_cam_axis(root, scene, output_dir)
     visualizeUtil.save_all_combined_pts(root, scene, output_dir)
     visualizeUtil.save_target_mesh_at_world_space(root, model_dir, scene)
     #visualizeUtil.save_points_and_normals(root, scene,"10", output_dir, binocular=True)