
Writing support, error handling and simplifications #10


Status: Draft. Wants to merge 25 commits into base: main.

Changes from all commits (25 commits):
3613896
Update dependencies, use las::Bounds and only keep copc-reader
oyhj1801 Nov 30, 2024
e59a612
update dependencies
oyhj1801 Dec 1, 2024
324f157
Simplifying, using more of las crate instead of copc-rs
oyhj1801 Dec 2, 2024
434e54e
copc reader simplified
oyhj1801 Dec 2, 2024
8dc0151
Update README.md
oyhj1801 Dec 2, 2024
9e5f95c
Update README.md
oyhj1801 Dec 2, 2024
f36ac40
make open pub again
oyhj1801 Dec 2, 2024
915ac5d
understanding the decompressor
oyhj1801 Dec 2, 2024
c542fd6
pre-writer branching
oyhj1801 Dec 3, 2024
72286e4
first implementation of writer
oyhj1801 Dec 9, 2024
5ad9ad9
Merge pull request #1 from oyhj1801/writer
oyhj1801 Dec 9, 2024
f76f173
qgis can't read the written files still
oyhj1801 Dec 11, 2024
d4b0a92
works, but the sub layers are not representative for the entire cloud
oyhj1801 Dec 12, 2024
b67ed09
pre rewrite
oyhj1801 Jan 13, 2025
a4ef54b
to do: merge small leaves for greedy and implement stochastic strat
oyhj1801 Jan 13, 2025
ffa6bc2
visibility changes, still needed to revise writing strategy. How to …
oyhj1801 Jan 22, 2025
8267082
random adding function added, does not yet respect min node size
oyhj1801 Jan 24, 2025
8fa08c2
bug fix
oyhj1801 Jan 24, 2025
04bcc9a
converts geotiff crs to wkt crs, but still bad no distribution when w…
oyhj1801 Jan 24, 2025
8f23e69
wrong stochastics
oyhj1801 Jan 25, 2025
5b75323
Update README.md
oyhj1801 Feb 3, 2025
f8281c5
wkt crs vlr forwarding correction
oyhj1801 Feb 4, 2025
7b40306
Merge branch 'main' of https://github.com/oyhj1801/copc-rs
oyhj1801 Feb 4, 2025
01bb7d8
readme and default node max size update
oyhj1801 Apr 2, 2025
8d51f8c
readme update on writing WIP
oyhj1801 Apr 2, 2025
29 changes: 11 additions & 18 deletions Cargo.toml
@@ -1,29 +1,22 @@
+[workspace]
+members = [".", "viewer"]
+default-members = ["."]
+
 [package]
 name = "copc-rs"
-version = "0.3.0"
-authors = ["Pirmin Kalberer <[email protected]>"]
+version = "1.0.0"
+authors = ["Pirmin Kalberer <[email protected]>", "Øyvind Hjermstad @oyhj1801"]
 edition = "2021"
 
-description = "Cloud Optimized Point Cloud (COPC) reader."
+description = "Cloud Optimized Point Cloud (COPC) reader and writer."
 homepage = "https://github.com/pka/copc-rs"
 repository = "https://github.com/pka/copc-rs"
 readme = "README.md"
 license = "MIT/Apache-2.0"
-keywords = ["lidar", "pointcloud", "copc", "las", "geo"]
+keywords = ["lidar", "pointcloud", "copc", "las", "laz", "geo"]
 categories = ["science::geo", "rendering::data-formats"]
-exclude = [
-"tests/data",
-]
 
 [dependencies]
 byteorder = "1.4.3"
-las = "0.8.1"
-laz = "0.8.2"
-
-[dev-dependencies]
-http-range-client = "0.7.0"
-env_logger = "0.10.0"
+fastrand = "2.3.0"
+las-crs = { git = "https://github.com/oyhj1801/las-crs" }
+las = {version = "0.9.2", features = ["laz"]}
+laz = "0.9.2"
+log = "0.4.25"
+thiserror = "2.0.6"
+crs-definitions = "0.3.0"
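The `las` dependency is now pulled in with its `laz` feature, so `las::Reader` can transparently read compressed `.laz` input as well as plain `.las` files. A minimal sketch of what that enables (the file path is hypothetical):

```rust
use las::Reader;

fn main() {
    // With the "laz" feature enabled on the las crate, Reader transparently
    // decompresses .laz input in addition to reading plain .las files.
    let mut reader = Reader::from_path("./lidar.laz").unwrap();
    println!("{} points", reader.header().number_of_points());
}
```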
63 changes: 50 additions & 13 deletions README.md
@@ -1,28 +1,65 @@
+## Writing is still a WIP
+Writing of the octree structure works, so full-resolution spatial queries on copc-rs written files are reliable.
+But the octree levels do not yet contain a point distribution similar to that of the whole cloud, so results from resolution queries on copc-rs written files are wrong.
+This means the written files will look bad in viewers. I will look into it when I find time; for now I only need full-resolution spatial queries anyway.
+
 # copc-rs
 
 [![crates.io version](https://img.shields.io/crates/v/copc-rs.svg)](https://crates.io/crates/copc-rs)
 [![docs.rs docs](https://docs.rs/copc-rs/badge.svg)](https://docs.rs/copc-rs)
 
-copc-rs is a library for reading Cloud Optimized Point Cloud ([COPC](https://copc.io/)) data.
+copc-rs is a Rust library for reading and writing Cloud Optimized Point Cloud ([COPC](https://copc.io/)) data.
+It utilizes the las and laz crates heavily and tries to offer a similar API to las.
 
-## Usage example
+## Usage examples
 
-```rust
-let laz_file = BufReader::new(File::open("autzen-classified.copc.laz")?);
-let mut copc_reader = CopcReader::open(laz_file)?;
-for point in copc_reader.points(LodSelection::Level(0), BoundsSelection::All)?.take(5) {
-    println!("Point coordinates: ({}, {}, {})", point.x, point.y, point.z);
-}
-```
-
-Run an example:
-```
-cargo run --example copc_http
-```
+### reader
+```rust
+use copc_rs::{Bounds, BoundsSelection, CopcReader, LodSelection, Vector};
+
+fn main() {
+    let mut copc_reader = CopcReader::from_path("./lidar.copc.laz").unwrap();
+
+    let bounds = Bounds {
+        min: Vector {
+            x: 698_100.,
+            y: 6_508_100.,
+            z: 0.,
+        },
+        max: Vector {
+            x: 698_230.,
+            y: 6_508_189.,
+            z: 2_000.,
+        },
+    };
+
+    for point in copc_reader
+        .points(LodSelection::Resolution(1.), BoundsSelection::Within(bounds))
+        .unwrap()
+    {
+        // do something with the points
+    }
+}
+```
+
+### writer
+```rust
+use copc_rs::CopcWriter;
+use las::Reader;
+
+fn main() {
+    let mut las_reader = Reader::from_path("./lidar.las").unwrap();
+
+    let header = las_reader.header().clone();
+    let num_points = header.number_of_points() as i32;
+    let points = las_reader.points().filter_map(las::Result::ok);
+
+    let mut copc_writer = CopcWriter::from_path("./lidar.copc.laz", header, -1, -1).unwrap();
+
+    copc_writer.write(points, num_points).unwrap();
+
+    println!("{:#?}", copc_writer.copc_info());
+}
+```
 
 ## Credits
-
-This library depends heavily on the work of Thomas Montaigu (@tmontaigu) and Pete Gadomski (@gadomski).
+This library depends heavily on the work of Thomas Montaigu (@tmontaigu) and Pete Gadomski (@gadomski), the authors of the laz and las crates.
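Taken together, the reader and writer examples above suggest a simple round-trip check. The sketch below is not from the PR: it uses only the APIs shown in the README diff, and additionally assumes a `LodSelection::All` variant for full-resolution reads and that dropping the `CopcWriter` finalizes the file.

```rust
use copc_rs::{BoundsSelection, CopcReader, CopcWriter, LodSelection};
use las::Reader;

fn main() {
    // Write: convert a plain LAS file into a COPC file, as in the writer example above.
    let mut las_reader = Reader::from_path("./lidar.las").unwrap();
    let header = las_reader.header().clone();
    let num_points = header.number_of_points() as i32;
    let points = las_reader.points().filter_map(las::Result::ok);

    let mut copc_writer = CopcWriter::from_path("./lidar.copc.laz", header, -1, -1).unwrap();
    copc_writer.write(points, num_points).unwrap();
    drop(copc_writer); // assumption: the writer finishes the file when it goes out of scope

    // Read back at full resolution; per the WIP note, full-resolution queries on
    // copc-rs written files are the reliable ones. `LodSelection::All` is assumed
    // to select every octree level.
    let mut copc_reader = CopcReader::from_path("./lidar.copc.laz").unwrap();
    let read_back = copc_reader
        .points(LodSelection::All, BoundsSelection::All)
        .unwrap()
        .count();

    println!("wrote {num_points} points, read back {read_back}");
}
```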
21 changes: 0 additions & 21 deletions examples/copc_http.rs

This file was deleted.

22 changes: 0 additions & 22 deletions examples/copc_to_xyz.rs

This file was deleted.

23 changes: 0 additions & 23 deletions examples/print_points.rs

This file was deleted.

117 changes: 0 additions & 117 deletions src/bounds.rs

This file was deleted.

91 changes: 91 additions & 0 deletions src/compressor.rs
@@ -0,0 +1,91 @@
use byteorder::{LittleEndian, WriteBytesExt};
use laz::laszip::{ChunkTable, ChunkTableEntry, LazVlr};
use laz::record::{LayeredPointRecordCompressor, RecordCompressor};

use std::io::{Seek, SeekFrom, Write};

pub(crate) struct CopcCompressor<'a, W: Write + Seek + 'a> {
vlr: LazVlr,
record_compressor: LayeredPointRecordCompressor<'a, W>,
/// Position where LasZipCompressor started
start_pos: u64,
/// Position where the current chunk started
chunk_start_pos: u64,
/// Entry for the chunk we are currently compressing
current_chunk_entry: ChunkTableEntry,
/// Table of chunks written so far
chunk_table: ChunkTable,
}

impl<'a, W: Write + Seek + 'a> CopcCompressor<'a, W> {
/// Creates a compressor using the provided vlr.
pub(crate) fn new(write: W, vlr: LazVlr) -> crate::Result<Self> {
let mut record_compressor = LayeredPointRecordCompressor::new(write);
record_compressor.set_fields_from(vlr.items())?;
let stream = record_compressor.get_mut();

let start_pos = stream.stream_position()?;
// reserve 8 bytes for the offset to the chunk table
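// (-1 is a placeholder; `done` seeks back and patches in the real chunk table offset)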
stream.write_i64::<LittleEndian>(-1)?;

Ok(Self {
vlr,
record_compressor,
chunk_start_pos: start_pos + 8, // size of the written i64
start_pos,
chunk_table: ChunkTable::default(),
current_chunk_entry: ChunkTableEntry::default(),
})
}

/// Compress a chunk
pub(crate) fn compress_chunk<Chunk: AsRef<[u8]>>(
&mut self,
chunk: Chunk,
) -> std::io::Result<(ChunkTableEntry, u64)> {
for point in chunk.as_ref().chunks_exact(self.vlr.items_size() as usize) {
self.record_compressor.compress_next(point)?;
self.current_chunk_entry.point_count += 1;
}

// finish the chunk
self.record_compressor.done()?;
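// each COPC node is stored as its own laz chunk and must be independently
// decompressible, so fully re-initialize the compressor state before the next chunk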
self.record_compressor.reset();
self.record_compressor
.set_fields_from(self.vlr.items())
.unwrap();

// update the chunk table
let current_pos = self.record_compressor.get_mut().stream_position()?;
self.current_chunk_entry.byte_count = current_pos - self.chunk_start_pos;
self.chunk_table.push(self.current_chunk_entry);

// store chunk entry and chunk start pos for returning
let old_chunk_start_pos = self.chunk_start_pos;
let written_chunk_entry = self.current_chunk_entry;

// reset the chunk
self.chunk_start_pos = current_pos;
self.current_chunk_entry = ChunkTableEntry::default();

Ok((written_chunk_entry, old_chunk_start_pos))
}

/// Must be called when you have compressed all your points.
pub(crate) fn done(&mut self) -> std::io::Result<()> {
self.record_compressor.done()?;

// update the offset to the chunk table
let stream = self.record_compressor.get_mut();
let start_of_chunk_table_pos = stream.stream_position()?;
stream.seek(SeekFrom::Start(self.start_pos))?;
stream.write_i64::<LittleEndian>(start_of_chunk_table_pos as i64)?;
stream.seek(SeekFrom::Start(start_of_chunk_table_pos))?;

self.chunk_table.write_to(stream, &self.vlr)
}

pub(crate) fn get_mut(&mut self) -> &mut W {
self.record_compressor.get_mut()
}
}
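For orientation, here is a minimal sketch of how the writer side might drive this compressor: one `compress_chunk` call per octree node, then `done` to patch the reserved offset and append the chunk table. The helper below is hypothetical (not part of this PR) and assumes the crate's error type implements `std::error::Error` (it is built with thiserror), so both it and `std::io::Error` can be boxed.

```rust
use std::io::Cursor;

use laz::laszip::LazVlr;

use crate::compressor::CopcCompressor;

/// Hypothetical internal helper: compress one raw point buffer per COPC node
/// into `dst` and return the byte offset at which each compressed chunk starts.
pub(crate) fn compress_nodes(
    dst: &mut Cursor<Vec<u8>>,
    vlr: LazVlr,
    nodes: &[Vec<u8>], // each buffer is whole point records, a multiple of vlr.items_size() bytes
) -> Result<Vec<u64>, Box<dyn std::error::Error>> {
    let mut compressor = CopcCompressor::new(dst, vlr)?;

    let mut chunk_offsets = Vec::with_capacity(nodes.len());
    for node in nodes {
        // Each node becomes one independently decompressible chunk; the returned
        // entry holds its point and byte counts, the u64 is where the chunk starts.
        let (_entry, start_pos) = compressor.compress_chunk(node)?;
        chunk_offsets.push(start_pos);
    }

    // Patch the reserved offset and write the chunk table after the last chunk.
    compressor.done()?;
    Ok(chunk_offsets)
}
```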