Curious about RMSE improvement approach #77
Replies: 5 comments 2 replies
-
hey @fwitmer @rawann31 @Ritika-K7
-
Thanks for sharing the analysis. The low Green and NIR values suggest that
shadow is affecting the NDWI results.
Since we are using PlanetScope 4-band images (Blue, Green, Red, NIR) and we
don't have a SWIR band, it is harder to separate dark water from shadows.
So trying another simple index, like a Blue/Red or Green/Blue ratio, might
help identify shadowed water better.
Hint: if these pixels are caused by cloud shadows, you could try applying
the UDM2 shadow mask before the NDWI threshold to avoid misclassification.
It would be helpful to first check whether these are cloud shadows or
shadows from buildings/terrain, because UDM2 can only help with cloud
shadows.
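A minimal sketch of that masking order, with synthetic arrays standing in for real bands (the function names and the boolean `shadow_mask` input are illustrative, not the repo's code; in practice the mask would come from the UDM2 asset's cloud-shadow band):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def classify_water(green, nir, shadow_mask, threshold=0.0):
    """Threshold NDWI for water, excluding pixels flagged as cloud shadow.

    shadow_mask: boolean array, True where UDM2 flags cloud shadow
    (hypothetical input for this sketch).
    """
    water = ndwi(green, nir) > threshold
    # Mask shadowed pixels out rather than letting a dark shadow
    # masquerade as (or hide) water.
    water[shadow_mask] = False
    return water

# Tiny synthetic demo: one water pixel, one land pixel, one shadowed pixel.
green = np.array([0.30, 0.10, 0.05])
nir = np.array([0.05, 0.30, 0.04])
shadow = np.array([False, False, True])
print(classify_water(green, nir, shadow))  # [ True False False]
```

Masking first means a shadowed pixel is simply left unclassified for that scene, instead of being pushed into the land range.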
regards,
Ritika
…On Sun, Feb 22, 2026 at 5:04 PM SIDDHANT ***@***.***> wrote:
hey @fwitmer <https://github.com/fwitmer> @rawann31
<https://github.com/rawann31> @Ritika-K7 <https://github.com/Ritika-K7>
Following up on Q1: I ran an initial spectral analysis on the September
2016 sample data. Here is the B-G-R-NIR signature of the misclassified
pixels. You can see below that the 'Missed Water' pixels (orange) cluster in
a very low-reflectance zone for both Green and NIR, suggesting they are
shadow pixels pulling the NDWI down into the land range.
ndwi_histogram.png: <https://github.com/user-attachments/assets/bb76c6db-3f36-4775-841f-1a16bd67c7b0>
spectral_scatter.png: <https://github.com/user-attachments/assets/70e6f2a3-8375-4763-a644-85a43c451220>
Has there been any thought given to adding a secondary spectral index
(like a simple Blue/Red ratio) specifically to catch these low-reflectance
shadow pixels, rather than relying purely on adaptive NDWI thresholds?
-
Thanks @Ritika-K7, I investigated both the shadow source and the spectral indices you suggested.
The shadows appear to be terrain/building shadows, not cloud shadows: the sample data only has UDM v1 (no cloud-shadow flag), and the metadata shows 0% cloud cover with a sun elevation of 27.5°.
On the modeling side, the sliding window already recovers the shadowed water pixels. The bigger issue is False Water over-classification: while shadows are recovered, about 91% of land pixels are being classified as water. The magnitude (1.57M pixels) seems too large to be just edge effects.
Questions:
-
Hi,
Yes, generally they mention that the approval process can take around 3–4
weeks, but in most cases it gets approved within 2–3 days, so you might
receive access sooner than expected.
Regarding the dataset, the repository already contains a folder named
*raw_data*, where you can find the imagery we downloaded from Planet Labs.
You can use those images for now and continue your experimentation.
Link: https://github.com/fwitmer/CoastlineExtraction/tree/master/raw_data
Let me know if you face any issues.
regards,
Ritika
…On Sat, Feb 28, 2026 at 6:20 PM SIDDHANT ***@***.***> wrote:
Hi @Ritika-K7 <https://github.com/Ritika-K7>, I've applied for student
registration with Planet Labs, but I learned that the approval process
may take around 3 to 4 weeks. Is there a way for you to provide the needed
dataset in the meantime? I've also started drafting my proposal and would
really appreciate some guidance on how to refine and strengthen it. Once
I've completed the draft, would it be possible for you to review it
before I make the final submission?
-
-
Hi Siddhant,
Regarding the UDM data, the images in the sample data folder were manually
downloaded from Planet Labs, which is why they only contain UDM v1. For the
full dataset, however, we want to use imagery with UDM2, which includes
additional information like cloud and shadow flags. This is why the
download script is written for the analytic_sr_udm2 bundle.
About the shadow issue: this is something we have noticed earlier as well.
In some areas, especially near cliffs, the terrain can cast dark shadows
over the water, so the model or NDWI sometimes misclassifies those pixels
as land instead of water.
We also saw this when comparing the coastline shapefiles generated from
different images of the same region. When we overlaid the shapefiles,
there were noticeable differences in some areas, particularly near cliff
regions where shadows fall on the water.
Your observation about the sliding window helping recover water pixels in
shadowed regions makes sense.
Regarding the False Water increase, it is likely related to how the sliding
window results are combined in the current pipeline. Since the final label
uses an OR operation across windows, a pixel is marked as water if any
window classifies it as water. This makes the labeling more inclusive but
can also increase false positives. If you would like to experiment with
that approach and share the results, it would be meaningful for improving
the pipeline.
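To make the OR behavior concrete, here is a small sketch on synthetic masks (the function names and the vote-threshold alternative are illustrative, not the pipeline's actual code):

```python
import numpy as np

def combine_or(window_masks):
    """OR-style combination: water if ANY window says water (inclusive)."""
    return np.any(window_masks, axis=0)

def combine_vote(window_masks, min_votes=2):
    """Stricter alternative: water only if at least min_votes windows agree."""
    return np.sum(window_masks, axis=0) >= min_votes

# Three overlapping windows' water masks over the same 4 pixels.
masks = np.array([
    [True,  False, False, True],
    [False, False, False, True],
    [False, True,  False, True],
])
print(combine_or(masks))                 # [ True  True False  True]
print(combine_vote(masks, min_votes=2))  # [False False False  True]
```

With OR, one spurious window vote is enough to flip a land pixel to water; a vote threshold trades some shadow recovery for fewer false positives, so the two could be compared against the 91% False Water figure.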
Looking forward to your updates.
regards,
Ritika
…On Sun, Mar 1, 2026 at 7:36 PM SIDDHANT ***@***.***> wrote:
Hey, thanks for the reply. Can you please also guide me on my other
questions regarding water over-classification, and on the proposal as well?
-
Hey @fwitmer, @rawann31, @Ritika-K7 👋
I've been exploring the project idea and the existing implementation, specifically the NDWI labeling pipeline. The sliding-window approach with localized Otsu thresholding to improve RMSE is really nice, but I wanted to explore it from a spectral and temporal perspective.
Here’s what I’m digging into:
Q1: Why do specific pixels get misclassified?
NDWI collapses 4 bands into 1 index. My hypothesis is that different failure modes (shadows, wet sand, buildings) have distinct signatures in the full B-G-R-NIR spectral space that NDWI alone can't separate. I'm planning to extract spectral profiles of misclassified pixels using the existing RMSE regions and visualize where they cluster.
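A minimal sketch of that profile extraction on synthetic data (the function name, the band layout, and the boolean masks are illustrative assumptions, not the repo's API):

```python
import numpy as np

def spectral_profiles(bands, predicted, truth):
    """Mean B-G-R-NIR signature of each misclassification mode.

    bands: (4, H, W) stack in B, G, R, NIR order; predicted/truth are
    boolean water masks. Layout is an assumption for this sketch.
    """
    missed_water = truth & ~predicted   # water labeled land (e.g. shadow)
    false_water = ~truth & predicted    # land labeled water
    profiles = {}
    for name, mask in [("missed_water", missed_water),
                       ("false_water", false_water)]:
        if mask.any():
            # Boolean-mask the spatial dims, average over pixels per band.
            profiles[name] = bands[:, mask].mean(axis=1)
    return profiles

# Synthetic 2x2 scene with one missed-water and one false-water pixel.
bands = np.random.rand(4, 2, 2)
predicted = np.array([[False, True], [False, True]])
truth = np.array([[True, True], [False, False]])
p = spectral_profiles(bands, predicted, truth)
print(sorted(p))  # ['false_water', 'missed_water']
```

Each entry is a 4-vector, so the profiles can be scattered directly in B-G-R-NIR space to see whether shadows, wet sand, and buildings form separable clusters.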
Q2: How stable is the extracted coastline across dates?
The ground truth is from 2016-09-06, but we have imagery across years. I'd love to measure how much the coastline shifts per transect across those dates; this would tell us which regions are physically dynamic (tidal/intertidal) versus where errors are truly from the algorithm.
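A sketch of one way to score that per-transect stability, assuming coastline positions have already been sampled along each transect for each date (the array layout and function name are assumptions for illustration):

```python
import numpy as np

def transect_stability(positions):
    """Per-transect envelope of coastline movement.

    positions: (n_dates, n_transects) array of coastline position in
    meters along each transect. Returns max-minus-min range per transect;
    a large range suggests a physically dynamic shore, a small one means
    residual RMSE there is more likely algorithmic error.
    """
    return positions.max(axis=0) - positions.min(axis=0)

# Three dates, three transects (synthetic values in meters).
positions = np.array([
    [10.0, 52.0, 30.0],
    [12.0, 48.0, 30.5],
    [11.0, 55.0, 29.5],
])
print(transect_stability(positions))  # [2. 7. 1.]
```

A standard deviation per transect would work similarly; the range is just the simplest first cut.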
I'll share plots and findings as I work through these. A couple of questions I'd like your insights on:
Looking forward to your thoughts, thanks!