-
I am running the application inside a Docker container, so the network visible inside the container has a local IP address. When I send an answer to the client, the candidate should contain the server's public IP address (not the local one). How can I correctly specify the server's IP address in the sent answer using the str0m library? My assumption is to modify the answer before sending it. I don't understand whether I can do this through the SDP API or whether I need to edit the string.

```rust
let candidate =
    Candidate::host(LOCAL_DOCKER_IP_ADDRESS_WITH_PORT_HERE, "udp").expect("a host candidate");
rtc.add_local_candidate(candidate).unwrap();

let mut str0m_answer = rtc
    .sdp_api()
    .accept_offer(str0m_offer)
    .expect("str0m_offer to be accepted");
tracing::warn!("str0m answer - {:#?}", str0m_answer);

// Can I somehow exchange the IP address here for the real public IP address?
// str0m_answer.set_nat_1to1_ips(PUBLIC_IP_ADDRESS); // I would like to make it work
```

If the library doesn't have this feature, what do you think about me implementing it?
Replies: 17 comments 8 replies
-
Hey. As long as the Docker network is set up correctly, all you have to do is change the local candidate to the server's public IP address instead of the local Docker IP:

```rust
let candidate =
    Candidate::host(public_addr, "udp").expect("a host candidate");
rtc.add_local_candidate(candidate).unwrap();
```
-
Hey! Thank you for the answer! So my socket and candidate will have different IP addresses that way?

```rust
let socket =
    UdpSocket::bind(format!("{local_address}:50000")).expect("binding a random UDP port");
let candidate =
    Candidate::host(public_addr, "udp").expect("a host candidate");
```
-
It depends on how the networking for Docker is configured. If you just want it to work without restricting to a specific interface, bind to `0.0.0.0`.
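As a minimal std-only illustration of the wildcard bind (no str0m involved — port `0` here just lets the OS pick a free port), binding to `0.0.0.0` accepts traffic on every interface, and `local_addr` then reports the unspecified address:

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Bind to the wildcard address; port 0 lets the OS pick a free port.
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    let addr = socket.local_addr()?;

    // The socket listens on all interfaces, so the reported IP is 0.0.0.0.
    assert!(addr.ip().is_unspecified());
    println!("bound to {addr}");
    Ok(())
}
```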
-
This is the relevant part of my Docker config:

```yaml
networks:
  - main # this is just the default internal Docker network (not bound to a public IP address)
ports:
  - "50000-50200:50000-50200/udp"
```

Unfortunately, I can't bind Docker to the public IP address because of project restrictions. Previously I used webrtc-rs, where I was able to configure it like this:

```rust
se.set_nat_1to1_ips(
    vec![public_ip_str.clone().into()],
    RTCIceCandidateType::Host,
);
```

If I understand correctly how this works, it replaces the internal IP address only in the SDP answer. Everywhere else it uses the internal IP address.
-
As long as traffic is correctly routed from your server's public IP on port 50000 to Docker on an interface you bind, it should be okay. Binding to `0.0.0.0` should also be fine.
-
Sorry, I don't understand where I have to bind to. I'm trying this way:

```rust
let socket =
    UdpSocket::bind(format!("0.0.0.0:50000")).expect("binding a random UDP port");
let socket_addr = socket.local_addr().expect("a local socket address");
let candidate =
    Candidate::host(socket_addr, "udp").expect("a host candidate");
rtc.add_local_candidate(candidate).unwrap();
```

But it doesn't work even locally. Perhaps I don't understand something very simple but important...
-
You don't want `0.0.0.0` as the host candidate address; it's a wildcard address, not something a remote peer can route to. So if you are working locally like this, not on the server, you should set the host candidate IP to your computer's IP on the local network.
-
I'm sorry, I don't understand what you mean and what I can do... Anyway, thank you very much for answering me. It should work both locally and on the server, and obviously I tested both. Right now I am using these settings:

```rust
let socket =
    UdpSocket::bind(format!("{docker_network_address}:50000")).expect("binding a random UDP port");
let socket_addr = socket.local_addr().expect("a local socket address");
let mut candidate_address = socket_addr.clone();
candidate_address.set_ip(public_ip_address);
let candidate = Candidate::host(candidate_address, "udp").expect("a host candidate");
rtc.add_local_candidate(candidate).unwrap();
```

Locally everything works properly and the connection is established. But on the server it doesn't: the client tries to connect, the SDP answer is created correctly with the public server IP, but the connection never completes. I don't know why.

So I still don't understand what I'm doing wrong, how to debug this, and where to look for the bug...
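The bind-internally/advertise-publicly split in the snippet above can be shown with std types alone; `SocketAddr::set_ip` swaps the IP while keeping the port (the addresses here are made up for the example):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

fn main() {
    // The address the socket is actually bound to inside the container
    // (a made-up docker bridge address).
    let socket_addr: SocketAddr = "172.17.0.2:50000".parse().unwrap();

    // The server's public IP (a made-up documentation address).
    let public_ip = IpAddr::V4(Ipv4Addr::new(203, 0, 113, 10));

    // Keep the port, replace the IP: this is the address to advertise
    // in the host candidate.
    let mut candidate_address = socket_addr;
    candidate_address.set_ip(public_ip);

    assert_eq!(candidate_address.port(), 50000);
    assert_eq!(candidate_address.to_string(), "203.0.113.10:50000");
    println!("advertise {candidate_address}, bind {socket_addr}");
}
```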
-
That looks good to me. It's similar to how we run our production setup.

Also, if you are using TURN with webrtc-rs but not with str0m, that might be part of your problem.
-
The first assumption was correct. To make it work we just need to replace the IP address, only in the answer, with the public IP address:

```rust
let mut str0m_answer_str = str0m_answer.to_sdp_string();
if let Ok(public_ip_str) = std::env::var("PUBLIC_IP_ADDRESS") {
    // Matches addresses in the private 172.16.0.0/12 range
    // (172.16.x.x through 172.31.x.x).
    let re = Regex::new(
        r"\b172\.(1[6-9]|2[0-9]|3[0-1])(?:\.(?:25[0-5]|2[0-4]\d|1?\d{1,2})){2}\b",
    )
    .unwrap();
    str0m_answer_str = re
        .replace_all(&str0m_answer_str, public_ip_str.as_str())
        .into_owned();
}
```

This way it works. So the initial proposal is still relevant: add a method for that functionality.
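The regex above targets the private 172.16.0.0/12 block, which Docker's default bridge networks draw their addresses from. The same membership test can be done on parsed addresses rather than strings; `in_172_16_slash_12` is a hypothetical helper name, not part of any library:

```rust
use std::net::Ipv4Addr;

/// True if `ip` falls in 172.16.0.0/12 (172.16.0.0 - 172.31.255.255),
/// the private range Docker's default bridge networks typically use.
fn in_172_16_slash_12(ip: Ipv4Addr) -> bool {
    let o = ip.octets();
    o[0] == 172 && (16..=31).contains(&o[1])
}

fn main() {
    assert!(in_172_16_slash_12("172.17.0.2".parse().unwrap()));
    assert!(in_172_16_slash_12("172.31.255.255".parse().unwrap()));
    assert!(!in_172_16_slash_12("172.32.0.1".parse().unwrap()));
    assert!(!in_172_16_slash_12("192.168.1.10".parse().unwrap()));
    println!("range checks passed");
}
```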
-
But you already added a host candidate with the public IP address with this?

```rust
// <snip>
let mut candidate_address = socket_addr.clone();
candidate_address.set_ip(public_ip_address);
let candidate = Candidate::host(candidate_address, "udp").expect("a host candidate");
rtc.add_local_candidate(candidate).unwrap();
```

Can you share the relevant parts of str0m's answer before and after your replacement?
-
You will not need any SDP mangling or this function. str0m should have no knowledge of your internal IP, only the public one, added as a "host" candidate. The SDP that str0m produces will then only give you the public IP.
-
But the client doesn't know about the Docker network and can only interact with the server's PUBLIC_IP; that's why, before we send the candidate to the client, we have to replace the internal Docker bridge network IP address with the server's PUBLIC_IP_ADDRESS. When a client sends data to the server, the server knows how to handle it thanks to the Docker settings.
-
This is what the str0m documentation calls "NIC enumeration": you need to know, or discover, the IP addresses you are going to use. It's the same as for a browser, and with a sans-IO implementation we decided, so far, that this is a problem that isn't directly related to WebRTC. One way of discovering your public IP is to use a STUN server (or allocate on a TURN server), and although str0m talks STUN as part of being an ICE agent, using it to do NIC enumeration is considered an IO problem: it's about how your IO situation looks in your deployment. Someone recently asked about having STUN/TURN NIC enumeration inside str0m, and I'm on the fence about that. You could weigh in on that issue with this specific case. Are you trying to use str0m as a server, a client, or both?
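As an aside on NIC enumeration: a common std-only trick for discovering which local interface the OS would route outbound traffic through is a UDP "connect" (this finds the local NIC address only — a NAT'd public IP still needs STUN or configuration, as the reply above says). No packets are actually sent; `connect` on UDP just sets the default destination and forces a route lookup:

```rust
use std::net::{IpAddr, UdpSocket};

/// Discover the local interface IP the OS would use for outbound traffic.
/// No packets are sent: UDP `connect` only records the destination and
/// triggers a route lookup, which fixes the socket's local address.
fn local_outbound_ip() -> std::io::Result<IpAddr> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    // Any routable destination works; this is a documentation address.
    socket.connect("203.0.113.1:9")?;
    Ok(socket.local_addr()?.ip())
}

fn main() {
    match local_outbound_ip() {
        Ok(ip) => println!("local outbound IP: {ip}"),
        Err(e) => println!("could not determine local IP: {e}"),
    }
}
```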
-
I am trying to use str0m as an SFU server. I am trying to create a room for 3 people and connect all of them. I have a TURN server (coturn), so browser clients can discover their IP addresses and create correct ICE candidates. I already have a fully working SFU based on webrtc-rs. My current task is to compare the performance of webrtc-rs and str0m; I hope str0m will be significantly faster. The maximum number of users in my current SFU implementation is 50, but the minimum reasonable amount for my current task is 120 simultaneous clients (10 rooms with 12 users each). That's why I need this comparison: to understand whether it makes sense to move to str0m or not.

My initial suggestion concerned only one thing: providing a correct HOST candidate for the SFU server to send to the client when using Docker with bridge network settings. In that case, the candidate inside the Docker container must be the internal network IP address of the Docker network; however, when this candidate is sent to the client, it must be converted to the server's public IP address. This issue is specific to the str0m library and is not about how candidates are discovered, because the host candidate is created manually (as in the example). This manual creation process fully meets all the SFU's requirements and doesn't require any external STUN/TURN servers, if I understand correctly. At least, it is created the same way in all the other WebRTC libraries I have used when building an SFU server.
-
I had Claude rewrite the str0m example and make a simplified version running in Docker with bridge networking. This now works; the key thing is that you need to tweak the IP address when you hand traffic off to str0m. You'll receive traffic ingress on the Docker-internal address, but you rewrite the destination to `$PUBLIC_IP:$PORT` before handing the packet to str0m. str0m will then form a host-prflx candidate pair, with the remote candidate being the client's observed address.
-
So in summary, for a 1-to-1 NAT SFU scenario with str0m, we recommend:

- Bind the server's UDP socket inside the container on `$PORT`.
- Use `ice-lite` to avoid making connectivity checks from the server.
- Add a `host` candidate with the public IP (`$PUBLIC_IP:$PORT`). This ensures the SDP correctly advertises the public IP.
- Rewrite the destination of incoming packets to `$PUBLIC_IP:$PORT` in the `Receive::destination` field.
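The ingress-rewrite step can be sketched with std types alone. `Receive` below is a simplified stand-in for str0m's receive input (the real type also carries the protocol and the datagram contents), and the addresses are made up for the example:

```rust
use std::net::SocketAddr;

/// Simplified stand-in for str0m's receive input: the real type also
/// carries the protocol and the packet contents.
struct Receive {
    source: SocketAddr,
    destination: SocketAddr,
}

/// Rewrite the docker-internal destination to the public IP before
/// handing the packet to the Rtc instance, keeping the ingress port.
fn rewrite_destination(mut recv: Receive, public: SocketAddr) -> Receive {
    // Only the IP is swapped; the port the packet arrived on is kept.
    recv.destination.set_ip(public.ip());
    recv
}

fn main() {
    // Made-up addresses: the client as seen by the server, and the
    // docker-internal socket the packet actually arrived on.
    let recv = Receive {
        source: "198.51.100.7:40000".parse().unwrap(),
        destination: "172.17.0.2:50000".parse().unwrap(),
    };
    let public: SocketAddr = "203.0.113.10:50000".parse().unwrap();

    let recv = rewrite_destination(recv, public);
    assert_eq!(recv.destination.to_string(), "203.0.113.10:50000");
    assert_eq!(recv.source.to_string(), "198.51.100.7:40000");
    println!("destination rewritten to {}", recv.destination);
}
```

With this in place, the destination str0m sees always matches the public `host` candidate it advertised, so candidate pairing works without any SDP mangling.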