Feature #1252

Raise default rates

Added by merlijn about 2 years ago. Updated about 1 year ago.

Status: Closed
Priority: Normal
Assignee: divVerent
Category: Configuration
Target version: 1.0
Start date: 06/24/2012
Due date:
% Done: 100%

Description

By default we are still limiting servers and clients to a maximum rate of 20,000 bytes per second, which is starting to become too little, especially for vehicles.

With high-bandwidth internet being very common these days, it makes sense to raise the limit to 50,000 bytes per second, which really should not be a problem for either servers or users.

divVerent, do you think this is reasonable, or could it cause problems with clients that have set a very high input rate and will be spamming the server?

History

#1 Updated by divVerent about 2 years ago

That will not really help. DP allows sending at most 1400 bytes per packet, and at most one packet per tic. So at our current settings, the highest rate we can ever use is 42000.
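
A quick sanity check of that figure, assuming the server runs at 30 tics per second (Xonotic's usual sys_ticrate; the tic rate is not stated in this comment):

        1400 bytes/packet * 30 packets/second = 42000 bytes/second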

The current default is 20000, and there is a menu setting to increase it from there.

And IMHO we should really keep the default at less than half the theoretical maximum, so we notice when we get rate problems and can still decide to raise the rate a notch as a temporary fix. But we should then STILL work on a permanent fix to get the rate back down to half the max!
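
For reference, the arithmetic behind "less than half", using the numbers above:

        42000 / 2 = 21000, so the current 20000 default already sits just below that line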

There is also another issue. Typically the required rate changes depending on how much is happening. Also, DP allows temporarily exceeding the rate limit as long as it is kept on average (so no matter which rate is defined, a 1400-byte packet can be sent at any time - it will however cause a sending pause later to keep the rate limit). So "rate" is basically the setting that governs the AVERAGE rate. What we CAN safely do, however, is work on improving the burst handling so there is basically some kind of buffer before delays kick in (currently that buffer is "one packet"; maybe we can make it two or three).

And if the average is close to the theoretical max, then there is no space for bursts any more.
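
A rough worked example of that budget, assuming 30 tics per second (about 0.033 s per tic) and the 1400-byte packet cap from above:

        at rate 20000: a full packet charges 1400 / 20000 = 0.07 s against the limit, about two tics of budget
        at rate 40000: a full packet charges 1400 / 40000 = 0.035 s, already longer than one tic,
                       so there is essentially no room left for bursts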

So I would really be against upping the rate without even trying to optimize the code. There is no technical reason why vehicles need that much bandwidth. The only reason that actually exists is that vehicles are currently composed of multiple entities on the server. To fix that, they need to be networked as a single entity, and displayed using multiple models on the client.

#2 Updated by divVerent about 2 years ago

I already found the place where the burst handling would have to be implemented:

        // sending is allowed as soon as the clear time has passed
        if (realtime > conn->cleartime)
                return true;
...
        // delay later packets to obey rate limit
        // (don't let cleartime lag more than 0.1 s behind, e.g. after a time jump)
        if (conn->cleartime < realtime - 0.1)
                conn->cleartime = realtime - 0.1;
        // charge this packet's size against the configured rate
        conn->cleartime = conn->cleartime + (double)totallen / (double)rate;
        // cleartime is never allowed to be earlier than the present
        if (conn->cleartime < realtime)
                conn->cleartime = realtime;

So:

- cleartime is always at least realtime
- the 0.1 seconds check can be ignored - usually cleartime will be >= realtime - sys_ticrate.value, except on time jumps (e.g. map change) or when no packet at all got sent for some reason (e.g. due to rate limiting)
- so, to allow bursts, we would simply change the first check to if (realtime > conn->cleartime - net_bursttime.value), as sketched below
- then we would basically allow "infinite bursts" for that duration; however, when the bandwidth use does not go down, we get the same behaviour as now, but once it "cools down", we basically get room for a new burst
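
A minimal sketch of that change, applied to the snippet quoted above; net_bursttime is assumed here to be a new cvar (defaulting to 0 for the current behaviour), and its registration is not shown:

        // allow sending up to net_bursttime seconds "early", i.e. permit a burst of
        // roughly net_bursttime * rate extra bytes before the rate delay kicks in
        if (realtime > conn->cleartime - net_bursttime.value)
                return true;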

Does this sound fine? Would it actually help, or do the vehicles cause trouble even when no big action is happening? I.e. can I have a netgraph screenshot from when vehicles caused bandwidth problems?

#3 Updated by divVerent about 2 years ago

I made a DarkPlaces repo branch divVerent/ratelimit.

In this branch, please test the vehicles code with the following cvar settings:

Old CSQC behaviour: net_usesizelimit 1; net_test 1; net_bursttime 0
Old non-CSQC behaviour: net_usesizelimit 2; net_test 1; net_bursttime 0
Suggested new behaviour: net_usesizelimit 2; net_test 0; net_bursttime 0.1
Suggested new behaviour alternate: net_usesizelimit 1; net_test 0; net_bursttime 0.1

(possibly you may want net_bursttime higher than 0.1, maybe even 0.2 to 0.4)

In a test game on g-23 with 7 bots and an average rate of about 6000, I set rate to 8000. With the old CSQC behaviour configured I get a lot of drops (more than 10 per second). With the old non-CSQC behaviour configured I still get about one drop per second, and with the suggested new behaviour about one drop every 5 seconds. The new alternate behaviour still yields about three drops per second.

The danger though is that net_usesizelimit 2 might break large (> 100 bytes) CSQC entities at low rates (but I don't think we have such large entities, and if we do, we should maybe make them smaller). So please compare the new alternate behaviour too.

Note that this does not enable more bandwidth - it just distributes it over frames differently. Generally, this should make sure that each frame updates at least SOMETHING, and that total dropouts which even lack player position updates are minimized.

#4 Updated by merlijn about 2 years ago

  • Status changed from New to In Progress

We are now testing this branch on the overkill server, and once the vehicles server is empty we will restart that one too. I suppose it is difficult to really verify that it performs well, but so far, after playing on overkill for a bit, I certainly do not see anything bad about it.

Let's see if the complaints about lag will go down, or do you have a better way of testing?

I'm not sure how to properly test the new "alternate" behaviour either, but possibly tZork knows?

#5 Updated by tZork about 2 years ago

I haven't spotted any issues either (though mind you, I'm always lagging due to my own bad connection, so I guess it doesn't mean much :P). As for the alternate behaviour, I guess that would be "net_usesizelimit 1; net_test 0; net_bursttime 0.1". I could set up some commands for switching between the two, though I don't know if they "take" at once..?

#6 Updated by merlijn about 2 years ago

From a little more testing, I'd say this code is a big improvement - but it could still be better. Perhaps the bursts can be a little higher, and that will really fix it.

Also, what about the downloading of csprogs.dat? These files tend to be over 1 MB, and it seems wasteful to download them at 20 kB/s when pretty much all connections can handle much higher speeds. At 20 kB/s this would take nearly a minute to complete. Can the rate be raised specifically for this, or do we have to force people to pack it into a pk3 and use the curl download way instead?
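
A rough check of that estimate, taking the ~1 MB figure above at face value:

        1,000,000 bytes / 20,000 bytes per second = 50 seconds, i.e. close to a minute before any overhead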

#7 Updated by shankar over 1 year ago

  • % Done changed from 0 to 50

#8 Updated by divVerent about 1 year ago

  • Status changed from In Progress to Closed
  • % Done changed from 50 to 100

Raising the rate for csprogs... no idea how. Maybe someone else.

Otherwise, feel free to play with the burst settings and suggest better values. Closing this for now.
