By using a halfway decent router with properly configured QoS. It isn't hard. Between running lsof locally and rflow on DD-WRT, it's easy to isolate the ports used for your various backup services. Once you have that, it's trivial to put those services at the bottom of the queue for outbound bandwidth.
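For the local half of that, you don't even need lsof specifically; here's a minimal Python sketch using psutil that pulls out the same information. The process name "backupd" is hypothetical; substitute whatever your backup client is actually called.

    # Rough equivalent of "lsof -i" scoped to one process, using psutil.
    # "backupd" is a hypothetical process name; use your backup client's.
    import psutil

    def backup_ports(proc_name="backupd"):
        ports = set()
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == proc_name:
                try:
                    for conn in proc.connections(kind="inet"):
                        if conn.raddr:  # established outbound connection
                            ports.add(conn.raddr.port)
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    pass  # inspecting other users' processes may need root
        return sorted(ports)

    print(backup_ports())  # feed these ports to your router's QoS rules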
What if a visitor on your network is using a new service whose ports differ from your standard list? What if you're looking at a sync service that runs inside the web browser or otherwise uses shared ports?
Or perhaps you're (shock, horror) a mainstream user who has no idea what any of the words in your comment mean, starting with "router"?
Sure, a power user can prioritise traffic and deal with this, but I think the original point is valid for the average user. Perhaps the onus should be on the writers of backup software (and of anything else that syncs in large enough chunks) to ensure their traffic is smooth rather than bursty, in effect enforcing their own QoS.
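To make that concrete, here's roughly what "enforcing their own QoS" would look like inside the client: a token-bucket throttle that caps the upload rate and smooths out bursts. The rate and chunk size are just illustrative numbers, and the reader and socket in the usage comment are hypothetical.

    import time

    class TokenBucket:
        """Self-imposed throttle: smooth, capped uploads instead of bursts."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Block until nbytes worth of tokens are available, then spend them."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Usage: cap uploads at ~100 KB/s, sent in small, evenly paced chunks.
    bucket = TokenBucket(rate_bytes_per_sec=100_000, burst_bytes=16_384)
    # for chunk in read_chunks(backup_file, 16_384):  # hypothetical reader
    #     bucket.consume(len(chunk))
    #     sock.sendall(chunk)                          # hypothetical socket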
If the onus is on the authors of backup software to ensure good bandwidth-usage practices, then sharing ports with real-time application protocols (and HTTPS) has to be ruled out as unequivocally bad practice. Even if it isn't, it's still easy to de-prioritize traffic from unknown hosts, on unknown ports, or on connections that have transferred too much data. (The popular and easy-to-use Tomato firmware makes all of that trivial to set up.)
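Those router-side rules boil down to a per-flow classification function along these lines. This is only a sketch of the logic; the host list, ports, and byte threshold are made up for illustration.

    # Sketch of the classification a QoS setup like that applies per flow.
    KNOWN_HOSTS = {"192.168.1.10", "192.168.1.11"}  # your own machines
    KNOWN_BULK_PORTS = {873, 6690}                  # e.g. rsync, a sync client
    BULK_BYTES = 50 * 1024 * 1024                   # past 50 MB, treat as bulk

    def priority(src_host, dst_port, bytes_so_far):
        if src_host not in KNOWN_HOSTS:
            return "low"        # guests get de-prioritized wholesale
        if dst_port in KNOWN_BULK_PORTS:
            return "low"        # known backup/sync traffic
        if bytes_so_far > BULK_BYTES:
            return "low"        # long, fat transfer on a shared port
        return "normal"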
Also, mainstream desktop users do generally know what a router is. The people who are afraid to touch the boxes the cable guy installed and think that IE is the internet are well below average in terms of computer literacy.
Finally, backup software can't ensure that it's well-behaved unless at some point it tries to estimate the quality of your connection (which is hard to do accurately), and even then it can't be well-behaved when it runs on more than one machine simultaneously, since the copies know nothing about each other's traffic.
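About the best a lone client can do is measure the throughput it's actually getting and back off when sends start lagging, AIMD-style. A rough sketch; every constant here is an assumption, and it still tells you nothing about what the other machines on the LAN are doing.

    class AdaptiveRate:
        """Crude connection-quality estimator: additive increase while the
        link keeps up, multiplicative decrease when sends start lagging.
        All constants are illustrative, not tuned values."""

        def __init__(self, floor=50_000, ceiling=5_000_000):  # bytes/sec
            self.rate = floor
            self.floor = floor
            self.ceiling = ceiling

        def update(self, bytes_sent, elapsed_sec):
            achieved = bytes_sent / max(elapsed_sec, 1e-6)
            if achieved >= 0.9 * self.rate:
                # Link kept up with us: probe upward gently.
                self.rate = min(self.ceiling, self.rate + 10_000)
            else:
                # Sends are blocking: we're saturating something, back off hard.
                self.rate = max(self.floor, self.rate / 2)
            return self.rate  # feed the new cap back into the throttle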