TMM Rating Allowance Needs to Use Ladder 1v1 Matching (or close to it)
-
The question really is: would you prefer having a potentially unbalanced game or no game at all? For people in the less active timezones, the former would likely be preferable to most. Being from a less active timezone myself, I am very much against reducing the search range without any recourse. A reasonable middle-ground solution would be to implement a checkbox that asks "Would you like to increase your search range? It will increase your chance of getting a game but also increase the chance of getting an unbalanced game." that the user can tick or untick. If you untick it, you get a capped search range (e.g. +/- 300, or whatever is reasonable), and if you tick it, the search range can keep increasing as it currently does in TMM.
This option should also apply to ladder 1v1, allowing your search range to increase until you find a game. The rating system should be able to handle the "imbalanced" matchups that result (e.g. if I face a 1400 as a 1900 on ladder, I would have a 95% chance to win, but get like 2 points or something from winning, -14 from drawing, and -30 from losing).
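FAF's rating is TrueSkill rather than Elo, but the point deltas quoted above line up almost exactly with a plain Elo curve at K = 32; a minimal sketch for intuition (the function names are mine, not FAF's):

```python
def elo_expected(r_a, r_b):
    """Expected score of A against B on the standard Elo curve."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_delta(r_a, r_b, score, k=32):
    """Rating change for A given the actual score (1 win, 0.5 draw, 0 loss)."""
    return k * (score - elo_expected(r_a, r_b))

# A 1900 facing a 1400: ~95% expected win chance,
# roughly +2 for a win, -14 for a draw, -30 for a loss.
print(round(elo_expected(1900, 1400), 2))  # 0.95
print(round(elo_delta(1900, 1400, 1)))     # 2
print(round(elo_delta(1900, 1400, 0.5)))   # -14
print(round(elo_delta(1900, 1400, 0)))     # -30
```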
Also, from my anecdotal experience, I've personally been in quite a few games with a weaker teammate against 2 ~1500 opponents and come out victorious. I think it's very reasonable for a 1700+1000 to beat a 1400+1300, with the outcome probably being 50/50, if everyone is actually at their rating (separate issue). This is a little beside the point, but Swkoll showed me a replay the other day where his 800 global/700 ladder/700TMM teammate just crushed Inspektor_Kot and Wesh, who are both rated around 1.8k. It was very entertaining to watch.
-
Was this difference also not somewhat deliberate for lower rated players to learn faster?
Last weekend we played a game as 2x595 vs. 0 & 1007. At some point we sniped the 1007, but ultimately they won. The 0 rated made a point to thank the 1007 rated for his guidance. - (replay https://replay.faforever.com/14167584 for those interested)
Watching the replay from their point of view, I do see the frustration @George_W_Crush mentioned. Still, obviously, go for the best matches if there are 20 people in queue...
-
It has been my theory for a while that the game quality calculation is inaccurate for teams where the gap between the highest and lowest rated players is large. I.e., as the OP said, a "balanced" game with an 1800 and a 1000 vs. two 1400s will see the latter team win the clear majority of the time.
Sadly, I don't have the technical know-how to verify this theory statistically
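For what it's worth, the check is not hard once you have replay data. A sketch under assumed inputs (the `(team1_ratings, team2_ratings, winner)` record format is hypothetical, not an actual FAF API):

```python
def winrate_of_spread_teams(games, spread_cutoff=600):
    """Win rate of teams whose internal rating gap exceeds the cutoff,
    in games where the opposing team is compact. `games` is a list of
    (team1_ratings, team2_ratings, winner) tuples, winner being 1 or 2."""
    wins = total = 0
    for t1, t2, winner in games:
        wide1 = max(t1) - min(t1) > spread_cutoff
        wide2 = max(t2) - min(t2) > spread_cutoff
        if wide1 == wide2:
            continue  # only count a wide team against a compact one
        total += 1
        wins += winner == (1 if wide1 else 2)
    return wins / total if total else None

# e.g. an 1800+1000 team (gap 800) against two 1400s (gap 0)
games = [([1800, 1000], [1400, 1400], 2),
         ([1800, 1000], [1400, 1400], 2),
         ([1400, 1400], [1800, 1000], 1),
         ([1500, 1500], [1450, 1450], 1)]  # both compact: ignored
print(winrate_of_spread_teams(games))  # 0.0, the wide team lost all three
```

If the theory holds, the returned win rate on real data should sit well below 0.5.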
-
I am currently writing a new matching algorithm to make a 4v4 queue possible. From my understanding, TrueSkill doesn't factor in rating disparity between players when calculating game quality, so in order to get only games with similarly rated players we need to introduce our own quality metric. I will now explain my first draft so you can discuss whether you think it is a good formula and make suggestions to improve it.
Currently I calculate quality = uniformity * fairness + newbie bonus + time bonus
Uniformity goes from 0 to 1: it is 1 if all players in the game have the same rating, and 0 if the standard deviation of the ratings of all players is greater than 300.
Fairness works the same way but looks at the difference in total team skill, so it is 0 if the rating difference between the teams is higher than 400.
The newbie bonus exists to match new players faster; it is a flat bonus applied if a new player is on one of the teams.
The time bonus increases with every time you were not successfully matched.
The main thing I want to know is: should we use the standard deviation for uniformity, or just the highest rating difference between players? The former would allow higher skill gaps in larger teams and smaller ones in 2v2. -
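To illustrate the difference between the two candidate metrics (numbers are made up): for the same 500-point gap between the best and worst player, the population standard deviation is much smaller in an 8-player pool than in a 2-player one, so a stddev-based uniformity tolerates bigger outliers as the teams grow.

```python
from statistics import pstdev

small = [1500, 1000]         # one 500-point gap in a 2-player pool
large = [1500] * 7 + [1000]  # the same gap diluted in a 4v4 pool

print(max(small) - min(small), round(pstdev(small), 1))  # 500 250.0
print(max(large) - min(large), round(pstdev(large), 1))  # 500 165.4
```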
Do you use a player's mu - 3 * sigma or just mu for their rating?
I think it would be best to use the standard deviation for uniformity while also setting a maximum difference between the highest and lowest rating for the game to be considered at all, i.e. saying that the quality is too low if highest - lowest is greater than X. The highest-minus-lowest ceiling should also increase with each unsuccessful match, up to some cap.
For example (considering 4vs4 here) you start with a ceiling of 400 and raise it by let's say 50 per unsuccessful match up to 600. -
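The escalating ceiling can be stated in one line (400, 50, and 600 are the example values above, not decided constants):

```python
def spread_ceiling(failed_attempts, start=400, step=50, cap=600):
    """Allowed highest-minus-lowest rating gap after n unsuccessful matches."""
    return min(start + step * failed_attempts, cap)

print([spread_ceiling(n) for n in range(6)])  # [400, 450, 500, 550, 600, 600]
```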
I use mu - 3 * sigma for all calculations.
I failed to mention that I discard all possible games with a quality below a certain threshold, so in a way I am already doing what you proposed. What is an acceptable value for the difference in team strengths, in your opinion?
-
I know that you discard games below a certain quality and that the quality of the same team matchup rises with each unsuccessful queue pop, but what I am also suggesting is an extra element that prevents games that look decent enough by standard deviation but are not so great in reality because of the highest vs. lowest rating gap. For example, players rated 2000, 2x 1900, 1800, 2x 1400, 2x 1300 have a standard deviation of 282, which ain't great, but they will still probably be matched while usually making for an unbalanced game.
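The 282 figure checks out; note the 700-point gap between best and worst player that the standard deviation alone would let through:

```python
from statistics import pstdev

ratings = [2000, 1900, 1900, 1800, 1400, 1400, 1300, 1300]
print(round(pstdev(ratings)))       # 282
print(max(ratings) - min(ratings))  # 700
```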
TBH I thought it would be much worse; looking at it now after some trials with the calculator it's not bad, but I think that adding a hard ceiling of, I would say, 500-600 will remove the troubling edge cases while not cutting down on the matches too much.
Concerning the value (total team rating, right?), I would say the difference should be no bigger than 100 initially, rising to around 300 in steps of 50 with each unsuccessful queue pop. -
Another thing popped up: we have no restrictions on premade teams. I noticed that when a 1500 queues up with a 500 it would drag down the quality score. Now we could say that we don't want to "punish" people that are friends but have a wide skill gap. On the other hand their individual ratings are probably still accurate and the rest of the players would still suffer from having such a skill variance in their games.
(I assume accurate ratings because newbies would get a matching bonus that alleviates this problem)
What do you guys think about this? -
Surely a simple and basic thing to implement - not sure from a technical pov but logically for sure - would be to just offer a little more information than "4 people in queue".
A lot of the frustration comes from people not finding a game when they see 8 people in the queue, or from finding a game, realising it is stupidly imbalanced and that the people they are with are basically playing a different game, and then the salt flows through their veins.
Why is it not possible to just offer more information on who the "4 people in queue" are regarding their rating? It is already fairly well known that there is a rating bracket system (<500, <1000, 1000-1500, etc.), and these rating brackets/groups have access to their own maps etc.
So why can't there be a simple tally system under the "4 people in queue" that just says: by the way, these 4 people are in the <500 bracket? Then someone who is 1600 knows they probably shouldn't queue unless they want to wait 25 minutes to maybe get a game.
I don't know a lot about how difficult this is, but as an idea it just seems far too simple and helpful to neglect. Just give people that tiny bit more information about what "8 people in queue" means, and I can't think of a simpler way than a tally mark system for each bracket.
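The tally itself would be trivial. A sketch with made-up bracket boundaries (the real brackets may differ):

```python
from collections import Counter

# Assumed bracket boundaries for illustration, not the official ones.
BRACKETS = [(0, 500), (500, 1000), (1000, 1500), (1500, 3000)]

def bracket_tally(queue_ratings):
    """Count queued players per rating bracket."""
    def label(rating):
        for low, high in BRACKETS:
            if low <= rating < high:
                return f"{low}-{high}"
        return "other"
    return Counter(label(r) for r in queue_ratings)

print(bracket_tally([300, 450, 480, 1600]))
# Counter({'0-500': 3, '1500-3000': 1})
```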
-
This would also be inaccurate for edge cases where you are at the cusp of entering/leaving a bracket, for new players that get artificial mu values set, and for premade teams, which could have something like a 0-rated guy playing with a 2000-rated guy.
-
What premade teams will have a 0-rated and a 2k-rated player? There aren't even any 2k-rated players now with the reset, and when there were, I think only 2 people were arguably active out of the 4-5 people who got to 2k TMM?
I don't see how people being close to a bracket boundary is an issue. You just round their score up, like global does when showing someone's rating in the lobby, and use that.
I'm sure within the idea there is some middle ground that could be reached, which would at least give more information as to whether you should even bother queuing. People have said there was something similar in python for ladder at some point?
-
I think to give this discussion more structure you first of all need to define which problem you want to solve and why your solution would solve it.
You went from "people don't understand why they don't get a game" to "people want to know whether they should even bother queueing"
Btw, do you not just queue when you want to play? Why is the distribution of players relevant for this? -
Banning large differences can be a major problem. I play 2v2 with a friend who has no 1v1 rating, or a really "undeveloped" one. If 3v3 opens up, we might both be 1000-rated in 2v2 and invite a new friend, who knows?
I think blocking out teams based on one team member is a problem; if people are disqualified from 50% of matches, then TMM will probably lose those players.
-
There will be no hard block; it will just take them longer to find a game because they produce lower game quality. It will take some time to compensate for this with the "time waited" bonus
-
@BlackYps And what about people with an undeveloped 1v1 rating? Will they need to wait 3 rounds to be matched, just because they play 1v1 very rarely, but not never?
-
They will get a newbie bonus to help them get matched
-
Okay, just so it is clear: what is the initial rating range allowed between two people within the first search cycle, second, third, etc?
And what is the quality of match allowance for first, second, third?
-
Well, the concrete values are what I am trying to get from this thread. 4head
The actual code is this:
newbie_bonus = 0
time_bonus = 0
ratings = []
for team in match:
    time_bonus += team.failed_matching_attempts * config.TIME_BONUS_WEIGHT
    if not team.has_top_player():
        newbie_bonus += team.has_newbie() * config.NEWBIE_BONUS_WEIGHT
    for mean, dev in team.raw_ratings:
        rating = mean - 3 * dev
        ratings.append(rating)
rating_imbalance = abs(match[0].cumulated_rating - match[1].cumulated_rating)
fairness = max((config.MAXIMUM_RATING_IMBALANCE - rating_imbalance) / config.MAXIMUM_RATING_IMBALANCE, 0)
deviation = stats.pstdev(ratings)
uniformity = max((config.MAXIMUM_RATING_DEVIATION - deviation) / config.MAXIMUM_RATING_DEVIATION, 0)
quality = fairness * uniformity + newbie_bonus + time_bonus
The preliminary config values are:
self.NEWBIE_MIN_GAMES = 10
self.TOP_PLAYER_MIN_RATING = 1600
self.MINIMUM_GAME_QUALITY = 0.5
self.MAXIMUM_RATING_IMBALANCE = 600
self.MAXIMUM_RATING_DEVIATION = 300
self.TIME_BONUS_WEIGHT = 0.1
self.NEWBIE_BONUS_WEIGHT = 0.2
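For anyone who wants to play with the formula, here is a self-contained mock of the snippet above. The `Config` and `Team` classes are stand-ins I invented so it runs in isolation; the server's real objects look different (e.g. `has_newbie()` and `has_top_player()` are methods there):

```python
from dataclasses import dataclass
from statistics import pstdev

class Config:
    MAXIMUM_RATING_IMBALANCE = 600
    MAXIMUM_RATING_DEVIATION = 300
    TIME_BONUS_WEIGHT = 0.1
    NEWBIE_BONUS_WEIGHT = 0.2

@dataclass
class Team:
    raw_ratings: list                  # (mean, deviation) per player
    failed_matching_attempts: int = 0
    newbie: bool = False               # stands in for team.has_newbie()
    top_player: bool = False           # stands in for team.has_top_player()

    @property
    def cumulated_rating(self):
        return sum(mean - 3 * dev for mean, dev in self.raw_ratings)

def quality(match, config=Config):
    newbie_bonus = 0
    time_bonus = 0
    ratings = []
    for team in match:
        time_bonus += team.failed_matching_attempts * config.TIME_BONUS_WEIGHT
        if not team.top_player:
            newbie_bonus += team.newbie * config.NEWBIE_BONUS_WEIGHT
        ratings.extend(mean - 3 * dev for mean, dev in team.raw_ratings)
    rating_imbalance = abs(match[0].cumulated_rating - match[1].cumulated_rating)
    fairness = max((config.MAXIMUM_RATING_IMBALANCE - rating_imbalance)
                   / config.MAXIMUM_RATING_IMBALANCE, 0)
    deviation = pstdev(ratings)
    uniformity = max((config.MAXIMUM_RATING_DEVIATION - deviation)
                     / config.MAXIMUM_RATING_DEVIATION, 0)
    return fairness * uniformity + newbie_bonus + time_bonus

# Two even 2x1500 teams (sigma 0): perfect quality.
even = [Team([(1500, 0), (1500, 0)]), Team([(1500, 0), (1500, 0)])]
print(quality(even))  # 1.0

# The 1800+1000 vs. 2x1400 case from earlier in the thread: fairness is
# perfect, but the ~283 rating deviation nearly zeroes out uniformity.
lopsided = [Team([(1800, 0), (1000, 0)]), Team([(1400, 0), (1400, 0)])]
print(round(quality(lopsided), 2))  # 0.06
```

Interestingly, the deviation term already punishes the 1800+1000 vs. 2x1400 matchup that the OP complained about, as long as no bonuses push it over the threshold.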
Explanation:
@BlackYps said in TMM Rating Allowance Needs to Use Ladder 1v1 Matching (or close to it):
I am currently writing a new matching algorithm to make a 4v4 queue possible. From my understanding, TrueSkill doesn't factor in rating disparity between players when calculating game quality, so in order to get only games with similarly rated players we need to introduce our own quality metric. I will now explain my first draft so you can discuss whether you think it is a good formula and make suggestions to improve it.
Currently I calculate quality = uniformity * fairness + newbie bonus + time bonus
Uniformity goes from 0 to 1: it is 1 if all players in the game have the same rating, and 0 if the standard deviation of the ratings of all players is greater than 300.
Fairness works the same but looks at difference in total team skills, so it is 0 if the rating difference between the teams is higher than 600.
The newbie bonus exists to match new players faster; it is a flat bonus applied if a new player is on one of the teams.
The time bonus increases with every time you were not successfully matched.
Additional explanation:
The deviation is roughly one third of the biggest rating difference if we assume a somewhat even rating distribution in the team.
If both uniformity and fairness are about 0.7 we are right at the quality threshold of 0.5 (note that 2/3 on each factor actually gives 4/9 ≈ 0.44, just below it). That means that a game with roughly 175 team rating difference and a 265 rating difference between individual players is the borderline case of what is acceptable. One of these metrics can be worse if the other one is better. -
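A quick numeric check of the borderline, using MAXIMUM_RATING_IMBALANCE = 600 and MAXIMUM_RATING_DEVIATION = 300 from the config above:

```python
MAX_IMBALANCE = 600
MAX_DEVIATION = 300

def fairness(imbalance):
    return max((MAX_IMBALANCE - imbalance) / MAX_IMBALANCE, 0)

def uniformity(deviation):
    return max((MAX_DEVIATION - deviation) / MAX_DEVIATION, 0)

# 200 team imbalance and a 300 player gap (deviation ~100, one third of the
# gap) give 2/3 on each factor, whose product falls just short of 0.5;
# the actual borderline is ~0.71 per factor, i.e. ~175 imbalance / ~265 gap.
print(round(fairness(200) * uniformity(100), 2))  # 0.44
print(round(fairness(175) * uniformity(88), 2))   # 0.5
```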
Could you randomly generate some imaginary players with rating values and numbers of games, put them into random teams, and then calculate the quality? Without concrete examples it's really hard to judge such an algorithm.
Edit: And throw away examples where fairness is too far off.